Add support for AzureOpenAI #197
Comments
Hi @rajib76! Thanks for opening an issue. Are you looking for support for an Azure OpenAI deployed LLM in LangKit's OpenAIDefault, like we do in the Choosing an LLM example? Or something else? I wanted to get better support for Azure-hosted models into LangKit soon; we could probably focus on changes to support the Azure OpenAI models (e.g. gpt-35-turbo, gpt-4) as a first iteration if that is helpful.
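For reference, this is roughly what that "choosing an LLM" pattern looks like; a minimal sketch, assuming the `response_hallucination.init(llm=...)` entry point and the `OpenAIGPT4` class from LangKit's examples (neither is confirmed in this thread):

```python
# Sketch of choosing which OpenAI-hosted LLM LangKit uses for LLM-assisted checks.
# OpenAIDefault is mentioned in this thread; OpenAIGPT4 and init(llm=...) are
# assumptions based on LangKit's published examples.
from langkit import response_hallucination
from langkit.openai import OpenAIDefault, OpenAIGPT4

# Default path: OpenAI's hosted API, authenticated via OPENAI_API_KEY.
response_hallucination.init(llm=OpenAIDefault())

# Swapping in a different model class is the pattern Azure support would extend.
response_hallucination.init(llm=OpenAIGPT4())
```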
Yes, looking for support in LangKit's OpenAIDefault. Currently, if I need to do hallucination checks, it looks like I cannot do it using Azure OpenAI; it is only supported for OpenAI. I am looking to use LangKit to evaluate responses from Azure OpenAI for hallucination, prompt injection, contextual relevancy, and so on.
OK, working on it, if you want to try an initial dev build. The new class and usage look like this:
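As a rough sketch of what that usage might look like (the `OpenAIAzure` class name and the `deployment_id` parameter are assumptions, not confirmed here):

```python
# Hypothetical sketch of the new Azure class; the OpenAIAzure name and the
# deployment_id parameter are assumptions, not confirmed in this thread.
from langkit import response_hallucination
from langkit.openai import OpenAIAzure

# deployment_id would name your Azure deployment of gpt-35-turbo or gpt-4.
response_hallucination.init(llm=OpenAIAzure(deployment_id="gpt-35-turbo"))
```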
Also need to set these new env vars:
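Along the lines of the following, with values taken from your Azure OpenAI resource (the variable names follow common Azure OpenAI conventions and are assumptions as far as LangKit is concerned):

```python
import os

# Assumed variable names, following common Azure OpenAI conventions; the exact
# names LangKit reads are an assumption here.
os.environ["AZURE_OPENAI_KEY"] = "<key from the Azure portal>"
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<your-resource>.openai.azure.com/"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"  # chat completions API version
```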
As referenced in this example: https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/chatgpt?tabs=python&pivots=programming-language-chat-completions#working-with-the-gpt-35-turbo-and-gpt-4-models
Was able to run it, but I saw one thing. I tried the example code for the hallucination score, and I did not see a way to add a context and then tell the LLM to grade the response based on the question and context.
I think I got how it is working. It is self-check validation: the prompt is being sent to the same model to get an answer again, and then we are checking it against the response. Is it possible to do the below?
Hi @rajib76, yes, this is exactly how the current hallucination check works.
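As a minimal, LangKit-independent sketch of that self-check flow (the `call_llm` callable and the yes/no grading prompt are illustrative assumptions, not LangKit's actual implementation):

```python
from typing import Callable, List

def sample_answers(prompt: str, call_llm: Callable[[str], str], n: int = 3) -> List[str]:
    # Ask the same model the same prompt several times.
    return [call_llm(prompt) for _ in range(n)]

def self_check(prompt: str, response: str, call_llm: Callable[[str], str]) -> str:
    # Ask the model whether the original response is consistent with its own
    # fresh answers to the same prompt.
    samples = sample_answers(prompt, call_llm)
    grading_prompt = (
        "Here are several answers to the same question:\n"
        + "\n".join(f"- {s}" for s in samples)
        + f"\n\nIs the following answer consistent with them? Reply yes or no.\n{response}"
    )
    return call_llm(grading_prompt)
```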
Thanks Felipe. For #1, it will work if we can just pass the response and the ground truth. But do we need an LLM to do the consistency check? Could we not have an option to do a semantic match with an embedding model and then apply a threshold score? For #2, I am planning to implement chain of verification, as I mentioned in this recording. I wanted to check if this can be available out of the box from langkit.
Thanks for the reply, @rajib76. For #1, yes, it should be possible to perform the semantic-similarity-based consistency check without the presence of an LLM. And #2 also makes a lot of sense for your and others' scenarios. I created two issues to reflect both topics we are discussing:
We'll plan those changes in future sprints.
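For reference, a minimal sketch of the LLM-free option discussed as #1, using an embedding model and a similarity threshold (the sentence-transformers model and the 0.75 cutoff are illustrative choices, not necessarily what LangKit implemented):

```python
# Embedding-based consistency check: compare the response against a reference
# (ground truth or re-sampled answers) and apply a similarity threshold.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def is_consistent(response: str, reference: str, threshold: float = 0.75) -> bool:
    embeddings = model.encode([response, reference], convert_to_tensor=True)
    score = util.cos_sim(embeddings[0], embeddings[1]).item()
    return score >= threshold
```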
Hi @jamie256, is Azure OpenAI support now integrated with the latest version of langkit? I am trying to use the response_hallucination module in the following way but am getting these error messages:

Langkit response_hallucination initialization
Setting environment variables from Azure OpenAI generated API keys
Error traceback

result = response_hallucination.consistency_check(

Next Steps

Could you let me know what the current method for integrating Azure OpenAI LLMs for evaluations in langkit is? Let me know if there is additional information I can provide. Thank you.
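A hedged sketch of the full invocation being attempted (the `prompt`/`response` keyword names and the printed result are assumptions):

```python
# Assumed shape of the full call; the prompt/response keyword names are not
# confirmed by the truncated snippet above.
result = response_hallucination.consistency_check(
    prompt="Who wrote Pride and Prejudice?",
    response="Pride and Prejudice was written by Jane Austen.",
)
print(result)
```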
I see that langkit does not support Azure OpenAI. When will this be supported?