Support using Embeddings Model exposed via OpenAI (compatible) API #1051
Conversation
Sweet! This should definitely help Khoj be more scale-ready for generating embeddings.
Sounds amazing! But it doesn't seem to actually be indexing via the API. I used the MD file that isn't merged yet, added text-embedding-3-small as the bi-encoder name, and entered an API key. Should I do something else? How can I make sure it works?
Hey, could you follow these instructions? Two things to remember to do:
If it's still not working, it would help if you could share screenshots of your OpenAI Search Model admin config page and the top-level search model config page from the admin panel that lists all the search model configs.
Studying the log, I found I had passed https://api.openai.com/v1/embeddings instead of https://api.openai.com/v1 as the endpoint.
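The endpoint mix-up above is easy to hit: the configured URL should be the API root, and the client appends the `/embeddings` path itself. A minimal sketch of how such a request gets assembled (the `build_embeddings_request` helper and the model name are illustrative assumptions, not Khoj's actual code):

```python
import json
from urllib import request

def build_embeddings_request(base_url, api_key, model, texts):
    """Build an HTTP request for an OpenAI-compatible /embeddings endpoint.

    base_url should be the API root (e.g. https://api.openai.com/v1),
    not the full .../v1/embeddings path -- /embeddings is appended here.
    """
    payload = json.dumps({"model": model, "input": texts}).encode()
    return request.Request(
        url=base_url.rstrip("/") + "/embeddings",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_embeddings_request(
    "https://api.openai.com/v1",  # API root, as the fix above notes
    "sk-demo",                    # placeholder key
    "text-embedding-3-small",
    ["Khoj indexes your notes"],
)
print(req.full_url)  # https://api.openai.com/v1/embeddings
```

Passing the full `/v1/embeddings` URL as the base would produce a doubled `/embeddings/embeddings` path, which is why indexing silently failed.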
This change adds the ability to use OpenAI's embedding models, or any embedding model exposed behind an OpenAI-compatible API (like Ollama, LiteLLM, vLLM, etc.). This allows using commercial embedding models to index your content with Khoj.
Previously, Khoj only supported HuggingFace embedding models running locally on device or via the HuggingFace inference API endpoint.