
The status of this repository? #660

Open

dequeueing opened this issue Feb 19, 2025 · 10 comments

Comments

@dequeueing

Hi. GPTCache is cool, but there have been no commits for 5 months. However, the README mentions that the "repository is still under heavy development".

Can anyone explain what the current status of the repo is?

@SimFG
Collaborator

SimFG commented Feb 19, 2025

Are you having any problems, or do you have any other ideas? Let's discuss them together.

@dequeueing
Author

> Are you having any problems, or do you have any other ideas? Let's discuss them together.

OK. Is GPTCache integrated into LangChain? The README mentions this, but the link is dead:

> 🎉 GPTCache has been fully integrated with 🦜️🔗LangChain! Here are detailed usage instructions.

@SimFG
Collaborator

SimFG commented Feb 19, 2025

Because the LangChain doc link has changed, you can visit the LangChain repo to find the doc. I think it's https://python.langchain.com/api_reference/community/cache/langchain_community.cache.GPTCache.html#gptcache
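
For anyone landing here, a minimal sketch of that wiring, following the pattern in the linked doc (the hash-per-model data directory and the map data manager are the doc's example choices, not requirements):

```python
import hashlib

from gptcache import Cache
from gptcache.manager.factory import manager_factory
from gptcache.processor.pre import get_prompt
from langchain.globals import set_llm_cache
from langchain_community.cache import GPTCache


def init_gptcache(cache_obj: Cache, llm: str):
    # One cache directory per model, keyed by a hash of the model string
    hashed_llm = hashlib.sha256(llm.encode()).hexdigest()
    cache_obj.init(
        pre_embedding_func=get_prompt,
        data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed_llm}"),
    )


# From here on, every LangChain LLM call checks GPTCache first
set_llm_cache(GPTCache(init_gptcache))
```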

@dequeueing
Author

> Because the LangChain doc link has changed, you can visit the LangChain repo to find the doc. I think it's https://python.langchain.com/api_reference/community/cache/langchain_community.cache.GPTCache.html#gptcache

Thanks for the link.

I wonder: is there another way to support a locally deployed Hugging Face model (e.g. Llama 2), rather than using an API? Could you please suggest one?

@SimFG
Collaborator

SimFG commented Feb 19, 2025

Currently, GPTCache supports the APIs of OpenAI and Cohere. You can try to package your embedding service into an API that is compatible with them, similar to how most LLM providers now expose OpenAI-compatible SDKs.
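
One low-effort version of this: serve the local model behind an OpenAI-compatible endpoint (e.g. vLLM's or llama.cpp's server) and point the OpenAI client at it. A rough sketch, assuming the openai 0.x SDK that GPTCache's adapter wraps; the URL and model name are purely illustrative:

```python
import openai  # openai<1.0, the SDK generation the GPTCache adapter targets

from gptcache import cache
from gptcache.adapter import openai as cached_openai

# Assumption: a locally deployed Llama 2 behind an OpenAI-compatible server;
# the URL and model name below are placeholders for your own setup.
openai.api_base = "http://localhost:8000/v1"
openai.api_key = "EMPTY"  # most local servers accept any key

cache.init()  # default exact-match cache; plug in your own embedding/data manager as needed

response = cached_openai.ChatCompletion.create(
    model="meta-llama/Llama-2-7b-chat-hf",
    messages=[{"role": "user", "content": "What is GPTCache?"}],
)
print(response["choices"][0]["message"]["content"])
```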

@dequeueing
Author

> Currently, GPTCache supports the APIs of OpenAI and Cohere. You can try to package your embedding service into an API that is compatible with them, similar to how most LLM providers now expose OpenAI-compatible SDKs.

Great. Thanks for the advice.

@dequeueing
Author

> Currently, GPTCache supports the APIs of OpenAI and Cohere. You can try to package your embedding service into an API that is compatible with them, similar to how most LLM providers now expose OpenAI-compatible SDKs.

Sorry, I may not have thoroughly understood your meaning. What is the "embedding service"? Is it the "embedding generator" in your paper?

@dequeueing reopened this Feb 20, 2025
@SimFG
Collaborator

SimFG commented Feb 20, 2025

ref: https://github.com/zilliztech/GPTCache/blob/main/docs/usage.md#build-your-cache
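
To make the "build your cache" step from that doc concrete: the embedding service plays the role of the paper's embedding generator, i.e. the embedding_func passed to cache.init. A sketch using a locally run Hugging Face model (the model choice and the sqlite/faiss storage pair are illustrative):

```python
from gptcache import cache
from gptcache.embedding import Huggingface
from gptcache.manager.factory import manager_factory
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation

# Embedding generator that runs locally; no external API involved
hf = Huggingface(model="distilbert-base-uncased")

# Scalar storage (sqlite) for cached answers, vector store (faiss) for
# similarity search over the embeddings
data_manager = manager_factory("sqlite,faiss", vector_params={"dimension": hf.dimension})

cache.init(
    embedding_func=hf.to_embeddings,
    data_manager=data_manager,
    similarity_evaluation=SearchDistanceEvaluation(),
)
```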

@dequeueing
Author

> ref: https://github.com/zilliztech/GPTCache/blob/main/docs/usage.md#build-your-cache

Thanks. I wonder how I can manage the cache, for example deleting entries from it. I notice there are no such APIs in api.py. From the docs, the way to achieve separation is via Session, but I want to know whether I can delete specific entries from the cache.

Thanks!

@SimFG
Collaborator

SimFG commented Feb 24, 2025

You can try to learn about the eviction manager; link: https://github.com/zilliztech/GPTCache/blob/main/gptcache/manager/eviction_manager.py
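
A rough sketch of deleting entries through it, assuming a scalar+vector data manager; the method names (soft_evict, delete) come from a reading of the linked file and should be verified against the source:

```python
from gptcache.manager.eviction_manager import EvictionManager
from gptcache.manager.factory import manager_factory

# Scalar storage (sqlite) plus vector store (faiss); the dimension must
# match your embedding function, 128 here is illustrative
dm = manager_factory("sqlite,faiss", vector_params={"dimension": 128})

# The eviction manager coordinates deletion across both storages
evictor = EvictionManager(dm.s, dm.v)

ids_to_remove = [1, 2, 3]          # hypothetical cache-entry ids
evictor.soft_evict(ids_to_remove)  # mark rows as deleted in scalar storage
evictor.delete()                   # purge marked rows from both storages
```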
