
deployed and run successfully, but no response. #19

Open
guoxiangke opened this issue Aug 22, 2024 · 4 comments
@guoxiangke
Hi, I finally got this running locally, but the chat returns nothing and shows no error.

[screenshots: chat UI showing no response]

Steps:

  1. Get this repo working locally:
     pip uninstall aiofiles graphrag chainlit -y
     pip install aiofiles==23.1.0
     pip install chainlit==1.1.306
     pip install --no-deps graphrag
  2. Copy my.txt to the input folder and run "python -m graphrag.index --root ." with gpt-4o-mini (ollama + llama3.1 does not work locally).
  3. Then run litellm --model ollama/llama3.1:8b --api_base http://localhost:11434 (a quick check of the proxy follows below).
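Since the chat returns nothing and logs no error, one thing worth checking is whether the litellm proxy itself answers. A minimal sketch, assuming litellm's default proxy port 4000; the port and the "ping" payload are illustrative, not taken from the original setup:

```bash
# Send an OpenAI-style chat completion request to the litellm proxy.
# litellm serves on port 4000 by default; adjust if yours differs.
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ollama/llama3.1:8b",
        "messages": [{"role": "user", "content": "ping"}]
      }'
```

If this hangs or errors, the problem is between litellm and ollama rather than in the chat UI.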

@guoxiangke (Author)

This also works well:
python -m graphrag.query --root . --method global "my question?"
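For comparison, the local search method can be invoked the same way; a sketch assuming the same indexed --root directory (the question string is a placeholder):

```bash
# Global search aggregates community summaries; local search explores
# entity neighborhoods. Both read the index built under --root.
python -m graphrag.query --root . --method local "my question?"
```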

@qib-bang commented Sep 2, 2024

Have you solved the problem yet?

@WangAo-0

Same problem here. Has it been solved?

@qib-bang commented Oct 15, 2024 via email
