error when # Create knowledge graph #18

Open
parimalbera7551 opened this issue Aug 19, 2024 · 11 comments
@parimalbera7551

When I run:
python -m graphrag.index --root .
🚀 create_base_extracted_entities
entity_graph
0 <graphml xmlns="http://graphml.graphdrawing.or...
🚀 create_summarized_entities
entity_graph
0 <graphml xmlns="http://graphml.graphdrawing.or...
❌ create_base_entity_graph
None
⠋ GraphRAG Indexer
├── Loading Input (InputFileType.text) - 1 files loaded (14 filtered) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 0:00:00
├── create_base_text_units
├── create_base_extracted_entities
├── create_summarized_entities
└── create_base_entity_graph
❌ Errors occurred during the pipeline run, see logs for more details.

@karthik-codex
Owner

Did you look into the log file? In some cases there might be a timeout when calling the LLM. Also, what LLM and embedding model are you using?
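
For readers hitting the same thing: the timeout the maintainer mentions is configurable in the llm block of settings.yaml. A minimal sketch (the value is only an example, not a recommendation):

llm:
  request_timeout: 300.0  # seconds; raise this for slow local models

The per-run logs under the output folder are where the actual failure reason shows up.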

@parimalbera7551
Author

The log file shows the error 'Invoking LLM'.
I am using llama3.

@Drenjy

Drenjy commented Aug 29, 2024

Try changing llm.model in settings.yaml. I use mistral-nemo and it works. When I tried gemma 2, I got the same error.
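
For reference, that swap is a one-line change in the llm block of settings.yaml (a minimal sketch; surrounding keys omitted):

llm:
  model: mistral-nemo  # e.g. was llama3 or gemma2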

@parimalbera7551
Author

parimalbera7551 commented Aug 29, 2024 via email

@parimalbera7551
Author

parimalbera7551 commented Sep 4, 2024 via email

@parimalbera7551
Author

Can you share it?

@DmitryKey

DmitryKey commented Oct 21, 2024

To avoid creating a new thread - I have the same issue with lots of "Error Invoking LLM" messages in the log. Yet the indexer completes with all green check marks.

llm:
  model: mistral-nemo

embeddings:
  model: nomic_embed_text

In logs.json I see these calls fail for one of two reasons:

1. "Request timed out."
2. "Error code: 500 - {'error': {'message': 'unexpected server status: llm server loading model', 'type': 'api_error', 'param': None, 'code': None}}"

My understanding was that the indexer calls local (Ollama) models - is that the case? What could cause the models to return such messages?
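
A note for anyone debugging this: the indexer only talks to local Ollama models if settings.yaml points its OpenAI-compatible client at Ollama's endpoint, and the 500 "llm server loading model" response is Ollama reporting that the model is still being loaded into memory; the timeouts typically come from the same cold start. A sketch of the relevant keys, assuming a default local Ollama install:

llm:
  model: mistral-nemo
  api_base: http://localhost:11434/v1  # Ollama's OpenAI-compatible endpoint
  request_timeout: 300.0               # give the model time to load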

@parimalbera7551
Author

parimalbera7551 commented Oct 21, 2024 via email

@sebastianfernandezgarcia

Changing the LLM to Llama3.1, Llama3.2:1B, or mistral-nemo, as well as increasing request_timeout, fixes those errors for me.

But when I run Chainlit and ask a question, it stops at this prompt:
"""
Replying as User_Proxy. Provide feedback to chat_manager. Press enter to skip and use auto-reply, or type 'exit' to end the conversation:
"""

@parimalbera7551
Author

parimalbera7551 commented Oct 22, 2024 via email

@parimalbera7551
Author

parimalbera7551 commented Nov 19, 2024 via email
