Error when creating knowledge graph #18
Comments
Did you look into the log file? In some cases there might be a timeout when calling the LLM. Also, what LLM and embedding model are you using?
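For anyone unsure where that log file lives: a minimal sketch of the reporting block from the stock GraphRAG settings.yaml template (the path shown is the shipped default and an assumption for this project; with type: file, indexing-engine.log and logs.json end up under base_dir):

```yaml
# Stock GraphRAG reporting block (sketch; default values assumed).
# With type: file, each indexing run writes its logs, including
# indexing-engine.log and logs.json, under base_dir.
reporting:
  type: file # or console, blob
  base_dir: "output/${timestamp}/reports"
```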
The log file shows "Error Invoking LLM".
Try changing llm.model in settings.yaml. I use mistral-nemo and it works. When I tried gemma 2, I got the same error.
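A minimal sketch of that change, assuming the stock settings.yaml layout and that mistral-nemo has already been pulled in Ollama:

```yaml
# settings.yaml (sketch): only the model line changes; other fields
# keep whatever your template already has.
llm:
  type: openai_chat      # stock template value for chat completions
  model: mistral-nemo    # replacing gemma 2, which reproduced the error
```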
Can you share your settings.yaml file?

Can you share the settings.yml file?
Can you share it?
To avoid creating a new thread - I have the same issue with lots of "Error Invoking LLM" messages in the log. Yet the indexer completes with all green check marks.

llm:
  model: mistral-nemo
embeddings:
  model: nomic_embed_text

In the logs.json I see all of these calls have two reasons:
1. "Request timed out."
2. "Error code: 500 - {'error': {'message': 'unexpected server status: llm server loading model', 'type': 'api_error', 'param': None, 'code': None}}"

My understanding was that the indexer will call local (ollama) models - is this the case? What could be the reason for models to return such messages?
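An interpretation of those two messages (not confirmed in this thread): "Request timed out." points at a request_timeout too short for a local model, and the 500 "llm server loading model" suggests Ollama was still loading the model into memory when the first calls arrived. Both would be absorbed by a longer timeout, e.g.:

```yaml
# Sketch: give a local Ollama server time to load the model before
# the indexer gives up (value illustrative; the stock template
# default is assumed to be 180.0 seconds).
llm:
  request_timeout: 600.0
```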
Tell me, what can I do?
Changing LLM to Llama3.1/Llama3.2:1B or mistral-nemo, as well as increasing request_timeout, solves the errors for me. But when I run Chainlit and ask a question:

"""
Replying as User_Proxy. Provide feedback to chat_manager. Press enter to skip and use auto-reply, or type 'exit' to end the conversation:
"""
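Since several people asked for the file: below is a hedged sketch of what an Ollama-backed settings.yaml could look like, assembled from the fixes reported in this thread (model swap plus a longer request_timeout), not a copy of anyone's actual file. The api_base values assume Ollama's OpenAI-compatible endpoint on its default port, and the nested embeddings layout follows the stock template:

```yaml
# Sketch only: assembled from this thread's reports, not verified here.
llm:
  api_key: ollama                    # placeholder; Ollama ignores it
  type: openai_chat
  model: mistral-nemo                # or llama3.1 / llama3.2:1b
  api_base: http://localhost:11434/v1
  request_timeout: 600.0             # increased, per the fix above

embeddings:
  llm:
    api_key: ollama
    type: openai_embedding
    model: nomic-embed-text          # thread writes nomic_embed_text; Ollama's tag uses hyphens
    api_base: http://localhost:11434/v1  # some setups need a different embeddings endpoint
```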
If you share the file where the llm type is set and the request timeout is increased, it would be helpful for me.
Can you share this?
When I run:
python -m graphrag.index --root .
🚀 create_base_extracted_entities
entity_graph
0 <graphml xmlns="http://graphml.graphdrawing.or...
🚀 create_summarized_entities
entity_graph
0 <graphml xmlns="http://graphml.graphdrawing.or...
❌ create_base_entity_graph
None
⠋ GraphRAG Indexer
├── Loading Input (InputFileType.text) - 1 files loaded (14 filtered) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 0:00:00
├── create_base_text_units
├── create_base_extracted_entities
├── create_summarized_entities
└── create_base_entity_graph
❌ Errors occurred during the pipeline run, see logs for more details.
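One more knob worth checking when every call to a local server errors out (an inference from the symptoms in this thread, not something anyone here confirmed): the stock template issues many requests in parallel, which a single Ollama instance may not absorb. Throttling the indexer is a common mitigation:

```yaml
# Sketch: throttle the indexer for a single local server
# (values illustrative; stock defaults are much higher).
llm:
  concurrent_requests: 1

parallelization:
  stagger: 0.3
  num_threads: 4
```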