Error during the execution of GraphRAG after following the steps #7
Comments
Have you looked at your log file? What models are you using in your settings.yaml?
Mistral and nomic-embed-text only.
Hi, I am running the model on a low-spec machine. I am doing a project on GraphRAG, for which I need to index the book below, which is related to medicine. Can you help me generate the GraphRAG indexer files and send me the zip file via Gmail?
Hi Vidhya, sorry, I do not have the resources to perform the indexing for 900 pages. I would recommend converting the PDF to Markdown first using the script provided in the /Utils folder, then splitting the Markdown into ~20-30 chunks and indexing them sequentially.
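A minimal sketch of that splitting step might look like the following (the file names and chunk count are placeholders, not files from this repo):

```python
# Rough sketch: split a converted Markdown file into ~25 pieces so each
# piece can be indexed by GraphRAG separately. "book.md", the output
# folder and the chunk count are placeholders.
from pathlib import Path


def split_markdown(src: str, out_dir: str, n_chunks: int = 25) -> None:
    text = Path(src).read_text(encoding="utf-8")
    paragraphs = text.split("\n\n")                  # keep paragraph boundaries intact
    per_chunk = max(1, len(paragraphs) // n_chunks)  # paragraphs per output file
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    for i in range(0, len(paragraphs), per_chunk):
        chunk = "\n\n".join(paragraphs[i:i + per_chunk])
        (Path(out_dir) / f"chunk_{i // per_chunk:03d}.md").write_text(chunk, encoding="utf-8")


if __name__ == "__main__":
    split_markdown("book.md", "input_chunks", n_chunks=25)
```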
I am new to this. Is it possible to run the Ollama local model?
Yes. Download and install the tool from Ollama.com, then open a command prompt and execute "ollama run llama3" to run Llama 3 locally.
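Once the model is pulled, a quick sanity check like the sketch below confirms it responds locally before pointing GraphRAG at it (this assumes Ollama's default REST API on localhost:11434; it is not a script from this repo):

```python
# Sanity check: ask the locally running llama3 model for a short reply via
# Ollama's default REST endpoint (assumed to be http://localhost:11434).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Say hello in one sentence.", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```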
The API embedding call is not working: whenever I start to index the documents using GraphRAG, it shows an error while creating the entity relationships. When I looked into the log file, it says "Error invoking the LLM". I followed the steps as described, replacing the embedding.py and openai_llm_embedding.py files in graphrag with the ones from the utils folder, but it still shows the error message. Can you help?
I had the same error. I had to change my settings.json file; the following lines were wrong: ... I had to change them to the below:
I may have had a similar issue. I think I had to revert back to nomic_embed_text and 11434/api during local search inference. Let me know which one works.
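To see whether that embedding endpoint actually answers, a small check like the sketch below can help. It uses the port and model named above, and assumes Ollama's standard /api/embeddings request shape with nomic-embed-text already pulled locally:

```python
# Check that the local embedding endpoint responds before running
# GraphRAG local search. Assumes Ollama on localhost:11434 and that
# `ollama pull nomic-embed-text` has already been run.
import requests

resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "graphrag embedding test"},
    timeout=60,
)
resp.raise_for_status()
embedding = resp.json()["embedding"]
print(f"Got an embedding vector of length {len(embedding)}")
```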
Ok, I'll check.
How can I evaluate GraphRAG's performance if I ask out-of-bound questions?
It may be caused by the LLM not responding. I changed the max retries as follows and it works:
encoding_model: cl100k_base
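The rest of that snippet did not come through above. Purely as an illustrative sketch, a settings.yaml with a higher retry limit and the local Ollama endpoints mentioned earlier in the thread might look roughly like this (model names, endpoints and retry values are placeholders, not the commenter's actual config):

```yaml
# Illustrative sketch only; the original snippet was not captured in this thread.
encoding_model: cl100k_base
llm:
  type: openai_chat
  model: mistral
  api_base: http://localhost:11434/v1
  max_retries: 10          # raise this if the local LLM is slow to respond
embeddings:
  llm:
    type: openai_embedding
    model: nomic_embed_text
    api_base: http://localhost:11434/api
    max_retries: 10
```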
Thank you so much. You helped me a lot. |
I got the same error, but I resolved it.
```
❌ create_summarized_entities
None
⠼ GraphRAG Indexer
├── Loading Input (text) - 1 files loaded (0 filtered) ━━━━ 100% 0:0… 0:0…
├── create_base_text_units
├── create_base_extracted_entities
└── create_summarized_entities
❌ Errors occurred during the pipeline run, see logs for more details.
```