I want to use the llama3-8b-instruct model, but when I run it through ollama, a considerable share of its responses cannot be directly parsed by the following method, so about one tenth of the extracted_triples after openie end up empty. With the gpt api, the failure rate is under one percent.
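(As one possible workaround, not part of the repo's actual parsing code: local models often wrap their JSON in prose or markdown fences, so a lenient fallback parser can rescue many of these responses. The sketch below is a minimal illustration; `parse_triples` is a hypothetical helper name, not a function from the repo.)

```python
import json
import re

def parse_triples(raw: str):
    """Leniently extract a JSON object from an LLM response.

    Tries strict json.loads first; on failure, strips markdown fences
    and retries on the outermost {...} span, which often rescues
    responses that wrap the JSON in extra prose.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    # Remove markdown code fences like ```json ... ```
    cleaned = re.sub(r"```(?:json)?", "", raw).strip()
    # Grab the outermost {...} span, if any
    match = re.search(r"\{.*\}", cleaned, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            pass
    return None  # caller can treat None as a failed extraction
```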
I checked, and a considerable part of the JSON output by ollama's llama3:instruct contains format errors, so parsing fails and extracted_triples ends up empty. But the llama3-8b-instruct reported in the paper achieves results second only to gpt-3.5-turbo, so I am confused: is ollama's llama3:instruct not the real llama3-8b-instruct?
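(One mitigation worth trying, independent of which weights ollama ships: ollama's REST API accepts `"format": "json"`, which constrains decoding to valid JSON and usually eliminates malformed responses. The sketch below is a minimal example against a local ollama server; `chat_json` is a hypothetical helper name, and the prompt/model arguments are placeholders.)

```python
import json
import requests

def chat_json(prompt: str, model: str = "llama3:instruct"):
    """Ask a local ollama server for strict-JSON output via /api/chat."""
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "format": "json",  # constrain the model to emit valid JSON
            "stream": False,
        },
        timeout=120,
    )
    resp.raise_for_status()
    content = resp.json()["message"]["content"]
    try:
        return json.loads(content)
    except json.JSONDecodeError:
        return None  # still malformed; count as a failed extraction
```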