
Local Ollama model cannot be used #37

Open
mrchengshunlong opened this issue Jun 29, 2024 · 1 comment

Comments

@mrchengshunlong

“Using a local Ollama model
If you use a local Ollama model, you need to set the environment variable OLLAMA_ORIGINS=chrome-extension://bciglihaegkdhoogebcdblfhppoilclp, otherwise requests will be rejected with a 403 error.

Then, in the extension settings, enter any value for the apiKey, set the server address to http://localhost:11434, select "custom" as the model, and fill in a custom model name such as llama2.

However, testing shows that the llama2 7b model is fairly weak and cannot return the required JSON format, so summarization will most likely fail because the response cannot be parsed (the question-answering feature does not need to parse the response format, so it still works).”

I have configured everything as described above and tried gemma2:9b, codegemma:latest, deepseek-coder-v2:latest, phi3:14b, and llama3:latest. All of them fail with "Unexpected end of JSON input", across every feature: translation, question answering, key-point summarization, and so on. Calls using an OpenAI key all succeed, which rules out an installation problem.
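
For reference, a model's ability to produce valid JSON can be checked directly against Ollama's /api/generate endpoint, bypassing the extension entirely. A minimal sketch (the model name and prompt are just examples, not values taken from the extension):

```ts
// Probe whether a local model can emit valid JSON on request.
// Assumes Ollama is running on its default port; the model name and
// prompt below are placeholders, not values from the extension.
const res = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3:latest",
    prompt: "Reply with a JSON object with keys 'title' and 'points'.",
    format: "json", // ask Ollama to constrain output to valid JSON
    stream: false,
  }),
});
const data = await res.json();
// If this throws, the model produced invalid JSON -- the same failure
// the extension surfaces as "Unexpected end of JSON input".
console.log(JSON.parse(data.response));
```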

@IndieKKY
Owner

IndieKKY commented Jul 1, 2024

Correct, these models are relatively weak and cannot return well-formed JSON, which is why parsing fails.
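
One possible workaround on the extension side (a sketch only, not the extension's current behavior) would be a more forgiving parser that falls back to extracting the outermost JSON object from a noisy response before giving up:

```ts
// Hypothetical lenient parser: weak models often wrap the JSON in
// prose or code fences, so try the outermost {...} span as a fallback.
// This is a sketch, not code from the extension.
function parseLoosely(raw: string): unknown {
  try {
    return JSON.parse(raw);
  } catch {
    const start = raw.indexOf("{");
    const end = raw.lastIndexOf("}");
    if (start !== -1 && end > start) {
      return JSON.parse(raw.slice(start, end + 1)); // may still throw
    }
    throw new Error("no JSON object found in model response");
  }
}

// Example: recovers the object despite surrounding chatter.
console.log(parseLoosely('Sure! Here it is: {"title":"demo"} Hope that helps.'));
```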
