[Bug]: Cannot reliably interface with local LLM when agent is built from source #6643
Comments
Ah, it was another session. Still weird, but differently weird. 😅
```
LEVEL 0 LOCAL STEP 0 GLOBAL STEP 0
00:56:57 - openhands:INFO: standalone_conversation_manager.py:83 - Conversation 2f71d095de6d433b93ca58fc2e7539fb connected in 0.05562305450439453 seconds
```

@avi12 does this happen if the only change you make is to use another model, like some hosted model? (You can use the link on the home page to get back to the last conversation.)
I went back and forth between running from source and the Docker command, and switched between versions, so I don't have the original conversation.

```toml
[core]
workspace_base="/mnt/c/repositories/extensions/kimai-google-calendar"
debug=true

[llm]
model="lm_studio/qwen2.5-coder-7b-instruct"
base_url="http://host.internal.docker:1234/v1"
api_key="lm-studio"
```

https://huggingface.co/lmstudio-community/Qwen2.5-Coder-7B-Instruct-GGUF
Did you start the LM Studio server, and is it on port 1234?
Ah, okay! Can you try entering the base_url and the rest of the settings in the UI, regardless of whether they are in the toml too?
When I manually try to set the "API key" as
Could you please try using the prefix "openai/" instead of "lm_studio" for the model name? You need to set some API key in the UI too, I believe, but I don't think it matters which; "lm-studio" should work.
Are you trying to make me use an OpenAI model with OH? |
Interestingly, in the logs when running from the source I see
I wonder why this discrepancy exists, considering that this exact configuration works when I run the Docker command |
No, it doesn't mean OpenAI models. "openai/" is a prefix that litellm recognizes for any model served in an OpenAI-compatible format ("this provider at this base_url serves a model with an OpenAI-compatible API").
When running the Docker command, the connection is made from inside the Docker app container. Now it's direct, so please try using localhost. Please check here (it's for Ollama, but it should be similar for LM Studio):
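For illustration, a minimal sketch of that change in config.toml, assuming the LM Studio server is still listening on port 1234 and is reachable from where OpenHands runs (only the hostname differs from the config quoted above):

```toml
[llm]
# Running from source means the connection is made directly from the host,
# not from inside the Docker app container, so localhost is used here
# instead of the Docker-internal hostname.
base_url="http://localhost:1234/v1"
```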
See, for example, here:
So you can call LM Studio LLMs with the prefix "openai/", the rest of the model name as it appears in LM Studio, and the correct base_url, and litellm can figure it out.
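Putting the two suggestions together (the "openai/" prefix and the direct localhost connection), a hedged sketch of what the [llm] section might look like, with the model name and port taken from the config quoted earlier:

```toml
[llm]
# "openai/" tells litellm to treat this as a generic OpenAI-compatible endpoint;
# it does not mean an OpenAI-hosted model. The rest of the name is the model
# as it appears in LM Studio.
model="openai/qwen2.5-coder-7b-instruct"
base_url="http://localhost:1234/v1"
# Per the discussion above, the exact key value shouldn't matter for LM Studio,
# but some value needs to be set.
api_key="lm-studio"
```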
Thus far I've struggled with SmartManoj#258 (comment)
This is becoming difficult to follow, sorry. Let's look at it this way: you said in that comment that with
After some digging I discovered that the issue stems from me running Windows 10 (and that due to hardware limitations, I cannot run Windows 11) |
I get this issue too when using OpenAI (o3) with OpenHands, so it's not just related to local models.
It seems like the latest commit (1afe7f1) is also problematic; the agent can't get ready.
That commit didn't cause anything on local installations, though; it only affects the remote runtime, which is not in use with a local Docker setup. Maybe it's an older commit/issue. Did you run normally in development mode before updating to the newest commit?
I mean I already pulled the latest commit |
If you really want, I can use
Never mind, it was eventually able to start but it took its time, similar to #5813 |
Is there an existing issue for the same bug?
Describe the bug and reproduction steps
openhands_2025-02-07.log
prompt_008.log
OpenHands Installation
Other
OpenHands Version
Built from source
Operating System
WSL2 on Windows 10
Logs, Errors, Screenshots, and Additional Context
No response