[Bug]: Cannot reliably interface with local LLM when agent is built from source #6643

Open · 1 task done
avi12 opened this issue Feb 6, 2025 · 22 comments
Labels
bug Something isn't working

Comments

avi12 commented Feb 6, 2025

Is there an existing issue for the same bug?

  • I have checked the existing issues.

Describe the bug and reproduction steps

openhands_2025-02-07.log
prompt_008.log

OpenHands Installation

Other

OpenHands Version

Built from source

Operating System

WSL2 on Windows 10

Logs, Errors, Screenshots, and Additional Context

No response

avi12 added the bug label on Feb 6, 2025
enyst (Collaborator) commented Feb 6, 2025

From the log, I don't see why it immediately tries to close the session it just opened

Ah, it was another session. Still weird, but differently weird. 😅

00:57:06 - openhands:DEBUG: agent_session.py:138 - Waiting for initialization to finish before closing session 556607007d764f6a910c42475d6947bf

LEVEL 0 LOCAL STEP 0 GLOBAL STEP 0

00:56:57 - openhands:INFO: standalone_conversation_manager.py:83 - Conversation 2f71d095de6d433b93ca58fc2e7539fb connected in 0.05562305450439453 seconds
00:56:57 - openhands:INFO: standalone_conversation_manager.py:64 - Reusing active conversation 2f71d095de6d433b93ca58fc2e7539fb
...
00:57:06 - openhands:DEBUG: agent_session.py:138 - Waiting for initialization to finish before closing session 556607007d764f6a910c42475d6947bf
...
00:57:28 - openhands:INFO: agent_controller.py:451 - [Agent Controller d7081bdd42c54dcda9606def5ca917e0] Setting agent(CodeActAgent) state from AgentState.RUNNING to AgentState.ERROR

@avi12 does this happen if the only change you make is to use another model, like some hosted model? (you can use the link on the home page to get back to the last conversation)

avi12 (Author) commented Feb 6, 2025

I went back and forth between running from source and the Docker commands, and switched between versions, so I don't have the original conversation.
I tried to run from source with this config:

[core]
workspace_base="/mnt/c/repositories/extensions/kimai-google-calendar"
debug=true

[llm]
model="lm_studio/qwen2.5-coder-7b-instruct"
base_url="http://host.internal.docker:1234/v1"
api_key="lm-studio"

https://huggingface.co/lmstudio-community/Qwen2.5-Coder-7B-Instruct-GGUF
It doesn't seem like it can communicate with the local server that LM Studio sets up.
I also tried changing it to base_url="http://127.0.0.1:1234/v1", and then it says that it cannot communicate with Docker.

avi12 (Author) commented Feb 7, 2025

enyst (Collaborator) commented Feb 7, 2025

Did you start the LM Studio server and is it on port 1234?
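
One quick way to check, from the WSL2 shell where OpenHands runs, is to query LM Studio's OpenAI-compatible models endpoint (a minimal check, assuming the server is listening on port 1234; adjust the host to whatever you put in base_url):

curl http://127.0.0.1:1234/v1/models

If that returns a JSON list of the loaded models, the server is reachable from WSL2; if it fails, the request never makes it from WSL2 to the Windows side.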

avi12 (Author) commented Feb 7, 2025

Image
When I run OH 0.22.0 or 0.23.0, it can reliably plug into the LM Studio server.
It's only when I build from source that it cannot.

enyst (Collaborator) commented Feb 7, 2025

Ah, okay! Can you try to enter base_url and the rest of the settings in the UI, regardless of whether they are in the toml too?

avi12 (Author) commented Feb 7, 2025

Pretty much all set:
Image

avi12 (Author) commented Feb 7, 2025

When I manually try to set the "API key" as lm-studio, I get #6645

enyst (Collaborator) commented Feb 7, 2025

Could you please try to use the prefix "openai/" instead of lm_studio for the model name?

You need to set some api key in the UI, too, I believe, but I don't think it matters which. "lm-studio" should work.

avi12 (Author) commented Feb 7, 2025

Are you trying to make me use an OpenAI model with OH?

avi12 (Author) commented Feb 7, 2025

Interestingly, in the logs when running from source I see

Message: 'litellm.APIError: APIError: Lm_studioException - Connection error.. Attempt #6 | You can customize retry values in the configuration.'
Arguments: ()

I wonder why this discrepancy exists, considering that this exact configuration works when I run the Docker command

enyst (Collaborator) commented Feb 7, 2025

Are you trying to make me use an OpenAI model with OH?

No, it doesn't mean OpenAI models. "openai/" is a prefix that litellm recognizes for any model served in an OpenAI-compatible format ("this provider at this base_url serves models with an OpenAI-compatible API").

Interestingly, in the logs when running from source I see

Message: 'litellm.APIError: APIError: Lm_studioException - Connection error.. Attempt #6 | You can customize retry values in the configuration.'
Arguments: ()

I wonder why this discrepancy exists, considering that this exact configuration works when I run the Docker command

When running the docker command, the connection comes from inside the app container. Now it's direct, so please try to use localhost. Please check here (it's written for Ollama, but it should be similar for LM Studio):
https://docs.all-hands.dev/modules/usage/llms/local-llms

enyst (Collaborator) commented Feb 7, 2025

See for example, here:

So you can call LM Studio LLMs with the prefix openai/, the rest of the model name as in LM Studio, and the correct base_url, and litellm can figure it out.
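
For illustration, the relevant part of config.toml would then look something like this (a sketch rather than a verified setup; it assumes LM Studio is reachable at localhost:1234 from wherever OpenHands runs, and the model name should match whatever LM Studio reports):

[llm]
model="openai/qwen2.5-coder-7b-instruct"
base_url="http://localhost:1234/v1"
api_key="lm-studio"

The same values can go into the UI settings instead; the important parts are the openai/ prefix and the correct base_url.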

avi12 (Author) commented Feb 7, 2025

Thus far I've struggled with SmartManoj#258 (comment)

enyst (Collaborator) commented Feb 7, 2025

This is becoming difficult to follow, sorry. Let's look at it this way: you said that with docker run things worked. Please do take into account the docs linked above on local LLMs / Ollama; they highlight the differences between building from source and docker run, in particular the actual URL. I know I set up LM Studio in the past with the same process; it exposes similar functionality.

avi12 (Author) commented Feb 8, 2025

After some digging, I discovered that the issue stems from the fact that I'm running Windows 10 (and that, due to hardware limitations, I cannot run Windows 11).
This results in WSL2 being unable to send requests to localhost on the Windows host.
My only workaround is to use a service like ngrok.
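
For example, something along these lines on the Windows side (a sketch; the actual forwarding URL is whatever ngrok prints when it starts):

ngrok http 1234

and then pointing base_url at the printed forwarding URL, e.g. base_url="https://<your-tunnel>.ngrok-free.app/v1".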

Valve-too commented Feb 10, 2025

I get this issue too when using OpenAI (o3) with OpenHands, so it's not just related to local models.

docker run -it --rm --pull=always `
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE="docker.all-hands.dev/all-hands-ai/runtime:main-nikolaik" `
    -e LOG_ALL_EVENTS=$true `
    -v "/var/run/docker.sock:/var/run/docker.sock" `
    -v "$env:USERPROFILE\.openhands:/home/openhands/.openhands" `
    -v /mnt/d/Github/projectrush:/opt/workspace_base `
    -p 3000:3000 `
    -p 9000:9000 `
    --add-host "host.docker.internal:host-gateway" `
    --name "openhands-app" `
    "docker.all-hands.dev/all-hands-ai/openhands:main"

avi12 (Author) commented Feb 11, 2025

It seems like the latest commit (1afe7f1) is also problematic; the agent can't get ready.

enyst (Collaborator) commented Feb 11, 2025

It seems like the latest commit (1afe7f1) is also problematic; the agent can't get ready.

That commit doesn't affect local installations, though; it only touches the remote runtime, which isn't used with a local Docker setup. Maybe it's an older commit/issue.

Did development mode run normally for you before you updated to the newest commit?

avi12 (Author) commented Feb 11, 2025

I mean I already pulled the latest commit

avi12 (Author) commented Feb 11, 2025

If you really want, I can use git bisect to get to the latest working commit
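
Roughly like this (a sketch; the good ref would be whichever commit or tag last worked for me, e.g. the 0.23.0 tag if it exists in the checkout):

git bisect start
git bisect bad HEAD
git bisect good 0.23.0
# rebuild and test at each step, then mark it with `git bisect good` or `git bisect bad`
git bisect reset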

avi12 (Author) commented Feb 11, 2025

Never mind, it was eventually able to start but it took its time, similar to #5813
