
compensate for LLMs misunderstanding a prompt that should result in a tool call #46

Open · codefromthecrypt (Contributor) opened this issue Sep 10, 2024 · 0 comments

It is unlikely that a prompt that should result in a tool call will misfire with gpt-4o; at least according to this blog, which uses Gorilla to test tool calls, the success rate is very high.

However, even if in practice we don't see gpt-4o mistaking a tool call for a question (and returning text instead), it is certainly possible. It is much more likely with local inference, where perhaps 1 in 10 calls will misfire, or calls will misfire because the model doesn't interpret the tools or prompts exactly the way GPT does.

I suggest we add an approach where, when we know a tool call is expected, we retry if the response from the LLM is text instead. Right now we have retries, but only on HTTP failure. This would be an application-layer retry and would help make local LLMs more usable. A sketch of the idea follows below.
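
For illustration, here is a minimal Python sketch of what such an application-layer retry could look like. The `complete` callable, the `Response` shape, and the nudge message are all assumptions for the sketch, not the actual exchange API:

```python
from dataclasses import dataclass, field


@dataclass
class Response:
    """Stand-in for an LLM response: plain text or a list of tool calls."""
    text: str = ""
    tool_calls: list = field(default_factory=list)


def complete_expecting_tool_call(complete, messages, max_retries=3):
    """Call the LLM via `complete`, retrying when it answers with plain
    text even though a tool call was expected."""
    for _ in range(max_retries + 1):
        response = complete(messages)
        if response.tool_calls:
            return response
        # Misfire: feed the text answer back and nudge toward a tool call.
        messages = messages + [
            {"role": "assistant", "content": response.text},
            {"role": "user", "content": "Please respond with a tool call, not plain text."},
        ]
    raise RuntimeError(f"no tool call after {max_retries} retries")


# Example with a fake `complete` that misfires once, then returns a tool call:
calls = iter([Response(text="Sure, I can help!"), Response(tool_calls=["get_weather"])])
resp = complete_expecting_tool_call(
    lambda msgs: next(calls), [{"role": "user", "content": "weather in Tokyo?"}]
)
assert resp.tool_calls == ["get_weather"]
```

This sits above the HTTP retry layer, so it composes with the existing transport-level retries rather than replacing them.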

codefromthecrypt pushed a commit to codefromthecrypt/exchange that referenced this issue Oct 13, 2024