
Tool calling with llm.chat #12557

Closed Answered by alexanderbrodko
alexanderbrodko asked this question in Q&A

My bad. I do not need to tokenize when I use llm.chat instead of llm.generate. It works; the model's answer is:

{"name": "get_weather", "arguments": {"location": "San Francisco, CA", "unit": "celsius"}}

In fact, the model is Qwen2.5-Coder-Instruct-0.5B.
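For anyone landing here, a minimal sketch of what that setup looks like with vLLM's offline LLM.chat API. The Hugging Face model ID, the tool schema, the user prompt, and the tools= keyword on llm.chat are assumptions for illustration, not taken from the discussion; check the vLLM version you are on supports passing tool definitions to chat().

```python
from vllm import LLM, SamplingParams

# Assumed Hugging Face model ID for the model mentioned above.
llm = LLM(model="Qwen/Qwen2.5-Coder-0.5B-Instruct")

# Tool definition in the usual OpenAI-style JSON schema (hypothetical example).
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a location.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    }
]

messages = [
    {"role": "user", "content": "What is the weather in San Francisco, in celsius?"},
]

# llm.chat applies the model's chat template internally, so no manual
# tokenization / apply_chat_template step is needed (unlike llm.generate,
# which expects an already-formatted prompt).
outputs = llm.chat(
    messages,
    sampling_params=SamplingParams(temperature=0.0, max_tokens=128),
    tools=tools,  # assumption: recent vLLM versions accept tool schemas here
)

print(outputs[0].outputs[0].text)
# Expected, per the answer above: a JSON tool call such as
# {"name": "get_weather", "arguments": {"location": "San Francisco, CA", "unit": "celsius"}}
```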
