What happened?
Hi, I am new to the AutoGen system. I was playing around with its Python API and hit a weird error in one specific situation.
Context:
I was using AutoGen with LM Studio as a local host. LM Studio works normally, and the LLM itself answers the question without issue.
AutoGen 0.7.3
LM Studio 3.9
Python 3.12.7
Complete Error Message:
Traceback (most recent call last):
  File "f:\LLM_Prj\demo_chat_bot.py", line 18, in <module>
    reply = agent.generate_reply(messages=[{"content": "Tell me a fun fact about money.", "role": "user"}])
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\LLM_Prj\.env\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 2083, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\LLM_Prj\.env\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 1460, in generate_oai_reply
    extracted_response = self._generate_oai_reply_from_client(
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\LLM_Prj\.env\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 1485, in _generate_oai_reply_from_client
    extracted_response = llm_client.extract_text_or_completion_object(response)[0]
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\LLM_Prj\.env\Lib\site-packages\autogen\oai\client.py", line 1300, in extract_text_or_completion_object
    return response.message_retrieval_function(response)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\LLM_Prj\.env\Lib\site-packages\autogen\oai\client.py", line 287, in message_retrieval
    for choice in choices
                  ^^^^^^^
TypeError: 'NoneType' object is not iterable
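For context, the failing line in autogen/oai/client.py iterates over the completion's choices, so the response apparently arrives with choices set to None. Here is a minimal sketch (my own debugging snippet, reusing the endpoint and model from the config below) that queries LM Studio directly with the plain openai client to inspect the raw completion:

# Minimal sketch: query LM Studio's OpenAI-compatible endpoint directly
# and inspect the raw completion, bypassing AutoGen entirely.
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:1234/v1", api_key="lm-studio")
response = client.chat.completions.create(
    model="hugging-quants/Llama-3.2-1B-Instruct-Q8_0-GGUF",
    messages=[{"role": "user", "content": "Tell me a fun fact about money."}],
)
# If choices is None here, the backend sent a completion that AutoGen's
# message_retrieval then fails to iterate over.
print(response.choices)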
Steps taken to locate the source of the error:
Asked the Python AutoGen agent to reply to the message "Tell me a fun fact about money." and got the error above.
Asking the same question directly to the LLM in LM Studio returns a normal answer.
Asking the plain openai Python client the same question also returns a normal answer, which raises the main question of why it fails on the AutoGen agent only.
But asking the similar question "Tell me a fun fact about monkey." (money replaced with monkey) makes AutoGen work as normal.
Main code is listed in the reproduction section.
What did you expect to happen?
Please clarify whether this failure is expected or a rare case, and how to avoid similar situations when building complex agents.
Debugging this kind of error is hard and took a while, since I assumed the error came from my API usage or from LM Studio, but the traceback points into AutoGen.
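In the meantime, a stopgap I can imagine (my own guard, not an AutoGen API) is to wrap the call so the empty-choices case surfaces as a readable message instead of a bare TypeError:

# Workaround sketch (my own, not part of AutoGen): catch the TypeError
# raised when the backend returns a completion without choices.
def safe_generate_reply(agent, messages):
    try:
        return agent.generate_reply(messages=messages)
    except TypeError as exc:
        print(f"Backend returned no choices for {messages!r}: {exc}")
        return None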
How can we reproduce it (as minimally and precisely as possible)?
Reproduction requires a message that passes with the plain openai Python client but fails through the AutoGen agent; I am not sure whether this reproduces with a different LLM.
Main code used:
from autogen import ConversableAgent

llm_configInfo = {
    "config_list": [{
        "model": "hugging-quants/Llama-3.2-1B-Instruct-Q8_0-GGUF",
        "base_url": "http://127.0.0.1:1234/v1",
        "api_key": "lm-studio",
    }]
}

agent = ConversableAgent(
    "chatbot",
    llm_config=llm_configInfo,  # The agent will use the LLM config provided to answer
    human_input_mode="NEVER",   # Can also be ALWAYS or TERMINATE (at end only)
)

# "Tell me a fun fact about money." triggers the error;
# swapping money -> monkey makes it work as normal.
reply = agent.generate_reply(
    messages=[{"content": "Tell me a fun fact about money.", "role": "user"}]
)
print(reply)
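To narrow down which prompts trigger the failure, the same call can be swept over a list of prompts (a sketch reusing the agent above; the try/except is mine, not from the original script):

# Sweep sketch: reuse the agent above and report which prompts fail.
for prompt in ["Tell me a fun fact about money.",
               "Tell me a fun fact about monkey."]:
    try:
        agent.generate_reply(messages=[{"content": prompt, "role": "user"}])
        print(f"{prompt!r} -> ok")
    except TypeError as exc:
        print(f"{prompt!r} -> failed: {exc}")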
AutoGen version
0.7.3
Which package was this bug in
AgentChat
Model used
llama-3.2-1b-instruct
Python version
3.12.7
Operating system
Windows
Any additional info you think would be helpful for fixing this bug
No response