
AutoGen generate_reply returns "NoneType" error on normal questions #5318

Closed
alchemistkairos opened this issue Feb 1, 2025 · 1 comment

alchemistkairos commented Feb 1, 2025

What happened?

Hi, I am new to the AutoGen system. I was playing around with its Python API and ran into a weird error in a specific situation.

Context:

I was using AutoGen with LMStudio as a local host; LMStudio works normally, and the LLM itself answers the question without issue.
Autogen 0.7.3
LMStudio 3.9
Python: 3.12.7

Complete Error Message:

Traceback (most recent call last):
  File "f:\LLM_Prj\demo_chat_bot.py", line 18, in <module> 
    reply = agent.generate_reply(messages=[{"content": "Tell me a fun fact about money.", "role": "user"}])
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\LLM_Prj\.env\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 2083, in generate_reply       
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\LLM_Prj\.env\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 1460, in generate_oai_reply   
    extracted_response = self._generate_oai_reply_from_client(    
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^    
  File "F:\LLM_Prj\.env\Lib\site-packages\autogen\agentchat\converrsable_agent.py", line 1485, in _generate_oai_reply_from_client   
    extracted_response = llm_client.extract_text_or_completion_objject(response)[0]
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\LLM_Prj\.env\Lib\site-packages\autogen\oai\client.py",  line 1300, in extract_text_or_completion_object
    return response.message_retrieval_function(response)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\LLM_Prj\.env\Lib\site-packages\autogen\oai\client.py",  line 287, in message_retrieval
    for choice in choices
                  ^^^^^^^
TypeError: 'NoneType' object is not iterable
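
The last frame shows the client iterating over the response's choices, which evidently came back as None rather than a list. A minimal illustration of the failure mode (the None value is inferred from the traceback; this is not AutoGen's actual code):

choices = None  # what the completion object apparently carried for this prompt
[choice.message for choice in choices]  # TypeError: 'NoneType' object is not iterable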

Steps taken to locate the source of the error:

I asked the Python AutoGen agent to reply to the message "Tell me a fun fact about money." and got the error above (screenshot attached).

Asking the same question directly to the LLM in LMStudio returns a normal answer (screenshots attached).

Asking a plain Python OpenAI client the same question also returns a normal answer, which raises the main question of why it fails only through the AutoGen agent (screenshot attached; see the sketch after this list).

But asking the similar question "Tell me a fun fact about monkey." (replacing money with monkey) makes AutoGen work as normal (screenshot attached).

The main code used is listed in the reproduction section below.
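
For comparison, here is roughly the direct OpenAI-client call from step 3 (a sketch assuming the standard openai package and the same LMStudio endpoint as in my config; the exact script is only shown in the screenshot):

from openai import OpenAI

# Direct call to LMStudio's OpenAI-compatible endpoint, bypassing AutoGen.
client = OpenAI(base_url="http://127.0.0.1:1234/v1", api_key="lm-studio")
response = client.chat.completions.create(
    model="hugging-quants/Llama-3.2-1B-Instruct-Q8_0-GGUF",
    messages=[{"role": "user", "content": "Tell me a fun fact about money."}],
)
# Prints a normal answer for the same prompt that crashes AutoGen.
print(response.choices[0].message.content)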

What did you expect to happen?

Clarify whether this failure is normal or a rare case, and how to avoid similar situations when building complex agents.
Debugging this kind of error is hard and took a while, as I initially thought the error came from API usage or LMStudio, but it is actually from AutoGen.

How can we reproduce it (as minimally and precisely as possible)?

Reproduction requires a message that passes with the plain Python OpenAI client but fails with AutoGen agents; I am not sure whether this is reproducible with a different LLM.

Main code used:

from autogen import ConversableAgent

llm_configInfo = {
    "config_list": [{
        "model": "hugging-quants/Llama-3.2-1B-Instruct-Q8_0-GGUF",
        "base_url": "http://127.0.0.1:1234/v1",
        "api_key": "lm-studio",
    }]
}

agent = ConversableAgent(
    "chatbot",
    llm_config=llm_configInfo,  # The agent will use the LLM config provided to answer
    human_input_mode="NEVER",  # Can also be ALWAYS or TERMINATE (at end only)
)

# Use "money" instead of "monkey" here to trigger the error described above.
reply = agent.generate_reply(messages=[{"content": "Tell me a fun fact about monkey.",
                                        "role": "user"}])
print(reply)

AutoGen version

0.7.3

Which package was this bug in

AgentChat

Model used

llama-3.2-1b-instruct

Python version

3.12.7

Operating system

Windows

Any additional info you think would be helpful for fixing this bug

No response

ekzhu (Collaborator) commented Feb 1, 2025

@alchemistkairos

You are using the wrong package. We don't publish the autogen package -- please check the info on PyPI.

Please install autogen-agentchat and autogen-ext[openai]:
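
pip install -U autogen-agentchat "autogen-ext[openai]"

The code below should work for you: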

import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(
        model="llama-3.2-1b-instruct",
        api_key="YOUR_API_KEY",
        base_url="http://localhost:1234/v1",
        model_info={
            "family": "llama",
            "function_calling": False,
            "json_output": False,
            "vision": False,
        },
    )

    agent = AssistantAgent(
        "assistant", 
        model_client=model_client, 
        system_message="You are a helpful assistant.", 
        model_client_stream=True,
    )

    await Console(agent.run_stream(task="Tell me a fun fact about monkey."))


asyncio.run(main())
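
Note the explicit model_info: since llama-3.2-1b-instruct is not a model the client recognizes on its own, its capabilities (function calling, JSON output, vision) have to be declared up front.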

ekzhu closed this as completed Feb 1, 2025