instructor.Mode.PARALLEL_TOOLS does not handle the case when OpenAI decides to not make a function call #1157

Closed
chiradeep opened this issue Nov 8, 2024 · 2 comments
Labels: bug

@chiradeep

- [x] This is actually a bug report.
- [ ] I am not getting good LLM results.
- [ ] I have tried asking for help in the community on Discord or discussions and have not received a response.
- [x] I have tried searching the documentation and have not found an answer.

What model are you using?

- [ ] gpt-3.5-turbo
- [ ] gpt-4-turbo
- [ ] gpt-4
- [x] gpt-4o

Describe the bug
Using the example from https://python.useinstructor.com/concepts/parallel/?h=iterable#understanding-parallel-function-calling: if the user message is something that OpenAI decides does not need a function call, iterating over the result raises an error.

```python
# Setup added for completeness; Weather and GoogleSearch follow the linked docs example (abridged).
import instructor
from openai import OpenAI
from typing import Iterable
from pydantic import BaseModel

class Weather(BaseModel):
    location: str

class GoogleSearch(BaseModel):
    query: str

client = instructor.from_openai(OpenAI(), mode=instructor.Mode.PARALLEL_TOOLS)

function_calls = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You must always use tools"},
        {"role": "user", "content": "Hello!"},
    ],
    response_model=Iterable[Weather | GoogleSearch],
)
```
The exception:

File "demos/parallel.py", line 35, in <module>
    for fc in function_calls:
  File ".venv/lib/python3.10/site-packages/instructor/dsl/parallel.py", line 40, in from_response
    for tool_call in response.choices[0].message.tool_calls:
TypeError: 'NoneType' object is not iterable

To Reproduce
See the snippet above.

Expected behavior
It's actually not clear what the desirable result is. Even if I add a `Greeting` class as an item in the Iterable, the model doesn't always choose to call a tool (it works most of the time, but for complex system prompts and user messages it isn't guaranteed). Perhaps raise an exception whenever `choices[0].finish_reason == "stop"` and attach the completion response to the exception, as sketched below.
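
A minimal sketch of that idea against the parser in `instructor/dsl/parallel.py` (the class and function bodies here are hypothetical, not the library's actual code):

```python
# Hypothetical sketch: guard the iteration in instructor/dsl/parallel.py
class NoToolCallError(Exception):
    """Raised when the model answers in text instead of calling a tool."""

    def __init__(self, completion):
        super().__init__("no tool calls returned (finish_reason == 'stop')")
        self.completion = completion  # let the caller inspect the raw response


def from_response(response):
    message = response.choices[0].message
    if not message.tool_calls:  # None or empty: the model chose plain text
        raise NoToolCallError(response)
    for tool_call in message.tool_calls:
        ...  # existing parsing logic continues here
```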


@chiradeep (Author)

the "finish_reson" == "stop" is not unusual as explained in OpenAI docs
https://platform.openai.com/docs/guides/function-calling#edge-cases
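
For instance, with the plain openai client (tool schema abridged and illustrative), the model can simply decide to answer in text:

```python
from openai import OpenAI

raw_client = OpenAI()
completion = raw_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # illustrative tool, not from the repro
                "parameters": {
                    "type": "object",
                    "properties": {"location": {"type": "string"}},
                },
            },
        }
    ],
)
choice = completion.choices[0]
# When the model decides no tool is needed, finish_reason is "stop"
# and message.tool_calls is None; the answer comes back as plain text.
if choice.finish_reason == "stop":
    print(choice.message.content)
```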

@ivanleomk (Collaborator)

Hmm, I'm of the opinion that this is good behaviour and intended. I would honestly do some prompt engineering here to force the model to call the tool, and add some exception handling to catch the error and retry the call (see the sketch below).

You really don't want some random default object running around your application that you forget to update down the line tbh.
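
A rough sketch of that catch-and-retry idea, assuming the `client`, models, and a `messages` list from the reproduction above (a workaround sketch, not a pattern the library prescribes):

```python
from typing import Iterable

results = None
for attempt in range(3):  # bounded retries
    try:
        function_calls = client.chat.completions.create(
            model="gpt-4o",
            messages=messages,  # same messages as in the reproduction
            response_model=Iterable[Weather | GoogleSearch],
        )
        results = list(function_calls)  # force iteration so the TypeError surfaces here
        break
    except TypeError:  # model returned no tool calls; try again
        continue
if results is None:
    raise RuntimeError("model never produced a tool call after 3 attempts")
```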

Going to close this issue in light of that.
