
Generated tool calls should have a type key, requests with messages with tool calls without type returning a 424 error #35

Closed
charbeltabet opened this issue Feb 10, 2025 · 3 comments · Fixed by #50
Assignees
Labels
bug Something isn't working langchain-azure-ai Azure AI integration package

Comments

@charbeltabet
Contributor

Description

When comparing the tool calls generated by ChatOpenAI and AzureAIChatCompletionsModel, we see that the Azure integration's tool calls lack the 'type' key. In the OpenAI case, each tool call has 'type': 'function'.

This is a problem because sending such a tool call (without type) back to Azure in a messages array returns a 424 error response:

Message: Missing required parameter: 'messages[2].tool_calls[0].type'.

Steps to reproduce

Note that both are using the same gpt-4o-mini-2024-07-18 model

import os
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langchain_core.messages import HumanMessage

AZURE_ENDPOINT = os.getenv("AZURE_ENDPOINT")
AZURE_CREDENTIAL = os.getenv("AZURE_CREDENTIAL")

llm = AzureAIChatCompletionsModel(
    endpoint=AZURE_ENDPOINT,
    credential=AZURE_CREDENTIAL,
    model_name="gpt-4o-mini",
)

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")  # read by ChatOpenAI from the environment
openai_llm = ChatOpenAI(model="gpt-4o-mini")

@tool
def dummy_tool(query: str) -> str:
    """Dummy tool for testing purposes.

    Args:
        query: The query to return.

    Returns:
        str: The query.
    """
    return query

dummy_tools = [dummy_tool]
azure_llm_with_tools = llm.bind_tools(dummy_tools)
openai_llm_with_tools = openai_llm.bind_tools(dummy_tools)

azure_tool_calls = azure_llm_with_tools.invoke([HumanMessage(content="Test the dummy tool")])
openai_tool_calls = openai_llm_with_tools.invoke([HumanMessage(content="Test the dummy tool")])

print("Azure tool calls:", azure_tool_calls)
print("OpenAI tool calls:", openai_tool_calls)

# Azure tool calls: content='' additional_kwargs={'tool_calls': [{'id': 'call_BSGpqwvmysTuwZRs6ZnSrkV6', 'name': 'dummy_tool', 'args': {'query': 'Test query'}}]} response_metadata={'model': 'gpt-4o-mini-2024-07-18', 'token_usage': {'input_tokens': 65, 'output_tokens': 16, 'total_tokens': 81}, 'finish_reason': 'tool_calls'} id='run-cfda79c9-69b5-460e-aea5-5e1c8e7f8ca0-0' tool_calls=[{'name': 'dummy_tool', 'args': {'query': 'Test query'}, 'id': 'call_BSGpqwvmysTuwZRs6ZnSrkV6', 'type': 'tool_call'}] usage_metadata={'input_tokens': 65, 'output_tokens': 16, 'total_tokens': 81}
# OpenAI tool calls: content='' additional_kwargs={'tool_calls': [{'id': 'call_V26Ltq82s3zysorgtUsapKeW', 'function': {'arguments': '{"query":"Test query for dummy tool."}', 'name': 'dummy_tool'}, 'type': 'function'}], 'refusal': None} response_metadata={'token_usage': {'completion_tokens': 21, 'prompt_tokens': 65, 'total_tokens': 86, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_72ed7ab54c', 'finish_reason': 'tool_calls', 'logprobs': None} id='run-daaf6413-b793-4fa7-940b-ab36eaf0b1a5-0' tool_calls=[{'name': 'dummy_tool', 'args': {'query': 'Test query for dummy tool.'}, 'id': 'call_V26Ltq82s3zysorgtUsapKeW', 'type': 'tool_call'}] usage_metadata={'input_tokens': 65, 'output_tokens': 21, 'total_tokens': 86, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}

# Note difference in tool calls:
# Azure: [{'id': 'call_BSGpqwvmysTuwZRs6ZnSrkV6', 'name': 'dummy_tool', 'args': {'query': 'Test query'}}]
# OpenAI: [{'id': 'call_V26Ltq82s3zysorgtUsapKeW', 'function': {'arguments': '{"query":"Test query for dummy tool."}', 'name': 'dummy_tool'}, 'type': 'function'}]
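Until the integration is fixed, the missing key can be backfilled client-side before sending the message history back to Azure. The sketch below is illustrative only; ensure_tool_call_types is a hypothetical helper, not part of either library:

```python
def ensure_tool_call_types(additional_kwargs: dict) -> dict:
    """Backfill the 'type' key on serialized tool calls.

    Azure's chat completions API rejects assistant messages whose
    tool_calls entries lack 'type', so default it to 'function'.
    Mutates and returns the dict for convenience.
    """
    for call in additional_kwargs.get("tool_calls", []):
        call.setdefault("type", "function")
    return additional_kwargs

# Example: the Azure-shaped tool call from above, which lacks 'type'
kwargs = {"tool_calls": [{"id": "call_BSGpqwvmysTuwZRs6ZnSrkV6",
                          "name": "dummy_tool",
                          "args": {"query": "Test query"}}]}
ensure_tool_call_types(kwargs)
# kwargs["tool_calls"][0] now also carries 'type': 'function'
```

Applying this to each AIMessage's additional_kwargs before re-invoking the model avoids the 424 response without touching the library.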

Proposed solution

Looking at OpenAI's documentation on this matter, we read that:

Many of these fields are only set for the first delta of each tool call, like id, function.name, and type.

However, I think type should be included every time, so that we don't get Message: Missing required parameter: 'messages[INDEX].tool_calls[0].type'

The conversion function at libs/azure-ai/langchain_azure_ai/chat_models/inference.py:256 should be fixed accordingly.
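A minimal sketch of what that fix could look like (hypothetical names; the real function in inference.py differs): when converting the SDK's tool calls into the OpenAI-style dicts stored in additional_kwargs, always emit the 'type' key instead of relying on it arriving with every delta:

```python
def format_tool_call(sdk_call: dict) -> dict:
    """Convert an SDK tool call into an OpenAI-compatible dict.

    Hypothetical sketch: unconditionally include 'type' so the message
    can be round-tripped back to the service without a 424 error.
    """
    return {
        "id": sdk_call["id"],
        "type": "function",  # previously omitted; required on round-trip
        "function": {
            "name": sdk_call["name"],
            "arguments": sdk_call["arguments"],
        },
    }
```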

@charbeltabet
Contributor Author

charbeltabet commented Feb 10, 2025

UPDATE: I no longer receive this error after setting the type property correctly, see this 1-line PR: #36 which solves this issue

@santiagxf
Collaborator

@charbeltabet thanks for reaching out. Let us take a closer look at the issue and come back to you!

@santiagxf santiagxf added bug Something isn't working langchain-azure-ai Azure AI integration package labels Feb 10, 2025
@santiagxf santiagxf self-assigned this Feb 10, 2025
@marlenezw
Collaborator

marlenezw commented Feb 10, 2025

This issue was fixed with #36!

@santiagxf santiagxf linked a pull request Feb 25, 2025 that will close this issue