
AWS ChatBedrock invoke() returns None when used with with_structured_output() method #338

Open

jimthompson5802 opened this issue Jan 19, 2025 · 2 comments
Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

This code assumes the relevant environment variables for Bedrock and OpenAI are defined with the respective API keys.

The code calls the Bedrock Mistral LLM with and without the with_structured_output() method.

It then calls the OpenAI gpt-4o-mini model, again with and without with_structured_output().

from langchain_aws import ChatBedrock, ChatBedrockConverse
from langchain_aws.chat_models.bedrock import convert_messages_to_prompt_mistral
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage


from typing import TypedDict
from pydantic import BaseModel


bedrock_mistral_model = ChatBedrock(model_id="mistral.mistral-7b-instruct-v0:2")

openai_model = ChatOpenAI(model="gpt-4o-mini")

prompt = [
    SystemMessage("you are a helpful assistant knowledgeable about geography."),
    HumanMessage("what is capital of hawaii? only return the city's name"),
]


def call_with_structured_output(this_model, this_prompt):
    class TestReturn(TypedDict):
        response_text: str

    interface_name = this_model.__class__.__name__
    model_id = this_model.model_id if hasattr(this_model, "model_id") else this_model.model_name

    if model_id.startswith("mistral"):
        prompt = convert_messages_to_prompt_mistral(this_prompt)
    else:
        prompt = this_prompt

    structured_call = this_model.with_structured_output(TestReturn)
    response = structured_call.invoke(prompt)

    print(f"\n\n{interface_name} for {model_id} WITH structured output:\n{response}")


def call_no_structured_output(this_model, this_prompt):

    interface_name = this_model.__class__.__name__
    model_id = this_model.model_id if hasattr(this_model, "model_id") else this_model.model_name

    if model_id.startswith("mistral"):
        prompt = convert_messages_to_prompt_mistral(this_prompt)
    else:
        prompt = this_prompt

    response = this_model.invoke(prompt)
    print(f"\n\n{interface_name} for {model_id} NO structured output:\n{response}")


call_with_structured_output(bedrock_mistral_model, prompt)
call_no_structured_output(bedrock_mistral_model, prompt)

call_with_structured_output(openai_model, prompt)
call_no_structured_output(openai_model, prompt)
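
For context on what the Mistral branch in the example does: convert_messages_to_prompt_mistral flattens the list of chat messages into a single instruct-style prompt string before invoking the model. The sketch below is a rough, hypothetical approximation of that flattening for illustration only; the real helper in langchain_aws may differ in its exact formatting.

```python
# Hypothetical approximation of flattening chat messages into a single
# Mistral-instruct prompt string; the actual langchain_aws helper may
# differ in the exact tokens and spacing it emits.
def flatten_to_mistral_prompt(messages: list[tuple[str, str]]) -> str:
    parts = []
    for role, content in messages:
        if role in ("system", "human"):
            # Instruction-style turns are wrapped in [INST] ... [/INST]
            parts.append(f"[INST] {content} [/INST]")
        else:
            # Assistant turns are appended verbatim
            parts.append(content)
    return "\n".join(parts)


prompt_text = flatten_to_mistral_prompt([
    ("system", "you are a helpful assistant knowledgeable about geography."),
    ("human", "what is capital of hawaii? only return the city's name"),
])
print(prompt_text)
```

The point is simply that ChatBedrock with a Mistral model ends up sending one flat string rather than a role-structured message list, which is relevant to the structured-output behavior described below.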

Error Message and Stack Trace (if applicable)

Output from the minimal reproducible example above.

The issue is the first output, which prints None; there should be a response from the Bedrock Mistral LLM.


$ /usr/local/bin/python /workspaces/connection_solver/src/agent_testbed/bedrock_structured_output_mre.py


ChatBedrock for mistral.mistral-7b-instruct-v0:2 WITH structured output:
None


ChatBedrock for mistral.mistral-7b-instruct-v0:2 NO structured output:
content=' Hawaii does not have a capital city. It is a state in the United States, and the entire state functions as a single entity with no separate capital city. The largest city in Hawaii is Honolulu, which is home to the state government.' additional_kwargs={'usage': {'prompt_tokens': 52, 'completion_tokens': 51, 'total_tokens': 103}, 'stop_reason': None, 'model_id': 'mistral.mistral-7b-instruct-v0:2'} response_metadata={'usage': {'prompt_tokens': 52, 'completion_tokens': 51, 'total_tokens': 103}, 'stop_reason': None, 'model_id': 'mistral.mistral-7b-instruct-v0:2'} id='run-9be77e46-0448-48c9-a01c-b0ced583fc91-0' usage_metadata={'input_tokens': 52, 'output_tokens': 51, 'total_tokens': 103}


ChatOpenAI for gpt-4o-mini WITH structured output:
{'response_text': 'Honolulu'}


ChatOpenAI for gpt-4o-mini NO structured output:
content='Honolulu' additional_kwargs={'refusal': None} response_metadata={'token_usage': {'completion_tokens': 3, 'prompt_tokens': 35, 'total_tokens': 38, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_72ed7ab54c', 'finish_reason': 'stop', 'logprobs': None} id='run-995d9209-06ea-459e-8d23-b97d4c2cd400-0' usage_metadata={'input_tokens': 35, 'output_tokens': 3, 'total_tokens': 38, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}

Description

I'm trying to get structured output from a call to a Bedrock foundation model. However, when I use with_structured_output(), I get None back instead of the LLM response.
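
For anyone triaging this: with_structured_output() generally works by binding the schema as a tool (or a JSON output format) and then parsing the structured arguments out of the model's reply; when the model answers in plain prose instead of emitting a parseable payload, the parser has nothing to return. The snippet below is a hypothetical, simplified illustration of that failure mode, not the actual langchain-aws parsing code.

```python
import json
from typing import Optional


def parse_structured_reply(raw_reply: str) -> Optional[dict]:
    """Hypothetical sketch: try to extract structured arguments from a
    model reply. Returns the parsed dict when the reply is a JSON object,
    and None when the model answered in plain prose -- mirroring the None
    observed in this issue."""
    try:
        parsed = json.loads(raw_reply)
    except json.JSONDecodeError:
        return None  # plain-prose reply: nothing structured to extract
    return parsed if isinstance(parsed, dict) else None


# A prose reply (like the Bedrock Mistral output above) yields None:
print(parse_structured_reply("Hawaii's capital is Honolulu."))  # None
# A JSON payload yields the structured dict, as ChatOpenAI does here:
print(parse_structured_reply('{"response_text": "Honolulu"}'))
```

If the Mistral model is not being prompted in a way that elicits a tool call or JSON output, a None result like the one reported would be the expected symptom of this kind of parsing.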

Note that there is related discussion of this situation in this thread: langchain-ai/langchain#22701

Since I don't see an existing issue covering this combination of Bedrock and the with_structured_output() method, I'm submitting this one.

System Info

$ python -m langchain_core.sys_info

System Information
------------------
> OS:  Linux
> OS Version:  #1 SMP PREEMPT_DYNAMIC Fri Nov 29 17:24:06 UTC 2024
> Python Version:  3.11.7 (main, Dec 19 2023, 20:42:30) [GCC 10.2.1 20210110]

Package Information
-------------------
> langchain_core: 0.3.30
> langchain: 0.3.14
> langchain_community: 0.3.14
> langsmith: 0.2.11
> langchain_aws: 0.2.11
> langchain_openai: 0.3.0
> langchain_text_splitters: 0.3.5
> langgraph_sdk: 0.1.51

Optional packages not installed
-------------------------------
> langserve

Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: Installed. No version info available.
> boto3: 1.36.2
> dataclasses-json: 0.6.7
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.59.8
> orjson: 3.10.15
> packaging: 24.2
> pydantic: 2.10.5
> pydantic-settings: 2.7.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.37
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: Installed. No version info available.
langcarl bot added the investigate label Jan 19, 2025
hyejungg commented Jan 21, 2025

@jimthompson5802

Hi, I'm hitting the same issue.

We're on different versions, but looking at the code, we have one thing in common: we're both using ChatPromptTemplate.from_messages().

  • my versions
langchain                 0.3.15
langchain-aws             0.2.10
langchain-community       0.3.15
langchain-core            0.3.31

As a workaround, use ChatPromptTemplate.from_template() instead of ChatPromptTemplate.from_messages(); with that change, None is no longer returned.

But this is definitely a bug.

jimthompson5802 (Author) commented
@hyejungg thank you for the pointer. I'll consider the recommendation.

@efriis efriis transferred this issue from langchain-ai/langchain Jan 24, 2025