chat ollama with_structured_output fails with union type #28090

Open · eyurtsev opened this issue Nov 13, 2024 · 4 comments
Labels
Ɑ: core Related to langchain-core

Comments

@eyurtsev (Collaborator)

Privileged issue

  • I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.

Issue Content

from typing import Optional, Union

from pydantic import BaseModel, Field

from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1", temperature=0)

# Pydantic
class Joke(BaseModel):
    """Joke to tell user."""
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: Optional[int] = Field(
        default=None, description="How funny the joke is, from 1 to 10"
    )


class ConversationalResponse(BaseModel):
    """Respond in a conversational manner. Be kind and helpful."""
    response: str = Field(description="A conversational response to the user's query")


class FinalResponse(BaseModel):
    final_output: Union[Joke, ConversationalResponse]

structured_llm = llm.with_structured_output(FinalResponse)
structured_llm.invoke("Tell me a joke about cats")

Raises here: #27415
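
For context (an added note, not part of the original report): the stack trace below goes through PydanticToolsParser, i.e. with_structured_output is binding FinalResponse as a tool and asking the model to satisfy its JSON schema. The union field surfaces as an anyOf in that schema, which small local models often struggle with; printing the schema makes this visible. A minimal sketch, assuming the Pydantic v2 models defined above:

import json

# The union field appears as "anyOf": [Joke, ConversationalResponse] in the
# schema the model is asked to fill.
print(json.dumps(FinalResponse.model_json_schema(), indent=2))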

@eyurtsev (Collaborator, Author)

Stack trace:

---------------------------------------------------------------------------
ValidationError                           Traceback (most recent call last)
Cell In[31], line 25
     22     final_output: Union[Joke, ConversationalResponse]
     24 structured_llm = llm.with_structured_output(FinalResponse)
---> 25 structured_llm.invoke("Tell me a joke about cats")

File ~/.pyenv/versions/3.11.4/envs/core_3_11_4/lib/python3.11/site-packages/langchain_core/runnables/base.py:3024, in RunnableSequence.invoke(self, input, config, **kwargs)
   3022             input = context.run(step.invoke, input, config, **kwargs)
   3023         else:
-> 3024             input = context.run(step.invoke, input, config)
   3025 # finish the root run
   3026 except BaseException as e:

File ~/.pyenv/versions/3.11.4/envs/core_3_11_4/lib/python3.11/site-packages/langchain_core/output_parsers/base.py:193, in BaseOutputParser.invoke(self, input, config, **kwargs)
    186 def invoke(
    187     self,
    188     input: Union[str, BaseMessage],
    189     config: Optional[RunnableConfig] = None,
    190     **kwargs: Any,
    191 ) -> T:
    192     if isinstance(input, BaseMessage):
--> 193         return self._call_with_config(
    194             lambda inner_input: self.parse_result(
    195                 [ChatGeneration(message=inner_input)]
    196             ),
    197             input,
    198             config,
    199             run_type="parser",
    200         )
    201     else:
    202         return self._call_with_config(
    203             lambda inner_input: self.parse_result([Generation(text=inner_input)]),
    204             input,
    205             config,
    206             run_type="parser",
    207         )

File ~/.pyenv/versions/3.11.4/envs/core_3_11_4/lib/python3.11/site-packages/langchain_core/runnables/base.py:1927, in Runnable._call_with_config(self, func, input, config, run_type, serialized, **kwargs)
   1923     context = copy_context()
   1924     context.run(_set_config_context, child_config)
   1925     output = cast(
   1926         Output,
-> 1927         context.run(
   1928             call_func_with_variable_args,  # type: ignore[arg-type]
   1929             func,  # type: ignore[arg-type]
   1930             input,  # type: ignore[arg-type]
   1931             config,
   1932             run_manager,
   1933             **kwargs,
   1934         ),
   1935     )
   1936 except BaseException as e:
   1937     run_manager.on_chain_error(e)

File ~/.pyenv/versions/3.11.4/envs/core_3_11_4/lib/python3.11/site-packages/langchain_core/runnables/config.py:396, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
    394 if run_manager is not None and accepts_run_manager(func):
    395     kwargs["run_manager"] = run_manager
--> 396 return func(input, **kwargs)

File ~/.pyenv/versions/3.11.4/envs/core_3_11_4/lib/python3.11/site-packages/langchain_core/output_parsers/base.py:194, in BaseOutputParser.invoke.<locals>.<lambda>(inner_input)
    186 def invoke(
    187     self,
    188     input: Union[str, BaseMessage],
    189     config: Optional[RunnableConfig] = None,
    190     **kwargs: Any,
    191 ) -> T:
    192     if isinstance(input, BaseMessage):
    193         return self._call_with_config(
--> 194             lambda inner_input: self.parse_result(
    195                 [ChatGeneration(message=inner_input)]
    196             ),
    197             input,
    198             config,
    199             run_type="parser",
    200         )
    201     else:
    202         return self._call_with_config(
    203             lambda inner_input: self.parse_result([Generation(text=inner_input)]),
    204             input,
    205             config,
    206             run_type="parser",
    207         )

File ~/.pyenv/versions/3.11.4/envs/core_3_11_4/lib/python3.11/site-packages/langchain_core/output_parsers/openai_tools.py:298, in PydanticToolsParser.parse_result(self, result, partial)
    296             continue
    297         else:
--> 298             raise e
    299 if self.first_tool_only:
    300     return pydantic_objects[0] if pydantic_objects else None

File ~/.pyenv/versions/3.11.4/envs/core_3_11_4/lib/python3.11/site-packages/langchain_core/output_parsers/openai_tools.py:293, in PydanticToolsParser.parse_result(self, result, partial)
    288         msg = (
    289             f"Tool arguments must be specified as a dict, received: "
    290             f"{res['args']}"
    291         )
    292         raise ValueError(msg)
--> 293     pydantic_objects.append(name_dict[res["type"]](**res["args"]))
    294 except (ValidationError, ValueError) as e:
    295     if partial:

File ~/.pyenv/versions/3.11.4/envs/core_3_11_4/lib/python3.11/site-packages/pydantic/main.py:212, in BaseModel.__init__(self, **data)
    210 # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
    211 __tracebackhide__ = True
--> 212 validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
    213 if self is not validated_self:
    214     warnings.warn(
    215         'A custom validator is returning a value other than `self`.\n'
    216         "Returning anything other than `self` from a top level model validator isn't supported when validating via `__init__`.\n"
    217         'See the `model_validator` docs (https://docs.pydantic.dev/latest/concepts/validators/#model-validators) for more details.',
    218         category=None,
    219     )

ValidationError: 2 validation errors for FinalResponse
final_output.Joke
  Input should be a valid dictionary or instance of Joke [type=model_type, input_value='Why did the cat join a b...be the purr-cussionist!', input_type=str]
    For further information visit https://errors.pydantic.dev/2.9/v/model_type
final_output.ConversationalResponse
  Input should be a valid dictionary or instance of ConversationalResponse [type=model_type, input_value='Why did the cat join a b...be the purr-cussionist!', input_type=str]
    For further information visit https://errors.pydantic.dev/2.9/v/model_type
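
(Added context, not in the original thread.) The errors say the model put a plain string into final_output where a nested Joke or ConversationalResponse object was expected, so validation fails against both branches of the union. The same two errors can be reproduced without any model involved; a minimal sketch using the classes defined above:

from pydantic import ValidationError

# Feeding a bare string where the union expects an object triggers the same
# two model_type errors, one per union branch.
try:
    FinalResponse(final_output="Why did the cat join a band? ...")
except ValidationError as e:
    print(e)  # 2 validation errors: final_output.Joke and final_output.ConversationalResponse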

@dosubot dosubot bot added the Ɑ: core Related to langchain-core label Nov 13, 2024
@eyurtsev (Collaborator, Author)

This is likely not an issue with the implementation; the model is failing to produce appropriately structured output, so it is a performance issue with the underlying model.

The TODO is to figure out how to improve the error messages so that such issues are easier for users to understand.
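
(An added debugging sketch, not part of the original comment; include_raw is a documented with_structured_output option.) Requesting the raw message alongside the parsed result surfaces the tool-call arguments the model actually emitted, instead of a bare ValidationError:

# include_raw=True returns {"raw": AIMessage, "parsed": ..., "parsing_error": ...},
# so the offending tool-call arguments can be inspected when parsing fails.
structured_llm = llm.with_structured_output(FinalResponse, include_raw=True)
result = structured_llm.invoke("Tell me a joke about cats")
if result["parsing_error"] is not None:
    print(result["raw"].tool_calls)  # what the model actually emitted
else:
    print(result["parsed"])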

@ENUMERA8OR

@eyurtsev Would like to contribute to this issue. Please tell me what to do.

@jooray commented Nov 13, 2024

What branch are you running? I get a NotImplementedError, which seems correct, because:

class Ollama(BaseLLM, _OllamaCommon):

does not implement with_structured_output, so the parent class raises this error.
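
(Added note, assuming the standard package layout.) The class with that signature is the legacy completion-style LLM in langchain_community, which, as described above, inherits the NotImplementedError from its base class; the repro at the top uses the chat model from langchain_ollama, which does implement with_structured_output:

# Hypothetical check showing the two distinct classes:
from langchain_community.llms import Ollama  # legacy LLM; with_structured_output not implemented
from langchain_ollama import ChatOllama      # chat model used in the repro; supports it

llm = ChatOllama(model="llama3.1", temperature=0)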
