
The hub.pull function with the includeModel option is not working as expected when used with StructuredPrompt #7592

Open
KoreanThinker opened this issue Jan 25, 2025 · 2 comments
Labels
auto:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

KoreanThinker (Contributor) commented Jan 25, 2025

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain.js documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain.js rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

get-quick-replies is a prompt defined in the LangChain Hub using StructuredPrompt. However, the following code does not work as expected:

const prompt = await hub.pull("get-quick-replies", { includeModel: true });

// includeModel: true attaches the model defined on the Hub prompt, so the
// pulled runnable is invoked directly.
const result = await prompt.invoke({
  question,
  answer,
});

console.log(result.quickReplies)
// undefined
console.log(result.content)
// "reply 1\nreply 2\n..."

As an alternative, I’m using the following approach, which works as intended:

const prompt = await hub.pull("get-quick-replies");
// azureOpenaiMini is an AzureChatOpenAI instance defined elsewhere.
const model = prompt.pipe(azureOpenaiMini);

const result = await model.invoke({
  question,
  answer,
});

console.log(result.quickReplies)
// ["reply 1", "reply 2", ...]

Error Message and Stack Trace (if applicable)

No response

Description

When using the get-quick-replies prompt from LangChain Hub with the includeModel option, the structured output does not work as expected. The quickReplies property is missing from the response, and only raw content is returned. However, when the includeModel option is not used and the model is piped manually, the prompt works correctly and returns the quickReplies property as expected.
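
One way to narrow this down is to compare what hub.pull returns in each case. A minimal debugging sketch, assuming the same Hub prompt (the logged class names are only illustrative):

import * as hub from "langchain/hub";

const withModel = await hub.pull("get-quick-replies", { includeModel: true });
const withoutModel = await hub.pull("get-quick-replies");

// If includeModel drops the StructuredPrompt's output schema, the two
// runnables should differ in type and/or in the steps they contain.
console.log(withModel.constructor.name);
console.log(withoutModel.constructor.name);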

System Info

node = v22.11.0
yarn = v1.22.22
"@langchain/community": "^0.3.27",
"@langchain/core": "^0.3.35",
"@langchain/google-genai": "^0.1.6",
"@langchain/openai": "^0.4.0",
"@langchain/pinecone": "^0.1.3",
"langchain": "^0.3.13",
"langsmith": "^0.3.3",

dosubot bot commented Jan 25, 2025

Hello, @KoreanThinker! 👋 I'm here to help you with any bugs, questions, or contributions you have in mind. Let's squash those bugs together!

The issue you're experiencing with the hub.pull function and the includeModel option in LangChain.js seems to be related to how the structured output is handled. When using includeModel, the quickReplies property is not being returned as expected, whereas manually piping the model works correctly.

The includeModel option is designed to automatically instantiate and attach a model instance to the prompt if the prompt has associated model metadata. This should streamline the process, but it seems there might be a discrepancy in how the structured output is processed when using this option [1].

One potential reason for this issue could be that the includeModel option does not fully integrate with the StructuredPrompt's schema, which is necessary for producing structured outputs like quickReplies. The manual piping approach allows you to explicitly ensure that the model and prompt are correctly configured to handle structured data [2].

As a workaround, continue using the manual piping method, which seems to work as intended. This approach gives you more control over the integration between the prompt and the model, ensuring that the structured output is correctly formatted.
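
A minimal sketch of that workaround with the output schema bound explicitly, assuming a Zod schema (quickRepliesSchema here is hypothetical and would need to mirror the schema defined on the Hub prompt) and Azure OpenAI credentials in the environment:

import { z } from "zod";
import * as hub from "langchain/hub";
import { AzureChatOpenAI } from "@langchain/openai";

// Hypothetical schema mirroring the structured output defined on the Hub prompt.
const quickRepliesSchema = z.object({
  quickReplies: z.array(z.string()),
});

const model = new AzureChatOpenAI({ temperature: 0 });
const prompt = await hub.pull("get-quick-replies");

// Binding the schema explicitly guarantees structured output regardless of
// what metadata hub.pull attaches.
const chain = prompt.pipe(model.withStructuredOutput(quickRepliesSchema));

const result = await chain.invoke({ question: "...", answer: "..." });
console.log(result.quickReplies); // ["reply 1", "reply 2", ...]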


dosubot added the auto:bug label on Jan 25, 2025
jacoblee93 (Collaborator) commented
Thanks for reporting. Will look into this!
