
Error: Missing value for input variable history with ConversationSummaryBufferMemory #6718

Open
mauriciocirelli opened this issue Sep 9, 2024 · 1 comment
Labels
auto:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

@mauriciocirelli
Contributor

mauriciocirelli commented Sep 9, 2024

Hello,

I am using this exact example from the documentation:

// Imports (assumed package paths for langchain / @langchain/* as of this release)
import { ChatOpenAI } from "@langchain/openai";
import {
  ChatPromptTemplate,
  SystemMessagePromptTemplate,
  HumanMessagePromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { ConversationSummaryBufferMemory } from "langchain/memory";
import { ConversationChain } from "langchain/chains";

// Initialize the memory with a specific model and token limit
const memory = new ConversationSummaryBufferMemory({
  llm: new ChatOpenAI({ modelName: "gpt-3.5-turbo-instruct", temperature: 0 }),
  maxTokenLimit: 10,
});

// Save conversation context to memory
await memory.saveContext({ input: "hi" }, { output: "whats up" });
await memory.saveContext({ input: "not much you" }, { output: "not much" });

// Load the conversation history from memory
const history = await memory.loadMemoryVariables({});
console.log({ history });

// Create a chat prompt using the conversation history
const chatPrompt = ChatPromptTemplate.fromMessages([
  SystemMessagePromptTemplate.fromTemplate(
    "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.",
  ),
  new MessagesPlaceholder("history"),
  HumanMessagePromptTemplate.fromTemplate("{input}"),
]);

// Initialize the conversation chain with the model, memory, and prompt
const chain = new ConversationChain({
  llm: new ChatOpenAI({ temperature: 0.9, verbose: true }),
  memory: memory,
  prompt: chatPrompt,
});

The history variable is never injected into the prompt, so calling the chain fails with the error in the title.

Is this a bug in the docs? Should the template be formatted differently so that the history variable gets injected?

Thank you.
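For context on the failure mode, here is a minimal sketch in plain TypeScript (no LangChain dependency; the formatTemplate helper is hypothetical, written only to illustrate why a prompt template with a declared but unsupplied variable raises this kind of error):

```typescript
// Hypothetical minimal template formatter: every declared input variable
// must be supplied at format time, otherwise it throws the same kind of
// "Missing value" error reported in this issue.
function formatTemplate(
  template: string,
  inputVariables: string[],
  values: Record<string, string>
): string {
  for (const name of inputVariables) {
    if (!(name in values)) {
      throw new Error(`Missing value for input variable ${name}`);
    }
  }
  return template.replace(/\{(\w+)\}/g, (_, name: string) => values[name] ?? "");
}

// Supplying only "input" while the template also declares "history"
// reproduces the error: the chain must inject "history" from memory,
// or formatting fails before the model is ever called.
try {
  formatTemplate("{history}\nHuman: {input}", ["history", "input"], {
    input: "hi",
  });
} catch (e) {
  console.log((e as Error).message);
}
```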

System Info

npm info langchain

[email protected] | MIT | deps: 14 | versions: 291
Typescript bindings for langchain
https://github.com/langchain-ai/langchainjs/tree/main/langchain/


dosubot bot commented Sep 9, 2024

Yes, the template should be formatted to inject the history variable. The RunnableWithMessageHistory class is designed to manage and inject message history into the chain input. The example provided in the documentation shows that the history variable is intended to be used within the prompt template:

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You're an assistant who's good at {ability}"],
  new MessagesPlaceholder("history"),
  ["human", "{question}"],
]);

Additionally, the test case in history_aware_retriever.test.ts further confirms that the history (or chat_history) variable is expected to be injected into the prompt. The test demonstrates how the chat_history is passed and utilized within the prompt template:

const questionGenPrompt = ChatPromptTemplate.fromTemplate(
  `hi! {input} {chat_history}`
);

If the history variable is not being injected, it could be due to a misconfiguration or a bug in the implementation. The documentation also provides a detailed example of how to set up and use the RunnableWithMessageHistory class to ensure that message history is correctly managed and injected:

import { RunnableWithMessageHistory } from "@langchain/core/runnables";
import { UpstashRedisChatMessageHistory } from "@langchain/community/stores/message/upstash_redis";

const chainWithHistory = new RunnableWithMessageHistory({
  runnable: chain,
  getMessageHistory: (sessionId) =>
    new UpstashRedisChatMessageHistory({
      sessionId,
      config: {
        url: process.env.UPSTASH_REDIS_REST_URL!,
        token: process.env.UPSTASH_REDIS_REST_TOKEN!,
      },
    }),
  inputMessagesKey: "question",
  historyMessagesKey: "history",
});

This setup ensures that the history variable is properly injected into the prompt template, allowing the chain to utilize historical messages effectively [1][2][3].
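Conceptually, the injection step can be sketched in plain TypeScript (this is not the actual RunnableWithMessageHistory implementation; withMessageHistory, store, and the Message type are hypothetical names for illustration): load the stored messages for the session, merge them into the input under the history key so the prompt's placeholder is always satisfied, invoke the inner runnable, then append the new turn to the store.

```typescript
// Hypothetical sketch of a message-history wrapper: it guarantees the
// history variable is present in the input before the prompt is formatted.
type Message = { role: "human" | "ai"; content: string };

function withMessageHistory(
  runnable: (input: Record<string, unknown>) => string,
  store: Map<string, Message[]>,
  inputMessagesKey: string,
  historyMessagesKey: string
) {
  return (sessionId: string, input: Record<string, unknown>): string => {
    const history = store.get(sessionId) ?? [];
    // Inject stored messages so the prompt placeholder is never missing.
    const output = runnable({ ...input, [historyMessagesKey]: history });
    // Persist the new human/AI turn for the next invocation.
    store.set(sessionId, [
      ...history,
      { role: "human", content: String(input[inputMessagesKey]) },
      { role: "ai", content: output },
    ]);
    return output;
  };
}
```

In this sketch the caller never supplies history directly; the wrapper owns it per session, which is the same division of responsibility the real class provides via getMessageHistory and historyMessagesKey.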


@dosubot dosubot bot added the auto:bug Related to a bug, vulnerability, unexpected error with an existing feature label Sep 9, 2024
1 participant