
langgraph.errors.InvalidUpdateError: Expected dict, got conversational #37

Open · TechKemon opened this issue Oct 2, 2024 · 4 comments

TechKemon commented Oct 2, 2024

I'm getting this error repeatedly:

InvalidUpdateError: Expected dict, got conversational

The full code is from https://github.com/PeoplePlusAI/Sukoon:

from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
from langchain_core.prompts import ChatPromptTemplate
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver
from typing import TypedDict, List

# Define the state
class State(TypedDict):
    messages: List[HumanMessage | AIMessage]

# Initialize OpenAI model
model = ChatOpenAI(model="gpt-4o", temperature=0.1)

# Define prompts
planner_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a planner agent that decides which specialized agent to call based on the user's input. If the query indicates a risk of suicide or self-harm, respond with 'suicide_prevention'. Otherwise, respond with 'conversational'."),
    ("human", "{input}"),
])

conversational_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an empathetic conversational agent. Provide supportive responses to help relieve student stress."),
    ("human", "{input}"),
])

suicide_prevention_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a suicide prevention agent. Apply QPR (Question, Persuade, Refer) techniques and refer to trained professionals or suicide prevention helpline. Be extremely cautious and supportive."),
    ("human", "{input}"),
])

# Define node functions
def route_query(state: State):
    messages = state["messages"]
    last_message = messages[-1]
    
    response = model.invoke([planner_prompt.format(input=last_message.content)])
    return response.content.strip().lower()

def run_conversational_agent(state: State):
    response = model.invoke([conversational_prompt] + state["messages"])
    return {"messages": state["messages"] + [AIMessage(content=response.content)]}

def run_suicide_prevention_agent(state: State):
    response = model.invoke([suicide_prevention_prompt] + state["messages"])
    return {"messages": state["messages"] + [AIMessage(content=response.content)]}

def should_continue(state: State):
    if len(state["messages"]) > 15:
        return "end"
    return "router"

# Create the graph
workflow = StateGraph(State)

# Add nodes
workflow.add_node("router", route_query)
workflow.add_node("conversational", run_conversational_agent)
workflow.add_node("suicide_prevention", run_suicide_prevention_agent)

# Add edges
workflow.add_edge(START, "router")
workflow.add_conditional_edges(
    "router",
    lambda x: x,
    {
        "conversational": "conversational",
        "suicide_prevention": "suicide_prevention"
    }
)
workflow.add_conditional_edges(
    "conversational",
    should_continue,
    {
        "router": "router",
        "end": END
    }
)
workflow.add_edge("suicide_prevention", END)

# Compile the graph
memory = MemorySaver()
graph = workflow.compile(checkpointer=memory)

# Function to run a conversation turn
def chat(message: str, config: dict):
    result = graph.invoke({"messages": [HumanMessage(content=message)]}, config=config)
    return result["messages"][-1]

# Example usage
if __name__ == "__main__":
    config = {"configurable": {"thread_id": "test"}}
    
    response = chat("Hi! I'm feeling really stressed about my exams", config)
    print("Bot:", response.content)
    
    response = chat("I don't know if I can handle this stress anymore", config)
    print("Bot:", response.content)
shiv248 (Contributor) commented Oct 3, 2024

Hey @TechKemon,
A couple of things. First, I would recommend formatting your code with markdown code blocks so it's easier for readers to follow along.

Just from a quick glance, I have a feeling the error has something to do with this:

workflow.add_conditional_edges(
    "router",
    lambda x: x,
    {
        "conversational": "conversational",
        "suicide_prevention": "suicide_prevention"
    }
)

I would look into how you're formatting conditional edges; your second conditional edge is correct.

If you prefer to use a lambda, you need some way for the route_query function to "save" the chosen route into the state, so that the lambda can read it back with lambda state: state["route"]. But honestly, it will end up being harder to read and understand for someone at first glance.
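
For illustration, a minimal sketch of that pattern (the route key is hypothetical, added to your State):

class State(TypedDict):
    messages: List[HumanMessage | AIMessage]
    route: str  # hypothetical: holds the planner's routing decision

def route_query(state: State):
    last_message = state["messages"][-1]
    response = model.invoke(planner_prompt.format_messages(input=last_message.content))
    # A node must return a dict of state updates, never a bare string
    return {"route": response.content.strip().lower()}

workflow.add_conditional_edges(
    "router",
    lambda state: state["route"],  # read the saved route back out of the state
    {
        "conversational": "conversational",
        "suicide_prevention": "suicide_prevention"
    }
)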

Generally, routing is done like this example in cell 3 (### Router); you might want to look into that approach.
If you're looking for parallel node execution (the intended direction is unclear from your current code), I would look into this how-to.
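
For example, a minimal fan-out sketch (node names "a" through "d" are hypothetical) where "b" and "c" run in parallel:

workflow.add_edge(START, "a")
workflow.add_edge("a", "b")
workflow.add_edge("a", "c")  # "b" and "c" execute in the same step, in parallel
workflow.add_edge("b", "d")
workflow.add_edge("c", "d")  # "d" runs once both branches have completed
workflow.add_edge("d", END)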

I would also direct you to the community Slack for general LangGraph questions.

TechKemon (Author) commented:

Thanks, Shiv, for the prompt answer. I will try this and let you know.

TechKemon (Author) commented Oct 3, 2024

Hey, I'm getting these two errors repeatedly:
InvalidUpdateError: Expected dict, got conversational
and
Unhashable Dict

Code:

def route_query(state: State):
    messages = state["messages"]
    last_message = messages[-1]

    # Format the planner prompt
    formatted_messages = planner_prompt.format_messages(input=last_message.content)
    response = model.invoke(formatted_messages)
    print(response)

    # Append the response to messages as an AIMessage
    state["messages"].append(AIMessage(content=response.content))
    # messages = [AIMessage(content=response.content)]
    # state["summary"] = response.content
    # Determine the route based on the response content
    final = response.content.strip().lower()
    if "suicide prevention agent" in final:
        state["route"] = final
    elif "conversational agent" in final:
        state["route"] = final
    else:
        # Handle unexpected cases if necessary
        state["route"] = "unknown"

    # Return the updated route in the state
    return {"messages": response}

# Add nodes
workflow.add_node("router", route_query)
workflow.add_conditional_edges(
    "router",
    lambda state: state.get("route", "unknown"),
    {
        "suicide_prevention": "suicide_prevention",
        "conversational": "conversational",
        "unknown": END  # Or handle 'unknown' as needed
    }
)

shiv248 (Contributor) commented Oct 3, 2024

The unhashable dict is likely because a return value is a dict where the conditional expects a plain, hashable value. And the InvalidUpdateError: Expected dict, got conversational stems from the lambda usage in the conditional edge.
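
For illustration, minimal hypothetical reproductions of each failure:

# 1) InvalidUpdateError: Expected dict, got conversational
#    A node's return value must be a dict of state updates, not a bare string.
def route_query(state: State):
    return "conversational"  # wrong while route_query is registered as a node

# 2) unhashable dict
#    The value returned by the path function is used to look up the next node
#    in the path map, so it must be hashable (e.g. a string), not a dict.
workflow.add_conditional_edges(
    "router",
    lambda state: {"route": "conversational"},  # wrong: returns a dict
    {"conversational": "conversational"}
)

The full corrected version follows: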

from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
from pydantic import BaseModel, Field
from langgraph.graph.message import AnyMessage, add_messages
from langchain_core.prompts import ChatPromptTemplate
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver
from typing import Annotated, Literal, TypedDict

# Define the state
class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]

# Initialize OpenAI model (as in the original code)
model = ChatOpenAI(model="gpt-4o", temperature=0.1)

# Define prompts
planner_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a planner agent that decides which specialized agent to call based on the user's input. If the query indicates a risk of suicide or self-harm, respond with 'suicide_prevention'. Otherwise, respond with 'conversational'."),
    ("human", "{input}"),
])

conversational_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an empathetic conversational agent. Provide supportive responses to help relieve student stress."),
    ("human", "{input}"),
])

suicide_prevention_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a suicide prevention agent. Apply QPR (Question, Persuade, Refer) techniques and refer to trained professionals or suicide prevention helpline. Be extremely cautious and supportive."),
    ("human", "{input}"),
])

# Define router
def route_query(state: State):
    class RouteQuery(BaseModel):
        """Route a user query to the most relevant agent."""
        route: Literal["conversational", "suicide_prevention"] = Field(
            ...,
            description="Given a user question, choose to route it to normal conversation or suicide prevention.",
        )
    # Conform the model's response to the RouteQuery schema
    structured_llm_router = model.with_structured_output(RouteQuery)
    question_router = planner_prompt | structured_llm_router
    last_message = state["messages"][-1]
    resp = question_router.invoke({"input": last_message})
    return resp.route

def run_conversational_agent(state: State):
    print("Running conversational agent")
    convo_model = conversational_prompt | model
    response = convo_model.invoke(state["messages"])
    return {"messages": response}

def run_suicide_prevention_agent(state: State):
    print("Running suicide prevention agent")
    concern_model = suicide_prevention_prompt | model
    response = concern_model.invoke(state["messages"])
    return {"messages": response}

# Create the graph
workflow = StateGraph(State)

# Add nodes
workflow.add_node("conversational", run_conversational_agent)
workflow.add_node("suicide_prevention", run_suicide_prevention_agent)

# Add edges
workflow.add_conditional_edges(
    START,
    route_query,
     {
        "conversational": "conversational",
        "suicide_prevention": "suicide_prevention"
     },
)
workflow.add_edge("conversational", END)
workflow.add_edge("suicide_prevention", END)

# Compile the graph
memory = MemorySaver()
graph = workflow.compile(checkpointer=memory)

# Function to run a conversation turn
def chat(message: str, config: dict):
    print("User:", message)
    result = graph.invoke({"messages": [HumanMessage(content=message)]}, config=config)
    return result["messages"][-1]

config = {"configurable": {"thread_id": "test"}}

response = chat("Hi! I'm feeling really stressed about my exams", config)
print("Bot:", response.content)

response = chat("I don't know if I can handle this stress anymore", config)
print("Bot:", response.content)

Here is the adjusted code, without the errors.

Things I changed:

  • In each node, rather than appending each message before invoking, you can add a reducer to the state that handles message merging for you. Here is the source code for the reducer if you want to look into making your own down the line (a minimal sketch of a custom one follows this list).
  • As mentioned above, referencing cell 3 in this example, we can give the LLM a BaseModel to conform its response to, using with_structured_output.
  • We can also use LCEL to pipe each system prompt into the model so it becomes a callable runnable.
  • And since route_query is just a routing function, it doesn't need to be a node; we can call it from a conditional entry point.
  • Because we didn't use a lambda, we didn't need to create a new route value in the state.
  • Generally, you don't want to cycle through the graph multiple times unless you want the model to "think" (otherwise you will hit a max recursion error), so we can just end the graph from either branch and reinvoke it with the next human message.
  • Because of the thread_id config and the MemorySaver checkpointer, the conversation is persisted between invocations of the graph.
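
For example, a minimal sketch of a hand-rolled reducer with plain append semantics (append_messages is hypothetical; the built-in add_messages additionally handles message IDs and in-place updates):

def append_messages(left: list[AnyMessage], right: list[AnyMessage] | AnyMessage) -> list[AnyMessage]:
    # A reducer receives (existing value, update) and returns the merged value
    if not isinstance(right, list):
        right = [right]
    return left + right

class State(TypedDict):
    messages: Annotated[list[AnyMessage], append_messages]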

That should fix all the errors and set you up with flexibility for future iterations using a LangGraph-style approach. I didn't use a lambda since it adds complexity and is harder to read and follow. I recommend looking into all the links I referenced; they will help you understand the reasoning behind the changes and the accepted paradigms.
