Merge pull request #4 from 2016bgeyer/patch-1
fix: Fixed typo in README.md
hinthornw committed Oct 8, 2024
2 parents 0d25f37 + 9e596a2 commit ca88b6e
Showing 2 changed files with 3 additions and 2 deletions.
3 changes: 2 additions & 1 deletion README.md
@@ -206,7 +206,8 @@ All these memories need to go somewhere reliable. All LangGraph deployments come

You can learn more about Storage in LangGraph [here](https://langchain-ai.github.io/langgraph/how-tos/memory/shared-state/).

- In our case, we are saving all memories namespaced by `user_id` and by the memory scheam you provide. That way you can easily search for memories for a given user and of a particular type. This diagram shows how these pieces fit together:
+ In our case, we are saving all memories namespaced by `user_id` and by the memory schema you provide. That way you can easily search for memories for a given user and of a particular type. This diagram shows how these pieces fit together:


![Memory types](./static/memory_types.png)

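The README paragraph above describes namespacing each memory by `user_id` and by the memory schema so it can be searched per user and per type. A minimal, self-contained sketch of that idea (a hypothetical `MemoryStore` class for illustration, not the repo's or LangGraph's actual storage API):

```python
# Illustrative sketch only: memories live under a (user_id, schema) namespace
# tuple, and a search by namespace prefix returns matching memories.
from collections import defaultdict


class MemoryStore:
    def __init__(self):
        # namespace tuple -> {memory_id: payload}
        self._data = defaultdict(dict)

    def put(self, namespace: tuple, key: str, value: dict) -> None:
        self._data[namespace][key] = value

    def search(self, namespace: tuple) -> list:
        # Return every memory whose namespace starts with the given prefix.
        return [
            value
            for ns, items in self._data.items()
            if ns[: len(namespace)] == namespace
            for value in items.values()
        ]


store = MemoryStore()
store.put(("user_123", "User"), "mem_1", {"name": "Ada"})
store.put(("user_123", "Note"), "mem_2", {"text": "Prefers short replies"})

print(len(store.search(("user_123",))))         # 2: all memories for the user
print(len(store.search(("user_123", "Note"))))  # 1: only "Note"-schema memories
```

Searching with the bare `("user_123",)` prefix spans every schema for that user, while the full `(user_id, schema)` tuple narrows to one memory type, which is the retrieval pattern the README describes.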
2 changes: 1 addition & 1 deletion src/chatbot/graph.py
@@ -61,7 +61,7 @@ async def schedule_memories(state: ChatState, config: RunnableConfig) -> None:
multitask_strategy="enqueue",
# This lets us "debounce" repeated requests to the memory graph
# if the user is actively engaging in a conversation. This saves us $$ and
-        # can help reduce the occurence of duplicate memories.
+        # can help reduce the occurrence of duplicate memories.
after_seconds=configurable.delay_seconds,
# Specify the graph and/or graph configuration to handle the memory processing
assistant_id=configurable.mem_assistant_id,
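The comment in the diff above explains that scheduling the memory run `after_seconds` in the future "debounces" repeated requests while the user is actively chatting. A small sketch of that debounce pattern in plain Python (a hypothetical `Debouncer` helper for illustration, not the repo's code, which delegates this to the LangGraph platform):

```python
# Illustrative debounce sketch: each new message reschedules the memory
# processing `delay` seconds out, so a rapid burst of messages triggers
# only one processing run instead of one per message.
import threading


class Debouncer:
    def __init__(self, delay: float, action):
        self.delay = delay
        self.action = action
        self._timer = None
        self._lock = threading.Lock()

    def trigger(self) -> None:
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()  # drop the previously scheduled run
            self._timer = threading.Timer(self.delay, self.action)
            self._timer.start()


runs = []
debouncer = Debouncer(0.05, lambda: runs.append("processed"))
for _ in range(10):          # a burst of ten incoming messages...
    debouncer.trigger()
threading.Event().wait(0.2)  # ...fires a single processing run
print(len(runs))             # 1
```

Each `trigger` cancels the pending timer and starts a fresh one, so only the last message in a burst actually reaches the action: fewer duplicate memories and fewer billed runs, which is the saving the comment refers to.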
