Commit 0460aee: Address comments

rlancemartin committed Oct 2, 2024
1 parent 02533a9 commit 0460aee

Showing 2 changed files with 127 additions and 28 deletions.
70 changes: 68 additions & 2 deletions docs/docs/concepts/memory.md
@@ -162,9 +162,75 @@ graph.get_state(config)

Persistence is critical for sustaining long-running chat sessions. For example, a chat between a user and an AI assistant may have interruptions. Persistence ensures that a user can continue that particular chat session at any later point in time. However, what happens if a user initiates a new chat session with an assistant? This spawns a new thread, and the information from the previous session (thread) is not retained. This motivates the need for a memory that can maintain data across chat sessions (threads).

For this, we can use LangGraph's `Store` interface to save and retrieve information across threads. Shared information can be namespaced by, for example, `user_id` to retain user-specific information across threads. Let's show how to use the `Store` interface to save and retrieve information.

```python
import uuid

from langgraph.store.memory import InMemoryStore

in_memory_store = InMemoryStore()

# Namespace for memories
user_id = "1"
namespace_for_memory = (user_id, "memories")

# Save memories
memory_id = str(uuid.uuid4())
memory = {"food_preference" : "I like pizza"}
in_memory_store.put(namespace_for_memory, memory_id, memory)

# Retrieve memories
memories = in_memory_store.search(namespace_for_memory)
memories[-1].dict()
{'value': {'food_preference': 'I like pizza'},
'key': '07e0caf4-1631-47b7-b15f-65515d4c1843',
'namespace': ['1', 'memories'],
'created_at': '2024-10-02T17:22:31.590602+00:00',
'updated_at': '2024-10-02T17:22:31.590605+00:00'}
```
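
A single memory can also be fetched back by its key using `get`. A minimal sketch, reusing the `namespace_for_memory` and `memory_id` from above:

```python
# Fetch one memory by its key; returns the stored item with its value and metadata
item = in_memory_store.get(namespace_for_memory, memory_id)
print(item.value)  # {'food_preference': 'I like pizza'}
```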

The `store` can be used in LangGraph to save or retrieve memories in any graph node. To enable this, compile the graph with both a checkpointer and a store.

```python
from langgraph.checkpoint.memory import MemorySaver

# We need a checkpointer to enable threads (conversations)
checkpointer = MemorySaver()

# ... Define the graph ...

# Compile the graph with the checkpointer and store
graph = graph.compile(checkpointer=checkpointer, store=in_memory_store)

# Invoke the graph
user_id = "1"
config = {"configurable": {"thread_id": "1", "user_id": user_id}}

# First let's just say hi to the AI
for update in graph.stream(
    {"messages": [{"role": "user", "content": "hi"}]}, config, stream_mode="updates"
):
    print(update)
```

Then, we can access the store in any node of the graph by passing `store: BaseStore` as a node argument.

```python
import uuid

from langchain_core.runnables import RunnableConfig
from langgraph.graph import MessagesState
from langgraph.store.base import BaseStore


def update_memory(state: MessagesState, config: RunnableConfig, *, store: BaseStore):

    # Get the user id from the config
    user_id = config["configurable"]["user_id"]

    # Namespace the memory
    namespace = (user_id, "memories")

    # ... Analyze conversation and create a new memory

    # Create a new memory ID
    memory_id = str(uuid.uuid4())

    # We create a new memory
    store.put(namespace, memory_id, {"memory": memory})
```

Anything saved to the store persists across graph executions (threads), allowing information, such as user preferences, to be retained across threads.
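
For example, we can read saved memories back directly, outside of any graph execution. A minimal sketch using the namespace from above:

```python
# Search the store directly; no thread or graph invocation is required
for memory in in_memory_store.search(("1", "memories")):
    print(memory.value)
```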

The store is also built into the LangGraph API, making it accessible when using LangGraph Studio locally or when deploying to LangGraph Cloud.

See more detail in the [persistence conceptual guide](https://langchain-ai.github.io/langgraph/concepts/persistence/#persistence) and this [how-to guide on shared state](../how-tos/memory/shared-state.ipynb).

## Update own instructions

Meta-prompting uses an LLM to generate or refine its own prompts or instructions. This approach allows the system to dynamically update and improve its own behavior, potentially leading to better performance on various tasks. This is particularly useful for tasks where the instructions are challenging to specify a priori.
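
As a rough sketch of the idea, an extra node could ask the model to rewrite its own instructions and save the result to the store. The node name, namespace, prompt wording, and `model` below are illustrative assumptions, not part of the LangGraph API:

```python
from langchain_core.runnables import RunnableConfig
from langgraph.graph import MessagesState
from langgraph.store.base import BaseStore


def update_instructions(state: MessagesState, config: RunnableConfig, *, store: BaseStore):
    # Fetch the current instructions (None on the first run)
    namespace = ("agent_instructions",)
    current = store.get(namespace, "instructions")
    current_text = current.value["text"] if current else "You are a helpful assistant."

    # Ask the model to critique and rewrite its own instructions
    prompt = (
        f"Current instructions:\n{current_text}\n\n"
        f"Based on the conversation below, rewrite the instructions so they work "
        f"better next time:\n{state['messages']}"
    )
    new_instructions = model.invoke(prompt)  # `model` is assumed to be defined elsewhere

    # Persist the refined instructions for future runs
    store.put(namespace, "instructions", {"text": new_instructions.content})
```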

85 changes: 59 additions & 26 deletions docs/docs/concepts/persistence.md
@@ -225,28 +225,60 @@ A state schema specifies a set of keys / channels that are populated as a graph
But what if we want to retain some information *across threads*? Consider the case of a chatbot where we want to retain specific information about the user across *all* chat conversations (e.g., threads) with that user!

With checkpointers alone, we cannot share information across threads. This motivates the need for the `Store` interface. As an illustration, we can define an `InMemoryStore` to store information about a user across threads. First, let's showcase this in isolation, without using LangGraph.

```python
from langgraph.store.memory import InMemoryStore
in_memory_store = InMemoryStore()
```

Memories are namespaced by a `tuple`, which in our case will be `(<user_id>, "memories")`. We can think about this namespace as a directory, where each `user_id` can have various sub-directories of things that we want to store (e.g., `memories`, `preferences`, etc.).

```python
user_id = "1"
namespace_for_memory = (user_id, "memories")
```

We use `store.put` to save memories to our namespace in the store. When we do this, we specify the namespace, as defined above, and a key-value pair for the memory: the key is simply a unique identifier for the memory (`memory_id`), and the value (a dictionary) is the memory itself.

```python
import uuid

memory_id = str(uuid.uuid4())
memory = {"food_preference" : "I like pizza"}
in_memory_store.put(namespace_for_memory, memory_id, memory)
```

We can read out memories in our namespace using `store.search`, which will return all memories for a given user as a list. The most recent memory is the last in the list.

```python
memories = in_memory_store.search(namespace_for_memory)
memories[-1].dict()
{'value': {'food_preference': 'I like pizza'},
'key': '07e0caf4-1631-47b7-b15f-65515d4c1843',
'namespace': ['1', 'memories'],
'created_at': '2024-10-02T17:22:31.590602+00:00',
'updated_at': '2024-10-02T17:22:31.590605+00:00'}
```

With this in place, we can use the `in_memory_store` in LangGraph. The `in_memory_store` works hand-in-hand with the checkpointer: the checkpointer saves state to threads, as discussed above, and the `in_memory_store` allows us to store arbitrary information for access *across* threads. We compile the graph with both the checkpointer and the `in_memory_store` as follows.

```python
from langgraph.checkpoint.memory import MemorySaver

# We need this because we want to enable threads (conversations)
checkpointer = MemorySaver()

# This is the in memory store needed to save the memories (i.e. user preferences) across threads
in_memory_store = InMemoryStore()

# ... Define the graph ...

# Compile the graph with the checkpointer and store
graph = graph.compile(checkpointer=checkpointer, store=in_memory_store)
```
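
For concreteness, here is a minimal sketch of what the elided graph definition might look like, wiring together the two nodes defined later in this section (`call_model` and `update_memory`); the wiring itself is an illustrative assumption:

```python
from langgraph.graph import START, MessagesState, StateGraph

builder = StateGraph(MessagesState)
builder.add_node("call_model", call_model)
builder.add_node("update_memory", update_memory)
builder.add_edge(START, "call_model")
builder.add_edge("call_model", "update_memory")

# Compile with both the checkpointer (per-thread state) and the store (cross-thread memory)
graph = builder.compile(checkpointer=checkpointer, store=in_memory_store)
```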

We invoke the graph with a `thread_id`, as before, and also with a `user_id`, which we'll use to namespace our memories to this particular user, as shown above.

```python
# Invoke the graph
user_id = "1"
config = {"configurable": {"thread_id": "1", "user_id": user_id}}

# First let's just say hi to the AI
for update in graph.stream(
    {"messages": [{"role": "user", "content": "hi"}]}, config, stream_mode="updates"
):
    print(update)
```

We can access the `in_memory_store` and the `user_id` in *any node* by passing `store: BaseStore` and `config: RunnableConfig` as node arguments. Just as we saw above, simply use the `put` method to save memories to the store.

```python
import uuid

from langchain_core.runnables import RunnableConfig
from langgraph.graph import MessagesState
from langgraph.store.base import BaseStore


def update_memory(state: MessagesState, config: RunnableConfig, *, store: BaseStore):

    # Get the user id from the config
    user_id = config["configurable"]["user_id"]

    # Namespace the memory
    namespace = (user_id, "memories")

    # ... Analyze conversation and create a new memory

    # Create a new memory ID
    memory_id = str(uuid.uuid4())

    # We create a new memory
    store.put(namespace, memory_id, {"memory": memory})
```

As shown above, we can also access the store in any node and use `search` to get memories. Recall that the memories are returned as a list, with each object containing the `key` (`memory_id`) and `value` (the memory itself), along with some metadata.

```python
memories = store.search((user_id, "memories"))
memories[-1].dict()
{'value': {'food_preference': 'I like pizza'},
'key': '07e0caf4-1631-47b7-b15f-65515d4c1843',
'namespace': ['1', 'memories'],
'created_at': '2024-10-02T17:22:31.590602+00:00',
'updated_at': '2024-10-02T17:22:31.590605+00:00'}
```

We can access the memories and use them in our model call.

```python
def call_model(state: MessagesState, config: RunnableConfig, *, store: BaseStore):

    # Get the user id from the config
    user_id = config["configurable"]["user_id"]

    # Get the memories for this user from the store
    memories = store.search((user_id, "memories"))

    # ... Use memories in the model call
```
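
As one possible completion of the elided step, the retrieved memories can be folded into a system message before calling the model; the prompt wording and `model` are assumptions:

```python
def call_model(state: MessagesState, config: RunnableConfig, *, store: BaseStore):

    # Get the user id from the config and fetch their memories
    user_id = config["configurable"]["user_id"]
    memories = store.search((user_id, "memories"))

    # Fold the memories into a system message (one possible design)
    info = "\n".join(item.value["memory"] for item in memories)
    system_msg = f"You are a helpful assistant. Facts about the user:\n{info}"

    # `model` is assumed to be a chat model defined elsewhere
    response = model.invoke(
        [{"role": "system", "content": system_msg}] + state["messages"]
    )
    return {"messages": response}
```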

If we create a new thread, we can still access the same memories so long as the `user_id` is the same.

```python
# Invoke the graph
# New thread, same user: the user_id namespace gives access to the same memories
config = {"configurable": {"thread_id": "2", "user_id": "1"}}

for update in graph.stream(
    {"messages": [{"role": "user", "content": "hi"}]}, config, stream_mode="updates"
):
    print(update)
```

In addition, we can always access the store outside of the graph execution.

```python
for memory in in_memory_store.search(("1", "memories")):
    print(memory.value)
```

When we use the LangGraph API, either locally (e.g., in LangGraph Studio) or with LangGraph Cloud, the memory store is available to use by default and does not need to be specified during graph compilation. See our [how-to guide on shared state](../how-tos/memory/shared-state.ipynb) for a detailed example!

## Checkpointer libraries
