
Multiple different outputs in video and notebook (Module 3 - Lesson 3: Editing State and Human Feedback) #39

Open
labdmitriy opened this issue Oct 7, 2024 · 8 comments

Comments

@labdmitriy

Hi @rlancemartin,

I am trying to reproduce Lesson 3 in Module 3 and found a strange difference in the outputs:

  • In the video, after we insert an additional human message and then resume, the first graph.stream(None, ...) yields an AI message and a Tool message, and the second one yields only an AI message, which seems consistent with the graph structure:
    [screenshot: video]
  • But when I try to reproduce this with the notebook, the second run yields 2 events:
    • the Tool message (the same one that was last in the first run; I checked the event, and it is exactly identical)
    • the AI message
      [screenshot: notebook]

The first case (in the video) seems to be the intuitive and correct way to resume, but I can't understand why I get a duplicated Tool message event in the second run.
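For context, here is a minimal sketch of the pattern I mean (my own simplification, not the exact course notebook; the multiply tool and the prompts are just placeholders):

```python
# Minimal sketch of the resume pattern I'm describing (my own simplification
# of the lesson, not the exact notebook code).
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, MessagesState, StateGraph
from langgraph.prebuilt import ToolNode, tools_condition


def multiply(a: int, b: int) -> int:
    """Multiply a and b."""
    return a * b


llm = ChatOpenAI(model="gpt-4o-mini").bind_tools([multiply])


def assistant(state: MessagesState):
    return {"messages": [llm.invoke(state["messages"])]}


builder = StateGraph(MessagesState)
builder.add_node("assistant", assistant)
builder.add_node("tools", ToolNode([multiply]))
builder.add_edge(START, "assistant")
builder.add_conditional_edges("assistant", tools_condition)  # routes to "tools" or END
builder.add_edge("tools", "assistant")

# Break before the assistant so a human can edit the state.
graph = builder.compile(checkpointer=MemorySaver(), interrupt_before=["assistant"])
thread = {"configurable": {"thread_id": "1"}}

# Initial run: stops at the breakpoint before "assistant".
for event in graph.stream({"messages": [HumanMessage("Multiply 2 and 3")]},
                          thread, stream_mode="values"):
    event["messages"][-1].pretty_print()

# Insert an additional human message, then resume twice with None.
graph.update_state(thread, {"messages": [HumanMessage("No, actually multiply 3 and 3!")]})

for event in graph.stream(None, thread, stream_mode="values"):  # first resume
    event["messages"][-1].pretty_print()

for event in graph.stream(None, thread, stream_mode="values"):  # second resume
    event["messages"][-1].pretty_print()
```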

Environment information:

System Information
------------------
> OS:  Linux
> OS Version:  #132~20.04.1-Ubuntu SMP Fri Aug 30 15:50:07 UTC 2024
> Python Version:  3.11.5 (main, Sep 11 2023, 13:32:41) [GCC 9.4.0]

Package Information
-------------------
> langchain_core: 0.3.9
> langchain: 0.3.2
> langchain_community: 0.3.1
> langsmith: 0.1.131
> langchain_anthropic: 0.2.3
> langchain_experimental: 0.3.2
> langchain_openai: 0.2.2
> langchain_text_splitters: 0.3.0
> langchainhub: 0.1.21
> langgraph: 0.2.34
> langserve: 0.3.0

Other Dependencies
------------------
> aiohttp: 3.10.5
> anthropic: 0.35.0
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> fastapi: Installed. No version info available.
> httpx: 0.27.0
> jsonpatch: 1.33
> langgraph-checkpoint: 2.0.1
> numpy: 1.26.4
> openai: 1.51.0
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.8.2
> pydantic-settings: 2.5.2
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.32
> sse-starlette: Installed. No version info available.
> tenacity: 8.5.0
> tiktoken: 0.7.0
> types-requests: 2.32.0.20240712
> typing-extensions: 4.12.2

Thank you.

@labdmitriy
Author

Also, the Studio section contains the following statement:

Our agent is defined in assistant/agent.py.

However, there is no such path in the repository. Does this mean we should use the studio/agent.py agent that was already used in the Breakpoints and Streaming sections?

@labdmitriy
Author

I also noticed that in the video the first run yields only an AI message and a Tool message, while in the notebook (and also in the second screenshot above) the Human message we inserted appears as the first message of the stream as well.
It looks like, after each new graph.stream(None, ...), the first message of the current stream is a duplicate of the last message of the previous stream. A small sketch that makes this visible is below.
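For illustration, if I replace the two plain resume loops above with something like this (same graph and thread as in the first sketch), the repeated event gets flagged:

```python
# Reuses `graph` and `thread` from the sketch above. The first "values" event
# of each resumed stream carries the same last message (same id) as the final
# event of the previous stream.
last_seen_id = None

def stream_once():
    global last_seen_id
    for event in graph.stream(None, thread, stream_mode="values"):
        msg = event["messages"][-1]
        if msg.id is not None and msg.id == last_seen_id:
            print(f"(repeat of the previous stream's last message: {msg.type})")
        else:
            msg.pretty_print()
        last_seen_id = msg.id

stream_once()  # first resume
stream_once()  # second resume: its first event gets flagged as a repeat
```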

@labdmitriy
Author

labdmitriy commented Oct 7, 2024

I also found that the math example in this notebook, "No, actually multiply 3 and 3!", is not stable across model versions: to minimize cost I am using the gpt-4o-mini model, and it sometimes interprets 3! as the factorial of 3, so the results in the video and in the notebook also differ here.

The same issue occurred with gpt-4o-mini in the Chain lesson of Module 1; your last commit changed the code there, but not the output:
[screenshot: factorial]
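My local workaround (my own edit, not the course prompt) is simply to drop the exclamation mark so the model can't read 3! as a factorial:

```python
# My own workaround, not the course code: avoid the trailing "!" that
# gpt-4o-mini sometimes parses as the factorial of 3.
graph.update_state(
    thread,
    {"messages": [HumanMessage(content="No, actually multiply 3 and 3.")]},
)
```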

@labdmitriy
Author

labdmitriy commented Oct 7, 2024

I found older package versions with which I don't see these duplicated events:


System Information
------------------
> OS:  Linux
> OS Version:  #132~20.04.1-Ubuntu SMP Fri Aug 30 15:50:07 UTC 2024
> Python Version:  3.11.5 (main, Sep 11 2023, 13:32:41) [GCC 9.4.0]

Package Information
-------------------
> langchain_core: 0.2.41
> langchain: 0.2.16
> langchain_community: 0.2.16
> langsmith: 0.1.131
> langchain_anthropic: 0.1.23
> langchain_experimental: 0.0.65
> langchain_openai: 0.1.25
> langchain_text_splitters: 0.2.4
> langchainhub: 0.1.21
> langgraph: 0.2.5
> langserve: 0.2.3

Other Dependencies
------------------
> aiohttp: 3.10.5
> anthropic: 0.35.0
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> fastapi: Installed. No version info available.
> httpx: 0.27.0
> jsonpatch: 1.33
> langgraph-checkpoint: 1.0.12
> numpy: 1.26.4
> openai: 1.51.0
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.8.2
> pyproject-toml: 0.0.10
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.32
> sse-starlette: Installed. No version info available.
> tenacity: 8.5.0
> tiktoken: 0.7.0
> types-requests: 2.32.0.20240712
> typing-extensions: 4.12.2

[screenshot: notebook_2]

Is this an expected change in the streaming behavior?

Thank you.

@labdmitriy labdmitriy changed the title Different output in video and notebook (Module 3 - Lesson 3: Editing State and Human Feedback) Different output in video and notebook - duplicated stream events (Module 3 - Lesson 3: Editing State and Human Feedback) Oct 8, 2024
@labdmitriy labdmitriy changed the title Different output in video and notebook - duplicated stream events (Module 3 - Lesson 3: Editing State and Human Feedback) Multiple different outputs in video and notebook (Module 3 - Lesson 3: Editing State and Human Feedback) Oct 8, 2024
@labdmitriy
Author

labdmitriy commented Oct 8, 2024

Also, for the second case (Awaiting user input) I found that the diagrams in the video and in the notebook differ.
Video:
[screenshot: video]

Notebook (there is an extra conditional edge from assistant to human_feedback):
[screenshot: notebook]

The second case is strange because the code for the conditional edge from the assistant node uses the same tools_condition, yet somehow there are 3 conditional edges leaving assistant instead of the usual 2. A rough sketch of the wiring I'm running is below.
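For reference, this is the graph from that section as I understand it from the notebook (reusing assistant, multiply and the imports from the first sketch above):

```python
# Sketch of the "Awaiting user input" graph as I understand it; assistant,
# multiply and the imports are the same as in the first sketch above.
def human_feedback(state: MessagesState):
    # no-op node; the graph is interrupted before it so a human can edit state
    pass

builder = StateGraph(MessagesState)
builder.add_node("assistant", assistant)
builder.add_node("tools", ToolNode([multiply]))
builder.add_node("human_feedback", human_feedback)

builder.add_edge(START, "human_feedback")
builder.add_edge("human_feedback", "assistant")
builder.add_conditional_edges("assistant", tools_condition)  # I expect: "tools" or END
builder.add_edge("tools", "human_feedback")

graph = builder.compile(checkpointer=MemorySaver(),
                        interrupt_before=["human_feedback"])
```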

@labdmitriy
Author

It seems that there have been many changes to langgraph's logic and structure over the last month, and there are multiple inconsistencies between the previous and the current behavior (or there is documentation about these changes that I simply haven't found).

Could you please help with this?

@labdmitriy
Author

Hi @rlancemartin,

Interestingly, the duplicated events do not appear with stream_mode="updates"; I only get this issue with stream_mode="values".
Maybe this will help. A comparison sketch is below.
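For comparison, the two resume calls I'm testing on the same thread (the handling of the "updates" payload below is my own sketch):

```python
# Duplicated first event shows up with stream_mode="values" ...
for event in graph.stream(None, thread, stream_mode="values"):
    event["messages"][-1].pretty_print()

# ... but not with stream_mode="updates", which emits only per-node deltas.
for update in graph.stream(None, thread, stream_mode="updates"):
    for node, payload in update.items():
        if payload and "messages" in payload:
            print(node, ":", payload["messages"][-1].content)
        else:
            print(node, ":", payload)
```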

Thank you.

@labdmitriy
Author

Hi @rlancemartin,

I am watching "Lesson 4: Research Assistant" from Module 4 and noticed that when you resume the graph execution there using stream_mode="values", the video already shows the same behavior of the recent stream_mode implementation that I described above, which differs from the behavior in all of your earlier videos in the course.

Thank you.
