
fix(AI Agent Node): Move model retrieval into try/catch to fix continueOnFail handling #13165

Open · wants to merge 4 commits into master
Conversation

@OlegIvaniv (Contributor) commented on Feb 10, 2025

Summary

This PR moves the retrieval of language models and other connections inside the item processing loop for several AI nodes. This ensures errors are properly caught when continueOnFail is enabled.

Changes:

  • Move getInputConnectionData calls inside item loop for:
    • Tools Agent
    • Basic LLM Chain
    • QA Chain
    • Summarization Chain
  • Refactor Tools Agent code into smaller focused functions for better readability
  • Add unit tests for Tools Agent helper functions
  • Add execution cancel signal to chain invocations

The change allows errors during model/connection retrieval to be caught and handled according to the continueOnFail setting, rather than failing the entire execution.
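
To make the pattern concrete, here is a minimal sketch of the per-item shape the PR moves toward, assuming n8n's IExecuteFunctions API; the placeholder output and the bare getInputConnectionData call are illustrative, not the PR's exact code:

```typescript
import type { IExecuteFunctions, INodeExecutionData } from 'n8n-workflow';
import { NodeConnectionType } from 'n8n-workflow';

async function execute(this: IExecuteFunctions): Promise<INodeExecutionData[][]> {
  const items = this.getInputData();
  const returnData: INodeExecutionData[] = [];

  for (let itemIndex = 0; itemIndex < items.length; itemIndex++) {
    try {
      // Moved inside the loop and the try/catch: a failure while resolving
      // the connected model is now caught per item instead of aborting the
      // whole execution.
      const model = await this.getInputConnectionData(
        NodeConnectionType.AiLanguageModel,
        itemIndex,
      );

      // ... build the chain/agent with `model` and invoke it ...
      returnData.push({ json: { output: 'placeholder' }, pairedItem: { item: itemIndex } });
    } catch (error) {
      if (this.continueOnFail()) {
        // Record the error for this item and keep processing the rest.
        returnData.push({
          json: { error: (error as Error).message },
          pairedItem: { item: itemIndex },
        });
        continue;
      }
      throw error;
    }
  }

  return [returnData];
}
```

With continueOnFail enabled, each failing item yields an error item in the output instead of stopping the run.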

Related Linear tickets, GitHub issues, and Community forum posts

AI-657

Review / Merge checklist

  • PR title and summary are descriptive. (conventions)
  • Docs updated or follow-up ticket created.
  • Tests included.
  • PR Labeled with release/backport (if the PR is an urgent fix that needs to be backported)

@OlegIvaniv force-pushed the ai-657-continue-on-error-not-working-in-ai-node branch from 466a897 to ce64cca on February 10, 2025 at 11:00
codecov bot commented Feb 10, 2025

@n8n-assistant bot added the n8n team label on Feb 10, 2025
@jeanpaul (Contributor) left a comment:


  • I have added a few comments with nit-picks about naming things etc.
  • Getting the models now feels a bit repetitive across the different root nodes that interact with models. Is there a way that we could abstract that? (A hypothetical sketch follows after this list.)
  • Thanks for adding the tests! 🎉
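
On the abstraction question, one hypothetical direction; the helper name and type-guard parameter are invented for illustration and nothing like this exists in the PR:

```typescript
import type { IExecuteFunctions } from 'n8n-workflow';
import { NodeConnectionType, NodeOperationError } from 'n8n-workflow';

// A single shared helper that each root node could call inside its item
// loop, keeping connection retrieval and validation in one place.
export async function retrieveValidatedConnection<T>(
  ctx: IExecuteFunctions,
  type: NodeConnectionType,
  itemIndex: number,
  isValid: (value: unknown) => value is T,
): Promise<T> {
  const value = await ctx.getInputConnectionData(type, itemIndex);
  if (!isValid(value)) {
    throw new NodeOperationError(ctx.getNode(), `Expected a valid ${type} connection`);
  }
  return value;
}
```

Because it throws inside the item loop, the same continueOnFail handling shown in the PR summary would apply uniformly across the nodes.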

* @param ctx - The execution context
* @returns The validated chat model
*/
export async function retrieveChatModel(ctx: IExecuteFunctions) {

Should you specify the return type of the function here too?
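
For illustration, the suggestion could look like this; the Promise<BaseChatModel> annotation is an assumption about what the helper validates, not something confirmed by the diff:

```typescript
import type { IExecuteFunctions } from 'n8n-workflow';
import { NodeConnectionType } from 'n8n-workflow';
import type { BaseChatModel } from '@langchain/core/language_models/chat_models';

// An explicit return type makes the helper's contract visible at call
// sites instead of relying on inference from the function body.
export async function retrieveChatModel(
  ctx: IExecuteFunctions,
): Promise<BaseChatModel> {
  // The cast stands in for the validation the real helper performs.
  return (await ctx.getInputConnectionData(
    NodeConnectionType.AiLanguageModel,
    0,
  )) as BaseChatModel;
}
```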

- // We add the agent scratchpad last, so that the agent will not run in loops
- // by adding binary messages between each interaction
+ // Add the agent scratchpad at the end.

This comment is either a bit redundant or not specific enough -- the previous comment specified why the scratchpad had to be added to the end; this seems relevant for future editors of the function :-)

const memory = await retrieveMemory(this);

// Retrieve the output parser (if any) and tools.
const outputParsers = await getOptionalOutputParsers(this);

You call this function getOptionalOutputParser, but the memory is retrieveMemory -- this feels inconsistent: maybe use retrieveOutputParser here too (remove the Optional part, or put it in all function names that retrieve optional things :-))?
