fix(AI Agent Node): Move model retrieval into try/catch to fix continueOnFail handling #13165
base: master
Conversation
466a897 to ce64cca
- I have added a few comments with nitpicks about naming etc.
- Getting the models now feels a bit repetitive across the different root nodes that interact with models. Is there a way we could abstract that?
- Thanks for adding the tests! 🎉
 * @param ctx - The execution context
 * @returns The validated chat model
 */
export async function retrieveChatModel(ctx: IExecuteFunctions) {
Should you specify the return type of the function here too?
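A minimal sketch of what the reviewer is suggesting, with hypothetical stand-ins: `Ctx` and `getConnectedModel` stand in for `IExecuteFunctions` and the real connection-retrieval call, and `BaseChatModelStub` stands in for LangChain's `BaseChatModel`, which is what the actual function would likely declare as its return type.

```typescript
// Hedged sketch (not the actual n8n code): adding an explicit return type.
// BaseChatModelStub is a stand-in for @langchain/core's BaseChatModel;
// Ctx is a stand-in for IExecuteFunctions.
class BaseChatModelStub {}

interface Ctx {
  getConnectedModel(): Promise<unknown>;
}

export async function retrieveChatModel(ctx: Ctx): Promise<BaseChatModelStub> {
  const model = await ctx.getConnectedModel();
  if (!(model instanceof BaseChatModelStub)) {
    throw new Error('Connected node is not a chat model');
  }
  return model; // narrowed to BaseChatModelStub by the instanceof check
}
```

The explicit `Promise<...>` annotation also makes the `instanceof` validation load-bearing: removing it would become a compile error rather than a silent widening of the return type.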
// We add the agent scratchpad last, so that the agent will not run in loops
// by adding binary messages between each interaction

// Add the agent scratchpad at the end.
This comment is either a bit redundant or not specific enough -- the previous comment specified why the scratchpad had to be added to the end; this seems relevant for future editors of the function :-)
const memory = await retrieveMemory(this);

// Retrieve the output parser (if any) and tools.
const outputParsers = await getOptionalOutputParsers(this);
You call this function getOptionalOutputParsers, but the memory one is retrieveMemory -- this feels inconsistent: maybe use retrieveOutputParser here too (remove the Optional part, or put it in all function names that retrieve optional things :-))?
Summary

This PR moves the retrieval of language models and other connections inside the item-processing loop for several AI nodes, so that errors are properly caught when continueOnFail is enabled.

Changes:
- Moved getInputConnectionData calls inside the item loop.

The change allows errors during model/connection retrieval to be caught and handled according to the continueOnFail setting, rather than failing the entire execution.

Related Linear tickets, GitHub issues, and Community forum posts
AI-657
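The fix described in the summary can be sketched as follows. This is a hedged, simplified illustration, not the actual n8n code: `retrieveChatModel` here is a dummy that can throw, standing in for the real `getInputConnectionData` call on `IExecuteFunctions`, and `Item` is a minimal stand-in for n8n's execution-data shape.

```typescript
// Sketch of the pattern this PR applies: model retrieval happens inside the
// per-item try/catch, so a retrieval failure can be handled per item.
type Item = { json: Record<string, unknown> };

// Hypothetical stand-in for the model-retrieval call that can throw.
async function retrieveChatModel(shouldFail: boolean): Promise<string> {
  if (shouldFail) throw new Error('No chat model connected');
  return 'chat-model';
}

async function execute(items: Item[], continueOnFail: boolean): Promise<Item[]> {
  const results: Item[] = [];
  for (const item of items) {
    try {
      // Before this PR, retrieval happened outside the try/catch, so a
      // failure aborted the whole run even with continueOnFail enabled.
      const model = await retrieveChatModel(Boolean(item.json.fail));
      results.push({ json: { model } });
    } catch (error) {
      if (continueOnFail) {
        // Record the error on this item and keep processing the rest.
        results.push({ json: { error: (error as Error).message } });
        continue;
      }
      throw error;
    }
  }
  return results;
}
```

With continueOnFail enabled, a failing item yields an error entry in the output instead of terminating the execution, which matches the behavior the PR description promises.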
Review / Merge checklist
- release/backport (if the PR is an urgent fix that needs to be backported)