First of all, congrats on the great work!
I was wondering if you could clarify a few points on "long-term memory" for me.
Q1: As I understand it, you do not have an explicit long-term memory module (as in, e.g., Agent Workflow Memory);
rather, it's distributed across the network's parameters. Is that correct?
As a follow-up: in this issue you mention that you use 'history 5' for multi-step tasks and give the following example:
Q2: Does this mean the agent never sees the full action history (not even a textual representation), but at most the last 5 time steps?
Q3: If so, do you think the agent's "long-term memory" would benefit from seeing, and thereby connecting, whole workflows with task execution?
Q4: In the given example, does
{
    "type": "text",
    "text": previous_actions[1],
},
contain the full prediction (thought + action)?
We truly appreciate your attention to our work. Here are the answers to your questions.
A1: Yes, you're correct. It's distributed across its neural network parameters.
A2: Yes, at most 5 history images are given.
A3: Yes, we believe that seeing all historical images would be beneficial. However, the history-5 approach is designed to balance computational efficiency and performance.
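For readers wondering how the history-5 scheme described above might look in practice, here is a minimal sketch of assembling a prompt that keeps only the last 5 (screenshot, action) pairs. All names here (`build_messages`, `screenshots`, `previous_actions`) are illustrative assumptions, not the repo's actual API; the message-dict shape follows the `{"type": "text", ...}` fragment quoted in the question.

```python
# Hypothetical sketch of a "history 5" prompt assembly (not the repo's real code).
HISTORY = 5

def build_messages(instruction, screenshots, previous_actions):
    """Build a single user message containing the task instruction plus
    at most the last HISTORY (screenshot, action) pairs."""
    content = [{"type": "text", "text": instruction}]
    recent = list(zip(screenshots, previous_actions))[-HISTORY:]
    for image_url, action in recent:
        content.append({"type": "image_url", "image_url": {"url": image_url}})
        # Each action string is assumed to hold the full prediction (thought + action).
        content.append({"type": "text", "text": action})
    return [{"role": "user", "content": content}]
```

With 8 past steps, only steps 4–8 would survive the truncation; everything earlier is dropped from the prompt entirely, which matches A2 above.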