I tried to do this the first time I added LLM support, but I ran into the issue that you cannot easily fetch a chain of replies from a message. This probably means we will have to keep our own cache of contexts and associate each new message with an existing one when it replies to a message that was already "in the conversation". Alternatively, we could store a single universal context for the day and clear it every midnight or so (though that might be fairly resource-intensive).
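A minimal sketch of that cache idea, assuming a discord.py-style bot where each message exposes `.id` and, for replies, `.reference.message_id` (all other names below are hypothetical):

```python
class ContextCache:
    """Maps a known message id to the conversation that ends at it."""

    def __init__(self) -> None:
        self._contexts: dict[int, list[dict]] = {}

    def context_for(self, reply_to_id: int | None) -> list[dict]:
        # If the new message replies to something we have seen, continue
        # that conversation; otherwise start a fresh one.
        if reply_to_id is not None:
            return list(self._contexts.get(reply_to_id, []))
        return []

    def remember(self, message_id: int, context: list[dict]) -> None:
        # Associate the bot's latest reply with the conversation so far,
        # so the next message in the chain can pick it up.
        self._contexts[message_id] = context

    def clear(self) -> None:
        # Could run from a daily scheduled task (e.g. at midnight) to
        # keep memory bounded.
        self._contexts.clear()
```

With this, linking a reply to its conversation becomes a single dictionary lookup instead of re-fetching the reply chain from the API each time.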
Regarding storage format. A way to distinguish between users could be something like this (in the list of messages):
{{"role": "user","content": "(agent_e11 <@0123456789>) Hello, this is my message."},// (user_name <@user_id>){"role": "assistant","content": "Hello, this is my message."}// No markup}
And then add something like this to the system prompt:
```
The user messages are prefixed with (user_name <@user_id>),
where the user's name is user_name and the user's id is user_id.
```
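For illustration, a hedged sketch of the prefixing step, assuming a discord.py-style `Message` object with `.author` and `.content` (the helper name is hypothetical):

```python
def to_chat_message(message) -> dict:
    # Prefix user messages with (user_name <@user_id>) so the model can
    # tell participants apart; assistant messages stay unprefixed.
    prefix = f"({message.author.name} <@{message.author.id}>)"
    return {"role": "user", "content": f"{prefix} {message.content}"}
```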