Prevent the bot from replying with multiple text messages #40
base: main
Conversation
- Implement an asynchronous call in textgen.py: use asyncio to cancel the current task and start a new one each time a text message is received
- The bot can now see the second-to-last user reply by storing the last message before sending another reply
- Avoid a `NoneType` error in messagehandler.py (I forgor — I literally forgot to handle that case)
- Implementation carried over to the KoboldAI API; so far tested only with koboldcpp
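The cancel-and-restart idea in the first bullet can be sketched roughly like this (a minimal sketch; the class and method names are illustrative and not the actual textgen.py code):

```python
import asyncio

class MessageHandler:
    """Sketch of cancelling the in-flight reply when a newer message arrives."""

    def __init__(self):
        self.current_task = None  # the generation task in progress, if any
        self.last_message = None  # remembers the previous user message

    async def on_message(self, message):
        # Cancel any reply that is still being generated...
        if self.current_task is not None and not self.current_task.done():
            self.current_task.cancel()
        # ...store the message so the next prompt can include it...
        self.last_message = message
        # ...and start a fresh generation task for the newest message.
        self.current_task = asyncio.create_task(self.generate_reply(message))

    async def generate_reply(self, message):
        await asyncio.sleep(0.2)  # stand-in for the slow model call
        return f"reply to: {message}"
```

With this pattern, two messages sent in quick succession produce a single reply: the first task is cancelled before it finishes, and only the newest one completes.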
👀 I like what I see. I am going to play around with this more and possibly merge soon.
- Made a `_stop` function in koboldai.py that aborts generation by invoking `/api/extra/abort`
- This doesn't work with the official KoboldAI server (the endpoint is koboldcpp-only)
@ausboss Thanks for the reply. I only had a short time to test it, but so far there's no problem on my end. However, I do notice that the bot supports multiple channel IDs, which may pose a problem: a new message will likely cancel the reply being generated for another channel. So when cancelling the task, it should check the channel ID as well. I'll look into a solution later, but I'm not a full-time programmer. (I'm a graphic designer, lol)
I also am not a full-time programmer, so I fully understand! You are doing some great stuff; extra kudos for being able to decipher my spaghetti code. I tried to address these types of issues with a while loop that checks messages and appends them, and it was OK but had issues. I'm liking this approach and the modifications you are adding.

That function already has `channel_id` being passed in, so maybe we can use that. For history and stop sequences I use the dictionaries `self.histories` and `self.stop_sequences`: `self.histories` uses `channel_id` as the key with the memory object as the value, and `self.stop_sequences` does the same, with a value that is a list of stop strings dynamically created for that channel (names get added to it). My thought is that we could create another dictionary for the messages, handled in a similar fashion, so the bot can check the channel before cancelling a task.
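The extra dictionary suggested above could be sketched as follows, keyed by `channel_id` just like `self.histories` and `self.stop_sequences` (a hypothetical sketch, not the project's actual code):

```python
import asyncio

class Chatbot:
    """Per-channel task bookkeeping, mirroring the histories/stop_sequences dicts."""

    def __init__(self):
        self.histories = {}       # channel_id -> memory object
        self.stop_sequences = {}  # channel_id -> list of stop strings
        self.current_tasks = {}   # channel_id -> in-flight generation task

    async def handle(self, channel_id, message):
        # Only cancel the task that belongs to *this* channel, so a new
        # message in one channel cannot abort a reply in another.
        task = self.current_tasks.get(channel_id)
        if task is not None and not task.done():
            task.cancel()
        self.current_tasks[channel_id] = asyncio.create_task(
            self._reply(channel_id, message)
        )

    async def _reply(self, channel_id, message):
        await asyncio.sleep(0.1)  # stand-in for the model call
        return f"[{channel_id}] reply to: {message}"
```

A second message in channel 1 then cancels only channel 1's pending reply, while channel 2's reply still completes normally.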
I think I know what you meant by that; maybe I can use `channel_id` that way. FYI, I actually solved the multiple-message issue a long time ago, but I didn't make a PR (you can see the date in the screenshot, haha) because I had heavily modified your code. Back then I changed it to be OpenAI-API-compatible, which defeats the purpose of this project, and that was before you implemented LangChain.
Concedo (the main developer of koboldcpp) released API documentation: https://lite.koboldai.net/koboldcpp_api#/ This might be useful for checking whether an endpoint is official KoboldAI or koboldcpp.

Update: I've been too busy with stuff IRL lately, so updating this PR might take a while.
- Use a dict for `current_tasks`, with channel IDs as keys
- Check whether the endpoint is running koboldcpp or official KoboldAI
- TODO: support koboldcpp with multi-user requests
- Use the dict to give each channel ID a unique key
- Rolled the `_stop` function back to a normal request POST instead of an aiohttp session
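One way to perform the koboldcpp-vs-KoboldAI check is to probe one of the `/api/extra/` endpoints described in the documentation linked above, since the official server does not serve them. This is a hedged sketch: the exact probe path and response shape are assumptions, not confirmed from this PR's code.

```python
import json
import urllib.error
import urllib.request

def is_koboldcpp(base_url: str) -> bool:
    """Guess whether the endpoint is koboldcpp rather than official KoboldAI.

    koboldcpp serves extra endpoints under /api/extra/ (see
    https://lite.koboldai.net/koboldcpp_api#/); the official server
    is expected to return an error there instead of valid JSON.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/api/extra/version", timeout=5) as resp:
            data = json.load(resp)
            return "version" in data  # assumed response field, for illustration
    except (urllib.error.URLError, OSError, ValueError):
        return False
```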
Currently, the bot will reply to multiple people all at once, which can be an unintended effect, especially when there are multiple people in the text channel (assuming ALWAYS_REPLY is set to true).
For example:
This PR "should" fix this problem, so that the bot only replies once. Current AI models are pretty good at summarizing what happened in the chat room (evident from just using listen-only mode).
The intended way for the bot to reply (note that I sent the bot two replies instead of one):
To-do checklist
Currently, this PR is untested with multiple channel IDs, so if you want to test this implementation, you can pull my main repo instead.