When using the "AzureOpenAIChatCompletionClient", a user triggering a content violation policy causes an exception within Autogen 0.4.6 that cannot be caught #5569
Comments
A possible solution, I think, would be for you to create a decorator that handles each type of error in the way you want (perhaps re-invoking the client with a proper fallback strategy). Have you tried or considered this approach?
Looks like we need some auto-recovery settings for model clients. Linking this as a sub-issue of #3632.
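The decorator idea could look something like the sketch below. Everything here is hypothetical and invented for illustration: `ContentFilterError`, `with_content_filter_fallback`, and the stand-in `create` function do not exist in autogen or the openai SDK. The real code would wrap the model client's async `create` and catch `openai.BadRequestError` where the error code is `content_filter`.

```python
import functools

# Hypothetical stand-in for openai.BadRequestError with code == "content_filter";
# the real exception carries the Azure payload shown in the traceback below.
class ContentFilterError(Exception):
    pass

def with_content_filter_fallback(fallback):
    """Decorator: if the wrapped call trips the content filter, run `fallback`."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except ContentFilterError as e:
                # Fallback strategy: here, surface the violation instead of dying.
                return fallback(e, *args, **kwargs)
        return wrapper
    return decorator

@with_content_filter_fallback(lambda e, prompt: f"[blocked: {e}]")
def create(prompt: str) -> str:
    # Stand-in for the model client call; rejects prompts mentioning "weapon".
    if "weapon" in prompt:
        raise ContentFilterError("ResponsibleAIPolicyViolation")
    return f"response to {prompt!r}"
```

A synchronous function is used for brevity; the same shape works for an async `create` with an `async def wrapper` awaiting the call.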
This happens in the AzureOpenAIChatCompletionClient, though. The content-warning JSON payload is an expected response from the Azure OpenAI service. It isn't an error that should be retried or auto-recovered; it is something that should be passed back to the conversation for parsing. Right now the exception is thrown within the autogen framework, somewhere in the _processMessages, and once the framework throws it, the message chain stops. I just read your issue #3632, ekzhu, and yes, something along those lines. In my case, having the message bubble all the way back up to the conversation would let me have a ContentPolicyViolation agent that reviews the tasks and requests and modifies the chat as needed to repair the content policy violation. @Ispinheiro I have considered that, but because the exception is thrown deeper in the autogen framework, I am unable to catch it back at the application layer.
You are absolutely right that this is not really the type of error that can be auto-recovered from at the model client level. It needs to be bubbled up to the agent level and handled as feedback. Do you have a suggestion? Handling this type of error requires the agent to potentially retry while adding the error to its context as feedback. One way I can think of is doing the error check from outside of the agent: catch this error, then provide feedback to the agent as a new message asking it to revise. Something like:
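The snippet that presumably followed "Something like:" was lost in this page capture. Below is a self-contained sketch of the catch-and-revise loop described, with hypothetical `FakeAgent` and `ContentFilterError` stand-ins for the real agent and for `openai.BadRequestError`; the contrived "revision" step is only there to make the example deterministic.

```python
# Hypothetical stand-in for the content-filter exception raised by the SDK.
class ContentFilterError(Exception):
    pass

class FakeAgent:
    """Stand-in agent: rejects any task mentioning 'weapon'."""
    def run(self, task: str) -> str:
        if "weapon" in task:
            raise ContentFilterError("violence")
        return f"done: {task}"

def run_with_revision(agent: FakeAgent, task: str, max_retries: int = 2) -> str:
    """Catch the filter error outside the agent and feed it back as a new message."""
    for _ in range(max_retries + 1):
        try:
            return agent.run(task)
        except ContentFilterError as e:
            # Feed the violation back as a new instruction asking for a revision.
            # (A real revision would come from the model; here we just rewrite.)
            task = (f"The previous request was blocked by a content filter ({e}). "
                    f"Please revise it: {task.replace('weapon', 'tool')}")
    return "gave up after repeated content-filter blocks"
```

With the real API, the outer loop would wrap `team.run_stream(...)` and append the feedback as a new user message before re-running.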
What happened?
I have a scenario where, while processing documents (with the Azure OpenAI model as my model client), content violations are sometimes triggered. I would like to handle these content violations properly in my code instead of having autogen throw an exception and die ungracefully.
The error below occurs, and control returns to my code within the code block "if isinstance(message, TaskResult):" with a TaskResult that has no stop reason.
Error processing publish message for Planner/ae9eb67d-f3c2-48c4-9a81-c6d20811a69c
Traceback (most recent call last):
File "d:\Users\patri\OneDrive\Documents\WCB\git_ats\Examples\Autocoding_Blogwriter\venv\lib\site-packages\autogen_core\_single_threaded_agent_runtime.py", line 505, in _on_message
return await agent.on_message(
File "d:\Users\patri\OneDrive\Documents\WCB\git_ats\Examples\Autocoding_Blogwriter\venv\lib\site-packages\autogen_core\_base_agent.py", line 113, in on_message
return await self.on_message_impl(message, ctx)
File "d:\Users\patri\OneDrive\Documents\WCB\git_ats\Examples\Autocoding_Blogwriter\venv\lib\site-packages\autogen_agentchat\teams\_group_chat\_sequential_routed_agent.py", line 48, in on_message_impl
return await super().on_message_impl(message, ctx)
File "d:\Users\patri\OneDrive\Documents\WCB\git_ats\Examples\Autocoding_Blogwriter\venv\lib\site-packages\autogen_core\_routed_agent.py", line 485, in on_message_impl
return await h(self, message, ctx)
File "d:\Users\patri\OneDrive\Documents\WCB\git_ats\Examples\Autocoding_Blogwriter\venv\lib\site-packages\autogen_core\_routed_agent.py", line 268, in wrapper
return_value = await func(self, message, ctx) # type: ignore
File "d:\Users\patri\OneDrive\Documents\WCB\git_ats\Examples\Autocoding_Blogwriter\venv\lib\site-packages\autogen_agentchat\teams\_group_chat\_chat_agent_container.py", line 53, in handle_request
async for msg in self._agent.on_messages_stream(self._message_buffer, ctx.cancellation_token):
File "d:\Users\patri\OneDrive\Documents\WCB\git_ats\Examples\Autocoding_Blogwriter\venv\lib\site-packages\autogen_agentchat\agents\_assistant_agent.py", line 416, in on_messages_stream
model_result = await self._model_client.create(
File "d:\Users\patri\OneDrive\Documents\WCB\git_ats\Examples\Autocoding_Blogwriter\venv\lib\site-packages\autogen_ext\models\openai\_openai_client.py", line 534, in create
result: Union[ParsedChatCompletion[BaseModel], ChatCompletion] = await future
File "d:\Users\patri\OneDrive\Documents\WCB\git_ats\Examples\Autocoding_Blogwriter\venv\lib\site-packages\openai\resources\chat\completions\completions.py", line 1927, in create
return await self._post(
File "d:\Users\patri\OneDrive\Documents\WCB\git_ats\Examples\Autocoding_Blogwriter\venv\lib\site-packages\openai\_base_client.py", line 1856, in post
return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
File "d:\Users\patri\OneDrive\Documents\WCB\git_ats\Examples\Autocoding_Blogwriter\venv\lib\site-packages\openai\_base_client.py", line 1550, in request
return await self._request(
File "d:\Users\patri\OneDrive\Documents\WCB\git_ats\Examples\Autocoding_Blogwriter\venv\lib\site-packages\openai\_base_client.py", line 1651, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766", 'type': None, 'param': 'prompt', 'code': 'content_filter', 'status': 400, 'innererror': {'code': 'ResponsibleAIPolicyViolation', 'content_filter_result': {'hate': {'filtered': False, 'severity': 'safe'}, 'jailbreak': {'filtered': False, 'detected': False}, 'self_harm': {'filtered': True, 'severity': 'medium'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': True, 'severity': 'medium'}}}}}
system: Termination message = None
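For the enterprise-logging use case described below, the needed information is already in the 400 body: the `innererror.content_filter_result` map says which categories fired and at what severity. A sketch of extracting the triggered categories follows; the dict literal is copied from the error above, shown as a plain dict so the snippet is self-contained (the openai SDK exposes it via attributes such as `BadRequestError.body`, though the exact access path should be checked against the SDK version in use).

```python
# The 400 error body from the traceback above (message text abbreviated).
error_body = {
    "error": {
        "message": "The response was filtered due to the prompt triggering "
                   "Azure OpenAI's content management policy. ...",
        "code": "content_filter",
        "status": 400,
        "innererror": {
            "code": "ResponsibleAIPolicyViolation",
            "content_filter_result": {
                "hate": {"filtered": False, "severity": "safe"},
                "jailbreak": {"filtered": False, "detected": False},
                "self_harm": {"filtered": True, "severity": "medium"},
                "sexual": {"filtered": False, "severity": "safe"},
                "violence": {"filtered": True, "severity": "medium"},
            },
        },
    }
}

def triggered_categories(body: dict) -> dict:
    """Return {category: severity} for every filter category that fired."""
    result = (body.get("error", {})
                  .get("innererror", {})
                  .get("content_filter_result", {}))
    return {name: flags.get("severity", "n/a")
            for name, flags in result.items()
            if flags.get("filtered")}
```

On this payload, `triggered_categories(error_body)` reports the `self_harm` and `violence` categories at medium severity, which is exactly what an audit log entry would need.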
What did you expect to happen?
I'd really like to be able to deal with the exception and continue the conversation. That is, if the user asks for something inappropriate, like "Describe in detail how to create a dangerous weapon.", I want the chat to determine that a content violation occurred, explain to the user the type of violation (the type is included in the Azure OpenAI response), and allow them to correct it and continue the chat if possible.
I also want to log each time someone sets off the content violation policy, since we are an enterprise organization, so I need this info bubbled up rather than discarded somewhere inside the autogen framework.
If raising an exception would break the framework, at least add the content policy violation to the stop reason, or something along those lines.
How can we reproduce it (as minimally and precisely as possible)?
import asyncio
import os
from dotenv import load_dotenv
from autogen_ext.models.openai import AzureOpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import SelectorGroupChat
from autogen_agentchat.base import TaskResult
async def main():
    ...  # body omitted in the original report

asyncio.run(main())
AutoGen version
0.4.6
Which package was this bug in
Core
Model used
gpt-4o and gpt-4o-mini
Python version
3.10.11 and 3.11
Operating system
Windows 11
Any additional info you think would be helpful for fixing this bug
No response