aws[patch]: add guardrails support #82
Conversation
@supreetkt
@baskaryan
Thanks for making these changes. A few minor comments below.
@baskaryan
Looks good. We should go ahead and merge.
One thing to note: the usage in ChatBedrock is different — it uses `guardrails` to set this config. We might want to align it with the changes here by making `guardrail_config` the primary attribute there, with `guardrails` as an alias. But we can handle that in a separate PR. Thanks again for working on this.
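The alias idea above could be sketched roughly as follows. This is a hypothetical illustration (the class name `GuardrailAliasMixin` and its normalization logic are assumptions, not langchain-aws's actual implementation): accept both spellings at construction time and normalize onto `guardrail_config`.

```python
# Hypothetical sketch, not the actual langchain-aws code: accept both
# `guardrail_config` (primary) and `guardrails` (alias) so the two
# chat model classes could expose a consistent interface.
from typing import Any, Dict, Optional


class GuardrailAliasMixin:
    """Normalize the two spellings of the guardrail setting."""

    def __init__(self, **kwargs: Any) -> None:
        primary: Optional[Dict[str, Any]] = kwargs.pop("guardrail_config", None)
        # `guardrails` is treated as an alias; the primary name wins if both are given.
        alias: Optional[Dict[str, Any]] = kwargs.pop("guardrails", None)
        self.guardrail_config = primary if primary is not None else alias
        self.extra = kwargs


# Either keyword ends up on the same attribute.
model = GuardrailAliasMixin(
    guardrails={"guardrailIdentifier": "e7esbceow153", "guardrailVersion": "1"}
)
assert model.guardrail_config["guardrailVersion"] == "1"
```

In the real library this would more likely be done with a Pydantic field alias rather than manual `kwargs` handling, but the normalization behavior is the same.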
Hello team, if the entry point is ChatBedrock with the Converse API, what does that mean for the guardrails config? Will it work as is?
@rsgrewal-aws

```python
from langchain_aws import ChatBedrock
from langchain_core.messages import HumanMessage


def test_guardrails() -> None:
    params = {
        "region_name": "us-west-2",
        "model_id": "anthropic.claude-3-sonnet-20240229-v1:0",
        "guardrails": {
            "guardrailIdentifier": "e7esbceow153",
            "guardrailVersion": "1",
            "trace": "enabled",
        },
        "beta_use_converse_api": True,
    }
    chat_model = ChatBedrock(**params)  # type: ignore[arg-type]
    messages = [
        HumanMessage(
            content=[
                "Create a playlist of 2 heavy metal songs.",
                {
                    "guardContent": {
                        "text": {"text": "Only answer with a list of songs."}
                    }
                },
            ]
        )
    ]

    response = chat_model.invoke(messages)
    assert (
        response.content == "Sorry, I can't answer questions about heavy metal music."
    )
    assert response.response_metadata["stopReason"] == "guardrail_intervened"
    assert response.response_metadata["trace"] is not None

    stream = chat_model.stream(messages)
    response = next(stream)
    for chunk in stream:
        response += chunk
    assert (
        response.content[0]["text"]  # type: ignore[index]
        == "Sorry, I can't answer questions about heavy metal music."
    )
    assert response.response_metadata["stopReason"] == "guardrail_intervened"
    assert response.response_metadata["trace"] is not None
```
@baskaryan
Maybe this can be part of a separate PR, but I would advise adding docstrings to all util methods. Otherwise LGTM.
```python
"max_tokens": 100,
"stop": [],
"guardrail_config": {
    "guardrailIdentifier": "e7esbceow153",
```
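For context on the lines under review, here is a rough sketch of how a `guardrail_config` value might be folded into a Bedrock Converse request payload. The helper name `build_converse_request` is hypothetical (not the library's actual code); the `guardrailConfig` and `inferenceConfig` field names follow the Bedrock Converse API.

```python
# Hypothetical helper sketch: assemble keyword arguments for a
# bedrock-runtime converse() call, including guardrail settings
# only when they are configured.
from typing import Any, Dict, List, Optional


def build_converse_request(
    messages: List[Any],
    max_tokens: int,
    stop: List[str],
    guardrail_config: Optional[Dict[str, Any]] = None,
) -> Dict[str, Any]:
    request: Dict[str, Any] = {
        "messages": messages,
        "inferenceConfig": {"maxTokens": max_tokens, "stopSequences": stop},
    }
    if guardrail_config:
        # Passed through under the Converse API's guardrailConfig field.
        request["guardrailConfig"] = guardrail_config
    return request


req = build_converse_request(
    messages=[],
    max_tokens=100,
    stop=[],
    guardrail_config={
        "guardrailIdentifier": "e7esbceow153",
        "guardrailVersion": "1",
    },
)
assert req["guardrailConfig"]["guardrailVersion"] == "1"
```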
is this specific to a user or will it work with our creds as well?