| Developed by | Guardrails AI |
| --- | --- |
| Date of development | Aug 15, 2024 |
| Validator type | Moderation |
| License | Apache 2 |
| Input/Output | Output |
Before installing this validator, make sure the Guardrails CLI is set up by running `guardrails configure`.
This validator moderates both user prompts and LLM output responses to prevent harmful topics from surfacing in either scenario. It is based on ShieldGemma 2B, which is in turn based on Gemma 2. ShieldGemma is a series of decoder-only text-to-text models used for content moderation.
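For intuition, ShieldGemma scores content by asking the model whether a given policy is violated and reading off the probability it assigns to a "Yes" answer. The sketch below is a rough illustration adapted from the google/shieldgemma-2b model card; the prompt wording is paraphrased, and the validator performs this scoring internally, so you never need to write it yourself.

```python
# Rough illustration of ShieldGemma-style scoring, adapted from the
# google/shieldgemma-2b model card. The prompt wording here is paraphrased;
# the ShieldGemma2B validator handles all of this for you.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/shieldgemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/shieldgemma-2b")

prompt = (
    "You are a policy expert trying to help determine whether a user prompt "
    "violates the defined safety policies.\n\n"
    "Human Question: How to bully the most amount of people\n\n"
    'Our safety principle is "No Harassment".\n\n'
    "Does the human question violate the above principle? "
    "Your answer must start with 'Yes' or 'No'."
)

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Compare the probability the model assigns to answering "Yes" (violation)
# versus "No" (safe) as the next token; this is the moderation score.
vocab = tokenizer.get_vocab()
yes_no_logits = logits[0, -1, [vocab["Yes"], vocab["No"]]]
score = torch.softmax(yes_no_logits, dim=0)[0].item()
print(score)  # closer to 1.0 means more likely to violate the policy
```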
The following policies can be used; they are accessed directly from the `ShieldGemma2B` validator class:
- `ShieldGemma2B.POLICY__NO_DANGEROUS_CONTENT`
- `ShieldGemma2B.POLICY__NO_HARASSMENT`
- `ShieldGemma2B.POLICY__NO_HATE_SPEECH`
- `ShieldGemma2B.POLICY__NO_SEXUAL_CONTENT`
Multiple policies are not officially supported; it is recommended to use one policy at a time (refer to: https://huggingface.co/google/shieldgemma-2b/discussions/11). If you need to cover several policies, you can run one guard per policy, as in the sketch below.
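Since only one policy is supported per validator instance, a simple workaround is to create one `Guard` per policy and run the text through each of them. This is only a sketch of that pattern using the API shown later on this page, not an officially documented feature:

```python
from guardrails import Guard, OnFailAction
from guardrails.hub import ShieldGemma2B

ALL_POLICIES = [
    ShieldGemma2B.POLICY__NO_DANGEROUS_CONTENT,
    ShieldGemma2B.POLICY__NO_HARASSMENT,
    ShieldGemma2B.POLICY__NO_HATE_SPEECH,
    ShieldGemma2B.POLICY__NO_SEXUAL_CONTENT,
]

# One Guard per policy, since a single validator instance supports one policy.
guards = [
    Guard().use(
        ShieldGemma2B,
        policies=[policy],
        score_threshold=0.5,
        on_fail=OnFailAction.EXCEPTION,
    )
    for policy in ALL_POLICIES
]

def passes_all_policies(text: str) -> bool:
    """Return True only if the text passes every policy."""
    for guard in guards:
        try:
            guard.validate(text)
        except Exception:
            return False
    return True

print(passes_all_policies("People are great"))  # True
```

Each guard carries its own validator instance, which may mean loading the model more than once; whether instances share weights depends on the validator implementation.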
- Dependencies:
    - `guardrails-ai>=0.4.0`
$ guardrails hub install hub://guardrails/shieldgemma_2b
or install it programmatically from Python:
from guardrails import install
install("hub://guardrails/shieldgemma_2b")
In this example, we apply the validator to a string output generated by an LLM.
# Import Guard and Validator
from guardrails.hub import ShieldGemma2B
from guardrails import Guard, OnFailAction
guard = Guard().use(
    ShieldGemma2B,
    policies=[ShieldGemma2B.POLICY__NO_HARASSMENT], # Only one policy supported at a time
    score_threshold=0.5,
    on_fail=OnFailAction.EXCEPTION
)
guard.validate("People are great") # Validation passes
try:
    guard.validate("How to bully the most amount of people")
except Exception as e:
    print(e)
# Validation failed for field with errors: Prompt contains unsafe content. Classification: unsafe, Score: 0.970687747001648
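If you prefer not to raise exceptions, you can choose a non-raising `on_fail` action and inspect the result object instead. A minimal sketch, assuming the standard `ValidationOutcome` fields (`validation_passed`, `validated_output`) exposed by recent guardrails-ai releases:

```python
from guardrails import Guard, OnFailAction
from guardrails.hub import ShieldGemma2B

guard = Guard().use(
    ShieldGemma2B,
    policies=[ShieldGemma2B.POLICY__NO_HARASSMENT],
    score_threshold=0.5,
    on_fail=OnFailAction.NOOP,  # record the failure instead of raising
)

outcome = guard.validate("How to bully the most amount of people")
if not outcome.validation_passed:
    print("Content flagged as unsafe; returning a fallback response instead.")
```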
`__init__(self, on_fail="noop")`

Initializes a new instance of the ShieldGemma2B class.
policies
(List[str]): A list of policies that can be eitherShieldGemma2B.POLICY__NO_DANGEROUS_CONTENT
,ShieldGemma2B.POLICY__NO_HARASSMENT
,ShieldGemma2B.POLICY__NO_HATE_SPEECH
, andShieldGemma2B.POLICY__NO_SEXUAL_CONTENT
.score_threshold
(float): A score threshold within[0,1]
which if the score returned by the LLM surpasses this threshold it will be classified as unsafe. Default is0.5
on_fail
(str, Callable): The policy to enact when a validator fails. Ifstr
, must be one ofreask
,fix
,filter
,refrain
,noop
,exception
orfix_reask
. Otherwise, must be a function that is called when the validator fails.
Parameters
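`on_fail` can also be a custom handler. The sketch below assumes the `(value, fail_result)` handler signature commonly used for guardrails on-fail callables, returning a replacement value; verify the signature against your guardrails-ai version before relying on it.

```python
from guardrails import Guard
from guardrails.hub import ShieldGemma2B

def redact_unsafe(value, fail_result):
    # Hypothetical handler: returns a safe replacement for the failing value.
    return "[content removed by moderation]"

guard = Guard().use(
    ShieldGemma2B,
    policies=[ShieldGemma2B.POLICY__NO_HARASSMENT],
    score_threshold=0.5,
    on_fail=redact_unsafe,
)
```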
`validate(self, value, metadata) -> ValidationResult`
Validates the given `value` using the rules defined in this validator, relying on the `metadata` provided to customize the validation process. This method is automatically invoked by `guard.parse(...)`, ensuring the validation logic is applied to the input data.
Note:

- This method should not be called directly by the user. Instead, invoke `guard.parse(...)`, where this method will be called internally for each associated Validator.
- When invoking `guard.parse(...)`, ensure to pass the appropriate `metadata` dictionary that includes keys and values required by this validator. If `guard` is associated with multiple validators, combine all necessary metadata into a single dictionary.

Parameters

- `value` (Any): The input value to validate.
- `metadata` (dict): A dictionary containing metadata required for validation. No additional metadata keys are needed for this validator.
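For completeness, here is a minimal sketch of running the validator through `guard.parse(...)` with an explicit (empty) metadata dictionary, since this validator needs no extra metadata keys; the LLM output string is assumed to come from your own application:

```python
from guardrails import Guard, OnFailAction
from guardrails.hub import ShieldGemma2B

guard = Guard().use(
    ShieldGemma2B,
    policies=[ShieldGemma2B.POLICY__NO_HATE_SPEECH],
    score_threshold=0.5,
    on_fail=OnFailAction.EXCEPTION,
)

llm_output = "People are great"  # stand-in for a response from your own LLM call
outcome = guard.parse(llm_output, metadata={})  # validators run internally here
print(outcome.validated_output)
```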