
guardrails-ai/shieldgemma-2b


Overview

Developed by: Guardrails AI
Date of development: Aug 15, 2024
Validator type: Moderation
License: Apache 2
Input/Output: Output

Description

Intended Use

⚠️ This validator performs remote inference only, so remote inferencing must be enabled during guardrails configure.
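
Remote inferencing is enabled through the interactive guardrails configure command. A minimal sketch; the prompts are paraphrased here and may differ between versions:

$ guardrails configure
# When prompted, answer yes to remote inferencing and supply your
# Guardrails Hub API key.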

This validator moderates both user prompts and LLM output responses, preventing harmful topics from surfacing in either scenario. It is based on ShieldGemma 2B, which is in turn based on Gemma 2. ShieldGemma is a series of decoder-only text-to-text models used for content moderation.

The following policies are available; they are accessed directly as class attributes of the ShieldGemma2B validator:

  • ShieldGemma2B.POLICY__NO_DANGEROUS_CONTENT
  • ShieldGemma2B.POLICY__NO_HARASSMENT
  • ShieldGemma2B.POLICY__NO_HATE_SPEECH
  • ShieldGemma2B.POLICY__NO_SEXUAL_CONTENT

Multiple policies are not officially supported; it is recommended to use one policy at a time (one such pattern is sketched below). Refer to: https://huggingface.co/google/shieldgemma-2b/discussions/11
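
Because only one policy is officially supported per validator instance, one workaround is to run each policy in its own guard and combine the results. A minimal sketch, reusing the guard pattern from the usage example below; the check_policies helper is hypothetical and not part of the validator's API:

from guardrails import Guard, OnFailAction
from guardrails.hub import ShieldGemma2B

ALL_POLICIES = [
    ShieldGemma2B.POLICY__NO_DANGEROUS_CONTENT,
    ShieldGemma2B.POLICY__NO_HARASSMENT,
    ShieldGemma2B.POLICY__NO_HATE_SPEECH,
    ShieldGemma2B.POLICY__NO_SEXUAL_CONTENT,
]

def check_policies(text: str) -> list:
    """Hypothetical helper: return the policies that `text` violates."""
    violations = []
    for policy in ALL_POLICIES:
        guard = Guard().use(
            ShieldGemma2B,
            policies=[policy],  # one policy per guard, as recommended
            score_threshold=0.5,
            on_fail=OnFailAction.EXCEPTION,
        )
        try:
            guard.validate(text)
        except Exception:
            violations.append(policy)
    return violations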

Requirements

  • Dependencies:
    • guardrails-ai>=0.4.0

Installation

$ guardrails hub install hub://guardrails/shieldgemma_2b

or

from guardrails import install
install("hub://guardrails/shieldgemma_2b")

Usage Examples

Validating string output via Python

In this example, we apply the validator to a string output generated by an LLM.

# Import Guard and Validator
from guardrails.hub import ShieldGemma2B
from guardrails import Guard, OnFailAction

guard = Guard().use(
    ShieldGemma2B,
    policies=[ShieldGemma2B.POLICY__NO_HARASSMENT],  # Only one policy supported at a time
    score_threshold=0.5,
    on_fail=OnFailAction.EXCEPTION,
)

guard.validate("People are great")  # Validation passes

try:
    guard.validate("How to bully the most amount of people")
except Exception as e:
    print(e)

# Validation failed for field with errors: Prompt contains unsafe content. Classification: unsafe, Score: 0.970687747001648
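
To inspect the result instead of raising, the guard can be configured with OnFailAction.NOOP and the returned outcome checked directly. A minimal sketch, assuming guard.validate(...) returns a ValidationOutcome with a validation_passed flag:

from guardrails.hub import ShieldGemma2B
from guardrails import Guard, OnFailAction

guard = Guard().use(
    ShieldGemma2B,
    policies=[ShieldGemma2B.POLICY__NO_HARASSMENT],
    score_threshold=0.5,
    on_fail=OnFailAction.NOOP,  # record failures on the outcome instead of raising
)

outcome = guard.validate("How to bully the most amount of people")
print(outcome.validation_passed)  # False when the policy is violated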

API Reference

__init__(self, policies=None, score_threshold=0.5, on_fail="noop")

    Initializes a new instance of the ShieldGemma2B class.

    Parameters

    • policies (List[str]): A list of policies drawn from ShieldGemma2B.POLICY__NO_DANGEROUS_CONTENT, ShieldGemma2B.POLICY__NO_HARASSMENT, ShieldGemma2B.POLICY__NO_HATE_SPEECH, and ShieldGemma2B.POLICY__NO_SEXUAL_CONTENT. Only one policy at a time is officially supported.
    • score_threshold (float): A threshold within [0, 1]; if the score returned by the model exceeds it, the content is classified as unsafe. Default is 0.5.
    • on_fail (str, Callable): The policy to enact when a validator fails. If str, must be one of reask, fix, filter, refrain, noop, exception, or fix_reask. Otherwise, must be a function that is called when the validator fails.
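
For reference, a typical instantiation mirroring the parameters above (the string form of on_fail is shown here; OnFailAction values work as well):

from guardrails.hub import ShieldGemma2B

validator = ShieldGemma2B(
    policies=[ShieldGemma2B.POLICY__NO_SEXUAL_CONTENT],  # one policy at a time
    score_threshold=0.7,  # content is flagged only when the unsafe score exceeds 0.7
    on_fail="exception",
)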

validate(self, value, metadata) -> ValidationResult

    Validates the given `value` using the rules defined in this validator, relying on the `metadata` provided to customize the validation process. This method is automatically invoked by `guard.parse(...)`, ensuring the validation logic is applied to the input data.

    Note:

    1. This method should not be called directly by the user. Instead, invoke guard.parse(...) where this method will be called internally for each associated Validator.
    2. When invoking guard.parse(...), be sure to pass a metadata dictionary that includes the keys and values required by this validator. If the guard is associated with multiple validators, combine all necessary metadata into a single dictionary.

    Parameters

    • value (Any): The input value to validate.
    • metadata (dict): A dictionary containing metadata required for validation. No additional metadata keys are needed for this validator.
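
Since validate is invoked internally, a typical call site goes through guard.parse(...). A minimal sketch; this validator needs no extra metadata keys, so the metadata dictionary can be empty:

from guardrails.hub import ShieldGemma2B
from guardrails import Guard, OnFailAction

guard = Guard().use(
    ShieldGemma2B,
    policies=[ShieldGemma2B.POLICY__NO_HATE_SPEECH],
    on_fail=OnFailAction.EXCEPTION,
)

# validate() is called internally on the LLM output passed to parse()
guard.parse("Some LLM output to check", metadata={})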
