


Relevance-Evaluator

Overview

Score range: Integer [1-5], where 1 is the lowest quality and 5 is the highest quality.
What is this metric? Relevance measures how effectively a response addresses a query. It assesses the accuracy, completeness, and direct pertinence of the response based solely on the given information.
How does it work? The relevance metric is calculated by instructing a language model to follow the definition above and a set of grading rubrics, evaluate the user inputs, and output a score on a 5-point scale (higher means better quality). Learn more about our definition and grading rubrics. A usage sketch follows this overview.
When to use it? The recommended scenario is generative business writing such as summarizing meeting notes, creating marketing materials, and drafting email.
What does it need as input? Query, Response
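
As a minimal sketch of how this evaluator is typically invoked, the example below uses the azure-ai-evaluation Python SDK. The endpoint, key, and deployment values are placeholders, and the exact result keys can vary by SDK version, so treat this as an illustration rather than a definitive recipe.

```python
# Minimal sketch, assuming the azure-ai-evaluation Python package.
# All endpoint/key/deployment values below are placeholders.
from azure.ai.evaluation import RelevanceEvaluator

# Configuration for the judge model that grades each response.
model_config = {
    "azure_endpoint": "https://<your-resource>.openai.azure.com",
    "api_key": "<your-api-key>",
    "azure_deployment": "<your-model-deployment>",
}

relevance = RelevanceEvaluator(model_config)

# The evaluator takes the two required inputs: Query and Response.
result = relevance(
    query="What is the capital of France?",
    response="The capital of France is Paris.",
)

# The result is a dict containing the 1-5 relevance score,
# e.g. {"relevance": 5.0, ...} (key names may differ by version).
print(result)
```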

Version: 4

Tags

hiddenlayerscanned

View in Studio: https://ml.azure.com/registries/azureml/models/Relevance-Evaluator/version/4

Properties

is-promptflow: True

is-evaluator: True

show-artifact: True

_default-display-file: ./relevance.prompty
