Doc Improvement #1225
Comments
@SamyAteia I didn't want to deter users and make them feel like LLM evaluation is not for them because they don't have a labelled dataset.
True, another solution would be to simply fill in the test case in the custom metric example, so users wouldn't reuse the one from the earlier steps.
@SamyAteia We try to make it as easy as possible - if a user reuses the test case with a metric that requires the expected output, the error message will tell them that the test case needs an expected output: https://github.com/confident-ai/deepeval/blob/main/deepeval/metrics/utils.py#L177
I believe this function is not used when following the quickstart to implement the custom ROUGE metric.
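For context, here is a rough sketch of the kind of custom metric the quickstart's custom metric section describes. It reads `test_case.expected_output`, which is why a test case reused from the first quickstart step fails if that field was never set. This sketch uses the standalone `rouge_score` package rather than deepeval's own scorer helper, and the class, method, and parameter names are illustrative rather than taken verbatim from the docs.

```python
from deepeval.metrics import BaseMetric
from deepeval.test_case import LLMTestCase
from rouge_score import rouge_scorer  # stand-in for deepeval's own scorer helper


class RougeMetric(BaseMetric):
    """Illustrative custom metric comparing actual vs. expected output via ROUGE-1 F1."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold
        self._scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)

    def measure(self, test_case: LLMTestCase) -> float:
        # This line is the failure point: it requires expected_output on the test case.
        scores = self._scorer.score(test_case.expected_output, test_case.actual_output)
        self.score = scores["rouge1"].fmeasure
        self.success = self.score >= self.threshold
        return self.score

    async def a_measure(self, test_case: LLMTestCase) -> float:
        # Newer deepeval versions also expect an async variant; delegate to measure().
        return self.measure(test_case)

    def is_successful(self) -> bool:
        return self.success

    @property
    def __name__(self):
        return "ROUGE Metric"
```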
Maybe add the "expected_output" field to the first test case definition in the quickstart guide: https://docs.confident-ai.com/docs/getting-started#create-your-first-test-case
This would prevent errors when the user later runs the custom metric example code, which relies on this field, against the first test case instead of copying the test case shown (without text) in the custom metric example.
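For reference, a minimal sketch of what the suggested change to the first quickstart test case could look like, with `expected_output` added. The input/output strings and the metric mirror the public quickstart as of writing, but treat them as illustrative; only the added `expected_output` field is the point here.

```python
from deepeval import assert_test
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase


def test_answer_relevancy():
    test_case = LLMTestCase(
        input="What if these shoes don't fit?",
        actual_output="We offer a 30-day full refund at no extra cost.",
        # Adding expected_output up front lets this same test case be reused
        # later with metrics (e.g. the custom ROUGE metric) that require it.
        expected_output="You are eligible for a 30-day full refund at no extra cost.",
    )
    assert_test(test_case, [AnswerRelevancyMetric(threshold=0.5)])
```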