
Doc Improvement #1225

Open
SamyAteia opened this issue Dec 13, 2024 · 4 comments

Comments

@SamyAteia

Maybe add the "expected_output" field to the first test case definition in the quickstart guide: https://docs.confident-ai.com/docs/getting-started#create-your-first-test-case

This would prevent errors when the user later tries the custom metrics example code, which relies on this field, and reuses the first test case instead of copying the unfilled test case from the custom metrics example.
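
For illustration, a minimal sketch of what the quickstart test case could look like with `expected_output` filled in (field names follow deepeval's `LLMTestCase`; the example strings are placeholders, not the docs' exact wording):

```python
from deepeval.test_case import LLMTestCase

# Quickstart-style test case with expected_output already populated,
# so it also works with metrics that compare against a reference answer.
test_case = LLMTestCase(
    input="What if these shoes don't fit?",
    # Output produced by the LLM application under test.
    actual_output="We offer a 30-day full refund at no extra cost.",
    # Reference ("labelled") answer the output is compared against.
    expected_output="You are eligible for a 30 day full refund at no extra cost.",
)
```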

@penguine-ip (Contributor) commented Dec 17, 2024

@SamyAteia I didn't want to deter users or make them feel like LLM evaluation is not for them just because they don't have a labelled dataset.

@SamyAteia (Author)

True, another solution would be to just fill in the example test case for the custom metric so users wouldn't reuse the one from the earlier steps.
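
As a sketch of that second option, the custom metric section could ship with its own fully filled-in test case, roughly along the lines of the docs' ROUGE example (the `Scorer` usage and the strings below are an approximation, not the exact docs code):

```python
from deepeval.metrics import BaseMetric
from deepeval.scorer import Scorer
from deepeval.test_case import LLMTestCase

class RougeMetric(BaseMetric):
    """Custom metric comparing actual_output against expected_output via ROUGE-1."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold
        self.scorer = Scorer()

    def measure(self, test_case: LLMTestCase) -> float:
        self.score = self.scorer.rouge_score(
            prediction=test_case.actual_output,
            target=test_case.expected_output,
            score_type="rouge1",
        )
        self.success = self.score >= self.threshold
        return self.score

    async def a_measure(self, test_case: LLMTestCase) -> float:
        # Reuse the synchronous implementation for async evaluation.
        return self.measure(test_case)

    def is_successful(self) -> bool:
        return self.success

    @property
    def __name__(self):
        return "Rouge Metric"

# Self-contained test case (placeholder strings), so readers don't have to
# reuse the earlier quickstart test case, which may lack expected_output.
test_case = LLMTestCase(
    input="What if these shoes don't fit?",
    actual_output="We offer a 30-day full refund at no extra cost.",
    expected_output="You are eligible for a 30 day full refund at no extra cost.",
)

metric = RougeMetric(threshold=0.5)
print(metric.measure(test_case), metric.is_successful())
```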

@penguine-ip (Contributor)

@SamyAteia We try to make it as easy as possible. If a user reuses the test case with a metric that requires the expected output, the error message will tell them they need to set the expected output on the test case: https://github.com/confident-ai/deepeval/blob/main/deepeval/metrics/utils.py#L177
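
For readers of this thread, the linked check boils down to a guard like the following (a hedged sketch; `check_required_params` is a hypothetical stand-in for the actual helper in `deepeval/metrics/utils.py`):

```python
from deepeval.test_case import LLMTestCase, LLMTestCaseParams

def check_required_params(test_case: LLMTestCase, required, metric_name: str):
    # Hypothetical stand-in for the validation built-in metrics run:
    # collect any required test case fields that are still None...
    missing = [p.value for p in required if getattr(test_case, p.value) is None]
    if missing:
        # ...and fail with a message naming exactly what is missing.
        raise ValueError(
            f"{', '.join(missing)} cannot be None for the '{metric_name}' metric"
        )

# Example: a reference-based metric would require expected_output.
# check_required_params(
#     test_case,
#     [LLMTestCaseParams.INPUT,
#      LLMTestCaseParams.ACTUAL_OUTPUT,
#      LLMTestCaseParams.EXPECTED_OUTPUT],
#     "Rouge Metric",
# )
```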

@SamyAteia (Author)

I believe this function is not called when following the quickstart to implement the custom ROUGE metric.
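
If the docs keep the shared test case, one low-effort fix would be for the custom metric example to fail fast itself, e.g. (a sketch building on the hypothetical `RougeMetric` sketch above):

```python
from deepeval.test_case import LLMTestCase

class RougeMetricWithCheck(RougeMetric):
    def measure(self, test_case: LLMTestCase) -> float:
        # Fail fast with a clear message instead of surfacing a confusing
        # error from the ROUGE scorer when expected_output is missing.
        if test_case.expected_output is None:
            raise ValueError("expected_output cannot be None for the 'Rouge Metric'")
        return super().measure(test_case)
```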
