Clarification on how metrics are calculated #19
Hi @elephaint, the results for each model are standardized by the Seasonal Naive results, and then we take the geometric mean across datasets for each model. You can find the details in the [source code for the leaderboard](https://huggingface.co/spaces/Salesforce/GIFT-Eval/tree/main/src).
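To illustrate the aggregation described above, here is a minimal sketch in plain Python. The dataset names and score values are purely illustrative (not actual GIFT-Eval results); it only shows the shape of the computation: divide each per-dataset score by the Seasonal Naive score for that dataset, then take the geometric mean of the ratios.

```python
import math

# Hypothetical per-dataset error scores (e.g. MASE); values are made up.
model_scores = {"dataset_a": 0.8, "dataset_b": 1.2, "dataset_c": 0.9}
snaive_scores = {"dataset_a": 1.0, "dataset_b": 1.5, "dataset_c": 1.0}

# Standardize by the Seasonal Naive score on each dataset.
ratios = [model_scores[d] / snaive_scores[d] for d in model_scores]

# Geometric mean across datasets (via the log-mean-exp identity).
agg = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
print(round(agg, 4))
```

Note that under this scheme Seasonal Naive itself always aggregates to exactly 1.0, which is why it appears at 1.0 on the leaderboard.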
Ah, thanks, I completely overlooked that. Scrolling down further, it becomes obvious when you see Seasonal Naive at 1.0. Thanks! One tiny thing: I noticed that in your …
No worries, I am glad that clears up the confusion! In the notebook we actually use Naive, because the predictor is set to NaivePredictor; we use Seasonal Naive as the fallback model. The same notebook can easily be adapted for Seasonal Naive too. One would just need to create a … So the terms Naive and Seasonal Naive on the leaderboard and in the repository represent their respective models.
If I run the following:
the output is:
whereas the leaderboard here states:
Can you explain or detail how the leaderboard is calculated, or point me to where it is explained?