Evaluation bottlenecks #166
SWG Notes: Possible solutions (brainstorming; each has many pros/cons).
AI (Jacob): present thoughts from HPC on this next week.
Can someone provide an example of a benchmark where third-party code is used for serial evaluation and becomes a bottleneck? I've run into this issue with the translation and image-classification benchmarks, but haven't made it far enough in porting the other benchmarks to know which ones are most problematic.
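One mitigation sometimes discussed for this kind of serial bottleneck is to shard the evaluation set across worker processes, which works when the metric decomposes per example (top-1 accuracy for image classification does; COCO mAP does not decompose this simply). A minimal sketch, with `eval_shard` standing in for a call into the real third-party scorer (all names here are hypothetical, not from any benchmark's actual code):

```python
import multiprocessing as mp

def eval_shard(shard):
    # Score one shard of (prediction, label) pairs; in a real benchmark this
    # would call the third-party scorer on just these examples.
    return sum(1 for pred, label in shard if pred == label)

def parallel_accuracy(pairs, workers=4):
    # Split the example list into roughly equal shards, one per worker,
    # then sum the per-shard correct counts.
    shards = [pairs[i::workers] for i in range(workers)]
    with mp.Pool(workers) as pool:
        correct = sum(pool.map(eval_shard, shards))
    return correct / len(pairs)

if __name__ == "__main__":
    # Toy data: (prediction, label) for 8 examples.
    pairs = [(1, 1), (0, 1), (2, 2), (3, 3), (1, 0), (2, 2), (0, 0), (1, 1)]
    print(f"top-1 accuracy: {parallel_accuracy(pairs):.3f}")
```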
SWG Notes: We believe Mask R-CNN and SSD with COCO evaluation are at the top of the list.
SWG Notes: Long term we'd like a unified solution to this, but for v0.6 it will be up to submitters to optimize evaluation code themselves if they deem it necessary. We intend to revisit this issue in the future to reduce the effort submitters have to put into evaluation.
This is backlogged, not a recommendation.
Sometimes evaluation can be very time-consuming and is performed by third-party code. How can we reduce the influence of third-party evaluation code performance on benchmark scores, and the engineering burden it creates?
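One direction that could be explored (a sketch only, not an agreed design): run the third-party evaluation asynchronously in a background process so it overlaps the next training epoch instead of serializing with it. The names below (`train_one_epoch`, `evaluate_checkpoint`) are placeholders, not any benchmark's actual API:

```python
import concurrent.futures

def evaluate_checkpoint(predictions):
    # Stand-in for slow third-party evaluation (e.g. a single-threaded
    # COCO-style scorer); returns a placeholder accuracy here.
    return 0.0

def train_one_epoch(epoch):
    # Stand-in for one epoch of training; returns placeholder predictions.
    return [("image_id", "detection")]

def main():
    # A single background worker lets evaluation of epoch N run while
    # epoch N+1 trains, hiding evaluation latency from the critical path.
    with concurrent.futures.ProcessPoolExecutor(max_workers=1) as pool:
        pending = []
        for epoch in range(5):
            predictions = train_one_epoch(epoch)
            pending.append((epoch, pool.submit(evaluate_checkpoint, predictions)))
        for epoch, future in pending:
            print(f"epoch {epoch}: accuracy {future.result():.4f}")

if __name__ == "__main__":
    main()
```

Whether overlapped evaluation is permissible for scoring would depend on the benchmark rules, which this issue leaves open.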