
Evaluation bottlenecks #166

Open
bitfort opened this issue Jan 10, 2019 · 5 comments
Labels
Backlog An issue to be discussed in a future Working Group, but not the immediate next one.

Comments


bitfort commented Jan 10, 2019

Sometimes evaluation can be very time consuming and done by 3rd party code. How can we reduce the influence of 3rd party evaluation code performance on the benchmark scores and engineering burden?

@bitfort bitfort added the Next Meeting Item to be discussed in the next Working Group label Jan 10, 2019

bitfort commented Jan 17, 2019

SWG Notes:

Possible solutions (brainstorming; each has many pros/cons):

  • Don't time evaluation
  • Do fewer evaluations
  • Provide/choose optimized implementations for evaluation
  • Let submitters figure out how to handle it

AI(Jacob) - Present thoughts from HPC on this next week.
AI(all submitters) - This is a call for proposals :)
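One way to realize the "don't time evaluation" option above is to overlap evaluation with training, so slow third-party evaluation code never blocks the timed training loop. The sketch below is a minimal illustration, not anything from the MLPerf reference code: `slow_eval` and its 0.5 quality score are placeholders, and a real harness would likely use separate processes rather than threads, since CPU-bound evaluation (e.g. COCO mAP) does not overlap well under the GIL.

```python
import queue
import threading
import time

def slow_eval(checkpoint_id, results):
    # Stand-in for third-party evaluation code (e.g. COCO mAP computation).
    # The sleep is a placeholder for the real evaluation cost.
    time.sleep(0.05)
    results.put((checkpoint_id, 0.5))  # hypothetical quality score

def train_with_async_eval(num_epochs=3):
    """Launch evaluation in background workers so the timed training
    region is not blocked by evaluation; collect results afterwards."""
    results = queue.Queue()
    workers = []
    for epoch in range(num_epochs):
        # ... timed training work for this epoch would happen here ...
        t = threading.Thread(target=slow_eval, args=(epoch, results))
        t.start()
        workers.append(t)
    # Only after the timed region ends do we wait for evaluation results.
    for t in workers:
        t.join()
    return sorted(results.get() for _ in workers)

if __name__ == "__main__":
    print(train_with_async_eval())  # one (epoch, score) pair per epoch
```

The trade-off, noted in the discussion, is that the run can only be declared converged after the (untimed) evaluations finish, and rules would need to pin down exactly which checkpoint's result counts.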


jbalma commented Jan 24, 2019

Can someone provide an example of a benchmark where third-party code is used for serial evaluation and becomes a bottleneck?

I've run into this issue with the translation and image-classification benchmarks, but haven't made it far enough in porting the other benchmarks to know which ones are most problematic.

@bitfort bitfort added the AI There is an action item here. label Jan 24, 2019

bitfort commented Jan 24, 2019

SWG Notes:

We believe Mask R-CNN and SSD with COCO evaluation are at the top of the list.

@bitfort bitfort added Rec: Rules Change A recommendation has been issued by the Working Group. and removed AI There is an action item here. Next Meeting Item to be discussed in the next Working Group labels Apr 11, 2019

bitfort commented Apr 11, 2019

SWG Notes:

Long term we'd like a unified solution to this, but for v0.6 it will be up to submitters to optimize evaluation code themselves if they deem it necessary. We intend to revisit this issue in the future to reduce the effort submitters must put into evaluation.

@petermattson petermattson added Backlog An issue to be discussed in a future Working Group, but not the immediate next one. and removed Rec: Rules Change A recommendation has been issued by the Working Group. labels May 29, 2020
@petermattson
Contributor

This is backlogged, not a rec.
