Create Docker image with stable CK for CK+TF+MLPerf #97
We now have quite a few Docker images for TFLite, ArmNN-TFLite and TF-C++:
One of the issues with Docker images (however incredibly helpful they are in expanding our testing capability) is their sheer size, especially when their contents are not reused. Of course, on the same machine layers are likely to be reused, so that:
doesn't mean 9 GB of space, as:
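The effect of layer reuse can be made concrete with a quick back-of-the-envelope calculation. All the sizes below are hypothetical stand-ins, not the actual layer sizes of these images:

```shell
# Hypothetical layer sizes in MB: three images built on the same base.
base=1800                     # shared base layers (stored once on disk)
delta_tflite=300              # image-specific layers
delta_armnn=450
delta_tfcpp=700

# "docker images" reports each image's full size, counting the base 3 times.
reported=$(( (base + delta_tflite) + (base + delta_armnn) + (base + delta_tfcpp) ))
# The actual disk usage stores the shared base only once.
actual=$(( base + delta_tflite + delta_armnn + delta_tfcpp ))

echo "sum of reported sizes: ${reported} MB"   # 6850 MB
echo "actual disk usage:     ${actual} MB"     # 3250 MB
```

So the sum of the reported image sizes can overstate disk usage by roughly a factor of two when the base layers dominate.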
Still, it would be nice to consider creating "ensembles" of CK Docker images. Perhaps we could place CK components (libraries, datasets) on virtual drives, and seamlessly map them from inside Docker images?
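One way to realize that idea is to keep the bulky CK components on the host and bind-mount them into each container, so datasets and libraries are stored once. The paths, mount points, and image name below are hypothetical, just to sketch the shape of the command:

```shell
# Sketch: share CK repositories and datasets across containers via bind mounts.
# All paths and the image name are hypothetical placeholders.
CK_REPOS=/data/CK            # host directory with shared CK repositories
DATASETS=/data/datasets      # host directory with ImageNet, COCO, etc.

cmd="docker run --rm \
  -v ${CK_REPOS}:/home/ck/CK \
  -v ${DATASETS}:/home/ck/datasets \
  ctuning/ck-tflite:example"

# Print the command rather than running it, since this is only a sketch.
echo "${cmd}"
```

With named volumes (`docker volume create`) instead of host paths, the same trick works without depending on the host's directory layout.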
Yes. That's the problem with Docker. Should we upload all of them to the Docker Hub?
For example, for
On the other hand,
By rearranging the order of statements in the
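The general trick behind such reordering is to put the slow-changing, shareable steps first, so that `docker build` caches those layers and reuses them across images. A sketch of the idea follows; the base image and package choices are made up for illustration, not taken from the actual Dockerfiles:

```shell
# Generate a sketch Dockerfile: shared, rarely-changing steps first,
# image-specific steps last, so the common layers are cached and reused.
cat > Dockerfile.sketch <<'EOF'
FROM ubuntu:18.04
# 1. Shared system packages (identical across images -> shared layer).
RUN apt-get update && apt-get install -y git python3 wget
# 2. Shared CK installation and common repositories.
RUN pip3 install ck && ck pull repo:ck-env
# 3. Image-specific part goes last, so everything above stays shared.
RUN ck install package --tags=lib,tflite
EOF

head -n 1 Dockerfile.sketch
```

Any statement that differs between images invalidates the cache for everything after it, which is why moving the differing statements to the bottom shrinks the combined footprint.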
Here's the size of all things installed under
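This kind of measurement can be reproduced with `du`. The directory below is a dummy tree created on the fly so the sketch runs anywhere; substitute the real CK tools directory:

```shell
# Sketch: report the per-directory footprint of an installation tree.
# The path and contents are stand-ins, not the real CK installation.
TOOLS_DIR="${TOOLS_DIR:-/tmp/ck-tools-demo}"

# Create a tiny dummy tree so the sketch is runnable anywhere.
mkdir -p "${TOOLS_DIR}/boost" "${TOOLS_DIR}/tflite"
dd if=/dev/zero of="${TOOLS_DIR}/boost/blob" bs=1024 count=64 2>/dev/null

# Human-readable sizes, smallest first.
du -sh "${TOOLS_DIR}"/* | sort -h
```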
After another rearrangement (by essentially sharing
I was able to cut the source of Boost as follows:
The only niggle is an
(Similarly to the COCO originals and training annotations.)
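The actual commands are missing above, but a generic way to trim a source tree, not necessarily what was done here, is to delete documentation and test directories after extraction. The layout below is a stand-in for illustration:

```shell
# Sketch: prune a source tree after extraction.  The directory layout is
# hypothetical; adapt the names to the real Boost checkout.
SRC=/tmp/boost-demo
rm -rf "$SRC"
mkdir -p "$SRC/libs" "$SRC/doc" "$SRC/test" "$SRC/tools"
echo 'int main(){return 0;}' > "$SRC/libs/keep.cpp"

# Drop what the build does not need: documentation and tests.
rm -rf "$SRC/doc" "$SRC/test"

ls "$SRC"
```

The caveat with any such pruning is that a later build step may unexpectedly depend on one of the removed directories, so it is worth re-running the full build after cutting.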
As @bellycat77 has suggested a couple of times, we could cut some fat from
I'll also see if we can use the installed TFLite source as the TF source required by ArmNN.
One could cunningly try to do:
instead of:
Unfortunately, a detector bug produces duplicate entries for TF:
But this works:
I guess
@gfursin Yes, I guess you can upload them one by one! Ping me when you are done, so I can update the corresponding webpages.
Ok. I downloaded them all (I believe): https://hub.docker.com/u/ctuning !
You mean, uploaded? :)
TF-C++ is huge:
Oops, sorry. Yes, I pushed all of these images to docker/ctuning ...
I've provided descriptions for the following image classification images:
The following object-detection images need to be pushed:
under, respectively:
The massive reduction in the shared image size (2.962 GB to 1.839 GB) is thanks to noticing that the dependency on the TensorFlow Object Detection API was not in fact needed for these C++-based variants, and removing it from the SSD-MobileNet model and programs.
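For the record, the saving from dropping that dependency works out as follows (sizes taken from the figures quoted above):

```shell
# Size reduction from dropping the Object Detection API dependency:
# 2.962 GB -> 1.839 GB, as reported in the thread.
before=2962   # MB
after=1839    # MB
saved=$(( before - after ))
pct=$(( saved * 100 / before ))
echo "saved ${saved} MB (~${pct}%)"   # saved 1123 MB (~37%)
```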
I've also created a cool "dashboard" Docker image to benchmark the host machine on
On the screenshot above, you can see four points corresponding to benchmarking 4 models on a Xeon laptop:
As expected, quantized MobileNet is slightly less accurate than non-quantized MobileNet, and ResNet with and without ArgMax are nearly identical (within experimental error margins). Unexpectedly, quantized MobileNet is 3x slower than non-quantized MobileNet. (But we know that TFLite is not optimized for x86.) Incidentally, ResNet is also 3x slower than non-quantized MobileNet, while being only 1% more accurate (on the first 500 images of ImageNet 2012).
Let's create a Docker image with stable CK repositories for TensorFlow and our reference MLPerf workflows. It shouldn't be very difficult, and it would allow the community to use:
a) the latest CK workflows for the reference MLPerf implementation (which may sometimes fail in the latest environment);
b) a stable implementation which should always work, but may not use the latest frameworks and environments.
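If the two variants are published as separate tags, users could pick between them along these lines. The image name and tag names are hypothetical, since this issue doesn't fix them yet:

```shell
# Sketch: choose between a bleeding-edge and a pinned/stable image.
# Image name and tags are hypothetical placeholders.
variant="${1:-stable}"   # "latest" or "stable"
image="ctuning/ck-mlperf:${variant}"
cmd="docker pull ${image}"

# Print the command rather than running it, since this is only a sketch.
echo "${cmd}"
```

Keeping "stable" as the default would match point (b): something that should always work out of the box.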