Commit 6dc5066 (1 parent ab5073a)
SHUO-HUAI committed Dec 24, 2021
Showing 57 changed files with 23,515 additions and 1 deletion.
304 changes: 304 additions & 0 deletions .gitignore
@@ -0,0 +1,304 @@
# Created by .ignore support plugin (hsz.mobi)

### Python template

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover
.hypothesis/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# IPython Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# dotenv
.env

# virtualenv
venv/
ENV/

# Spyder project settings
.spyderproject

# Rope project settings
.ropeproject

### VirtualEnv template
# Virtualenv
# http://iamzed.com/2009/05/07/a-primer-on-virtualenv/
.Python
[Bb]in
[Ii]nclude
[Ll]ib
[Ll]ib64
[Ll]ocal
pyvenv.cfg
.venv
pip-selfcheck.json

### JetBrains template
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and Webstorm
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839

# User-specific stuff:
.idea/workspace.xml
.idea/tasks.xml
.idea/dictionaries
.idea/vcs.xml
.idea/jsLibraryMappings.xml

# Sensitive or high-churn files:
.idea/dataSources.ids
.idea/dataSources.xml
.idea/dataSources.local.xml
.idea/sqlDataSources.xml
.idea/dynamic.xml
.idea/uiDesigner.xml

# Gradle:
.idea/gradle.xml
.idea/libraries

# Mongo Explorer plugin:
.idea/mongoSettings.xml

.idea/

## File-based project format:
*.iws

## Plugin-specific files:

# IntelliJ
/out/

# mpeltonen/sbt-idea plugin
.idea_modules/

# JIRA plugin
atlassian-ide-plugin.xml

# Crashlytics plugin (for Android Studio and IntelliJ)
com_crashlytics_export_strings.xml
crashlytics.properties
crashlytics-build.properties
fabric.properties
89 changes: 88 additions & 1 deletion README.md
@@ -1 +1,88 @@
# EDLAB (Edge Deep Learning Accelerator Benchmark)

EDLAB is a benchmarking evaluation tool, developed by the team of Weichen Liu at Nanyang Technological University in collaboration with HP Inc., that automatically evaluates different edge deep learning platforms. So far, EDLAB supports the following devices:
- Desktop GPU
- Desktop CPU
- NVIDIA Jetson TX2
- Google Edge TPU
- Intel Neural Compute Stick 2

![EDLAB](docs/edlab.png "EDLAB")

## How to Use
Clone this repository onto your device (PC, TX2, TPU/NCS2 host, etc.).
```shell
git clone https://github.com/HPInc/EDLAB.git
```
Execute the *run.sh* script.
```shell
cd ./EDLAB
bash run.sh <devices_name> <model_name>
```
The list of device names can be found in *run.sh*, and the list of model names in [config.properties](config.properties). The left-hand term on the first line of each model's entry is the model's short name, which is fed to *run.sh*.

## Add a Model
You can add a model to the tool in three steps.

First, create the folder EDLAB/models/<model_name> and put the *frozen.pb* file into it.
```shell
cd ./EDLAB
mkdir ./models/<model_name>
mv PATH/TO/frozen.pb ./models/<model_name>
```
Second, add the model's information to [config.properties](config.properties).
```properties
<model_name>="classification/detection"
<model_name>_name="name_of_your_frozen.pb"
<model_name>_dataset="<dataset_name>"
# For classification:
<model_name>_preprocessing="inception/vgg"
<model_name>_labelsoffset="0/1"
```
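For example, a hypothetical MobileNet-style classifier (the model short name and file name here are placeholders, not entries that ship with the repository) might be registered as:

```properties
# Hypothetical entry -- model and file names are placeholders
mobilenet="classification"
mobilenet_name="mobilenet_v1_frozen.pb"
mobilenet_dataset="imagenet_test"
mobilenet_preprocessing="inception"
mobilenet_labelsoffset="1"
```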
Last, create the folder EDLAB/dataset/<dataset_name> and put all of the dataset's images, along with its ground-truth file, into it.
```shell
cd ./EDLAB
mkdir ./dataset/<dataset_name>
mv PATH/TO/ALLIMGS/* ./dataset/<dataset_name>
mv PATH/TO/GROUND_TRUTH ./dataset/<dataset_name>.gtruth
```
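The `preprocessing` and `labelsoffset` fields select how classification inputs and labels are mapped. The snippet below is a generic Python sketch of the standard inception/vgg recipes those names usually refer to, not EDLAB's own code:

```python
import numpy as np

def preprocess(image, mode):
    """Apply a common preprocessing recipe to an HxWx3 image with values in [0, 255]."""
    image = image.astype(np.float32)
    if mode == "inception":
        # Inception-style preprocessing scales pixels to [-1, 1].
        return image / 127.5 - 1.0
    if mode == "vgg":
        # VGG-style preprocessing subtracts the per-channel ImageNet means (RGB order).
        return image - np.array([123.68, 116.78, 103.94], dtype=np.float32)
    raise ValueError(f"unknown preprocessing mode: {mode}")

def to_model_label(dataset_label, labels_offset):
    # A labels offset of 1 means the model reserves output index 0 for
    # "background", so dataset class i maps to model output index i + 1;
    # an offset of 0 means labels and outputs align directly.
    return dataset_label + labels_offset
```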
## Results
After the command
```shell
bash run.sh <devices_name> <model_name>
```
completes, all results are saved in [result.csv](result.csv), where EDP denotes the energy-delay product and LEDP the loss-energy-delay product. Loss is the claimed accuracy minus the accuracy we measured, which captures the accuracy loss of each edge accelerator under the same original model.
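Based on those definitions, the two metrics can be sketched as follows (the units and the exact aggregation used by EDLAB's scripts are assumptions):

```python
def edp(energy_j, delay_s):
    """Energy-delay product: consumed energy (J) times inference delay (s)."""
    return energy_j * delay_s

def ledp(claimed_acc, measured_acc, energy_j, delay_s):
    """Loss-energy-delay product: accuracy loss times the EDP.

    Loss is the model's claimed accuracy minus the accuracy actually
    measured on the device, so a lossless accelerator scores 0.
    """
    return (claimed_acc - measured_acc) * edp(energy_j, delay_s)
```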

The models we provide come from [here for classification](https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models) and [here for detection](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models), where you can find the claimed accuracy of each model.


The **imagenet_test** dataset we provide consists of the first 5,000 images of [IMAGENET_2012](http://image-net.org/), and the **coco2014_test** dataset comes from the [COCO 14 minival set](https://github.com/tensorflow/models/blob/master/research/object_detection/data/mscoco_minival_ids.txt).

![Example results](docs/results.png "Example results")

## Project Information

Copyright (c) HP-NTU Digital Manufacturing Corporate Lab, Nanyang Technological University, Singapore.

If you use or adapt the tool in your work or publications, you are required to cite the following reference:
```bib
@article{kong2021edlab,
title={EDLAB: A Benchmark for Edge Deep Learning Accelerators},
author={Kong, Hao and Huai, Shuo and Liu, Di and Zhang, Lei and Chen, Hui and Zhu, Shien and Li, Shiqing and Liu, Weichen and Rastogi, Manu and Subramaniam, Ravi and Athreya, Madhu and Lewis, M. Anthony},
journal={IEEE Design \& Test},
year={2021},
publisher={IEEE}
}
```
**Contributors:**
Hao Kong, Shuo Huai, Di Liu, Lei Zhang, Hui Chen, Shien Zhu, Shiqing Li, Weichen Liu, Manu Rastogi (HP), Ravi Subramaniam (HP), Madhu Athreya (HP), M. Anthony Lewis (HP).

If you have any comments, questions, or suggestions, please create an issue on GitHub or contact us via email.

Hao Kong <kong [DOT] hao [AT] ntu [DOT] edu [DOT] sg>

<!-- Updated by KongHao -->

**This tool is a contribution of the HP-NTU Corp Lab. It has two public mirror repositories: [HP Inc.](https://github.com/HPInc/EDLAB) and [ntuliuteam](https://github.com/ntuliuteam/EDLAB).**