System Environment:
System: Ubuntu 18.04
OpenCV: 3.2
TensorFlow: 1.13.1
Instructions:
- Write label.pbtxt under models/research/object_detection/mobilenet_ssd_v2_train/dataset/. The format looks like this:
item {
  id: 1
  name: 'person'
}
item {
  id: 2
  name: 'car'
}
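The label map can also be generated from an ordered class list instead of being written by hand; a minimal Python sketch (the class names here are just the examples above, substitute your own):

```python
# Generate a label.pbtxt from an ordered list of class names.
# Class names below match the example above; replace with yours.
classes = ["person", "car"]

def make_label_map(names):
    # TF Object Detection API label maps are 1-indexed.
    items = []
    for idx, name in enumerate(names, start=1):
        items.append("item {\n  id: %d\n  name: '%s'\n}" % (idx, name))
    return "\n".join(items) + "\n"

with open("label.pbtxt", "w") as f:
    f.write(make_label_map(classes))
```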
- Convert your images and annotations to a TFRecord with the dataset tools in the models repo.
Commands:
cd models/research/
export PYTHONPATH=$PYTHONPATH:/path_to/models/research:/path_to/models/research/slim
protoc object_detection/protos/*.proto --python_out=.
python object_detection/dataset_tools/create_coco_tf_record.py --image_dir=/path_to/img/ --ann_dir=/path_to/ann/ --output_path=/path_to/train.record --label_map_path=/path_to/demo/label.pbtxt
- Download the 'mobilenet ssd v2 quantized' model from the model zoo and place it in models/research/object_detection/mobilenet_ssd_v2_train/pretrained_models/, replacing the original (which is also 'mobilenet ssd v2 quantized').
- Modify the data paths in pipeline.config. They should point to the TFRecord you generated:
train
  input_path: "/path_to/train.record"
test
  input_path: "/path_to/train.record"
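In pipeline.config these paths live inside the input reader blocks; a sketch of the relevant sections (paths are placeholders, adjust to your layout):

```
train_input_reader {
  label_map_path: "/path_to/demo/label.pbtxt"
  tf_record_input_reader {
    input_path: "/path_to/train.record"
  }
}
eval_input_reader {
  label_map_path: "/path_to/demo/label.pbtxt"
  tf_record_input_reader {
    input_path: "/path_to/train.record"
  }
}
```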
- Training
cd models/research/
export PYTHONPATH=$PYTHONPATH:/path_to/models/research:/path_to/models/research/slim
protoc object_detection/protos/*.proto --python_out=.
python object_detection/legacy/train.py --train_dir=/path_to/models/research/object_detection/mobilenet_ssd_v2_train/CP/ --pipeline_config_path=/path_to/models/research/object_detection/mobilenet_ssd_v2_train/pipeline.config
- Use TensorBoard to visualize the training process:
tensorboard --logdir=/path_to/mobilenet_ssd_v2_train/CP
- Export your model to a frozen graph, so you can check the results with demo.py.
python object_detection/export_inference_graph.py --input_type=image_tensor --pipeline_config_path=/path_to/pipeline.config --trained_checkpoint_prefix=/path_to/mobilenet_ssd_v2_train/CP/model.ckpt-xxxxxx --output_directory=/path_to/mobilenet_ssd_v2_train/IG/
- Add label_map.json.
{
  "1": "person",
  "2": "car"
}
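This file should stay consistent with label.pbtxt; a small Python sketch that builds it from the same 1-indexed class list (class names are the examples above):

```python
import json

# Build label_map.json from the same ordered class list used in label.pbtxt.
classes = ["person", "car"]
label_map = {str(i): name for i, name in enumerate(classes, start=1)}

with open("label_map.json", "w") as f:
    json.dump(label_map, f, indent=2)
```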
- Run demo.py.
python demo.py PATH_TO_FROZEN_GRAPH cam_dir js_file
- Convert the frozen graph to TFLite. Use export_tflite_ssd_graph.py to generate tflite_graph.pb:
python object_detection/export_tflite_ssd_graph.py --pipeline_config_path=/path_to/models/research/object_detection/mobilenet_ssd_v2_train/IG/pipeline.config --trained_checkpoint_prefix=/path_to/models/research/object_detection/mobilenet_ssd_v2_train/IG/model.ckpt --output_directory=/path_to/models/research/object_detection/mobilenet_ssd_v2_train/tflite --add_postprocessing_op=true
- Convert tflite_graph.pb to model.tflite:
tflite_convert --output_file=/path_to/models/research/object_detection/mobilenet_ssd_v2_train/tflite/model.tflite --graph_def_file=/path_to/models/research/object_detection/mobilenet_ssd_v2_train/tflite/tflite_graph.pb --input_arrays=normalized_input_image_tensor --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' --input_shape=1,300,300,3 --allow_custom_ops --output_format=TFLITE --inference_type=QUANTIZED_UINT8 --mean_values=128 --std_dev_values=127
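The --mean_values=128 and --std_dev_values=127 flags define how uint8 input pixels map back to real values, real = (quantized - mean) / std; a quick sanity check of that mapping:

```python
# Dequantization rule for QUANTIZED_UINT8 inputs:
#   real_value = (quantized_value - mean) / std
mean, std = 128.0, 127.0

def dequantize(q):
    return (q - mean) / std

# uint8 range 0..255 maps to roughly [-1.0, 1.0]:
print(dequantize(0))    # about -1.008
print(dequantize(128))  # 0.0
print(dequantize(255))  # 1.0
```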
- Modify line 89 of demo.cpp:
tflite::FlatBufferModel::BuildFromFile("../model.tflite");
- Update labelmap.txt with your annotation classes if you fine-tuned the model.
- Run demo.cpp on x86 Ubuntu; make sure OpenCV and Bazel are installed.
- Build libtensorflowlite.so under the tensorflow source directory:
bazel build -c opt //tensorflow/lite:libtensorflowlite.so --fat_apk_cpu=arm64-v8a
- Move .so to tensorflow_object_detection_tflite/lib
- Change find_library(TFLITE_LIBRARY tensorflow-lite "lib") to find_library(TFLITE_LIBRARY tensorflowlite "lib") in CMakeLists.txt.
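For reference, a minimal CMakeLists.txt sketch consistent with the x86 setup above (target and directory names assume the tensorflow_object_detection_tflite layout; adjust to your repo):

```cmake
cmake_minimum_required(VERSION 3.5)
project(demo)

find_package(OpenCV REQUIRED)

# TFLite headers copied into include/, libtensorflowlite.so placed in lib/
include_directories(include ${OpenCV_INCLUDE_DIRS})

# x86 build links the shared library built with bazel:
find_library(TFLITE_LIBRARY tensorflowlite "lib")

add_executable(demo demo.cpp)
target_link_libraries(demo ${TFLITE_LIBRARY} ${OpenCV_LIBS})
```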
- Build with CMake and run:
mkdir build
cd build
cmake ..
make -j
./demo
- Run demo.cpp on arm64-v8a Ubuntu.
- Install OpenCV on your arm64 board.
- Build libtensorflow-lite.a by following the TensorFlow tutorial at https://www.tensorflow.org/lite/guide/build_arm64. Be careful about the ARM version, v7 vs v8.
- Move .a to tensorflow_object_detection_tflite/lib
- Keep find_library(TFLITE_LIBRARY tensorflow-lite "lib") unchanged.
- Build with CMake and run:
mkdir build
cd build
cmake ..
make -j
./demo
- If you hit a flatbuffers error, build flatbuffers on your desktop and copy its header files and .a library into tensorflow_object_detection_tflite/include and tensorflow_object_detection_tflite/lib, respectively, replacing the existing ones. See google/flatbuffers#5569 (comment) for build instructions.
- Result image