Based on a demo app by GoogleCloudPlatform, this project uses TensorFlow with pre-trained models to implement a dockerized general object detection service.
See also the Google Solution.
The default version (non-GPU):
docker build -t object-detection-app:latest .
docker run --rm -p 8000:8000 object-detection-app:latest
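To verify the container is up and the port is published (a generic Docker check, not specific to this app):
# list containers created from the image built above
docker ps --filter ancestor=object-detection-app:latest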
The GPU version (requires NVIDIA-Docker):
docker build -t object-detection-app:gpu .
docker run --runtime=nvidia --rm -p 8000:8000 object-detection-app:gpu
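To confirm the container can actually see the GPU, one option is to run nvidia-smi inside it (this assumes the GPU image is built on a CUDA base image that ships nvidia-smi):
docker run --runtime=nvidia --rm object-detection-app:gpu nvidia-smi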
Once the container is up and running, access the app on localhost:8000 (replace localhost with the Docker Machine IP, if using Docker Machine).
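If you are using Docker Machine, the IP can be looked up with (assuming your machine is named default):
docker-machine ip default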
Wait for output similar to the following lines:
2017-12-18 18:04:07.558019: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
* Running on http://0.0.0.0:8000/ (Press CTRL+C to quit)
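From a second terminal, a quick sanity check that the server answers (any HTTP client works; hitting the root path is an assumption):
curl -i http://localhost:8000/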
To run pre-built images from Docker Hub:
docker run --rm -p 8000:8000 itamarost/object-detection-app:1.0-py3
# or, using nvidia-docker
docker run --runtime=nvidia --rm -p 8000:8000 itamarost/object-detection-app:1.0-py3-gpu
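The images can also be pulled ahead of time, e.g.:
docker pull itamarost/object-detection-app:1.0-py3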
To run the app on Kubernetes (assuming a configured kubectl):
kubectl apply -f k8s-deploy.yaml
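To watch the rollout and find the fronting service (the resource names depend on what k8s-deploy.yaml defines):
kubectl get pods
kubectl get svc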
To utilize a GPU on Kubernetes clusters with available NVIDIA GPU cards (GPU support is alpha at the moment and may break due to Kubernetes API changes):
kubectl apply -f k8s-deploy-gpu.yaml
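To check whether the cluster nodes actually advertise NVIDIA GPUs, inspect the node descriptions (the exact GPU resource name varies across these alpha-era Kubernetes versions):
kubectl describe nodes | grep -i nvidia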
Feel free to tailor the YAML to your needs (deployed image, fronting service type, namespace, etc.).
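The contents of k8s-deploy.yaml are not reproduced here; the following is a minimal sketch of the fields you would typically tailor, with placeholder names and values:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: object-detection-app   # hypothetical name
  namespace: default           # change to your namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: object-detection-app
  template:
    metadata:
      labels:
        app: object-detection-app
    spec:
      containers:
      - name: app
        image: itamarost/object-detection-app:1.0-py3   # swap in your image/tag
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: object-detection-app
  namespace: default
spec:
  type: LoadBalancer   # or NodePort / ClusterIP, per your environment
  selector:
    app: object-detection-app
  ports:
  - port: 8000
    targetPort: 8000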