This repository contains pipelines to obtain object detections on TfL JamCam feeds. Running these pipelines accomplishes the following:
- Download TfL videos and store their metadata in a MongoDB database
- Generate object detections and store the detection data in a MongoDB database
- Serve this data through a REST API

The data generated may be used to get some cool insights from London's traffic cameras! Here is a plot showing the number of cars in London over 3 days.
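As a hedged illustration of the kind of insight the detection data enables, here is a minimal sketch that counts detections per object class. The document shapes and field names (`camera_id`, `label`, `confidence`) are assumptions for illustration, not the repo's actual schema:

```python
from collections import Counter

# Hypothetical detection documents, shaped like records the pipeline
# might store in MongoDB (field names are assumed, not confirmed)
detections = [
    {"camera_id": "JamCams_00001.01251", "label": "car", "confidence": 0.87},
    {"camera_id": "JamCams_00001.01251", "label": "car", "confidence": 0.91},
    {"camera_id": "JamCams_00002.00865", "label": "bus", "confidence": 0.76},
]

# Tally detections per class; grouping these counts by timestamp would
# yield a time series like the cars-over-3-days plot
counts = Counter(d["label"] for d in detections)
print(counts["car"])  # 2
```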
Requirements:
- A Linux distribution - any which supports NVIDIA Docker. See https://github.com/NVIDIA/nvidia-docker
- An NVIDIA GPU

Install:
- Docker version 19.03
- The NVIDIA container toolkit. Follow the instructions here: https://github.com/NVIDIA/nvidia-docker
Enable execute permissions on all files in the shell scripts directory:

```
chmod -R +x shell\ scripts/
```

Build the pipeline containers:

```
./shell\ scripts/build_containers.sh
```
In some cases, your GPU will need a different darknet image. If it has Tensor cores, change the image used in the detect scripts to `ganhuanmin/gpudarknet:tensor_core`. You can verify that you are using the right image by viewing the log output of the running `gpu_darknet` container. If no output is being given, swap images.
mp4 files, each named after its checksum, are stored in the `~/TfL_videos` directory on the host. Metadata and detections are stored in MongoDB.
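Since each mp4 is named after its checksum, the filename can be derived from the file's bytes. A minimal sketch, assuming MD5 is the checksum in use (the repo may use a different hash):

```python
import hashlib


def video_filename(data: bytes) -> str:
    """Name a video file after the checksum of its contents.

    Assumption: MD5 is the checksum algorithm; swap in hashlib.sha256
    or similar if the repo uses a different hash.
    """
    return hashlib.md5(data).hexdigest() + ".mp4"


print(video_filename(b"example video bytes"))
```

Naming by checksum makes stored videos content-addressable, so re-downloading an identical feed cannot create a duplicate file.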
- Set up the MongoDB database to run locally on the default port 27017
- Run a simple file server making the videos available at http://0.0.0.0:8000/
  - cd to the `/` directory
  - Run `python3 -m http.server 8000`
- Set up the REST API on the local machine:
  - `cd` into detectionAPI
  - Create a Python virtual environment: `virtualenv -p python3 env`
  - Start the virtual environment: `source env/bin/activate`
  - Install the requirements: `pip install -r requirements.txt`
  - Execute `python main.py` to start the API
- Run the shell script which continually pings the endpoint to update the cached object: `cd` into the shell scripts directory and run `scheduled_job_on_local_API.sh`
- Run the fill_video_store_local.sh script: `./shell\ scripts/deploy\ local\ scripts/fill_video_store_local.sh`. This queries the TfL API, downloads the videos, and creates their metadata.
- Run `detect_on_JamCam_detections_API_local.sh`. Note this DOES NOT detect every video in the store, but rather processes videos every 20 or so minutes. It is limited by detector speed and cannot ping the internal API quickly enough to get all videos. If darknet is not outputting any detections (you can check with `docker logs -f gpu_darknet`), try using the other gpudarknet image.
- Run `fill_detect_once.sh` to get the freshest feeds from TfL and perform detections on them
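Because the detector cannot keep up with every video in the store, it effectively works on the freshest feed per camera. A hedged sketch of selecting the latest video per camera from metadata records (the field names are assumptions, not the repo's actual schema):

```python
from datetime import datetime

# Hypothetical video metadata records (field names assumed for illustration)
videos = [
    {"camera": "JamCams_00001", "timestamp": "2020-01-01T10:00:00"},
    {"camera": "JamCams_00001", "timestamp": "2020-01-01T10:20:00"},
    {"camera": "JamCams_00002", "timestamp": "2020-01-01T10:05:00"},
]

# Keep only the most recent record for each camera
latest = {}
for v in videos:
    t = datetime.fromisoformat(v["timestamp"])
    current = latest.get(v["camera"])
    if current is None or t > datetime.fromisoformat(current["timestamp"]):
        latest[v["camera"]] = v

print(latest["JamCams_00001"]["timestamp"])  # 2020-01-01T10:20:00
```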
mp4 files, each named after its checksum, are stored in the `tfl-mp4-videos` bucket in GCP. Metadata and detections are stored in MongoDB.
- Set up the MongoDB database and get its connection URL
- Change the Mongo connection URL in detectionAPI/config.py
- Set up the REST API from the local machine:
  - `cd detectionAPI`
  - Set the Google Cloud project to deploy to its App Engine: `gcloud config set project jamcam-detections-api`
  - Deploy the API (this takes several minutes): `gcloud app deploy`
  - Deploy the scheduled cron job to update the cached response object: `gcloud app deploy cron.yaml`
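The contents of detectionAPI/config.py are not shown in this README; a hedged sketch of the kind of setting it presumably holds. The variable name and URL shape are assumptions, so check the actual file before editing:

```python
# Hypothetical shape of detectionAPI/config.py; the actual variable name
# and connection-string format in the repo may differ
MONGO_URL = "mongodb+srv://user:password@cluster0.example.mongodb.net/jamcam"
```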
- Run the fill_video_store_bucket.sh script: `./shell\ scripts/deploy\ cloud\ scripts/fill_video_store_bucket.sh`. This queries the TfL API, downloads the videos, and creates their metadata.
- Run `detect_on_JamCam_detections_API_bucket.sh`. Note this DOES NOT detect every video in the store, but rather processes videos every 20 or so minutes. It is limited by detector speed and cannot ping the internal API quickly enough to get all videos. If darknet is not outputting any detections (you can check with `docker logs -f gpu_darknet`), try using the other gpudarknet image.
- Run `fill_detect_once.sh` to get the freshest feeds from TfL and perform detections on them
To generate the jpg detections of all images within a directory, run:

```
docker run --rm -d --name debug -it --mount type=bind,source="$(pwd)"/debug_pictures,target=/debug_pictures --entrypoint "/bin/bash" output_detected_video
docker cp shell\ scripts/Scripts\ for\ debugging/improving_darknet.sh debug:/darknet/
docker attach debug
./improving_darknet.sh
```
To generate detections of all the videos in a database, run `detect_all_videos.sh`.

To run the debug mp4 pipeline, run `create_debug_mp4s.sh`. Exit the container when finished and the results will be in subdirectories of debug_pictures.
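The debug runs leave their output as files under subdirectories of debug_pictures. A small sketch for summarising those results by counting jpgs per subdirectory (the directory layout is assumed from the description above):

```python
import os
import tempfile


def count_jpgs(root: str) -> dict:
    """Count .jpg files in each immediate subdirectory of root.

    Assumes the debug pipeline's layout: one subdirectory per run/video
    under debug_pictures, each containing detected jpg frames.
    """
    counts = {}
    for sub in sorted(os.listdir(root)):
        path = os.path.join(root, sub)
        if os.path.isdir(path):
            counts[sub] = sum(f.endswith(".jpg") for f in os.listdir(path))
    return counts


# Demonstrate on a throwaway directory standing in for debug_pictures
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "cam1"))
    open(os.path.join(root, "cam1", "frame1.jpg"), "w").close()
    open(os.path.join(root, "cam1", "frame2.jpg"), "w").close()
    print(count_jpgs(root))  # {'cam1': 2}
```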
- `download_video_to_volume` - Download a JamCam video to the specified volume
- `gpu_darknet` - YOLO darknet container for accelerated GPU computation
- `gpu_darknet_tensorcore` - YOLO darknet container for accelerated GPU computation on GPUs with Tensor cores
- `input_from_API` - Send input to the darknet container from the deployed detectionAPI
- `input_from_db` - Send input to the darknet container from the database
- `output_detected_video` - Create a detected version of a video
- `parse_dn_logs` - Receive input from the STDOUT of the darknet container and send detections to the API
- `parse_dn_db` - Receive input from the STDOUT of the darknet container and send detections to the database
- `store_videos` - Sole container for the pipeline to download videos and store them in a database
- `upload_video` - Upload the videos from the target directory
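The `parse_dn_*` containers read darknet's STDOUT and turn it into detection records. Darknet's console output lists detections as `label: NN%` lines; a hedged sketch of parsing one such line (the exact output format, and what the repo's parsers actually do with it, may differ):

```python
import re

# Darknet prints detections to STDOUT as lines like "car: 87%"
# (format assumed from common YOLO/darknet console output)
DETECTION_LINE = re.compile(r"^([\w ]+): (\d+)%$")


def parse_detection(line: str):
    """Return a detection dict for a darknet output line, or None."""
    m = DETECTION_LINE.match(line.strip())
    if m is None:
        return None  # timing info, weight loading messages, etc.
    return {"label": m.group(1), "confidence": int(m.group(2)) / 100}


print(parse_detection("car: 87%"))  # {'label': 'car', 'confidence': 0.87}
```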
Uses public sector information licensed under the Open Government Licence v3.0.