This is a two-stage vehicle orientation detection algorithm:
- a camera calibration stage based on vanishing point detection
- a vehicle orientation detection stage based on keypoint detection and perspective transformation
All required Python packages are listed in `requirement.txt`:
- torch 1.10.2 + CUDA 10.2
- python 3.7
- TensorRT 8.2
The program entry point is `app.py`.
All parameters:
- `source` (required): the video path
- `engine` (required): the path of the yolov5 TensorRT engine
- `classes` (optional): a JSON file containing all detection targets
- `roi` (optional): whether to select an ROI of the scenario; the default value is False
- `caliFlag` (required): True runs the calibration stage, False runs the orientation detection stage
- `calibration` (required when caliFlag is False): the calibration file used in the second stage
- `threshold` (optional): the threshold for filtering vehicle edgelets; the default value is 0.5
- `visualize` (optional): whether to visualize the process of extracting vehicle edgelets; the default value is False
- `savePath` (required when caliFlag is True): the path where the calibration file is saved in the calibration stage
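To illustrate how these options fit together, the parameter list above could be wired up with `argparse` roughly as follows. This is a hypothetical sketch, not the actual `app.py`: the argument names follow the list, but the real script's types and defaults may differ.

```python
import argparse

def build_parser():
    # Hypothetical CLI mirroring the parameter list; the real app.py may differ.
    p = argparse.ArgumentParser(description="Two-stage vehicle orientation detection")
    p.add_argument("--source", required=True, help="video path")
    p.add_argument("--engine", required=True, help="path of the yolov5 TensorRT engine")
    p.add_argument("--classes", default=None, help="JSON file listing detection targets")
    p.add_argument("--roi", action="store_true", help="select an ROI of the scenario")
    p.add_argument("--caliFlag", type=lambda s: s == "True", required=True,
                   help="True: calibration stage, False: orientation detection stage")
    p.add_argument("--calibration", default=None, help="calibration file (second stage)")
    p.add_argument("--threshold", type=float, default=0.5,
                   help="threshold for filtering vehicle edgelets")
    p.add_argument("--visualize", action="store_true", help="visualize edgelet extraction")
    p.add_argument("--savePath", default=None, help="where to save the calibration file")
    return p

# Example: calibration-stage invocation parsed from the command line.
args = build_parser().parse_args(
    ["--source", "demo.mp4", "--engine", "yolov5s.engine",
     "--caliFlag", "True", "--savePath", "calib.json"]
)
print(args.caliFlag, args.threshold)  # → True 0.5
```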
Camera calibration stage:

    python app.py --source ${video path} --engine ${tensorRT engine} --caliFlag True --savePath ${calibration savePath}

Orientation detection stage:

    python app.py --source ${video path} --engine ${tensorRT engine} --caliFlag False --calibration ${calibration path}
Project structure:
- `detection_model`: the main modules of vehicle orientation detection (calibration and detection)
  - `calibration_yolo_model.py`: the entire workflow model based on the yolov5 detector
  - `calibration_ssd_model.py`: the entire workflow model based on the SSD detector
  - `diamondSpace.py`: implements the transformation from the Cartesian coordinate system to Diamond Space
  - `edgelets.py`: functions for detecting vehicle edges
- `SSD`: SSD detectors and inference API
- `yolov5`: yolov5 detectors optimized by TensorRT, and their inference API
- `weights`: yolov5 ONNX model and TensorRT engine
- `results`: experiment result data
- `IPM`: bird-view transformation and the evaluation of the calibration stage
- `test`: test samples and scripts for the test experiments
- `dataset`: experimental dataset folder
- `image`: images and tables generated during the experiments
- `app.py`: the entry point of the entire program
Dataset:
- roadside surveillance video captured at the Guangzhou East Campus of Sun Yat-sen University
- contains four kinds of data captured with different focal lengths and camera positions
- dataset structure:
  - `calibrate`: folder containing a piece of surveillance video used for calibration
  - `eval`: folder containing a piece of surveillance video used for detecting vehicle orientation
Camera Calibration Based on Vanishing Points

Vanishing Point Detection
- first VP
  - Get the ROI of the image with the yolov5 object detector, then detect Harris corner points within the region as feature points.
  - Track the vehicles with a KLT tracker (an optical flow method) applied to the feature points.
  - Map all tracks gathered in the last step into Diamond Space to find their intersections.
  - Get the first vanishing point by a voting algorithm over those intersections.
- second VP
  - Detect high-quality vehicle edgelets.
  - Apply the same method as before to get the second VP.
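The intersection step can be sketched numerically. The snippet below is a simplified stand-in for Diamond Space voting (which accumulates line votes in a transformed, bounded space): it estimates a vanishing point as the least-squares intersection of tracked line segments. The function name and the sample segments are illustrative, not taken from the repository.

```python
import numpy as np

def vp_from_segments(segments):
    """Least-squares intersection of 2D line segments.

    A simplified stand-in for Diamond Space voting: each tracked
    segment (p, q) defines a line n.x = d, and the vanishing point is
    the point minimizing squared distance to all of those lines.
    """
    A, b = [], []
    for (x1, y1), (x2, y2) in segments:
        # unit normal, perpendicular to the segment direction
        nx, ny = y2 - y1, x1 - x2
        norm = np.hypot(nx, ny)
        nx, ny = nx / norm, ny / norm
        A.append([nx, ny])
        b.append(nx * x1 + ny * y1)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

# Three synthetic vehicle tracks heading toward the same point (100, 50).
segs = [((0, 0), (50, 25)), ((0, 100), (50, 75)), ((20, 10), (60, 30))]
print(vp_from_segments(segs))  # → approximately (100, 50)
```

Unlike this sketch, the Diamond Space accumulator also handles near-parallel tracks whose intersection lies at or near infinity, which is why the thesis uses it.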
Keypoint Detection
- Based on openpifpaf, a 2D human pose estimation network.
- We obtain a pair of keypoints that represents the orientation of the vehicle (such as the car lights).
Getting the Bird-View Image Plane
- Use the calibration result to map all pixels to the bird-view image.
- Calculate the slope of the line connecting the keypoint pair as the orientation of the vehicle.
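These two steps can be sketched as follows, assuming the calibration yields a 3x3 homography `H` into the bird-view plane. Both helper names are hypothetical, and the identity homography used in the example is only a placeholder for a real calibration result.

```python
import numpy as np

def to_bird_view(H, pts):
    """Apply a 3x3 homography H (from the calibration stage) to Nx2 pixel points."""
    pts = np.asarray(pts, dtype=float)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # divide out the scale

def orientation_deg(H, kp_pair):
    """Angle of the line joining a keypoint pair in the bird-view plane.

    The slope of this line is the vehicle orientation; here it is
    converted to degrees for readability.
    """
    (x1, y1), (x2, y2) = to_bird_view(H, kp_pair)
    return np.degrees(np.arctan2(y2 - y1, x2 - x1))

# Placeholder calibration: identity homography, bird view equals image plane.
H = np.eye(3)
print(orientation_deg(H, [(0.0, 0.0), (1.0, 1.0)]))  # → 45.0
```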
According to the evaluation metrics described in the thesis, we obtained the following experimental results.