
# Yolov5-DeepSORT-TensorRT

English | 简体中文

## Introduction

- This project is a C++ implementation of Yolo-DeepSORT that uses TensorRT for inference;
- A Dockerfile is provided for quick setup of the development environment;
- Simply provide the ONNX file: when the model instance is created, it automatically parses the ONNX and serializes the engine file (`*.trtmodel`) to the workspace directory (see the sketch after this list);
- My other implementation, a PyTorch version that adds line-crossing detection for pedestrian counting: Yolov5_Deepsort_Person_Count
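For reference, ONNX parsing plus engine serialization with the TensorRT 8.x C++ API typically looks like the sketch below. This is a minimal illustration of the step the project's `create_*` factories perform internally, not the repository's actual code; the function name `build_engine`, the logger, and the file paths are assumptions.

```cpp
// Minimal sketch of ONNX parsing + engine serialization (TensorRT 8.x API).
// Illustrative only: names and paths here are not the project's exact code.
#include <cstdio>
#include <fstream>
#include <memory>
#include <NvInfer.h>
#include <NvOnnxParser.h>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::printf("[TRT] %s\n", msg);
    }
};

bool build_engine(const char* onnx_path, const char* engine_path) {
    Logger logger;
    auto builder = std::unique_ptr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(logger));
    const auto flags = 1U << static_cast<int>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = std::unique_ptr<nvinfer1::INetworkDefinition>(builder->createNetworkV2(flags));
    auto parser = std::unique_ptr<nvonnxparser::IParser>(nvonnxparser::createParser(*network, logger));
    if (!parser->parseFromFile(onnx_path, static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)))
        return false;  // ONNX could not be parsed

    auto config = std::unique_ptr<nvinfer1::IBuilderConfig>(builder->createBuilderConfig());
    config->setMaxWorkspaceSize(1ULL << 30);  // 1 GB scratch space for the builder

    // Serialize the optimized engine and write it into the workspace directory.
    auto serialized = std::unique_ptr<nvinfer1::IHostMemory>(
        builder->buildSerializedNetwork(*network, *config));
    if (!serialized) return false;
    std::ofstream out(engine_path, std::ios::binary);  // e.g. "workspace/yolov5.trtmodel"
    out.write(static_cast<const char*>(serialized->data()), serialized->size());
    return true;
}
```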

## Quick Start

### 💻 Environment Setup

Refer to the README to use the Docker container, or configure manually as follows:

- Python: 3.8
- CUDA: 11.2
- cuDNN: 8.2.2.26
- TensorRT: 8.0.3.4
- Protobuf: 3.11.4
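If you configure the environment manually, a quick check can confirm the headers on your include path match the versions above. This is a hypothetical helper, not part of the repository:

```cpp
// version_check.cpp - hypothetical helper; prints the toolkit versions found
// on the include path so they can be compared against the list above.
#include <cstdio>
#include <cuda_runtime_api.h>  // CUDART_VERSION
#include <cudnn.h>             // cudnnGetVersion()
#include <NvInferVersion.h>    // NV_TENSORRT_MAJOR / MINOR / PATCH

int main() {
    std::printf("CUDA runtime: %d\n", CUDART_VERSION);  // 11020 corresponds to 11.2
    std::printf("cuDNN: %zu\n", cudnnGetVersion());     // 8202 corresponds to 8.2.x
    std::printf("TensorRT: %d.%d.%d\n", NV_TENSORRT_MAJOR, NV_TENSORRT_MINOR, NV_TENSORRT_PATCH);
    return 0;
}
```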

### 📥 Download Video & Models

Create a workspace directory:

```bash
mkdir workspace
```

Download files to the workspace directory:

| File | Link |
| ---- | ---- |
| onnx, test.mp4 | Download (code: zxao) |

### 🏃 Run

Modify the relevant header and library paths in the Makefile (this step is not needed if you are using a container created from the provided Dockerfile), then execute:

```bash
make run
```

Inference results are displayed while the program runs; press ESC to exit.
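The ESC handling follows the standard OpenCV HighGUI pattern. A minimal sketch of such a display loop (illustrative, not the project's actual loop; the window name and video path are assumptions):

```cpp
// Illustrative display loop: show each processed frame and stop on ESC.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap("workspace/test.mp4");
    cv::Mat frame;
    while (cap.read(frame)) {
        // ... detection + tracking results would be drawn onto `frame` here ...
        cv::imshow("result", frame);
        if (cv::waitKey(1) == 27)  // 27 is the ESC key code
            break;
    }
    return 0;
}
```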

Currently, on a GeForce RTX 2060, inference on test.mp4 takes approximately 40 ms per frame.

## File Descriptions

- `Infer`, `Yolo`, and `DeepSORT` are encapsulated behind abstract interfaces with RAII (see the interface sketch below):
  - `infer.h`, `yolo.h`, and `deepsort.h` expose only the `create_*` factories and the inference interfaces.
  - Calling `create_*` creates an object instance, which automatically parses the ONNX file, then generates and loads the engine.
- `infer.cpp` splits the pipeline into four threads, with a producer-consumer relationship between each adjacent pair (see the queue sketch below):

*(Figure: the four-thread producer-consumer pipeline in infer.cpp)*
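To make the interface-pattern + RAII layout concrete, here is a minimal sketch of what such a header might look like. The names (`Yolo`, `Box`, `create_yolo`) are illustrative, not the repository's exact declarations:

```cpp
// Illustrative header in the style described above: the abstract class hides
// all TensorRT details, and the create_* factory returns an owning pointer,
// so the engine and CUDA resources are freed automatically (RAII).
#include <memory>
#include <string>
#include <vector>
#include <opencv2/core.hpp>

class Yolo {
public:
    struct Box { float left, top, right, bottom, confidence; int label; };
    virtual std::vector<Box> inference(const cv::Mat& image) = 0;
    virtual ~Yolo() = default;
};

// Parses the ONNX (or loads a cached *.trtmodel) and returns the instance.
std::shared_ptr<Yolo> create_yolo(const std::string& onnx_file);
```

A caller then needs only something like `auto yolo = create_yolo("workspace/yolov5.onnx");` followed by `yolo->inference(frame)`; no TensorRT type leaks into the calling code.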
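The producer-consumer hand-off between adjacent threads is typically built on a bounded, mutex-protected queue. Below is a minimal sketch of that pattern using `std::condition_variable`; it is a generic illustration, not the project's actual queue type:

```cpp
// Illustrative bounded queue connecting two pipeline stages: the upstream
// thread push()es items, the downstream thread pop()s them, and each side
// blocks when the queue is full or empty, respectively.
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>

template <typename T>
class BoundedQueue {
public:
    explicit BoundedQueue(std::size_t capacity) : capacity_(capacity) {}

    void push(T item) {  // producer side: blocks while the queue is full
        std::unique_lock<std::mutex> lock(mutex_);
        not_full_.wait(lock, [this] { return queue_.size() < capacity_; });
        queue_.push(std::move(item));
        not_empty_.notify_one();
    }

    T pop() {  // consumer side: blocks while the queue is empty
        std::unique_lock<std::mutex> lock(mutex_);
        not_empty_.wait(lock, [this] { return !queue_.empty(); });
        T item = std::move(queue_.front());
        queue_.pop();
        not_full_.notify_one();
        return item;
    }

private:
    std::size_t capacity_;
    std::queue<T> queue_;
    std::mutex mutex_;
    std::condition_variable not_empty_, not_full_;
};
```

One such queue would sit between each adjacent pair of the four threads; the stage names (e.g. decode → preprocess → inference → tracking/display) are an assumption based on the figure.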

## References