
------------------------------------------------------

This sample application is no longer maintained

------------------------------------------------------

Deploying an open source model using NVIDIA DeepStream and Triton Inference Server

This repository contains the code and configuration files required to deploy sample open source models for video analytics using Triton Inference Server and the DeepStream SDK 5.0.

Getting Started

Prerequisites:

DeepStream SDK 5.0, or use the Docker images: nvcr.io/nvidia/deepstream:5.0.1-20.09-triton for x86, or nvcr.io/nvidia/deepstream-l4t:5.0-20.07-samples for NVIDIA Jetson.
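For the x86 container, a minimal pull-and-run sketch is shown below. The flags follow NVIDIA's standard DeepStream container instructions; adjust volume mounts and display settings for your environment:

```sh
# Pull the x86 DeepStream 5.0 container with Triton support
docker pull nvcr.io/nvidia/deepstream:5.0.1-20.09-triton

# Allow the container to use the host X server for the on-screen display sink
xhost +

# Run with GPU access, host networking, and X11 forwarding
docker run --gpus all -it --rm --net=host \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=$DISPLAY \
    nvcr.io/nvidia/deepstream:5.0.1-20.09-triton
```

On Jetson, the nvcr.io/nvidia/deepstream-l4t image is run the same way, except that L4T containers use `--runtime nvidia` instead of `--gpus all`.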

The following models have been deployed on DeepStream using Triton Inference Server.

For further details, please see each project's README.
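Both samples rely on Triton's model repository convention: each model gets its own directory containing a config.pbtxt and one numbered subdirectory per model version. A minimal layout (directory and file names here are illustrative, not the projects' exact paths) looks like:

```
model_repo/
├── faster_rcnn_inception_v2/
│   ├── config.pbtxt
│   └── 1/
│       └── model.graphdef   # frozen TensorFlow graph
└── centerface/
    ├── config.pbtxt
    └── 1/
        └── model.onnx       # exported CenterFace ONNX model
```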

TensorFlow Faster RCNN Inception V2: README

This project shows how to deploy the TensorFlow Faster RCNN Inception V2 network, trained on the MSCOCO dataset, for object detection.

(Sample detection output image: faster_rcnn_output)
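As a sketch of what the Triton config.pbtxt for a model like this looks like, the example below uses the tensor names produced by the TensorFlow Object Detection API export convention (image_tensor in; detection_boxes, detection_scores, detection_classes out). Treat the names and shapes as assumptions and check the project's actual config files:

```
name: "faster_rcnn_inception_v2"
platform: "tensorflow_graphdef"
max_batch_size: 1
input [
  {
    name: "image_tensor"        # TF Object Detection API convention (assumed)
    data_type: TYPE_UINT8
    format: FORMAT_NHWC
    dims: [ 600, 1024, 3 ]      # illustrative fixed input resolution
  }
]
output [
  { name: "detection_boxes"   data_type: TYPE_FP32 dims: [ 100, 4 ] },
  { name: "detection_scores"  data_type: TYPE_FP32 dims: [ 100 ] },
  { name: "detection_classes" data_type: TYPE_FP32 dims: [ 100 ] }
]
```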

ONNX CenterFace: README

This project shows how to deploy the ONNX CenterFace network for face detection and alignment.

(Sample detection output image: centerface_output)
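In both samples, DeepStream reaches Triton through the Gst-nvinferserver plugin, and the pipeline is launched with deepstream-app against the project's configuration file. A hypothetical invocation (the config file name is illustrative; each project's README gives the actual command):

```sh
# Run the sample pipeline; the config file wires together the video source,
# the nvinferserver (Triton) inference element, and the output sink
deepstream-app -c source1_primary_centerface_config.txt
```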

Additional resources:

Developer blog: Building Intelligent Video Analytics Apps Using NVIDIA DeepStream 5.0

Learn more about Triton Inference Server

Post your questions or feedback in the DeepStream SDK developer forums