This repository contains examples of deploying deep learning models for inference using AI accelerators:

* Amazon EC2 G4 instances with NVIDIA T4 GPUs and NVIDIA TensorRT
* Amazon EC2 Inf1 instances with AWS Inferentia and the AWS Neuron SDK
* Amazon EC2 CPU instances with Amazon Elastic Inference
* Amazon SageMaker hosted endpoints for CPUs, GPUs, and AWS Inferentia