From a37d10b1372879becf35fb4f943328636756769c Mon Sep 17 00:00:00 2001
From: Josh Bailey
Date: Fri, 17 Nov 2023 16:36:15 +1300
Subject: [PATCH] Create README.md

---
 README.md | 15 +++++++++++++++
 1 file changed, 15 insertions(+)
 create mode 100644 README.md

diff --git a/README.md b/README.md
new file mode 100644
index 0000000..ab9ec2b
--- /dev/null
+++ b/README.md
@@ -0,0 +1,15 @@
+# TorchServe
+
+This repository provides Docker containers for [TorchServe](https://github.com/pytorch/serve) (an inference server for [PyTorch](https://github.com/pytorch/pytorch) models) for multiple hardware platforms.
+
+For example usage, see https://github.com/iqtlabs/gamutrf/tests/test_torchserve.sh, or see [TorchServe's examples](https://github.com/pytorch/serve/tree/master/examples/object_detector/yolo/yolov8).
+
+## Platforms
+
+* [iqtlabs/torchserve](https://hub.docker.com/r/iqtlabs/torchserve): CPU only, for arm64 (including Raspberry Pi 4 and Apple Silicon) and amd64.
+* [iqtlabs/cuda-torchserve](https://hub.docker.com/r/iqtlabs/cuda-torchserve): CUDA-accelerated, amd64 only.
+* [iqtlabs/orin-torchserve](https://hub.docker.com/r/iqtlabs/orin-torchserve): for [Jetson Orin](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/), arm64 only.
+
+## Apple MPS support
+
+Currently, [Docker does not support access to Apple MPS devices](https://github.com/pytorch/pytorch/issues/81224), so inference will be CPU only. However, [PyTorch itself does support MPS](https://developer.apple.com/metal/pytorch/), so TorchServe could be run with MPS support outside a Docker container.
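
The README above points at a test script and TorchServe's upstream examples for usage; as a minimal sketch, one of these containers might be started like this. The image name comes from the Platforms list in the patch; ports 8080 (inference) and 8081 (management) are TorchServe's defaults, while the model-store path and `.mar` archive name are hypothetical placeholders, not files shipped in the images.

```shell
# Pull the CPU-only image named in the Platforms list (assumption:
# the default entrypoint runs torchserve; adjust if the image differs).
docker pull iqtlabs/torchserve

# Serve model archives from a local directory. 8080/8081 are
# TorchServe's default inference/management ports; ./model-store and
# its contents are hypothetical examples.
docker run --rm -p 8080:8080 -p 8081:8081 \
    -v "$(pwd)/model-store:/model-store" \
    iqtlabs/torchserve \
    torchserve --model-store /model-store --models all --ncs

# From another shell, check the inference API health endpoint.
curl http://localhost:8080/ping
```

A CUDA host would use `iqtlabs/cuda-torchserve` with `--gpus all` added to `docker run`; per the MPS note above, no equivalent device flag exists for Apple MPS inside Docker.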