Update README and versions for 24.09
pvijayakrish committed Sep 7, 2024
1 parent 86708e6 commit dac6093
Showing 11 changed files with 18 additions and 130 deletions.
8 changes: 4 additions & 4 deletions Dockerfile
@@ -12,11 +12,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-ARG BASE_IMAGE=nvcr.io/nvidia/tritonserver:24.08-py3
-ARG TRITONSDK_BASE_IMAGE=nvcr.io/nvidia/tritonserver:24.08-py3-sdk
+ARG BASE_IMAGE=nvcr.io/nvidia/tritonserver:24.09-py3
+ARG TRITONSDK_BASE_IMAGE=nvcr.io/nvidia/tritonserver:24.09-py3-sdk

-ARG MODEL_ANALYZER_VERSION=1.44.0dev
-ARG MODEL_ANALYZER_CONTAINER_VERSION=24.09dev
+ARG MODEL_ANALYZER_VERSION=1.44.0
+ARG MODEL_ANALYZER_CONTAINER_VERSION=24.09
FROM ${TRITONSDK_BASE_IMAGE} as sdk

FROM $BASE_IMAGE
114 changes: 1 addition & 113 deletions README.md
@@ -20,116 +20,4 @@ limitations under the License.

> [!Warning]
>
> ##### LATEST RELEASE
>
> You are currently on the `main` branch which tracks under-development progress towards the next release. <br>
> The latest release of the Triton Model Analyzer is 1.43.0 and is available on branch
> [r24.08](https://github.com/triton-inference-server/model_analyzer/tree/r24.08).
Triton Model Analyzer is a CLI tool that helps you find a more optimal configuration, on a given piece of hardware, for single, multiple, ensemble, or BLS models running on a [Triton Inference Server](https://github.com/triton-inference-server/server/). Model Analyzer also generates reports to help you better understand the trade-offs of the different configurations, along with their compute and memory requirements.
<br><br>
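
As a minimal sketch of a typical invocation (the repository path, model name, and output path below are placeholders, not files in this repository):

```
# Profile a model from the given repository and search for better configurations
model-analyzer profile \
    --model-repository /path/to/model_repository \
    --profile-models add_sub \
    --output-model-repository-path /path/to/output_repository
```

The quick start guides linked below walk through a complete run of this kind inside the Triton SDK container.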

# Features

### Search Modes

- [Optuna Search](docs/config_search.md#optuna-search-mode) **_-ALPHA RELEASE-_** allows you to search for every parameter that can be specified in the model configuration, using a hyperparameter optimization framework. Please see the [Optuna](https://optuna.org/) website if you are interested in specific details on how the algorithm functions.

- [Quick Search](docs/config_search.md#quick-search-mode) will **sparsely** search the [Max Batch Size](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#maximum-batch-size),
[Dynamic Batching](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#dynamic-batcher), and
[Instance Group](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#instance-groups) spaces by utilizing a heuristic hill-climbing algorithm to help you quickly find a more optimal configuration

- [Automatic Brute Search](docs/config_search.md#automatic-brute-search) will **exhaustively** search the
[Max Batch Size](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#maximum-batch-size),
[Dynamic Batching](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#dynamic-batcher), and
[Instance Group](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#instance-groups)
parameters of your model configuration

- [Manual Brute Search](docs/config_search.md#manual-brute-search) allows you to create manual sweeps for every parameter that can be specified in the model configuration (a sketch of selecting one of these search modes follows this list)
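
A rough sketch of selecting one of these modes through the YAML configuration file passed to Model Analyzer (the repository path and model name are placeholders; treat the exact option names as assumptions and check the [config search](docs/config_search.md) documentation):

```
model_repository: /path/to/model_repository
profile_models:
  - add_sub
# Assumed option; accepts brute (default), quick, or optuna
run_config_search_mode: quick
```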

### Model Types

- [Ensemble](docs/model_types.md#ensemble): Model Analyzer can help you find the optimal
settings when profiling an ensemble model

- [BLS](docs/model_types.md#bls): Model Analyzer can help you find the optimal
settings when profiling a BLS model

- [Multi-Model](docs/model_types.md#multi-model): Model Analyzer can help you
find the optimal settings when profiling multiple concurrent models

- [LLM](docs/model_types.md#llm): Model Analyzer can help you
find the optimal settings when profiling Large Language Models

### Other Features

- [Detailed and summary reports](docs/report.md): Model Analyzer is able to generate
summarized and detailed reports that can help you better understand the trade-offs
between different model configurations that can be used for your model.

- [QoS Constraints](docs/config.md#constraint): Constraints can help you
filter out the Model Analyzer results based on your QoS requirements. For
example, you can specify a latency budget to filter out model configurations
that do not satisfy the specified latency threshold, as sketched below.
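
A hedged sketch of such a latency constraint in the YAML configuration (the model name and the 100 ms budget are placeholders; `perf_latency_p99` is assumed to be one of the constraint keys described in the [configuration](docs/config.md#constraint) documentation):

```
profile_models:
  add_sub:
    constraints:
      perf_latency_p99:
        max: 100   # p99 latency budget in milliseconds (placeholder value)
```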
<br><br>

# Examples and Tutorials

### **Single Model**

See the [Single Model Quick Start](docs/quick_start.md) for a guide on how to use Model Analyzer to profile, analyze and report on a simple PyTorch model.

### **Multi Model**

See the [Multi-model Quick Start](docs/mm_quick_start.md) for a guide on how to use Model Analyzer to profile, analyze and report on two models running concurrently on the same GPU.

### **Ensemble Model**

See the [Ensemble Model Quick Start](docs/ensemble_quick_start.md) for a guide on how to use Model Analyzer to profile, analyze and report on a simple Ensemble model.

### **BLS Model**

See the [BLS Model Quick Start](docs/bls_quick_start.md) for a guide on how to use Model Analyzer to profile, analyze and report on a simple BLS model.
<br><br>

# Documentation

- [Installation](docs/install.md)
- [Model Analyzer CLI](docs/cli.md)
- [Launch Modes](docs/launch_modes.md)
- [Configuring Model Analyzer](docs/config.md)
- [Model Analyzer Metrics](docs/metrics.md)
- [Model Config Search](docs/config_search.md)
- [Model Types](docs/model_types.md)
- [Checkpointing](docs/checkpoints.md)
- [Model Analyzer Reports](docs/report.md)
- [Deployment with Kubernetes](docs/kubernetes_deploy.md)
<br><br>

# Terminology

Below are definitions of some commonly used terms in Model Analyzer:

- **Model Type** - Category of model being profiled. Examples include single, multi, ensemble, BLS, etc.
- **Search Mode** - How Model Analyzer explores the possible configuration space when profiling. This is either exhaustive (brute) or heuristic (quick/optuna).
- **Model Config Search** - The cross product of model type and search mode.
- **Launch Mode** - How the Triton Server is deployed and used by Model Analyzer (see the sketch after this list).
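
A brief sketch of how a launch mode is chosen in the YAML configuration (the option names mirror the defaults in `model_analyzer/config/input/config_defaults.py`; the image tag is the one used throughout this release, and the keys should be verified against the [launch modes](docs/launch_modes.md) documentation):

```
# local is the default; docker launches Triton in a container during profiling
triton_launch_mode: docker
triton_docker_image: nvcr.io/nvidia/tritonserver:24.09-py3
```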

# Reporting problems, asking questions

We appreciate any feedback, questions, or bug reports regarding this
project. When help with code is needed, follow the process outlined in
the Stack Overflow (https://stackoverflow.com/help/mcve)
document. Ensure posted examples are:

- minimal – use as little code as possible that still produces the
same problem

- complete – provide all parts needed to reproduce the problem. Check
if you can strip external dependencies and still show the problem. The
less time we spend on reproducing problems, the more time we have to
fix them

- verifiable – test the code you're about to provide to make sure it
reproduces the problem. Remove all other problems that are not
related to your request/question.
> You are currently on the `r24.08` branch which tracks under-development progress towards the next release. <br>
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
-1.44.0dev
+1.44.0
4 changes: 2 additions & 2 deletions docs/bls_quick_start.md
@@ -49,7 +49,7 @@ git pull origin main
**1. Pull the SDK container:**

```
-docker pull nvcr.io/nvidia/tritonserver:24.08-py3-sdk
+docker pull nvcr.io/nvidia/tritonserver:24.09-py3-sdk
```

**2. Run the SDK container**
@@ -59,7 +59,7 @@ docker run -it --gpus 1 \
--shm-size 2G \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start \
---net=host nvcr.io/nvidia/tritonserver:24.08-py3-sdk
+--net=host nvcr.io/nvidia/tritonserver:24.09-py3-sdk
```

**Important:** The example above uses a single GPU. If you are running on multiple GPUs, you may need to increase the shared memory size accordingly.<br><br>
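
Purely as an illustration (the 8 GB value is a placeholder, not a recommendation), a multi-GPU run could request all GPUs and a larger shared-memory size:

```
docker run -it --gpus all \
    --shm-size 8G \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start \
    --net=host nvcr.io/nvidia/tritonserver:24.09-py3-sdk
```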
2 changes: 1 addition & 1 deletion docs/config.md
@@ -153,7 +153,7 @@ cpu_only_composing_models: <comma-delimited-string-list>
[ reload_model_disable: <bool> | default: false]
# Triton Docker image tag used when launching using Docker mode
-[ triton_docker_image: <string> | default: nvcr.io/nvidia/tritonserver:24.08-py3 ]
+[ triton_docker_image: <string> | default: nvcr.io/nvidia/tritonserver:24.09-py3 ]
# Triton Server HTTP endpoint url used by Model Analyzer client"
[ triton_http_endpoint: <string> | default: localhost:8000 ]
4 changes: 2 additions & 2 deletions docs/ensemble_quick_start.md
@@ -55,7 +55,7 @@ mkdir examples/quick-start/ensemble_add_sub/1
**1. Pull the SDK container:**

```
-docker pull nvcr.io/nvidia/tritonserver:24.08-py3-sdk
+docker pull nvcr.io/nvidia/tritonserver:24.09-py3-sdk
```

**2. Run the SDK container**
@@ -65,7 +65,7 @@ docker run -it --gpus 1 \
--shm-size 1G \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start \
---net=host nvcr.io/nvidia/tritonserver:24.08-py3-sdk
+--net=host nvcr.io/nvidia/tritonserver:24.09-py3-sdk
```

**Important:** The example above uses a single GPU. If you are running on multiple GPUs, you may need to increase the shared memory size accordingly.<br><br>
2 changes: 1 addition & 1 deletion docs/kubernetes_deploy.md
@@ -79,7 +79,7 @@ images:
triton:
image: nvcr.io/nvidia/tritonserver
-tag: 24.08-py3
+tag: 24.09-py3
```

The model analyzer executable uses the config file defined in `helm-chart/templates/config-map.yaml`. This config can be modified to supply arguments to model analyzer. Only the content under the `config.yaml` section of the file should be modified.
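
As a purely hypothetical illustration of that section, the `config.yaml` content could carry standard Model Analyzer options such as a model repository path and the models to profile (both values below are placeholders):

```
model_repository: /path/to/model_repository
profile_models:
  - add_sub
```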
4 changes: 2 additions & 2 deletions docs/mm_quick_start.md
@@ -49,7 +49,7 @@ git pull origin main
**1. Pull the SDK container:**

```
-docker pull nvcr.io/nvidia/tritonserver:24.08-py3-sdk
+docker pull nvcr.io/nvidia/tritonserver:24.09-py3-sdk
```

**2. Run the SDK container**
@@ -58,7 +58,7 @@ docker pull nvcr.io/nvidia/tritonserver:24.08-py3-sdk
docker run -it --gpus all \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start \
---net=host nvcr.io/nvidia/tritonserver:24.08-py3-sdk
+--net=host nvcr.io/nvidia/tritonserver:24.09-py3-sdk
```

## `Step 3:` Profile both models concurrently
4 changes: 2 additions & 2 deletions docs/quick_start.md
@@ -49,7 +49,7 @@ git pull origin main
**1. Pull the SDK container:**

```
-docker pull nvcr.io/nvidia/tritonserver:24.08-py3-sdk
+docker pull nvcr.io/nvidia/tritonserver:24.09-py3-sdk
```

**2. Run the SDK container**
@@ -58,7 +58,7 @@ docker pull nvcr.io/nvidia/tritonserver:24.08-py3-sdk
docker run -it --gpus all \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start \
---net=host nvcr.io/nvidia/tritonserver:24.08-py3-sdk
+--net=host nvcr.io/nvidia/tritonserver:24.09-py3-sdk
```

## `Step 3:` Profile the `add_sub` model
2 changes: 1 addition & 1 deletion helm-chart/values.yaml
@@ -41,4 +41,4 @@ images:

triton:
image: nvcr.io/nvidia/tritonserver
-tag: 24.08-py3
+tag: 24.09-py3
2 changes: 1 addition & 1 deletion model_analyzer/config/input/config_defaults.py
@@ -63,7 +63,7 @@
DEFAULT_REQUEST_RATE_SEARCH_ENABLE = False
DEFAULT_CONCURRENCY_SWEEP_DISABLE = False
DEFAULT_TRITON_LAUNCH_MODE = "local"
-DEFAULT_TRITON_DOCKER_IMAGE = "nvcr.io/nvidia/tritonserver:24.08-py3"
+DEFAULT_TRITON_DOCKER_IMAGE = "nvcr.io/nvidia/tritonserver:24.09-py3"
DEFAULT_TRITON_HTTP_ENDPOINT = "localhost:8000"
DEFAULT_TRITON_GRPC_ENDPOINT = "localhost:8001"
DEFAULT_TRITON_METRICS_URL = "http://localhost:8002/metrics"
