Pre commit error fixes
pskiran1 committed Aug 28, 2023
1 parent 23416f7 · commit 1094a7f
Showing 2 changed files with 4 additions and 12 deletions.
docs/bls_quick_start.md (8 changes: 2 additions & 6 deletions)
@@ -64,19 +64,15 @@ docker run -it --gpus 1 \
```

**Replacing** `<path-to-output-model-repo>` with the
-**_absolute_ _path_** to the directory where the output model repository
-will be located.
-This ensures the Triton SDK container has access to the model
-config variants that Model Analyzer creates.<br><br>
+**_absolute_ _path_** to the directory where the output model repository will be located. This ensures the Triton SDK container has access to the model config variants that Model Analyzer creates.<br><br>
**Important:** You must ensure the absolute paths are identical on both sides of the mounts (or else Tritonserver cannot load the model)<br><br>
**Important:** The example above uses a single GPU. If you are running on multiple GPUs, you need to increase the shared memory size accordingly<br><br>
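
To make the mount requirement concrete, here is a minimal sketch of such a launch. The host path `/home/user/output_models` and the `<xx.yy>` image tag are placeholders rather than values from this repository; the point is only that the same absolute path appears on both sides of the `-v` mount and that `--shm-size` grows when more GPUs are profiled.

```bash
# Placeholder absolute path; substitute the real location of your output model repository.
OUTPUT_MODEL_REPO=/home/user/output_models

# The same absolute path appears on both sides of the -v mount, as the note
# above requires. Increase --shm-size when profiling on more than one GPU.
docker run -it --gpus 1 \
    --shm-size 1g \
    -v "${OUTPUT_MODEL_REPO}:${OUTPUT_MODEL_REPO}" \
    nvcr.io/nvidia/tritonserver:<xx.yy>-py3-sdk
```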

## `Step 3:` Profile the `bls` model

---

-The [examples/quick-start](../examples/quick-start) directory is an example
-[Triton Model Repository](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_repository.md) that contains the BLS model `bls`, which calculates the sum of two inputs using the `add` model.
+The [examples/quick-start](../examples/quick-start) directory is an example [Triton Model Repository](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_repository.md) that contains the BLS model `bls`, which calculates the sum of two inputs using the `add` model.

An example model analyzer YAML config that performs a BLS model search

docs/ensemble_quick_start.md (8 changes: 2 additions & 6 deletions)
@@ -70,19 +70,15 @@ docker run -it --gpus 1 \
```

**Replacing** `<path-to-output-model-repo>` with the
-**_absolute_ _path_** to the directory where the output model repository
-will be located.
-This ensures the Triton SDK container has access to the model
-config variants that Model Analyzer creates.<br><br>
+**_absolute_ _path_** to the directory where the output model repository will be located. This ensures the Triton SDK container has access to the model config variants that Model Analyzer creates.<br><br>
**Important:** You must ensure the absolute paths are identical on both sides of the mounts (or else Tritonserver cannot load the model)<br><br>
**Important:** The example above uses a single GPU. If you are running on multiple GPUs, you may need to increase the shared memory size accordingly<br><br>

## `Step 3:` Profile the `ensemble_add_sub` model

---

-The [examples/quick-start](../examples/quick-start) directory is an example
-[Triton Model Repository](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_repository.md) that contains the ensemble model `ensemble_add_sub`, which calculates the sum and difference of two inputs using the `add` and `sub` models.
+The [examples/quick-start](../examples/quick-start) directory is an example [Triton Model Repository](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_repository.md) that contains the ensemble model `ensemble_add_sub`, which calculates the sum and difference of two inputs using the `add` and `sub` models.

Run the Model Analyzer `profile` subcommand inside the container with:
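
To give a feel for what that invocation looks like, here is a rough sketch; the placeholder paths, the `output` subdirectory name, and the choice of `--triton-launch-mode docker` are illustrative assumptions, not values from this file.

```bash
# Illustrative sketch only; substitute your own absolute paths.
model-analyzer profile \
    --model-repository <path-to-examples-quick-start> \
    --profile-models ensemble_add_sub \
    --triton-launch-mode docker \
    --output-model-repository-path <path-to-output-model-repo>/output
```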

