# Adding new options for LLM #768

Status: Merged (4 commits, Oct 3, 2023)

## Changes from all commits
### Dockerfile (2 additions, 2 deletions)

```diff
@@ -12,8 +12,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-ARG BASE_IMAGE=nvcr.io/nvidia/tritonserver:23.08-py3
-ARG TRITONSDK_BASE_IMAGE=nvcr.io/nvidia/tritonserver:23.08-py3-sdk
+ARG BASE_IMAGE=nvcr.io/nvidia/tritonserver:23.09-py3
+ARG TRITONSDK_BASE_IMAGE=nvcr.io/nvidia/tritonserver:23.09-py3-sdk
 
 ARG MODEL_ANALYZER_VERSION=1.33.0dev
 ARG MODEL_ANALYZER_CONTAINER_VERSION=23.10dev
```

Review thread on the `ARG BASE_IMAGE` change:

**Contributor Author:** I don't know why these changes are showing up after I rebased, but these and the docs changes are safe to ignore.

**Contributor:** Maybe you can try this to remove the unwanted commit from the PR: https://stackoverflow.com/a/51400593

**Contributor Author:** I'm not too concerned. They match what is on the `add-llm-mode` branch and will go away when this subbranch is pushed. Just wanted to call it out so you wouldn't review it.
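The Stack Overflow answer linked in the thread describes dropping an unwanted commit with an interactive rebase. A minimal sketch of that approach; the commit count and branch name below are placeholders, not taken from this PR:

```
# Open an interactive rebase over the last N commits (3 is a placeholder).
git rebase -i HEAD~3

# In the editor that opens, change "pick" to "drop" on the unwanted commit,
# then save and close to rewrite the branch without it.

# Update the PR branch. --force-with-lease aborts if the remote has commits
# you have not fetched, which is safer than a plain --force.
git push --force-with-lease origin <pr-branch>
```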
### README.md (2 additions, 2 deletions)

```diff
@@ -21,9 +21,9 @@ limitations under the License.
 >**LATEST RELEASE:**<br>
 You are currently on the `main` branch which tracks
 under-development progress towards the next release. <br>The latest
-release of the Triton Model Analyzer is 1.31.0 and is available on
+release of the Triton Model Analyzer is 1.32.0 and is available on
 branch
-[r23.08](https://github.com/triton-inference-server/model_analyzer/tree/r23.08).
+[r23.09](https://github.com/triton-inference-server/model_analyzer/tree/r23.09).
 
 Triton Model Analyzer is a CLI tool which can help you find a more optimal configuration, on a given piece of hardware, for single, multiple, ensemble, or BLS models running on a [Triton Inference Server](https://github.com/triton-inference-server/server/). Model Analyzer will also generate reports to help you better understand the trade-offs of the different configurations along with their compute and memory requirements.
 <br><br>
```
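The README excerpt above describes Model Analyzer as a CLI tool. For orientation, a minimal `profile` run might look like the following sketch; the subcommand and flags reflect common Model Analyzer usage, but the repository path and model name are placeholders:

```
# Profile a single model, letting Model Analyzer launch Triton in Docker mode.
# The repository path and model name below are placeholders.
model-analyzer profile \
    --model-repository /path/to/model/repository \
    --profile-models add_sub \
    --triton-launch-mode docker
```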
### docs/config.md (1 addition, 1 deletion)

```diff
@@ -153,7 +153,7 @@ cpu_only_composing_models: <comma-delimited-string-list>
 [ reload_model_disable: <bool> | default: false]
 
 # Triton Docker image tag used when launching using Docker mode
-[ triton_docker_image: <string> | default: nvcr.io/nvidia/tritonserver:23.08-py3 ]
+[ triton_docker_image: <string> | default: nvcr.io/nvidia/tritonserver:23.09-py3 ]
 
 # Triton Server HTTP endpoint url used by Model Analyzer client"
 [ triton_http_endpoint: <string> | default: localhost:8000 ]
```
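The `triton_docker_image` default shown above is read from a Model Analyzer YAML config. A minimal sketch of a config that pins the new image explicitly; the `model_repository` path and `profile_models` value are placeholders:

```
# Sketch of a Model Analyzer config for Docker launch mode.
# model_repository and profile_models are placeholder values.
model_repository: /path/to/model/repository
profile_models: add_sub
triton_launch_mode: docker
triton_docker_image: nvcr.io/nvidia/tritonserver:23.09-py3
```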
### docs/kubernetes_deploy.md (1 addition, 1 deletion)

````diff
@@ -79,7 +79,7 @@ images:
 
   triton:
     image: nvcr.io/nvidia/tritonserver
-    tag: 23.08-py3
+    tag: 23.09-py3
 ```
 
 The model analyzer executable uses the config file defined in `helm-chart/templates/config-map.yaml`. This config can be modified to supply arguments to model analyzer. Only the content under the `config.yaml` section of the file should be modified.
````
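Since only the image tag changes here, the same value can also be overridden at install time rather than by editing `values.yaml`. A sketch, assuming the chart is installed from the repository's `helm-chart` directory and using an illustrative release name:

```
# Override the Triton image tag at install time.
# The release name "model-analyzer" and chart path are illustrative.
helm install model-analyzer ./helm-chart --set images.triton.tag=23.09-py3
```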
### docs/mm_quick_start.md (2 additions, 2 deletions)

````diff
@@ -49,7 +49,7 @@ git pull origin main
 **1. Pull the SDK container:**
 
 ```
-docker pull nvcr.io/nvidia/tritonserver:23.08-py3-sdk
+docker pull nvcr.io/nvidia/tritonserver:23.09-py3-sdk
 ```
 
 **2. Run the SDK container**
@@ -58,7 +58,7 @@ docker pull nvcr.io/nvidia/tritonserver:23.08-py3-sdk
 docker run -it --gpus all \
     -v /var/run/docker.sock:/var/run/docker.sock \
     -v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start \
-    --net=host nvcr.io/nvidia/tritonserver:23.08-py3-sdk
+    --net=host nvcr.io/nvidia/tritonserver:23.09-py3-sdk
 ```
 
 ## `Step 3:` Profile both models concurrently
````
### docs/quick_start.md (2 additions, 2 deletions)

````diff
@@ -49,7 +49,7 @@ git pull origin main
 **1. Pull the SDK container:**
 
 ```
-docker pull nvcr.io/nvidia/tritonserver:23.08-py3-sdk
+docker pull nvcr.io/nvidia/tritonserver:23.09-py3-sdk
 ```
 
 **2. Run the SDK container**
@@ -58,7 +58,7 @@ docker pull nvcr.io/nvidia/tritonserver:23.08-py3-sdk
 docker run -it --gpus all \
     -v /var/run/docker.sock:/var/run/docker.sock \
     -v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start \
-    --net=host nvcr.io/nvidia/tritonserver:23.08-py3-sdk
+    --net=host nvcr.io/nvidia/tritonserver:23.09-py3-sdk
 ```
 
 ## `Step 3:` Profile the `add_sub` model
````
### helm-chart/values.yaml (1 addition, 1 deletion)

```diff
@@ -41,4 +41,4 @@ images:
 
   triton:
     image: nvcr.io/nvidia/tritonserver
-    tag: 23.08-py3
+    tag: 23.09-py3
```