G analytics (#2449)
* Adding Google Analytics

Adding Google Analytics to TorchServe pages for metrics.
Removing indexOld.md file

* Update large_model_inference.md

* Spellcheck update

Adding chatGPT to wordlist.txt

* Update conf.py

* Update conf.py

* Update conf.py
sekyondaMeta authored Jul 12, 2023
1 parent 4d7dc64 commit d31b6c3
Showing 4 changed files with 5 additions and 102 deletions.
4 changes: 2 additions & 2 deletions docs/conf.py
@@ -11,7 +11,7 @@
#
# All configuration values have a default; values that are commented out
# serve to show the default.

#
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
@@ -131,6 +131,7 @@
"collapse_navigation": True,
"display_version": True,
"logo_only": True,
"analytics_id": "GTM-T8XT4PS",
}

html_logo = "_static/img/pytorch-logo-dark.svg"
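
For context, `analytics_id` is a standard `sphinx_rtd_theme` option, and the theme's page templates take care of emitting the corresponding tracking snippet. A minimal sketch of how the option sits in a Sphinx `conf.py` (only the ID comes from this commit; the surrounding values are illustrative):

```python
# conf.py -- minimal sketch, not the repo's full configuration.
html_theme = "sphinx_rtd_theme"

html_theme_options = {
    "collapse_navigation": True,    # collapse the sidebar navigation tree
    "display_version": True,        # show the docs version in the sidebar
    "logo_only": True,              # show only the logo, not the project name
    "analytics_id": "GTM-T8XT4PS",  # the theme injects this ID into every built page
}
```

Rebuilding the docs (e.g. `sphinx-build docs docs/_build`) should be enough to pick the option up; no template changes are needed.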
@@ -239,7 +240,6 @@ def setup(app):

# Register custom directives


rst.directives.register_directive("devices", SupportedDevices)
rst.directives.register_directive("properties", SupportedProperties)
rst.directives.register_directive("customcardstart", CustomCardStart)
98 changes: 0 additions & 98 deletions docs/indexOLD.md

This file was deleted.

4 changes: 2 additions & 2 deletions docs/large_model_inference.md
@@ -12,7 +12,7 @@ In addition to this default behavior, TorchServe provides the flexibility for us

Using PiPPy integration as an example, the image below illustrates the internals of TorchServe large model inference.

- ![ts-lmi-internal](images/ts-lmi-internal.png)
+ ![ts-lmi-internal](https://raw.githubusercontent.com/pytorch/serve/master/docs/images/ts-lmi-internal.png)

## PiPPy (PyTorch Native solution for large model inference)

@@ -186,7 +186,7 @@ torch-model-archiver --model-name bloom --version 1.0 --handler deepspeed_handle
#### Tune "[responseTimeout](https://github.com/pytorch/serve/blob/5ee02e4f050c9b349025d87405b246e970ee710b/docs/configuration.md?plain=1#L216)" (see [model config YAML file](https://github.com/pytorch/serve/blob/5ee02e4f050c9b349025d87405b246e970ee710b/model-archiver/README.md?plain=1#L164)) if high model loading or inference latency causes response timeout.

#### Tune torchrun parameters
- User is able to tune torchrun parameters in [model config YAML file](https://github.com/pytorch/serve/blob/2f1f52f553e83703b5c380c2570a36708ee5cafa/model-archiver/README.md?plain=1#L179). The supported parameters are defined at [here](https://github.com/pytorch/serve/blob/2f1f52f553e83703b5c380c2570a36708ee5cafa/frontend/archive/src/main/java/org/pytorch/serve/archive/model/ModelConfig.java#L329). For example, by default, `OMP_NUMNER_T?HREADS` is 1. It can be modified in the YAML file.
+ User is able to tune torchrun parameters in [model config YAML file](https://github.com/pytorch/serve/blob/2f1f52f553e83703b5c380c2570a36708ee5cafa/model-archiver/README.md?plain=1#L179). The supported parameters are defined at [here](https://github.com/pytorch/serve/blob/2f1f52f553e83703b5c380c2570a36708ee5cafa/frontend/archive/src/main/java/org/pytorch/serve/archive/model/ModelConfig.java#L329). For example, by default, `OMP_NUMBER_THREADS` is 1. It can be modified in the YAML file.
```yaml
#frontend settings
torchrun:
    OMP_NUMBER_THREADS: 2  # overrides the default of 1, per the sentence above
```
1 change: 1 addition & 0 deletions ts_scripts/spellcheck_conf/wordlist.txt
@@ -1062,3 +1062,4 @@ XLA
inferentia
ActionSLAM
statins
+ chatGPT
