update docs to djl 0.22.1 (#664)

siddvenk authored Apr 25, 2023
1 parent 8fb9c87 commit e5c2113
Showing 6 changed files with 21 additions and 21 deletions.
12 changes: 6 additions & 6 deletions README.md
@@ -50,20 +50,20 @@ brew services stop djl-serving
 For Ubuntu
 
 ```
-curl -O https://publish.djl.ai/djl-serving/djl-serving_0.22.0-1_all.deb
-sudo dpkg -i djl-serving_0.22.0-1_all.deb
+curl -O https://publish.djl.ai/djl-serving/djl-serving_0.22.1-1_all.deb
+sudo dpkg -i djl-serving_0.22.1-1_all.deb
 ```
 
 For Windows
 
 We are considering to create a `chocolatey` package for Windows. For the time being, you can
-download djl-serving zip file from [here](https://publish.djl.ai/djl-serving/serving-0.22.0.zip).
+download djl-serving zip file from [here](https://publish.djl.ai/djl-serving/serving-0.22.1.zip).
 
 ```
-curl -O https://publish.djl.ai/djl-serving/serving-0.22.0.zip
-unzip serving-0.22.0.zip
+curl -O https://publish.djl.ai/djl-serving/serving-0.22.1.zip
+unzip serving-0.22.1.zip
 # start djl-serving
-serving-0.22.0\bin\serving.bat
+serving-0.22.1\bin\serving.bat
 ```
 
 ### Docker
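With either package installed and the server started, a quick smoke test (a minimal check, assuming the default port 8080 and djl-serving's `/ping` health endpoint):

```
curl http://localhost:8080/ping
# a healthy server responds with {"status": "Healthy"}
```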
16 changes: 8 additions & 8 deletions benchmark/README.md
@@ -43,25 +43,25 @@ sudo snap alias djlbench djl-bench
 - Or download .deb package from S3
 
 ```
-curl -O https://publish.djl.ai/djl-bench/0.22.0/djl-bench_0.22.0-1_all.deb
-sudo dpkg -i djl-bench_0.22.0-1_all.deb
+curl -O https://publish.djl.ai/djl-bench/0.22.1/djl-bench_0.22.1-1_all.deb
+sudo dpkg -i djl-bench_0.22.1-1_all.deb
 ```
 
 For macOS, centOS or Amazon Linux 2
 
-You can download djl-bench zip file from [here](https://publish.djl.ai/djl-bench/0.22.0/benchmark-0.22.0.zip).
+You can download djl-bench zip file from [here](https://publish.djl.ai/djl-bench/0.22.1/benchmark-0.22.1.zip).
 
 ```
-curl -O https://publish.djl.ai/djl-bench/0.22.0/benchmark-0.22.0.zip
-unzip benchmark-0.22.0.zip
-rm benchmark-0.22.0.zip
-sudo ln -s $PWD/benchmark-0.22.0/bin/benchmark /usr/bin/djl-bench
+curl -O https://publish.djl.ai/djl-bench/0.22.1/benchmark-0.22.1.zip
+unzip benchmark-0.22.1.zip
+rm benchmark-0.22.1.zip
+sudo ln -s $PWD/benchmark-0.22.1/bin/benchmark /usr/bin/djl-bench
 ```
 
 For Windows
 
 We are considering to create a `chocolatey` package for Windows. For the time being, you can
-download djl-bench zip file from [here](https://publish.djl.ai/djl-bench/0.22.0/benchmark-0.22.0.zip).
+download djl-bench zip file from [here](https://publish.djl.ai/djl-bench/0.22.1/benchmark-0.22.1.zip).
 
 Or you can run benchmark using gradle:
 
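For a sense of how the installed binary is invoked, a sketch of a benchmark run (the flags `-e`, `-u`, `-s`, and `-c` — engine, model URL, input shape, and iteration count — are assumed from djl-bench's usage, and the ResNet model URL is illustrative):

```
# benchmark a PyTorch model zoo ResNet for 100 iterations
djl-bench -e PyTorch \
    -u djl://ai.djl.pytorch/resnet \
    -s 1,3,224,224 \
    -c 100
```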
4 changes: 2 additions & 2 deletions engines/python/README.md
@@ -29,13 +29,13 @@ The javadocs output is generated in the `build/doc/javadoc` folder.
 ## Installation
 You can pull the Python engine from the central Maven repository by including the following dependency:
 
-- ai.djl.python:python:0.22.0
+- ai.djl.python:python:0.22.1
 
 ```xml
 <dependency>
     <groupId>ai.djl.python</groupId>
     <artifactId>python</artifactId>
-    <version>0.22.0</version>
+    <version>0.22.1</version>
     <scope>runtime</scope>
 </dependency>
 ```
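To verify that the bumped artifact resolves before wiring it into a build, one option is Maven's `dependency:get` (coordinates taken from the snippet above):

```
mvn dependency:get -Dartifact=ai.djl.python:python:0.22.1
```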
6 changes: 3 additions & 3 deletions serving/docker/README.md
@@ -32,7 +32,7 @@ mkdir models
 cd models
 curl -O https://resources.djl.ai/test-models/pytorch/bert_qa_jit.tar.gz
 
-docker run -it --rm -v $PWD:/opt/ml/model -p 8080:8080 deepjavalibrary/djl-serving:0.22.0
+docker run -it --rm -v $PWD:/opt/ml/model -p 8080:8080 deepjavalibrary/djl-serving:0.22.1
 ```
 
 ### GPU
@@ -42,7 +42,7 @@ mkdir models
 cd models
 curl -O https://resources.djl.ai/test-models/pytorch/bert_qa_jit.tar.gz
 
-docker run -it --runtime=nvidia -v $PWD:/opt/ml/model -p 8080:8080 deepjavalibrary/djl-serving:pytorch-cu118-nightly
+docker run -it --runtime=nvidia -v $PWD:/opt/ml/model -p 8080:8080 deepjavalibrary/djl-serving:0.22.1-pytorch-cu118
 ```
 
 ### AWS Inferentia
@@ -52,5 +52,5 @@ mkdir models
 cd models
 
 curl -O https://resources.djl.ai/test-models/pytorch/bert_qa_inf1.tar.gz
-docker run --device /dev/neuron0 -it --rm -v $PWD:/opt/ml/model -p 8080:8080 deepjavalibrary/djl-serving:0.22.0-inf1
+docker run --device /dev/neuron0 -it --rm -v $PWD:/opt/ml/model -p 8080:8080 deepjavalibrary/djl-serving:0.22.1-inf1
 ```
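Once any of these containers is serving the mounted archive, a request can be sent to the predictions endpoint. A sketch, assuming the model registers under the archive name `bert_qa_jit` and accepts the standard BertQA JSON input:

```
curl -X POST http://127.0.0.1:8080/predictions/bert_qa_jit \
    -H "Content-Type: application/json" \
    -d '{"question": "How is the weather", "paragraph": "The weather is nice, it is a beautiful day."}'
```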
@@ -25,7 +25,7 @@ public void testInstallDependency() throws IOException {
         dm.installEngine("OnnxRuntime");
         dm.installEngine("XGBoost");
 
-        dm.installDependency("ai.djl.pytorch:pytorch-jni:2.0.0-0.22.0");
+        dm.installDependency("ai.djl.pytorch:pytorch-jni:2.0.0-0.22.1");
 
         Assert.assertThrows(() -> dm.installDependency("ai.djl.pytorch:pytorch-jni"));
     }
2 changes: 1 addition & 1 deletion wlm/README.md
@@ -56,7 +56,7 @@ You can pull the server from the central Maven repository by including the follo
 <dependency>
     <groupId>ai.djl.serving</groupId>
     <artifactId>wlm</artifactId>
-    <version>0.22.0</version>
+    <version>0.22.1</version>
 </dependency>
 ```
