@@ -123,6 +125,10 @@ commands:
- python fine-tuning/qlora/train.py
```
+!!! info "Docker and Docker Compose"
+    All backends except `runpod`, `vastai` and `kubernetes` also let you use [Docker and Docker Compose](../../guides/protips.md#docker-and-docker-compose)
+    inside `dstack` runs.
+
### Resources { #_resources }
If you specify memory size, you can either specify an explicit size (e.g. `24GB`) or a
diff --git a/docs/examples/misc/docker-compose/index.md b/docs/examples/misc/docker-compose/index.md
new file mode 100644
index 000000000..e69de29bb
diff --git a/examples/misc/docker-compose/.dstack.yml b/examples/misc/docker-compose/.dstack.yml
new file mode 100644
index 000000000..50529c42c
--- /dev/null
+++ b/examples/misc/docker-compose/.dstack.yml
@@ -0,0 +1,13 @@
+type: dev-environment
+name: vscode-dind
+
+privileged: true
+image: dstackai/dind
+ide: vscode
+init:
+ - start-dockerd
+
+spot_policy: auto
+
+resources:
+ gpu: 1
diff --git a/examples/misc/docker-compose/README.md b/examples/misc/docker-compose/README.md
new file mode 100644
index 000000000..e82c4a150
--- /dev/null
+++ b/examples/misc/docker-compose/README.md
@@ -0,0 +1,185 @@
+# Docker Compose
+
+All backends except `runpod`, `vastai` and `kubernetes` let you use Docker and Docker Compose
+inside `dstack` runs.
+
+This example shows how to deploy Hugging Face [Chat UI :material-arrow-top-right-thin:{ .external }](https://huggingface.co/docs/chat-ui/index){:target="_blank"}
+with [TGI :material-arrow-top-right-thin:{ .external }](https://huggingface.co/docs/text-generation-inference/en/index){:target="_blank"}
+serving [Llama-3.2-3B-Instruct :material-arrow-top-right-thin:{ .external }](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct){:target="_blank"}
+using [Docker Compose :material-arrow-top-right-thin:{ .external }](https://docs.docker.com/compose/){:target="_blank"}.
+
+??? info "Prerequisites"
+    Once `dstack` is [installed](https://dstack.ai/docs/installation), go ahead and clone the repo, then run `dstack init`.
+
+
+
+ ```shell
+ $ git clone https://github.com/dstackai/dstack
+ $ cd dstack
+ $ dstack init
+ ```
+
+
+
+## Deployment
+
+### Running as a task
+
+=== "`task.dstack.yml`"
+
+
+
+ ```yaml
+ type: task
+ name: chat-ui-task
+
+ privileged: true
+ image: dstackai/dind
+ env:
+ - MODEL_ID=meta-llama/Llama-3.2-3B-Instruct
+ - HF_TOKEN
+ commands:
+ - start-dockerd
+ - docker compose up
+ ports:
+ - 9000
+
+ # Use either spot or on-demand instances
+ spot_policy: auto
+
+ resources:
+ # Required resources
+ gpu: "NVIDIA:16GB.."
+ ```
+
+
+
+=== "`compose.yml`"
+
+
+
+ ```yaml
+ services:
+ app:
+ image: ghcr.io/huggingface/chat-ui:sha-bf0bc92
+ command:
+ - bash
+ - -c
+ - |
+ echo MONGODB_URL=mongodb://db:27017 > .env.local
+ echo MODELS='`[{
+ "name": "${MODEL_ID?}",
+ "endpoints": [{"type": "tgi", "url": "http://tgi:8000"}]
+ }]`' >> .env.local
+ exec ./entrypoint.sh
+ ports:
+ - 127.0.0.1:9000:3000
+ depends_on:
+ - tgi
+ - db
+
+ tgi:
+ image: ghcr.io/huggingface/text-generation-inference:sha-704a58c
+ volumes:
+ - tgi_data:/data
+ environment:
+ HF_TOKEN: ${HF_TOKEN?}
+ MODEL_ID: ${MODEL_ID?}
+ PORT: 8000
+ deploy:
+ resources:
+ reservations:
+ devices:
+ - driver: nvidia
+ count: all
+ capabilities: [gpu]
+
+ db:
+ image: mongo:latest
+ volumes:
+ - db_data:/data/db
+
+ volumes:
+ tgi_data:
+ db_data:
+ ```
+
+
+
+### Deploying as a service
+
+If you'd like to deploy Chat UI as an auto-scalable and secure endpoint,
+use the service configuration. You can find it at [`examples/misc/docker-compose/service.dstack.yml` :material-arrow-top-right-thin:{ .external }](https://github.com/dstackai/dstack/blob/master/examples/misc/docker-compose/service.dstack.yml).
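+
+For example, the service can be applied the same way as the task. This is a sketch; it assumes a
+[gateway](https://dstack.ai/docs/concepts/gateways/) is already configured for the project, which services
+need in order to expose an endpoint:
+
+```shell
+$ HF_TOKEN=...
+$ dstack apply -f examples/misc/docker-compose/service.dstack.yml
+```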
+
+### Running a configuration
+
+To run a configuration, use the [`dstack apply`](https://dstack.ai/docs/reference/cli/index.md#dstack-apply) command.
+
+
+
+```shell
+$ HF_TOKEN=...
+
+$ dstack apply -f examples/misc/docker-compose/task.dstack.yml
+
+ # BACKEND REGION RESOURCES SPOT PRICE
+ 1 runpod CA-MTL-1 18xCPU, 100GB, A5000:24GB yes $0.12
+ 2 runpod EU-SE-1 18xCPU, 100GB, A5000:24GB yes $0.12
+ 3 gcp us-west4 27xCPU, 150GB, A5000:24GB:2 yes $0.23
+
+Submit the run chat-ui-task? [y/n]: y
+
+Provisioning...
+---> 100%
+```
+
+
+
+## Persisting data
+
+To persist data between runs, create a [volume](https://dstack.ai/docs/concepts/volumes/) and attach it to the run
+configuration.
+
+
+
+```yaml
+type: task
+name: chat-ui-task
+
+privileged: true
+image: dstackai/dind
+env:
+ - MODEL_ID=meta-llama/Llama-3.2-3B-Instruct
+ - HF_TOKEN
+commands:
+ - start-dockerd
+ - docker compose up
+ports:
+ - 9000
+
+# Use either spot or on-demand instances
+spot_policy: auto
+
+resources:
+ # Required resources
+ gpu: "NVIDIA:16GB.."
+
+volumes:
+ - name: my-dind-volume
+ path: /var/lib/docker
+```
+
+
+
+With this change, all Docker data is persisted between runs: pulled images, containers, and, crucially,
+the volumes backing the database and model storage.
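+
+For example, the volume referenced above can be created from the `volume.dstack.yml` file included with this
+example (adjust `backend`, `region`, and `size` to your setup) before applying the task configuration:
+
+```shell
+$ dstack apply -f examples/misc/docker-compose/volume.dstack.yml
+$ dstack apply -f examples/misc/docker-compose/task.dstack.yml
+```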
+
+## Source code
+
+The source code of this example can be found in
+[`examples/misc/docker-compose` :material-arrow-top-right-thin:{ .external }](https://github.com/dstackai/dstack/blob/master/examples/misc/docker-compose).
+
+## What's next?
+
+1. Check [dev environments](https://dstack.ai/docs/dev-environments), [tasks](https://dstack.ai/docs/tasks),
+ [services](https://dstack.ai/docs/services), and [protips](https://dstack.ai/docs/protips).
diff --git a/examples/misc/docker-compose/compose.yaml b/examples/misc/docker-compose/compose.yaml
new file mode 100644
index 000000000..b63b4c78b
--- /dev/null
+++ b/examples/misc/docker-compose/compose.yaml
@@ -0,0 +1,43 @@
+services:
+ app:
+ image: ghcr.io/huggingface/chat-ui:sha-bf0bc92
+ command:
+ - bash
+ - -c
+ - |
+ echo MONGODB_URL=mongodb://db:27017 > .env.local
+ echo MODELS='`[{
+ "name": "${MODEL_ID?}",
+ "endpoints": [{"type": "tgi", "url": "http://tgi:8000"}]
+ }]`' >> .env.local
+ exec ./entrypoint.sh
+ ports:
+ - 127.0.0.1:9000:3000
+ depends_on:
+ - tgi
+ - db
+
+ tgi:
+ image: ghcr.io/huggingface/text-generation-inference:sha-704a58c
+ volumes:
+ - tgi_data:/data
+ environment:
+ HF_TOKEN: ${HF_TOKEN?}
+ MODEL_ID: ${MODEL_ID?}
+ PORT: 8000
+ deploy:
+ resources:
+ reservations:
+ devices:
+ - driver: nvidia
+ count: all
+ capabilities: [gpu]
+
+ db:
+ image: mongo:latest
+ volumes:
+ - db_data:/data/db
+
+volumes:
+ tgi_data:
+ db_data:
diff --git a/examples/misc/docker-compose/service.dstack.yml b/examples/misc/docker-compose/service.dstack.yml
new file mode 100644
index 000000000..8cfe8174e
--- /dev/null
+++ b/examples/misc/docker-compose/service.dstack.yml
@@ -0,0 +1,25 @@
+type: service
+name: chat-ui-service
+
+privileged: true
+image: dstackai/dind
+env:
+ - MODEL_ID=meta-llama/Llama-3.2-3B-Instruct
+ - HF_TOKEN
+commands:
+ - start-dockerd
+ - docker compose up
+port: 9000
+auth: false
+
+# Use either spot or on-demand instances
+spot_policy: auto
+
+resources:
+ # Required resources
+ gpu: "NVIDIA:16GB.."
+
+# Uncomment to persist data
+#volumes:
+# - name: my-dind-volume
+# path: /var/lib/docker
\ No newline at end of file
diff --git a/examples/misc/docker-compose/task.dstack.yml b/examples/misc/docker-compose/task.dstack.yml
new file mode 100644
index 000000000..2e7b6bae0
--- /dev/null
+++ b/examples/misc/docker-compose/task.dstack.yml
@@ -0,0 +1,25 @@
+type: task
+name: chat-ui-task
+
+privileged: true
+image: dstackai/dind
+env:
+ - MODEL_ID=meta-llama/Llama-3.2-3B-Instruct
+ - HF_TOKEN
+commands:
+ - start-dockerd
+ - docker compose up
+ports:
+ - 9000
+
+# Use either spot or on-demand instances
+spot_policy: auto
+
+resources:
+ # Required resources
+ gpu: "NVIDIA:16GB.."
+
+# Uncomment to persist data
+#volumes:
+# - name: my-dind-volume
+# path: /var/lib/docker
diff --git a/examples/misc/docker-compose/volume.dstack.yml b/examples/misc/docker-compose/volume.dstack.yml
new file mode 100644
index 000000000..f29576d69
--- /dev/null
+++ b/examples/misc/docker-compose/volume.dstack.yml
@@ -0,0 +1,8 @@
+type: volume
+name: my-dind-volume
+
+backend: aws
+region: eu-west-1
+
+# Required size
+size: 100GB
\ No newline at end of file
diff --git a/mkdocs.yml b/mkdocs.yml
index ff3b0d9d2..06a6db1a2 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -241,6 +241,8 @@ nav:
- LLMs:
- Llama 3.1: examples/llms/llama31/index.md
- Llama 3.2: examples/llms/llama32/index.md
+ - Misc:
+ - Docker Compose: examples/misc/docker-compose/index.md
- Backends: backends.md
- Blog:
- blog/index.md