diff --git a/docs/assets/stylesheets/extra.css b/docs/assets/stylesheets/extra.css
index 9b3a43f33..b056f96c7 100644
--- a/docs/assets/stylesheets/extra.css
+++ b/docs/assets/stylesheets/extra.css
@@ -1040,9 +1040,9 @@ html .md-footer-meta.md-typeset a:is(:focus,:hover) {
 
 .md-typeset .tabbed-labels--linked>label>a code {
     /*MKDocs Insiders fix*/
-    /*background: initial;*/
-    font-weight: 600;
-    color: var(--md-primary-fg-color);
+    background: initial;
+    font-weight: 700;
+    color: var(--md-typeset-color);
 }
 
 .md-typeset .highlight :is(.nd,.ni,.nl,.nt),
diff --git a/docs/docs/guides/protips.md b/docs/docs/guides/protips.md
index 09f2e1e97..47bc91bd3 100644
--- a/docs/docs/guides/protips.md
+++ b/docs/docs/guides/protips.md
@@ -97,6 +97,76 @@ This allows you to access the remote `8501` port on `localhost:8501` while the C
 production-grade service deployment not offered by tasks, such as HTTPS domains and auto-scaling. If you run a web app
 as a task and it works, go ahead and run it as a service.
 
+## Docker and Docker Compose
+
+All backends except `runpod`, `vastai`, and `kubernetes` allow using Docker and Docker Compose
+inside `dstack` runs. To do so, a few additional configuration steps are required:
+
+1. Set the `privileged` property to `true`.
+2. Set the `image` property to `dstackai/dind` (or another DinD image).
+3. For tasks and services, add `start-dockerd` as the first command. For dev environments, add `start-dockerd` as the first command
+   in the `init` property. See the examples below.
+
+Note that `start-dockerd` is part of the `dstackai/dind` image. If you use a different DinD image,
+replace it with the corresponding command that starts the Docker daemon.
+
+=== "Task"
+    
+ + ```yaml + type: task + name: task-dind + + privileged: true + image: dstackai/dind + + commands: + - start-dockerd + - docker compose up + ``` + +
+ +=== "Dev environment" +
+ + ```yaml + type: dev-environment + name: vscode-dind + + privileged: true + image: dstackai/dind + + ide: vscode + + init: + - start-dockerd + ``` + +
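+
+=== "Service"
+
+    The same pattern works for a service; the name and `port` value below are illustrative:
+
+    
+
+    ```yaml
+    type: service
+    name: service-dind  # illustrative name
+
+    privileged: true
+    image: dstackai/dind
+
+    commands:
+      - start-dockerd
+      - docker compose up
+    port: 8080  # illustrative: the port your app listens on
+    ```
+
+    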
+
+??? info "Volumes"
+
+    To persist Docker data between runs (images, containers, volumes, etc.), create a `dstack` [volume](../concepts/volumes.md)
+    and attach it in your run configuration:
+
+    ```yaml
+    type: dev-environment
+    name: vscode-dind
+
+    privileged: true
+    image: dstackai/dind
+    ide: vscode
+
+    init:
+      - start-dockerd
+
+    volumes:
+      - name: docker-volume
+        path: /var/lib/docker
+    ```
+
+See more Docker examples in the [`dstack` repo](https://github.com/dstackai/dstack/tree/master/examples/misc/docker-compose).
+
 ## Environment variables
 
 If a configuration requires an environment variable that you don't want to hardcode in the YAML, you can define it
diff --git a/docs/docs/reference/dstack.yml/dev-environment.md b/docs/docs/reference/dstack.yml/dev-environment.md
index b6c6137da..3e1bd0b5b 100644
--- a/docs/docs/reference/dstack.yml/dev-environment.md
+++ b/docs/docs/reference/dstack.yml/dev-environment.md
@@ -46,7 +46,9 @@ ide: vscode
 ide: vscode
 ```
 
-### Docker image
+### Docker
+
+If you want, you can specify a custom Docker image via `image`.
 
 
@@ -82,6 +84,10 @@ ide: vscode
 ide: vscode
 ```
 
+!!! info "Docker and Docker Compose"
+    All backends except `runpod`, `vastai`, and `kubernetes` also allow using [Docker and Docker Compose](../../guides/protips.md#docker-and-docker-compose)
+    inside `dstack` runs.
+
 ### Resources { #_resources }
 
 If you specify memory size, you can either specify an explicit size (e.g. `24GB`) or a
diff --git a/docs/docs/reference/dstack.yml/service.md b/docs/docs/reference/dstack.yml/service.md
index 4bfefdd03..5d638ec41 100644
--- a/docs/docs/reference/dstack.yml/service.md
+++ b/docs/docs/reference/dstack.yml/service.md
@@ -58,7 +58,9 @@ port: 8000
 
-### Docker image
+### Docker
+
+If you want, you can specify a custom Docker image via `image`.
 
 
@@ -102,6 +104,10 @@ port: 8000
 port: 8000
 ```
 
+!!! info "Docker and Docker Compose"
+    All backends except `runpod`, `vastai`, and `kubernetes` also allow using [Docker and Docker Compose](../../guides/protips.md#docker-and-docker-compose)
+    inside `dstack` runs.
+
 ### Model gateway { #model-mapping }
 
 By default, if you run a service, its endpoint is accessible at `https://<run name>.<gateway domain>`.
diff --git a/docs/docs/reference/dstack.yml/task.md b/docs/docs/reference/dstack.yml/task.md
index 7c4eb7ec7..e2e052968 100644
--- a/docs/docs/reference/dstack.yml/task.md
+++ b/docs/docs/reference/dstack.yml/task.md
@@ -82,7 +82,9 @@ When running it, `dstack run` forwards `6000` port to `localhost:6000`, enabling
 
 [//]: # (See [tasks](../../tasks.md#configure-ports) for more detail.)
 
-### Docker image
+### Docker
+
+If you want, you can specify a custom Docker image via `image`.
 
 
@@ -123,6 +125,10 @@ commands:
   - python fine-tuning/qlora/train.py
 ```
 
+!!! info "Docker and Docker Compose"
+    All backends except `runpod`, `vastai`, and `kubernetes` also allow using [Docker and Docker Compose](../../guides/protips.md#docker-and-docker-compose)
+    inside `dstack` runs.
+
 ### Resources { #_resources }
 
 If you specify memory size, you can either specify an explicit size (e.g. `24GB`) or a
diff --git a/docs/examples/misc/docker-compose/index.md b/docs/examples/misc/docker-compose/index.md
new file mode 100644
index 000000000..e69de29bb
diff --git a/examples/misc/docker-compose/.dstack.yml b/examples/misc/docker-compose/.dstack.yml
new file mode 100644
index 000000000..50529c42c
--- /dev/null
+++ b/examples/misc/docker-compose/.dstack.yml
@@ -0,0 +1,13 @@
+type: dev-environment
+name: vscode-dind
+
+privileged: true
+image: dstackai/dind
+ide: vscode
+init:
+  - start-dockerd
+
+spot_policy: auto
+
+resources:
+  gpu: 1
diff --git a/examples/misc/docker-compose/README.md b/examples/misc/docker-compose/README.md
new file mode 100644
index 000000000..e82c4a150
--- /dev/null
+++ b/examples/misc/docker-compose/README.md
@@ -0,0 +1,185 @@
+# Docker Compose
+
+All backends except `runpod`, `vastai`, and `kubernetes` allow using Docker and Docker Compose
+inside `dstack` runs.
+
+This example shows how to deploy Hugging Face [Chat UI :material-arrow-top-right-thin:{ .external }](https://huggingface.co/docs/chat-ui/index){:target="_blank"}
+with [TGI :material-arrow-top-right-thin:{ .external }](https://huggingface.co/docs/text-generation-inference/en/index){:target="_blank"}
+serving [Llama-3.2-3B-Instruct :material-arrow-top-right-thin:{ .external }](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct){:target="_blank"}
+using [Docker Compose :material-arrow-top-right-thin:{ .external }](https://docs.docker.com/compose/){:target="_blank"}.
+
+??? info "Prerequisites"
+    Once `dstack` is [installed](https://dstack.ai/docs/installation), clone the repo and run `dstack init`.
+
+    
+ + ```shell + $ git clone https://github.com/dstackai/dstack + $ cd dstack + $ dstack init + ``` + +
+ +## Deployment + +### Running as a task + +=== "`task.dstack.yml`" + +
+ + ```yaml + type: task + name: chat-ui-task + + privileged: true + image: dstackai/dind + env: + - MODEL_ID=meta-llama/Llama-3.2-3B-Instruct + - HF_TOKEN + commands: + - start-dockerd + - docker compose up + ports: + - 9000 + + # Use either spot or on-demand instances + spot_policy: auto + + resources: + # Required resources + gpu: "NVIDIA:16GB.." + ``` + +
+
+=== "`compose.yaml`"
+
+
+ + ```yaml + services: + app: + image: ghcr.io/huggingface/chat-ui:sha-bf0bc92 + command: + - bash + - -c + - | + echo MONGODB_URL=mongodb://db:27017 > .env.local + echo MODELS='`[{ + "name": "${MODEL_ID?}", + "endpoints": [{"type": "tgi", "url": "http://tgi:8000"}] + }]`' >> .env.local + exec ./entrypoint.sh + ports: + - 127.0.0.1:9000:3000 + depends_on: + - tgi + - db + + tgi: + image: ghcr.io/huggingface/text-generation-inference:sha-704a58c + volumes: + - tgi_data:/data + environment: + HF_TOKEN: ${HF_TOKEN?} + MODEL_ID: ${MODEL_ID?} + PORT: 8000 + deploy: + resources: + reservations: + devices: + - driver: nvidia + count: all + capabilities: [gpu] + + db: + image: mongo:latest + volumes: + - db_data:/data/db + + volumes: + tgi_data: + db_data: + ``` + +
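+
+In the compose file, the `app` container writes Chat UI's `.env.local` on startup, pointing it at the `tgi`
+endpoint and the `db` MongoDB instance; the `${MODEL_ID?}` and `${HF_TOKEN?}` placeholders make Compose fail
+fast if those variables are unset.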
+
+### Deploying as a service
+
+If you'd like to deploy Chat UI as an auto-scalable and secure endpoint,
+use the service configuration. You can find it at [`examples/misc/docker-compose/service.dstack.yml` :material-arrow-top-right-thin:{ .external }](https://github.com/dstackai/dstack/blob/master/examples/misc/docker-compose/service.dstack.yml).
+
+### Running a configuration
+
+To run a configuration, use the [`dstack apply`](https://dstack.ai/docs/reference/cli/index.md#dstack-apply) command.
+
+
+
+```shell
+$ HF_TOKEN=...
+
+$ dstack apply -f examples/misc/docker-compose/task.dstack.yml
+
+ #  BACKEND  REGION    RESOURCES                    SPOT  PRICE
+ 1  runpod   CA-MTL-1  18xCPU, 100GB, A5000:24GB    yes   $0.12
+ 2  runpod   EU-SE-1   18xCPU, 100GB, A5000:24GB    yes   $0.12
+ 3  gcp      us-west4  27xCPU, 150GB, A5000:24GB:2  yes   $0.23
+
+Submit the run chat-ui-task? [y/n]: y
+
+Provisioning...
+---> 100%
+```
+
+
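+
+Once the run is up, the task forwards port `9000`, so Chat UI should be reachable at `http://localhost:9000`.
+For a quick check from another terminal:
+
+
+
+```shell
+$ curl http://localhost:9000  # should return the Chat UI page once the app is up
+```
+
+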
+ +## Persisting data + +To persist data between runs, create a [volume](https://dstack.ai/docs/concepts/volumes/) and attach it to the run +configuration. + +
+ +```yaml +type: task +name: chat-ui-task + +privileged: true +image: dstackai/dind +env: + - MODEL_ID=meta-llama/Llama-3.2-3B-Instruct + - HF_TOKEN +commands: + - start-dockerd + - docker compose up +ports: + - 9000 + +# Use either spot or on-demand instances +spot_policy: auto + +resources: + # Required resources + gpu: "NVIDIA:16GB.." + +volumes: + - name: my-dind-volume + path: /var/lib/docker +``` + +
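+
+The volume itself must be created first. This example includes a volume configuration,
+[`examples/misc/docker-compose/volume.dstack.yml` :material-arrow-top-right-thin:{ .external }](https://github.com/dstackai/dstack/blob/master/examples/misc/docker-compose/volume.dstack.yml),
+which you can create with `dstack apply -f`:
+
+
+
+```yaml
+type: volume
+name: my-dind-volume
+
+backend: aws
+region: eu-west-1
+
+# Required size
+size: 100GB
+```
+
+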
+
+With this change, all Docker data (pulled images, containers, and, crucially, volumes for database and model storage)
+will be persisted.
+
+## Source code
+
+The source code of this example can be found in
+[`examples/misc/docker-compose` :material-arrow-top-right-thin:{ .external }](https://github.com/dstackai/dstack/blob/master/examples/misc/docker-compose).
+
+## What's next?
+
+1. Check [dev environments](https://dstack.ai/docs/dev-environments), [tasks](https://dstack.ai/docs/tasks),
+   [services](https://dstack.ai/docs/services), and [protips](https://dstack.ai/docs/protips).
diff --git a/examples/misc/docker-compose/compose.yaml b/examples/misc/docker-compose/compose.yaml
new file mode 100644
index 000000000..b63b4c78b
--- /dev/null
+++ b/examples/misc/docker-compose/compose.yaml
@@ -0,0 +1,43 @@
+services:
+  app:
+    image: ghcr.io/huggingface/chat-ui:sha-bf0bc92
+    command:
+      - bash
+      - -c
+      - |
+        echo MONGODB_URL=mongodb://db:27017 > .env.local
+        echo MODELS='`[{
+          "name": "${MODEL_ID?}",
+          "endpoints": [{"type": "tgi", "url": "http://tgi:8000"}]
+        }]`' >> .env.local
+        exec ./entrypoint.sh
+    ports:
+      - 127.0.0.1:9000:3000
+    depends_on:
+      - tgi
+      - db
+
+  tgi:
+    image: ghcr.io/huggingface/text-generation-inference:sha-704a58c
+    volumes:
+      - tgi_data:/data
+    environment:
+      HF_TOKEN: ${HF_TOKEN?}
+      MODEL_ID: ${MODEL_ID?}
+      PORT: 8000
+    deploy:
+      resources:
+        reservations:
+          devices:
+            - driver: nvidia
+              count: all
+              capabilities: [gpu]
+
+  db:
+    image: mongo:latest
+    volumes:
+      - db_data:/data/db
+
+volumes:
+  tgi_data:
+  db_data:
diff --git a/examples/misc/docker-compose/service.dstack.yml b/examples/misc/docker-compose/service.dstack.yml
new file mode 100644
index 000000000..8cfe8174e
--- /dev/null
+++ b/examples/misc/docker-compose/service.dstack.yml
@@ -0,0 +1,25 @@
+type: service
+name: chat-ui-service
+
+privileged: true
+image: dstackai/dind
+env:
+  - MODEL_ID=meta-llama/Llama-3.2-3B-Instruct
+  - HF_TOKEN
+commands:
+  - start-dockerd
+  - docker compose up
+port: 9000
+auth: false
+
+# Use either spot or on-demand instances
+spot_policy: auto
+
+resources:
+  # Required resources
+  gpu: "NVIDIA:16GB.."
+
+# Uncomment to persist data
+#volumes:
+#  - name: my-dind-volume
+#    path: /var/lib/docker
\ No newline at end of file
diff --git a/examples/misc/docker-compose/task.dstack.yml b/examples/misc/docker-compose/task.dstack.yml
new file mode 100644
index 000000000..2e7b6bae0
--- /dev/null
+++ b/examples/misc/docker-compose/task.dstack.yml
@@ -0,0 +1,25 @@
+type: task
+name: chat-ui-task
+
+privileged: true
+image: dstackai/dind
+env:
+  - MODEL_ID=meta-llama/Llama-3.2-3B-Instruct
+  - HF_TOKEN
+commands:
+  - start-dockerd
+  - docker compose up
+ports:
+  - 9000
+
+# Use either spot or on-demand instances
+spot_policy: auto
+
+resources:
+  # Required resources
+  gpu: "NVIDIA:16GB.."
+ +# Uncomment to persist data +#volumes: +# - name: my-dind-volume +# path: /var/lib/docker diff --git a/examples/misc/docker-compose/volume.dstack.yml b/examples/misc/docker-compose/volume.dstack.yml new file mode 100644 index 000000000..f29576d69 --- /dev/null +++ b/examples/misc/docker-compose/volume.dstack.yml @@ -0,0 +1,8 @@ +type: volume +name: my-dind-volume + +backend: aws +region: eu-west-1 + +# Required size +size: 100GB \ No newline at end of file diff --git a/mkdocs.yml b/mkdocs.yml index ff3b0d9d2..06a6db1a2 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -241,6 +241,8 @@ nav: - LLMs: - Llama 3.1: examples/llms/llama31/index.md - Llama 3.2: examples/llms/llama32/index.md + - Misc: + - Docker Compose: examples/misc/docker-compose/index.md - Backends: backends.md - Blog: - blog/index.md