Run a hardware-accelerated KDE desktop in a container. This image is heavily influenced by the Selkies Project and combines their GLX and EGL containers to provide an accelerated desktop environment for NVIDIA, AMD, and Intel machines.
Note
This container image was designed to work with vast.ai and runpod.io but will also work locally or with other GPU cloud services; however, support for other services is limited.
You may connect to the container through the Selkies-gstreamer WebRTC interface (default port `6100`) or through the KasmVNC client (default port `6200`).
A TURN server is bundled with the image to ensure connectivity is possible in most circumstances. You should always prefer the WebRTC interface and use the VNC client only if you are unable to establish a WebRTC connection.
When running with an NVIDIA GPU, the container will attempt to download the relevant graphics driver and start a GLX enabled Xorg session. For all other systems an Xvfb instance will be launched for VirtualGL rendering.
In situations where the NVIDIA Xorg instance cannot be launched, the container will fall back to VirtualGL rendering if possible. This may happen where the drivers are incapable of handling a headless display or if there is already a physical display attached to the GPU.
In the worst-case scenario, a desktop will still be launched but rendered with llvmpipe if there is no NVIDIA driver present or if `/dev/dri` or `/dev/kfd` are not available within the container.
Docker images are built automatically through a GitHub Actions workflow and hosted at the GitHub Container Registry.
An incremental build process is used to avoid needing a huge cache. The following images are used to provide functionality:
- nvidia/cuda / ubuntu ↴
- ai-dock/base-image ↴
- ai-dock/linux-desktop
The `:latest` tag points to `:latest-cuda`.
Tags follow these patterns:

- `:cuda-[x.x.x]{-cudnn[x]}-[base|runtime|devel]-[ubuntu-version]`
  - `:latest-cuda` → `:cuda-11.8.0-runtime-22.04`
- `:rocm-[x.x.x]-[core|runtime|devel]-[ubuntu-version]`
  - `:latest-rocm` → `:rocm-5.6-runtime-22.04`
  - ROCm builds are experimental. Please give feedback.
- `:cpu-[ubuntu-version]`
  - `:latest-cpu` → `:cpu-22.04`
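For example, assuming the image is published as `ghcr.io/ai-dock/linux-desktop` on the GitHub Container Registry (the exact path is not stated above):

```bash
# Pull the default CUDA build; :latest points to :latest-cuda
docker pull ghcr.io/ai-dock/linux-desktop:latest-cuda

# Or pin a specific variant following the tag patterns above
docker pull ghcr.io/ai-dock/linux-desktop:cuda-11.8.0-runtime-22.04
```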
Browse here for an image suitable for your target environment.
Supported Desktop Environments: KDE Plasma

Supported Platforms: NVIDIA CUDA, AMD ROCm, CPU/iGPU
You can self-build from source by editing `docker-compose.yaml` or `.env` and running `docker compose build`.
It is a good idea to leave the source tree alone and copy any edits you would like to make into `build/COPY_ROOT_EXTRA/...`. The structure within this directory will be overlaid on `/` at the end of the build process.
As this overlaying happens after the main build, it is easy to add extra files such as ML models and datasets to your images. You will also be able to rebuild quickly if your file overrides are made here.
Any directories and files that you add into `opt/storage` will be made available in the running container at `$WORKSPACE/storage`.
This directory is monitored by `inotifywait`. Any items appearing in this directory will be automatically linked to the application directories as defined in `/opt/ai-dock/storage_monitor/etc/mappings.sh`. This is particularly useful if you need to run several applications that each need to make use of the stored files.
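As an illustrative sketch of both mechanisms (the model filename and `models` subdirectory are hypothetical; check `mappings.sh` for the directories that are actually linked):

```bash
# Stage a file in the build overlay; everything under build/COPY_ROOT_EXTRA
# is overlaid on / at the end of the build, so this lands at /opt/storage/models/
mkdir -p build/COPY_ROOT_EXTRA/opt/storage/models
cp ~/Downloads/my-model.safetensors build/COPY_ROOT_EXTRA/opt/storage/models/

# Rebuild; only the overlay layer should need re-processing
docker compose build
```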
A 'feature-complete' `docker-compose.yaml` file is included for your convenience. All features of the image are included - simply edit the environment variables in `.env`, save, and then type `docker compose up`.
If you prefer to use the standard `docker run` syntax, the command to pass is `init.sh`.
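A minimal `docker run` sketch, assuming the image name `ghcr.io/ai-dock/linux-desktop` and an NVIDIA host (the ports, tag, and GPU flags shown are illustrative):

```bash
docker run -d \
  --gpus all \
  -p 6100:6100 \
  -p 6200:6200 \
  -v "$(pwd)"/workspace:/workspace \
  -e WEB_PASSWORD="change-me" \
  ghcr.io/ai-dock/linux-desktop:latest-cuda \
  init.sh
```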
This image should be compatible with any GPU cloud platform. You simply need to pass environment variables at runtime.
Note
Please raise an issue on this repository if your provider cannot run the image.
Container Cloud
Container providers don't give you access to the docker host but are quick and easy to set up. They are often inexpensive when compared to a full VM or bare metal solution.
All images built for ai-dock are tested for compatibility with both vast.ai and runpod.io.
See a list of pre-configured templates here.
Warning
Container cloud providers may offer both 'community' and 'secure' versions of their cloud. If your use case involves storing sensitive information (e.g. API keys, auth tokens) then you should always choose the secure option.
VM Cloud
Running docker images on a virtual machine/bare metal server is much like running locally.
You'll need to:

- Configure your server
- Set up Docker
- Clone this repository
- Edit `.env` and `docker-compose.yaml`
- Run `docker compose up`
Find a list of compatible VM providers here.
All services listen for connections at `0.0.0.0`. This gives you some flexibility in how you interact with your instance:
Expose the Ports
This is fine if you are working locally but can be dangerous for remote connections, where data is passed in plaintext between your machine and the container over HTTP.
SSH Tunnel
You will only need to expose port `22` (SSH), which can then be used with port forwarding to allow secure connections to your services.
If you are unfamiliar with port forwarding then you should read the guides here and here.
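As a sketch, assuming the container's SSH server is reachable on the host's port 22 and you want the WebRTC interface on your local machine:

```bash
# Forward local port 6100 to the Selkies WebRTC interface inside the instance,
# then browse to http://localhost:6100
ssh -L 6100:localhost:6100 root@your.host.address
```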
Cloudflare Tunnel
You can use the included `cloudflared` service to make secure connections without having to expose any ports to the public internet. See more below.
| Variable | Description |
| --- | --- |
| `CF_TUNNEL_TOKEN` | Cloudflare Zero Trust tunnel token - see documentation. |
| `CF_QUICK_TUNNELS` | Create ephemeral Cloudflare tunnels for web services (default `false`) |
| `COTURN_USER` | Username for Coturn auth (default `user`) |
| `COTURN_PASSWORD` | Password for Coturn auth. Auto-generated by default. |
| `COTURN_LISTEN_ADDRESS` | Override the default listening address. Uses the external IP by default. |
| `COTURN_PORT_HOST` | Coturn listening port (default `3478`) |
| `DIRECT_ADDRESS` | IP/hostname for service portal direct links (default `localhost`) |
| `DIRECT_ADDRESS_GET_WAN` | Use the internet-facing interface for direct links (default `false`) |
| `ENABLE_COTURN` | Enable the TURN server (default `false`) |
| `GPU_COUNT` | Limit the number of available GPUs |
| `PROVISIONING_SCRIPT` | URL of a remote script to execute on init. See note. |
| `RCLONE_*` | Rclone configuration - see rclone documentation |
| `SSH_PORT_LOCAL` | Set a non-standard port for SSH (default `22`) |
| `SSH_PUBKEY` | Your public key for SSH |
| `TURN_HOST` | TURN server address if not using the built-in Coturn server |
| `TURN_PORT` | TURN server port if not using the built-in Coturn server |
| `TURN_USERNAME` | TURN server username |
| `TURN_PASSWORD` | TURN server password |
| `WEB_ENABLE_AUTH` | Enable password protection for web services (default `true`) |
| `WEB_USER` | Username for web services (default `user`) |
| `WEB_PASSWORD` | Password for web services (default `password`) |
| `WEBRTC_ENABLE_RESIZE` | Enable remote desktop resizing (default `false`) |
| `WEBRTC_ENCODER` | WebRTC encoder (default `nvh264enc`). Available options: `vah264enc`, `x264enc`, `vp8enc`, `vp9enc` |
| `WORKSPACE` | A volume path (default `/workspace/`) |
| `WORKSPACE_SYNC` | Move mamba environments and services to the workspace if mounted (default `true`) |
Environment variables can be specified by using any of the standard methods (`docker-compose.yaml`, `docker run -e ...`). Additionally, environment variables can also be passed as parameters of `init.sh`.
Passing environment variables to `init.sh` is usually unnecessary, but it is useful for some cloud environments where the full `docker run` command cannot be specified.

Example usage: `docker run -e STANDARD_VAR1="this value" -e STANDARD_VAR2="that value" init.sh EXTRA_VAR="other value"`
All ai-dock containers are interactive and will not drop root privileges. You should ensure that your docker daemon runs as an unprivileged user.
A system user will be created at startup. The UID will be either 1000 or will match the UID of the `$WORKSPACE` bind mount.
The user will share the root user's SSH public key.
Some processes may start in the user context for convenience only.
By default, all exposed web services are protected by a single login form at `:1111/login`.
The default username is `user` and the password is auto-generated unless you have passed a value in the environment variable `WEB_PASSWORD`. To find the auto-generated password and related tokens, type `env | grep WEB_` from inside the container.
You can set your credentials by passing environment variables as shown above.
If you are running the image locally on a trusted network, you may disable authentication by setting the environment variable `WEB_ENABLE_AUTH=false`.
If you need to connect programmatically to the web services, you can authenticate using either `Bearer $WEB_TOKEN` or `Basic $WEB_PASSWORD_B64`.
The security measures included aim to be as secure as basic authentication, i.e. not secure without HTTPS. Please use the provided Cloudflare connections wherever possible.
Note
You can use `set-web-credentials.sh <username> <password>` to change the username and password in a running container.
It can be useful to perform certain actions when starting a container, such as creating directories and downloading files.
You can use the environment variable `PROVISIONING_SCRIPT` to specify the URL of a script you'd like to run.
The URL must point to a plain text file - GitHub Gists/Pastebin (raw) are suitable options.
If you are running locally, you may instead opt to mount a script at `/opt/ai-dock/bin/provisioning.sh`.
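A minimal provisioning script sketch (the URL and paths are placeholders, not part of the image):

```bash
#!/bin/bash
# provisioning.sh - executed once at container init

# Create a directory in the persistent workspace
mkdir -p "${WORKSPACE}/downloads"

# Download a file your workflow needs (placeholder URL)
wget -O "${WORKSPACE}/downloads/example.dat" "https://example.org/example.dat"
```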
Warning
Only use scripts that you trust and which cannot be changed without your consent.
Data inside Docker containers is ephemeral - you'll lose all of it when the container is destroyed.
You may opt to mount a data volume at `/workspace` - this is a directory that ai-dock images will look for to make downloaded data available outside of the container for persistence.
This is usually of importance where large files are downloaded at runtime or if you need a space to save your work. This is the ideal location to store any code you are working on.
You can define an alternative path for the workspace directory by passing the environment variable `WORKSPACE=/my/alternative/path/` and mounting your volume there. This feature will generally assist where cloud providers enforce their own mountpoint location for persistent storage.
The provided `docker-compose.yaml` will mount the local directory `./workspace` at `/workspace`.
As Docker containers generally run as the root user, new files created in `/workspace` will be owned by uid 0 (root).

To ensure that the files remain accessible to the local user that owns the directory, the docker entrypoint will set a default ACL on the directory by executing the command `setfacl -d -m u:${WORKSPACE_UID}:rwx /workspace`.
If you do not want this, you can set the environment variable `SKIP_ACL=true`.
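Putting it together, a sketch that mounts a custom workspace and skips the ACL step (the host path and image name are illustrative, as assumed earlier):

```bash
docker run -d \
  -v /mnt/data/my-workspace:/my/alternative/path/ \
  -e WORKSPACE=/my/alternative/path/ \
  -e SKIP_ACL=true \
  ghcr.io/ai-dock/linux-desktop:latest-cuda \
  init.sh
```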
This image will spawn multiple processes upon starting a container because some of our remote environments do not support more than one container per instance.
All processes are managed by supervisord and will restart upon failure until you either manually stop them or terminate the container.
Note
Some of the included services would not normally be found inside of a container. They are, however, necessary here as some cloud providers give no access to the host; containers are deployed as if they were a virtual machine.
This provides the WebRTC interface for accessing the desktop through a web browser.
The service will bind to port `6100`.
See the project page for more information.
This provides the VNC fallback interface for accessing the desktop through a web browser.
The service will bind to port `6200`.
See the project page for more information.
This service relays the desktop on display `:0` to the VNC server on display `:1`.
Learn about kasmxproxy here.
KDE plasma desktop environment. Restarting this service will also restart the currently running X server.
Either an Xorg server when running on NVIDIA hardware or Xvfb for VirtualGL rendering.
Fcitx [ˈfaɪtɪks] is an input method framework with extension support.
See the project page for more information.
Provides audio support for the WebRTC interface. Audio is not supported over VNC.
This is a simple webserver acting as a reverse proxy.
Caddy is used to enable basic authentication for all sensitive web services.
To make changes to the Caddy configuration inside a running container, edit `/opt/caddy/share/base_config` and then run `supervisorctl restart caddy`.
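For example, from a shell inside the running container:

```bash
# Edit the base config with any available editor, then reload Caddy
vi /opt/caddy/share/base_config
supervisorctl restart caddy
```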
This is a simple list of links to the web services available inside the container.
The service will bind to port `1111`.
For each service, you will find a direct link and, if you have set `CF_QUICK_TUNNELS=true`, a link to the service via a fast and secure Cloudflare tunnel.
A simple web-based log viewer and process manager are included for convenience.
The Cloudflare tunnel daemon will start if you have provided a token with the `CF_TUNNEL_TOKEN` environment variable.
This service allows you to connect to your local services via https without exposing any ports.
You can also create a private network to enable remote connections to the container at its local address (`172.x.x.x`) if your local machine is running a Cloudflare WARP client.
If you do not wish to provide a tunnel token, you could enable `CF_QUICK_TUNNELS`, which will create a throwaway tunnel for your web services.
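For example, add the flag to your `.env` (or pass it with `docker run -e`):

```bash
# .env - create a throwaway Cloudflare tunnel for each web service at startup
CF_QUICK_TUNNELS=true
```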
Full documentation for Cloudflare tunnels is here.
Note
Cloudflared is included so that secure networking is available in all cloud environments.
Warning
You should only provide tunnel tokens in secure cloud environments.
An SSH server will be started if at least one valid public key is found inside the running container in the file `/root/.ssh/authorized_keys`.
The server will bind to port `22` unless you specify the variable `SSH_PORT`.
There are several ways to get your keys to the container:

- If using docker compose, you can paste your key into the local file `config/authorized_keys` before starting the container.
- You can pass the environment variable `SSH_PUBKEY` with your public key as the value.
- Cloud providers often have a built-in method to transfer your key into the container.
If you choose not to provide a public key then the SSH server will not be started.
To make use of this service you should map port `22` to a port of your choice on the host operating system.
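A sketch of the host-side mapping and connection (host port `2222` and the image name are illustrative; the root login assumes the shared root key described above):

```bash
# Map host port 2222 to the container's SSH port at launch
docker run -d \
  -p 2222:22 \
  -e SSH_PUBKEY="$(cat ~/.ssh/id_ed25519.pub)" \
  ghcr.io/ai-dock/linux-desktop:latest-cuda \
  init.sh

# Then connect from your machine
ssh -p 2222 root@your.host.address
```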
See this guide by DigitalOcean for an excellent introduction to working with SSH servers.
Note
SSHD is included because the end-user should be able to know the version prior to deployment. Using a provider's add-on, if available, does not guarantee this.
Warning
You should only provide auth tokens in secure cloud environments.
This script follows and prints the log files for each of the above services to stdout. This allows you to follow the progress of all running services through docker's own logging system.
If you are logged into the container you can follow the logs by running `logtail.sh` in your shell.
This service detects changes to files in `$WORKSPACE/storage` and creates symbolic links to the application directories defined in `/opt/ai-dock/storage_monitor/etc/mappings.sh`.
Some ports need to be exposed for the services to run or for certain features of the provided software to function.
| Open Port | Service / Description |
| --- | --- |
| `22` | SSH server |
| `1111` | Service portal web UI |
| `3478` | Coturn TURN server |
| `6100` | Selkies WebRTC interface |
| `6200` | KasmVNC interface |
Vast.ai
Runpod.io
Note
These templates are configured to use the `latest` tag, but you are free to change to any of the available Linux-Desktop CUDA tags listed here.
Images that do not require a GPU will run anywhere - use an image tagged `:*-cpu-xx.xx`.

Where a GPU is required you will need either `:*cuda*` or `:*rocm*` depending on the underlying hardware.
A curated list of VM providers currently offering GPU instances:
The author (@robballantyne) may be compensated if you sign up to services linked in this document. Testing multiple variants of GPU images in many different environments is both costly and time-consuming; this helps to offset costs.