chore: Update to Ubuntu 24.04 and use new variable names from docker stacks #157

Open: wants to merge 2 commits into base: master

Changes from all commits
28 changes: 19 additions & 9 deletions .build/Dockerfile
@@ -7,7 +7,7 @@

# Use NVIDIA CUDA as base image and run the same installation as in the other packages.
# The version of cuda must match those of the packages installed in src/Dockerfile.gpulibs
FROM nvidia/cuda:12.5.1-cudnn-runtime-ubuntu22.04
FROM nvidia/cuda:12.6.3-cudnn-runtime-ubuntu24.04
LABEL authors="Christoph Schranz <[email protected]>"

# This is a concatenated Dockerfile, the maintainers of subsequent sections may vary.
@@ -24,9 +24,9 @@ RUN apt-get update && \
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.

# Ubuntu 22.04 (jammy)
# https://hub.docker.com/_/ubuntu/tags?page=1&name=jammy
ARG ROOT_CONTAINER=ubuntu:22.04
# Ubuntu 24.04 (noble)
# https://hub.docker.com/_/ubuntu/tags?page=1&name=noble
ARG ROOT_IMAGE=ubuntu:24.04


LABEL maintainer="Jupyter Project <[email protected]>"
@@ -90,6 +90,12 @@ RUN sed -i 's/^#force_color_prompt=yes/force_color_prompt=yes/' /etc/skel/.bashr
# and docs: https://docs.conda.io/projects/conda/en/latest/dev-guide/deep-dives/activation.html
echo 'eval "$(conda shell.bash hook)"' >> /etc/skel/.bashrc

# Delete existing user with UID="${NB_UID}" if it exists
# hadolint ignore=SC2046
RUN if grep -q "${NB_UID}" /etc/passwd; then \
userdel --remove $(id -un "${NB_UID}"); \
fi
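Ubuntu 24.04 (noble) images ship a default `ubuntu` user that already occupies UID/GID 1000, which is also `jovyan`'s default `NB_UID`, so the upstream stack now deletes any conflicting user before creating `jovyan`. A minimal sketch of the collision check against a sample passwd entry (the `ubuntu` line and the exact-field match are illustrative, not the upstream implementation):

```shell
# Sample /etc/passwd entry as shipped in ubuntu:24.04 (illustrative).
passwd_line='ubuntu:x:1000:1000:Ubuntu:/home/ubuntu:/bin/bash'
NB_UID=1000

# The UID is the third colon-separated field; match it exactly rather
# than grepping for the bare number anywhere in the line.
if printf '%s\n' "$passwd_line" \
  | awk -F: -v uid="$NB_UID" '$3 == uid {found=1} END{exit !found}'; then
  echo "UID ${NB_UID} is already taken"
fi
```

If the check fires, `userdel --remove` frees the UID (and its home directory) so `jovyan` can be created with the expected IDs.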

# Create "${NB_USER}" user (`jovyan` by default) with UID="${NB_UID}" (`1000` by default) and in the 'users' group
# and make sure these dirs are writable by the `users` group.
RUN echo "auth requisite pam_deny.so" >> /etc/pam.d/su && \
@@ -105,7 +111,7 @@ RUN echo "auth requisite pam_deny.so" >> /etc/pam.d/su && \
USER ${NB_UID}

# Pin the Python version here, or set it to "default"
ARG PYTHON_VERSION=3.11
ARG PYTHON_VERSION=3.12

# Setup work directory for backward-compatibility
RUN mkdir "/home/${NB_USER}/work" && \
@@ -140,12 +146,13 @@ RUN set -x && \
--prefix="${CONDA_DIR}" \
--yes \
'jupyter_core' \
'conda' \
'mamba' \
"${PYTHON_SPECIFIER}" && \
rm -rf /tmp/bin/ && \
# Pin major.minor version of python
# https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-pkgs.html#preventing-packages-from-updating-pinning
mamba list --full-name 'python' | tail -1 | tr -s ' ' | cut -d ' ' -f 1,2 | sed 's/\.[^.]*$/.*/' >> "${CONDA_DIR}/conda-meta/pinned" && \
mamba list --full-name 'python' | awk 'END{sub("[^.]*$", "*", $2); print $1 " " $2}' >> "${CONDA_DIR}/conda-meta/pinned" && \
mamba clean --all -f -y && \
fix-permissions "${CONDA_DIR}" && \
fix-permissions "/home/${NB_USER}"
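The python pin is now derived with a single `awk` command instead of the old `tail | tr | cut | sed` pipeline; both forms take the installed version from the last line of `mamba list` and replace its patch component with a wildcard. A sketch with simulated `mamba list --full-name 'python'` output (the version and build string are assumptions for illustration):

```shell
# Simulated `mamba list --full-name 'python'` output; the END rule
# only processes the last record, i.e. the actual package line.
printf '# Name Version Build Channel\npython 3.12.8 h9a22f57_1 conda-forge\n' \
  | awk 'END{sub("[^.]*$", "*", $2); print $1 " " $2}'
# → python 3.12.*
```

The resulting `python 3.12.*` line is appended to `${CONDA_DIR}/conda-meta/pinned`, which prevents conda/mamba from upgrading python across minor versions.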
@@ -212,13 +219,16 @@ USER ${NB_UID}
# files across image layers when the permissions change
WORKDIR /tmp
RUN mamba install --yes \
'jupyterhub' \
'jupyterhub-singleuser' \
'jupyterlab' \
'nbclassic' \
'notebook' && \
# Sometimes, when the new version of `jupyterlab` is released, latest `notebook` might not support it for some time
# Old versions of `notebook` (<v7) didn't have a restriction on the `jupyterlab` version, and old `notebook` is getting installed
# That's why we have to pin the minimum notebook version
# More info: https://github.com/jupyter/docker-stacks/pull/2167
'notebook>=7.2.2' && \
jupyter server --generate-config && \
mamba clean --all -f -y && \
npm cache clean --force && \
jupyter lab clean && \
rm -rf "/home/${NB_USER}/.cache/yarn" && \
fix-permissions "${CONDA_DIR}" && \
2 changes: 1 addition & 1 deletion .build/docker-stacks
Submodule docker-stacks updated 55 files
+3 −1 .github/workflows/contributed-recipes.yml
+1 −1 .github/workflows/docker-build-test-upload.yml
+2 −0 .github/workflows/sphinx.yml
+3 −1 .gitignore
+13 −13 .pre-commit-config.yaml
+3 −0 .readthedocs.yaml
+46 −0 CHANGELOG.md
+2 −2 Makefile
+8 −7 README.md
+3 −3 binder/Dockerfile
+1 −0 docs/conf.py
+1 −0 docs/contributing/features.md
+1 −1 docs/contributing/lint.md
+12 −10 docs/images/inherit.svg
+2 −0 docs/index.rst
+1 −1 docs/maintaining/aarch64-runner.md
+5 −0 docs/using/changelog.md
+9 −3 docs/using/common.md
+88 −0 docs/using/custom-images.md
+44 −0 docs/using/recipe_code/docker-bake.python312.hcl
+6 −1 docs/using/recipe_code/generate_matrix.py
+23 −0 docs/using/recipe_code/ijavascript.dockerfile
+1 −1 docs/using/recipe_code/jupyterhub_version.dockerfile
+1 −1 docs/using/recipe_code/microsoft_odbc.dockerfile
+1 −1 docs/using/recipe_code/oracledb.dockerfile
+11 −16 docs/using/recipes.md
+7 −7 docs/using/running.md
+12 −11 docs/using/selecting.md
+2 −1 docs/using/specifics.md
+1 −1 examples/docker-compose/README.md
+2 −2 images/all-spark-notebook/Dockerfile
+8 −5 images/base-notebook/Dockerfile
+2 −2 images/datascience-notebook/Dockerfile
+13 −6 images/docker-stacks-foundation/Dockerfile
+2 −2 images/julia-notebook/Dockerfile
+2 −2 images/minimal-notebook/Dockerfile
+5 −0 images/minimal-notebook/setup-scripts/setup_julia.py
+4 −4 images/pyspark-notebook/Dockerfile
+14 −6 images/pyspark-notebook/setup_spark.py
+2 −2 images/pytorch-notebook/Dockerfile
+2 −2 images/pytorch-notebook/cuda11/Dockerfile
+3 −3 images/pytorch-notebook/cuda12/Dockerfile
+2 −2 images/r-notebook/Dockerfile
+2 −2 images/scipy-notebook/Dockerfile
+2 −2 images/tensorflow-notebook/Dockerfile
+3 −3 images/tensorflow-notebook/cuda/Dockerfile
+6 −2 tagging/taggers.py
+5 −1 tests/all-spark-notebook/test_spark_notebooks.py
+0 −9 tests/base-notebook/test_npm_package_manager.py
+2 −4 tests/conftest.py
+4 −2 tests/docker-stacks-foundation/test_packages.py
+1 −1 tests/docker-stacks-foundation/test_python_version.py
+7 −7 tests/package_helper.py
+10 −2 tests/pyspark-notebook/test_spark.py
+1 −1 tests/pyspark-notebook/units/unit_pandas_version.py
Empty file modified .build/jupyter_server_config_token_addendum.py
100644 → 100755
Empty file.
5 changes: 5 additions & 0 deletions .build/setup-scripts/setup_julia.py
@@ -47,6 +47,11 @@ def get_latest_julia_url() -> tuple[str, str]:
triplet = unify_aarch64(platform.machine()) + "-linux-gnu"
file_info = [vf for vf in latest_version_files if vf["triplet"] == triplet][0]
LOGGER.info(f"Latest version: {file_info['version']} url: {file_info['url']}")
if file_info["version"] == "1.11.2":
LOGGER.warning(
"Not using Julia 1.11.2, because it hangs in GitHub self-hosted runners"
)
return file_info["url"].replace("1.11.2", "1.11.1"), "1.11.1"
return file_info["url"], file_info["version"]


4 changes: 2 additions & 2 deletions README.md
@@ -44,7 +44,7 @@ for creating and maintaining a robust Python, R, and Julia toolstack for Data Sc
3. Get access to your GPU via CUDA drivers within Docker containers. For this, follow the installation steps in this
[Medium article](https://medium.com/@christoph.schranz/set-up-your-own-gpu-based-jupyterlab-e0d45fcacf43). You can confirm that you can access your GPU within Docker if the command below returns a result similar to this one:
```bash
docker run --gpus all nvidia/cuda:12.5.1-cudnn-runtime-ubuntu22.04 nvidia-smi
docker run --gpus all nvidia/cuda:12.6.3-cudnn-runtime-ubuntu24.04 nvidia-smi
```
```bash
Tue Nov 26 15:13:37 2024
@@ -147,7 +147,7 @@ we recommend checking out this [tutorial](https://www.youtube.com/watch?v=7wfPqA

Building a custom Docker image is the recommended option if you have a different GPU architecture or if you want to customize the pre-installed packages. The Dockerfiles in `custom/` can be modified to achieve this. To use a custom base image, modify `custom/header.Dockerfile`. To install specific GPU-related libraries, modify `custom/gpulibs.Dockerfile`, and to add specific libraries, append them to `custom/usefulpackages.Dockerfile`. Moreover, this offers the option for a **static token** or password which does not change with a container's restart.

After making the necessary modifications, regenerate the `Dockerfile` in `/.build`. Once you have confirmed that your GPU is accessible within Docker containers by running `docker run --gpus all nvidia/cuda:12.5.1-cudnn-runtime-ubuntu22.04 nvidia-sm` and seeing the GPU statistics, you can generate, build, and run the Docker image.
After making the necessary modifications, regenerate the `Dockerfile` in `/.build`. Once you have confirmed that your GPU is accessible within Docker containers by running `docker run --gpus all nvidia/cuda:12.6.3-cudnn-runtime-ubuntu24.04 nvidia-smi` and seeing the GPU statistics, you can generate, build, and run the Docker image.
The following commands will start *GPU-Jupyter* on [localhost:8848](http://localhost:8848) with the default password `gpu-jupyter`.

```bash
2 changes: 1 addition & 1 deletion custom/header.Dockerfile
@@ -1,6 +1,6 @@
# Use NVIDIA CUDA as base image and run the same installation as in the other packages.
# The version of cuda must match those of the packages installed in src/Dockerfile.gpulibs
FROM nvidia/cuda:12.5.1-cudnn-runtime-ubuntu22.04
FROM nvidia/cuda:12.6.3-cudnn-runtime-ubuntu24.04
LABEL authors="Christoph Schranz <[email protected]>"

# This is a concatenated Dockerfile, the maintainers of subsequent sections may vary.
1 change: 0 additions & 1 deletion docker-compose.yml
@@ -1,4 +1,3 @@
version: "3.8"
services:
gpu-jupyter:
container_name: gpu-jupyter
22 changes: 11 additions & 11 deletions generate-Dockerfile.sh
@@ -5,7 +5,7 @@ cd $(cd -P -- "$(dirname -- "$0")" && pwd -P)
export DOCKERFILE=".build/Dockerfile"
export STACKS_DIR=".build/docker-stacks"
# please test the build of the commit in https://github.com/jupyter/docker-stacks/commits/main in advance
export HEAD_COMMIT="00987883e58d139b5ed01f803f95e639c59bf340"
export HEAD_COMMIT="7bdb19cc1be5d9fcd3f41ce317e2ac741830472a"

while [[ "$#" -gt 0 ]]; do case $1 in
-p|--pw|--password) PASSWORD="$2" && USE_PASSWORD=1; shift;;
@@ -67,15 +67,15 @@ echo "
############################################################################
" >> $DOCKERFILE
if [ -f "$STACKS_DIR/images/docker-stacks-foundation/Dockerfile" ]; then
cat $STACKS_DIR/images/docker-stacks-foundation/Dockerfile | grep -v 'BASE_CONTAINER' | grep -v 'FROM $ROOT_CONTAINER' >> $DOCKERFILE
cat $STACKS_DIR/images/docker-stacks-foundation/Dockerfile | grep -v 'BASE_IMAGE' | grep -v 'FROM $ROOT_IMAGE' >> $DOCKERFILE
# copy files that are used during the build
cp $STACKS_DIR/images/docker-stacks-foundation/initial-condarc .build/
cp $STACKS_DIR/images/docker-stacks-foundation/fix-permissions .build/
cp $STACKS_DIR/images/docker-stacks-foundation/start.sh .build/
cp $STACKS_DIR/images/docker-stacks-foundation/run-hooks.sh .build/
cp $STACKS_DIR/images/docker-stacks-foundation/10activate-conda-env.sh .build/
else
cat $STACKS_DIR/docker-stacks-foundation/Dockerfile | grep -v 'BASE_CONTAINER' | grep -v 'FROM $ROOT_CONTAINER' >> $DOCKERFILE
cat $STACKS_DIR/docker-stacks-foundation/Dockerfile | grep -v 'BASE_IMAGE' | grep -v 'FROM $ROOT_IMAGE' >> $DOCKERFILE
# copy files that are used during the build
cp $STACKS_DIR/docker-stacks-foundation/initial-condarc .build/
cp $STACKS_DIR/docker-stacks-foundation/fix-permissions .build/
@@ -88,7 +88,7 @@ echo "
############################################################################
" >> $DOCKERFILE
if [ -f "$STACKS_DIR/images/base-notebook/Dockerfile" ]; then
cat $STACKS_DIR/images/base-notebook/Dockerfile | grep -v 'BASE_CONTAINER' >> $DOCKERFILE
cat $STACKS_DIR/images/base-notebook/Dockerfile | grep -v 'BASE_IMAGE' >> $DOCKERFILE
# copy files that are used during the build
cp $STACKS_DIR/images/base-notebook/jupyter_server_config.py .build/
cp $STACKS_DIR/images/base-notebook/start-notebook.sh .build/
@@ -97,7 +97,7 @@ if [ -f "$STACKS_DIR/images/base-notebook/Dockerfile" ]; then
cp $STACKS_DIR/images/base-notebook/start-singleuser.py .build/
cp $STACKS_DIR/images/base-notebook/docker_healthcheck.py .build/
else
cat $STACKS_DIR/base-notebook/Dockerfile | grep -v 'BASE_CONTAINER' >> $DOCKERFILE
cat $STACKS_DIR/base-notebook/Dockerfile | grep -v 'BASE_IMAGE' >> $DOCKERFILE
# copy files that are used during the build
cp $STACKS_DIR/base-notebook/jupyter_server_config.py .build/
cp $STACKS_DIR/base-notebook/start-notebook.sh .build/
@@ -112,12 +112,12 @@ echo "
############################################################################
" >> $DOCKERFILE
if [ -f "$STACKS_DIR/images/minimal-notebook/Dockerfile" ]; then
cat $STACKS_DIR/images/minimal-notebook/Dockerfile | grep -v BASE_CONTAINER >> $DOCKERFILE
cat $STACKS_DIR/images/minimal-notebook/Dockerfile | grep -v BASE_IMAGE >> $DOCKERFILE
# copy files that are used during the build
cp -r $STACKS_DIR/images/minimal-notebook/setup-scripts .build/
cp $STACKS_DIR/images/minimal-notebook/Rprofile.site .build/
else
cat $STACKS_DIR/minimal-notebook/Dockerfile | grep -v BASE_CONTAINER >> $DOCKERFILE
cat $STACKS_DIR/minimal-notebook/Dockerfile | grep -v BASE_IMAGE >> $DOCKERFILE
# copy files that are used during the build
cp -r $STACKS_DIR/minimal-notebook/setup-scripts .build/
cp $STACKS_DIR/minimal-notebook/Rprofile.site .build/
@@ -129,9 +129,9 @@ echo "
############################################################################
" >> $DOCKERFILE
if [ -f "$STACKS_DIR/images/scipy-notebook/Dockerfile" ]; then
cat $STACKS_DIR/images/scipy-notebook/Dockerfile | grep -v BASE_CONTAINER >> $DOCKERFILE
cat $STACKS_DIR/images/scipy-notebook/Dockerfile | grep -v BASE_IMAGE >> $DOCKERFILE
else
cat $STACKS_DIR/scipy-notebook/Dockerfile | grep -v BASE_CONTAINER >> $DOCKERFILE
cat $STACKS_DIR/scipy-notebook/Dockerfile | grep -v BASE_IMAGE >> $DOCKERFILE
fi
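generate-Dockerfile.sh concatenates the upstream docker-stacks Dockerfiles while filtering out their base-image lines, so the CUDA image declared in `custom/header.Dockerfile` stays the effective base. Since docker-stacks renamed `ROOT_CONTAINER`/`BASE_CONTAINER` to `ROOT_IMAGE`/`BASE_IMAGE`, the grep patterns must change to match; a sketch of the filter on a fabricated upstream fragment:

```shell
# Fabricated upstream Dockerfile fragment; the filter drops every line
# referencing the upstream base image and keeps the remaining instructions.
printf 'ARG BASE_IMAGE=docker-stacks-foundation\nFROM $BASE_IMAGE\nRUN apt-get update\n' \
  | grep -v 'BASE_IMAGE'
# → RUN apt-get update
```

With the old `BASE_CONTAINER` patterns, the renamed lines would pass through unfiltered and a second `FROM` would override the CUDA base image, which is why this rename must be applied together with the submodule bump.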

# install Julia and R if not excluded or spare mode is used
@@ -142,9 +142,9 @@ if [[ "$no_datascience_notebook" != 1 ]]; then
############################################################################
" >> $DOCKERFILE
if [ -f "$STACKS_DIR/images/datascience-notebook/Dockerfile" ]; then
cat $STACKS_DIR/images/datascience-notebook/Dockerfile | grep -v BASE_CONTAINER >> $DOCKERFILE
cat $STACKS_DIR/images/datascience-notebook/Dockerfile | grep -v BASE_IMAGE >> $DOCKERFILE
else
cat $STACKS_DIR/images/datascience-notebook/Dockerfile | grep -v BASE_CONTAINER >> $DOCKERFILE
cat $STACKS_DIR/images/datascience-notebook/Dockerfile | grep -v BASE_IMAGE >> $DOCKERFILE
fi
else
echo "Set 'no-datascience-notebook' = 'python-only', not installing the datascience-notebook with Julia and R."