Commit
Merge branch 'Deep-MI:dev' into dev
taha-abdullah authored Sep 3, 2024
2 parents a667780 + 8116737 commit fc250d4
Showing 19 changed files with 308 additions and 509 deletions.
6 changes: 0 additions & 6 deletions .github/workflows/code-style.yml
@@ -27,12 +27,6 @@ jobs:
python -m pip install --progress-bar off .[style]
- name: Run Ruff
run: ruff check .
- name: Run isort
uses: isort/isort-action@master
- name: Run black
uses: psf/black@stable
with:
options: "--check --verbose"
- name: Run codespell
uses: codespell-project/actions-codespell@master
with:
5 changes: 4 additions & 1 deletion Docker/Dockerfile
@@ -72,7 +72,7 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

ARG PYTHON_VERSION=3.10
ARG FORGE_VERSION=23.11.0-0
ARG FORGE_VERSION=24.3.0-0

# Install conda
RUN wget --no-check-certificate -qO ~/miniforge.sh \
@@ -189,6 +189,9 @@ SHELL ["/bin/bash", "--login", "-c"]
COPY --from=selected_freesurfer_build_image /opt/freesurfer /opt/freesurfer
COPY --from=selected_conda_build_image /venv /venv

# Fix for cuda11.8+cudnn8.7 bug+warning: https://github.com/pytorch/pytorch/issues/97041
RUN if [[ "$DEVICE" == "cu118" ]] ; then cd /venv/python3.10/site-packages/torch/lib && ln -s libnvrtc-*.so.11.2 libnvrtc.so ; fi

# Copy fastsurfer over from the build context and add PYTHONPATH
COPY . /fastsurfer/
ENV PYTHONPATH=/fastsurfer:/opt/freesurfer/python/packages \
16 changes: 8 additions & 8 deletions Docker/README.md
@@ -139,7 +139,7 @@ As you can see, only the tag of the image is changed from gpu to cpu and the sta

Here we build an experimental image to test performance when running on AMD GPUs. Note that you need a supported OS, kernel version, and a supported GPU for ROCm to work correctly. You need to install the kernel drivers into
your host machine kernel (`amdgpu-install --usecase=dkms`) for the AMD Docker image to work. For this, follow:
https://docs.amd.com/en/latest/deploy/linux/quick_start.html
https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/quick-start.html#rocm-install-quick, https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/amdgpu-install.html#amdgpu-install-dkms and https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html

```bash
PYTHONPATH=<FastSurferRoot>
@@ -149,22 +149,22 @@ python build.py --device rocm --tag my_fastsurfer:rocm
and run segmentation only:

```bash
docker run --rm --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
--device=/dev/kfd --device=/dev/dri --group-add video --ipc=host \
--shm-size 8G \
docker run --rm --security-opt seccomp=unconfined \
--device=/dev/kfd --device=/dev/dri --group-add video \
-v /home/user/my_mri_data:/data \
-v /home/user/my_fastsurfer_analysis:/output \
my_fastsurfer:rocm \
--t1 /data/subjectX/t1-weighted.nii.gz \
--sid subjectX --sd /output
```

Note, we tested on an AMD Radeon Pro W6600, which is [not officially supported](https://docs.amd.com/en/latest/release/gpu_os_support.html), but setting `HSA_OVERRIDE_GFX_VERSION=10.3.0` [inside docker did the trick](https://en.opensuse.org/AMD_OpenCL#ROCm_-_Running_on_unsupported_hardware):
Contrary to the official ROCm documentation (above), we also needed to add the render group (`--group-add render`) in addition to `--group-add video`.

Note, we tested on an AMD Radeon Pro W6600, which is [not officially supported](https://docs.amd.com/en/latest/release/gpu_os_support.html), but setting `HSA_OVERRIDE_GFX_VERSION=10.3.0` [inside docker did the trick](https://en.opensuse.org/SDB:AMD_GPGPU#Using_CUDA_code_with_ZLUDA_and_ROCm):

```bash
docker run --rm --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
--device=/dev/kfd --device=/dev/dri --group-add video --ipc=host \
--shm-size 8G \
docker run --rm --security-opt seccomp=unconfined \
--device=/dev/kfd --device=/dev/dri --group-add video --group-add render \
-v /home/user/my_mri_data:/data \
-v /home/user/my_fastsurfer_analysis:/output \
-e HSA_OVERRIDE_GFX_VERSION=10.3.0 \
17 changes: 9 additions & 8 deletions Docker/build.py
@@ -30,9 +30,8 @@
Target = Literal['runtime', 'build_common', 'build_conda', 'build_freesurfer',
'build_base', 'runtime_cuda']
CacheType = Literal["inline", "registry", "local", "gha", "s3", "azblob"]
AllDeviceType = Literal["cpu", "cuda", "cu116", "cu117", "cu118", "rocm", "rocm5.1.1",
"rocm5.4.2"]
DeviceType = Literal["cpu", "cu116", "cu117", "cu118", "rocm5.1.1", "rocm5.4.2"]
AllDeviceType = Literal["cpu", "cuda", "cu118", "cu121", "cu124", "rocm", "rocm6.1"]
DeviceType = Literal["cpu", "cu118", "cu121", "cu124", "rocm6.1"]

CREATE_BUILDER = "Create builder with 'docker buildx create --name fastsurfer'."
CONTAINERD_MESSAGE = (
@@ -59,10 +58,11 @@ class DEFAULTS:
# and rocm versions, if pytorch comes with new versions.
# torch 1.12.0 comes compiled with cu113, cu116, rocm5.0 and rocm5.1.1
# torch 2.0.1 comes compiled with cu117, cu118, and rocm5.4.2
# torch 2.4 comes compiled with cu118, cu121, cu124 and rocm6.1
MapDeviceType: Dict[AllDeviceType, DeviceType] = dict(
((d, d) for d in get_args(DeviceType)),
rocm="rocm5.1.1",
cuda="cu117",
rocm="rocm6.1",
cuda="cu124",
)
BUILD_BASE_IMAGE = "ubuntu:22.04"
RUNTIME_BASE_IMAGE = "ubuntu:22.04"
@@ -185,12 +185,12 @@ def make_parser() -> argparse.ArgumentParser:

parser.add_argument(
"--device",
choices=["cpu", "cuda", "cu117", "cu118", "rocm", "rocm5.4.2"],
choices=["cpu", "cuda", "cu118", "cu121", "cu124", "rocm", "rocm6.1"],
required=True,
help="""selection of internal build stages to build for a specific platform.<br>
- cuda: defaults to cu118, cuda 11.8<br>
- cuda: defaults to cu124, cuda 12.4<br>
- cpu: only cpu support<br>
- rocm: defaults to rocm5.4.2 (experimental)""",
- rocm: defaults to rocm6.1 (experimental)""",
)
parser.add_argument(
"--tag",
@@ -231,6 +231,7 @@ def make_parser() -> argparse.ArgumentParser:
--cache type=registry,ref=server/fastbuild,mode=max.
Will default to the environment variable FASTSURFER_BUILD_CACHE:
{cache_kwargs.get('default', 'N/A')}""",
metavar="type={inline,local,...}[,<param>=<value>[,...]]",
**cache_kwargs,
)
parser.add_argument(
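For context on the device-mapping change above: the generic `--device cuda` and `--device rocm` choices are resolved to concrete toolkit tags via `DEFAULTS.MapDeviceType`. A minimal sketch of that resolution, assuming the values from this diff (the `resolve_device` helper is illustrative, not part of `build.py`):

```python
from typing import Dict, Literal, get_args

DeviceType = Literal["cpu", "cu118", "cu121", "cu124", "rocm6.1"]

# Identity entries for every concrete tag, plus defaults for the generic
# "cuda" and "rocm" choices (mirrors DEFAULTS.MapDeviceType in the diff above).
MAP_DEVICE_TYPE: Dict[str, str] = dict(
    ((d, d) for d in get_args(DeviceType)),
    rocm="rocm6.1",
    cuda="cu124",
)


def resolve_device(device: str) -> str:
    """Illustrative helper: translate a --device argument into a concrete tag."""
    try:
        return MAP_DEVICE_TYPE[device]
    except KeyError:
        raise ValueError(f"unsupported device '{device}'") from None


print(resolve_device("cuda"))   # cu124
print(resolve_device("rocm"))   # rocm6.1
print(resolve_device("cu118"))  # cu118
```

Building the identity entries from `get_args(DeviceType)` keeps the literal type and the mapping from drifting apart when new toolkit tags are added.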
2 changes: 1 addition & 1 deletion Docker/install_env.py
@@ -19,7 +19,7 @@
def mode(arg: str) -> str:
if arg in ["base", "cpu"] or \
re.match("^cu\\d+$", arg) or \
re.match("^rocm\\d+\\.\\d+(\\.\\d+)?$"):
re.match("^rocm\\d+\\.\\d+(\\.\\d+)?$", arg):
return arg
else:
raise argparse.ArgumentTypeError(f"The mode was '{arg}', but should be "
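The change above supplies the `arg` argument that was missing from the second `re.match` call, which previously raised a `TypeError` for every ROCm mode string. A small self-contained sketch of the corrected validator, assuming the same accepted patterns (`base`, `cpu`, `cuXXX`, `rocmX.Y[.Z]`):

```python
import argparse
import re


def mode(arg: str) -> str:
    # Accept "base", "cpu", CUDA tags such as "cu124", and ROCm tags such as "rocm6.1".
    if arg in ("base", "cpu") or \
            re.match(r"^cu\d+$", arg) or \
            re.match(r"^rocm\d+\.\d+(\.\d+)?$", arg):
        return arg
    raise argparse.ArgumentTypeError(
        f"The mode was '{arg}', but should be 'base', 'cpu', 'cuXXX' or 'rocmX.Y[.Z]'."
    )


# Before the fix, only ROCm strings reached the broken call (the first two
# conditions short-circuit for "cpu" and "cuXXX"); with the fix these all pass:
for candidate in ("cpu", "cu124", "rocm6.1", "rocm5.4.2"):
    assert mode(candidate) == candidate
```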
10 changes: 6 additions & 4 deletions FastSurferCNN/data_loader/conform.py
@@ -635,7 +635,8 @@ def conform(
# Pxyz is the center of the image in world coords

# target scalar type and dtype
sctype = np.uint8 if dtype is None else np.obj2sctype(dtype, default=np.uint8)
#sctype = np.uint8 if dtype is None else np.obj2sctype(dtype, default=np.uint8)
sctype = np.uint8 if dtype is None else np.dtype(dtype).type
target_dtype = np.dtype(sctype)

src_min, scale = 0, 1.0
@@ -761,7 +762,7 @@ def is_conform(
raise ValueError(f"ERROR: Multiple input frames ({ishape[3]}) not supported!")

checks = {
f"Number of Dimensions 3": (len(ishape) == 3, f"image ndim {img.ndim}")
"Number of Dimensions 3": (len(ishape) == 3, f"image ndim {img.ndim}")
}
# check dimensions
if Criteria.FORCE_IMG_SIZE in criteria:
@@ -775,7 +776,7 @@
_vox_sizes = conformed_vox_size if is_correct_vox_size else izoom[:3]
if Criteria.FORCE_ISO_VOX in criteria:
vox_size_criteria = f"Voxel Size {'x'.join([str(conformed_vox_size)] * 3)}"
image_vox_size = f"image " + "x".join(map(str, izoom))
image_vox_size = "image " + "x".join(map(str, izoom))
checks[vox_size_criteria] = (is_correct_vox_size, image_vox_size)

# check orientation LIA
@@ -795,7 +796,8 @@
if dtype is None or (isinstance(dtype, str) and dtype.lower() == "uchar"):
dtype = "uint8"
else: # assume obj
dtype = np.dtype(np.obj2sctype(dtype)).name
#dtype = np.dtype(np.obj2sctype(dtype)).name
dtype = np.dtype(dtype).type.__name__
is_correct_dtype = img.get_data_dtype() == dtype
checks[f"Dtype {dtype}"] = (is_correct_dtype, f"dtype {img.get_data_dtype()}")

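The `np.obj2sctype` calls replaced above were removed in NumPy 2.0; `np.dtype(dtype).type` is the usual substitute. A minimal sketch of the substitution (the `to_scalar_type` helper is illustrative, not part of `conform.py`):

```python
import numpy as np


def to_scalar_type(dtype=None):
    """Illustrative: resolve an optional dtype spec to a numpy scalar type,
    defaulting to uint8 (stand-in for the removed np.obj2sctype call)."""
    return np.uint8 if dtype is None else np.dtype(dtype).type


assert to_scalar_type() is np.uint8
assert to_scalar_type("float32") is np.float32
assert to_scalar_type(np.int16) is np.int16
# Caveat: np.obj2sctype(dtype, default=np.uint8) silently fell back to uint8 for
# unrecognised objects, whereas np.dtype(dtype) raises a TypeError instead.
```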
2 changes: 1 addition & 1 deletion FastSurferCNN/data_loader/data_utils.py
@@ -623,7 +623,7 @@ def read_classes_from_lut(lut_file: str | Path):
if lut_file.suffix == ".csv":
kwargs["sep"] = ","
elif lut_file.suffix == ".txt":
kwargs["delim_whitespace"] = True
kwargs["sep"] = "\\s+"
else:
raise RuntimeError(
f"Unknown LUT file extension {lut_file}, must be csv, txt or tsv."
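`delim_whitespace=True` is deprecated in recent pandas releases; `sep=r"\s+"` is the documented replacement and splits columns on any run of whitespace. A small sketch (the column names and rows are made up for illustration, not the actual FastSurfer LUT):

```python
import io

import pandas as pd

lut_text = """ID LabelName        R   G   B   A
0  Unknown          0   0   0   0
17 Left-Hippocampus 220 216 20  0
"""

# sep=r"\s+" replaces the deprecated delim_whitespace=True keyword.
lut = pd.read_csv(io.StringIO(lut_text), sep=r"\s+")
print(lut["LabelName"].tolist())  # ['Unknown', 'Left-Hippocampus']
```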
4 changes: 3 additions & 1 deletion FastSurferCNN/inference.py
@@ -213,7 +213,9 @@ def load_checkpoint(self, ckpt: Union[str, os.PathLike]):
# make sure the model is, where it is supposed to be
self.model.to(self.device)

model_state = torch.load(ckpt, map_location=device)
# WARNING: weights_only=False can cause unsafe code execution, but here the
# checkpoint can be considered to be from a safe source
model_state = torch.load(ckpt, map_location=device, weights_only=False)
self.model.load_state_dict(model_state["model_state"])

# workaround for mps (move the model back to mps)
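The same `weights_only=False` change recurs in the checkpoint loaders below. Newer PyTorch releases warn when `torch.load` is called without an explicit `weights_only`, and later versions default it to `True`; passing it explicitly keeps the behaviour unambiguous. A hedged sketch (the `load_trusted_checkpoint` helper is illustrative, not FastSurfer API):

```python
import torch


def load_trusted_checkpoint(path, device="cpu"):
    """Illustrative helper: load a checkpoint that comes from a trusted source
    (e.g. the official FastSurfer weights)."""
    # weights_only=False unpickles arbitrary Python objects and can execute
    # code, so it is only appropriate for trusted files.
    return torch.load(path, map_location=device, weights_only=False)


# For untrusted files, restrict loading to plain tensors instead:
# state = torch.load(path, map_location="cpu", weights_only=True)
```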
4 changes: 3 additions & 1 deletion FastSurferCNN/utils/checkpoint.py
@@ -228,7 +228,9 @@ def load_from_checkpoint(
loaded_epoch : int
Epoch number.
"""
checkpoint = torch.load(checkpoint_path, map_location="cpu")
# WARNING: weights_only=False can cause unsafe code execution, but here the
# checkpoint can be considered to be from a safe source
checkpoint = torch.load(checkpoint_path, map_location="cpu", weights_only=False)

if drop_classifier:
classifier_conv = ["classifier.conv.weight", "classifier.conv.bias"]
4 changes: 3 additions & 1 deletion HypVINN/inference.py
@@ -181,7 +181,9 @@ def load_checkpoint(self, ckpt: str):
of a model.
"""
logger.info("Loading checkpoint {}".format(ckpt))
model_state = torch.load(ckpt, map_location=self.device)
# WARNING: weights_only=False can cause unsafe code execution, but here the
# checkpoint can be considered to be from a safe source
model_state = torch.load(ckpt, map_location=self.device, weights_only=False)
self.model.load_state_dict(model_state["model_state"])

def get_modelname(self):
13 changes: 6 additions & 7 deletions Tutorial/Tutorial_FastSurferCNN_QuickSeg.ipynb
@@ -218,7 +218,7 @@
"#@title Here we first setup the environment by downloading the open source deep-mi/fastsurfer project and the required packages\n",
"import os\n",
"import sys\n",
"from os.path import exists, join, basename, splitext\n",
"from os.path import exists, basename, splitext\n",
"\n",
"print(\"Starting setup. This could take a few minutes\")\n",
"print(\"----------------------------------------------\")\n",
@@ -474,12 +474,12 @@
"source": [
"#@title Click this run button, if you would prefer to download the segmentation in nifti-format\n",
"import nibabel as nib\n",
"from google.colab import files\n",
"# conversion to nifti\n",
"data = nib.load(f'{SETUP_DIR}fastsurfer_seg/Tutorial/mri/aparc.DKTatlas+aseg.deep.mgz')\n",
"img_nifti = nib.Nifti1Image(data.get_fdata(), data.affine, header=nib.Nifti1Header())\n",
"nib.nifti1.save(img_nifti, f'{SETUP_DIR}fastsurfer_seg/Tutorial/mri/aparc.DKTatlas+aseg.deep.nii.gz')\n",
"\n",
"from google.colab import files\n",
"files.download(f'{SETUP_DIR}fastsurfer_seg/Tutorial/mri/aparc.DKTatlas+aseg.deep.nii.gz')\n"
]
},
@@ -519,12 +519,12 @@
"source": [
"#@title Click this run button, if you would prefer to download the image in nifti-format\n",
"import nibabel as nib\n",
"from google.colab import files\n",
"# conversion to nifti\n",
"data = nib.load(f\"{SETUP_DIR}140_orig.mgz\")\n",
"img_nifti = nib.Nifti1Image(data.get_fdata(), data.affine, header=nib.Nifti1Header())\n",
"nib.nifti1.save(img_nifti, f\"{SETUP_DIR}140_orig.nii.gz\")\n",
"\n",
"from google.colab import files\n",
"files.download(f\"{SETUP_DIR}140_orig.nii.gz\")\n"
]
},
@@ -612,11 +612,11 @@
"%matplotlib inline\n",
"import nibabel as nib\n",
"import matplotlib.pyplot as plt\n",
"plt.style.use('seaborn-v0_8-whitegrid')\n",
"from skimage import color\n",
"import torch\n",
"import numpy as np\n",
"from skimage import color\n",
"from torchvision import utils\n",
"plt.style.use('seaborn-v0_8-whitegrid')\n",
"\n",
"def plot_predictions(image, pred):\n",
" \"\"\"\n",
@@ -676,7 +676,6 @@
"from ipywidgets import widgets\n",
"import matplotlib.pyplot as plt\n",
"import nibabel as nib\n",
"import numpy as np\n",
"#from mpl_toolkits.mplot3d.art3d import Poly3DCollection\n",
"from skimage import measure\n",
"\n",
@@ -853,7 +852,7 @@
"def plot_3d_plotly_shape(structure, hemisphere, show_mesh=True, crop=True, grid=True):\n",
" import plotly.graph_objects as go\n",
" label = label_lookups(structure, hemisphere)\n",
" test_cond = np.in1d(pred_data, label).reshape(pred_data.shape)\n",
" test_cond = np.isin(pred_data, label).reshape(pred_data.shape)\n",
" roi = np.where(test_cond, 1, 0)\n",
" vert_p, faces_p, normals_p, values_p = measure.marching_cubes(roi, 0, spacing=(1, 1, 1))\n",
"\n",
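`np.in1d`, replaced above, is deprecated in favour of `np.isin`. A toy sketch of the replacement (the label map and IDs are example values):

```python
import numpy as np

# Toy label map; 17/53 are used here as example IDs (FreeSurfer hippocampus labels).
pred_data = np.array([[0, 17, 53],
                      [17, 0, 0]])
label = [17, 53]

# np.isin already returns a mask with the input's shape, so the trailing
# .reshape(pred_data.shape) kept in the diff above is a harmless no-op.
test_cond = np.isin(pred_data, label)
roi = np.where(test_cond, 1, 0)
print(roi)
# [[0 1 1]
#  [1 0 0]]
```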
14 changes: 7 additions & 7 deletions env/export_pip-r.sh
@@ -48,21 +48,23 @@ echo "Exporting versions from $2..."
echo "#"
} > $1

pip_cmd="python --version && pip list --format=freeze --no-color --all --disable-pip-version-check --no-input"
pip_cmd="python --version && pip list --format=freeze --no-color --disable-pip-version-check --no-input"
if [ "${2/#.sif}" != "$2" ]
then
# singularity
cmd="singularity exec $2 /bin/bash -c '$pip_cmd'"
cmd=("singularity" "exec" "$2" "/bin/bash" -c "$pip_cmd")
clean_cmd="singularity exec $2 /bin/bash -c '$pip_cmd'"
else
# docker
cmd="docker run --entrypoint /bin/bash $2 -c '$pip_cmd'"
clean_cmd="docker run --rm -u <user_id>:<group_id> --entrypoint /bin/bash $2 -c '$pip_cmd'"
cmd=("docker" "run" --rm -u "$(id -u):$(id -g)" --entrypoint /bin/bash "$2" -c "$pip_cmd")
fi
{
echo "# Which ran the following command:"
echo "# $cmd"
echo "# $clean_cmd"
echo "#"
} >> $1
out=$($cmd)
out=$("${cmd[@]}")
hardware=$(echo "$out" | grep "torch==" | cut -d"+" -f2)
pyversion=$(echo "$out" | head -n 1 | cut -d" " -f2)
{
@@ -73,5 +75,3 @@ pyversion=$(echo "$out" | head -n 1 | cut -d" " -f2)
echo ""
echo "# $out"
} >> $1

}
50 changes: 25 additions & 25 deletions env/fastsurfer.yml
@@ -5,28 +5,28 @@ channels:
- defaults

dependencies:
- h5py=3.7.0
- lapy=1.0.1
- matplotlib=3.7.1
- nibabel=5.1.0
- numpy=1.25.0
- pandas=1.5.3
- pillow=10.0.1
- pip=23.1.2
- python=3.10
- python-dateutil=2.8.2
- pyyaml=6.0
- scikit-image=0.19.3
- scikit-learn=1.2.2
- scipy=1.10.1
- setuptools=67.8.0
- tensorboard=2.12.1
- tqdm=4.66
- yacs=0.1.8
- pip
- pip:
- --extra-index-url https://download.pytorch.org/whl/cu117
- simpleitk==2.2.1
- torch==2.0.1
- torchio==0.18.83
- torchvision==0.15.2
- h5py=3.11.0
- lapy=1.1.0
- matplotlib=3.9.2
- nibabel=5.2.1
- numpy=1.26.4
- pandas=2.2.2
- pillow=10.4.0
- pip=24.2
- python=3.10
- python-dateutil=2.9.0
- pyyaml=6.0.2
- requests=2.32.3
- scikit-image=0.24.0
- scikit-learn=1.5.1
- scipy=1.14.1
- setuptools=72.2.0
- tensorboard=2.17.1
- tqdm=4.66.5
- yacs=0.1.8
- pip:
- --extra-index-url https://download.pytorch.org/whl/cu124
- simpleitk==2.4.0
- torch==2.4.0+cu124
- torchio==0.19.9
- torchvision==0.19.0+cu124