Update pretrained models (#45)
* restructure layers and 4D architectures
* changed 4D structure a bit
* added benchmark results for models
* added DynamicDeeplab to model zoo
* 0.0.5 -> 0.0.6
* fixed bug in efficientnet
kbressem authored Mar 27, 2021
1 parent b696cfe commit 80af7b5
Showing 39 changed files with 1,921 additions and 1,005 deletions.
9 changes: 5 additions & 4 deletions CONTRIBUTING.md
@@ -2,14 +2,15 @@

## How to get started

-This repository was initalized from a `nbdev` template. To make simple fixes to the code (e.g. typos) and push working notebooks to Github which pass all tests, one needs at least a basic anaconda enviroment with `nbdev`.
-You can create a working enviroment with `conda create --name <env> -c fastai nbdev`. If you want to participate in developement you need to run `conda create --name <env> -c fastai -c pytorch -c simpleitk -c conda-forge -c huggingface fastai simpleitk av transformers nbdev scikit-image`, to create an enviroment with working fastai, Huggingface and PyTorch modules.
-Before you start changing code in the notebooks, please install the git hooks that run automatic scripts during each commit and merge to strip the notebooks of superfluous metadata (and avoid merge conflicts). After cloning the repository, run the following command inside it:
+This repository was initialized from a `nbdev` template. To make simple fixes to the code (e.g., typos) and push working notebooks to GitHub which pass all tests, you need at least a basic anaconda environment with `nbdev`, which can be created with `conda create --name <env> -c fastai nbdev`.
+To participate in development you need to run `conda create --name <env> -c fastai -c pytorch -c simpleitk -c conda-forge fastai simpleitk av nbdev scikit-image`, to create an environment with working fastai, SimpleITK and PyTorch modules.
+Before starting to change code in the notebooks, please install the git hooks that run automatic scripts during each commit and merge to strip the notebooks of superfluous metadata (and avoid merge conflicts). After cloning the repository, run the following command inside it:

```
nbdev_install_git_hooks
```
-You also need to create a symlink between the libs and nbs by executing `!ln -s ../faimed3d nbs/faimed3d` from within one notebook.
+You also need to create a symlink between the libs and nbs by executing `!ln -s ../faimed3d nbs/faimed3d` from within one notebook or the terminal.
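
If running the shell escape from a notebook is inconvenient, the same link can be created in plain Python; a minimal sketch, assuming it is run from the repository root with the `nbs/` layout from the template:

```python
import os

# equivalent of `ln -s ../faimed3d nbs/faimed3d`
if not os.path.islink('nbs/faimed3d'):
    os.symlink('../faimed3d', 'nbs/faimed3d')
```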

## Did you find a bug?

93 changes: 59 additions & 34 deletions README.md
@@ -1,55 +1,78 @@
# FAIMED 3D
-> fastai extension for medical 3d images including 3d transforms, datablocks and novel network architectures.
+> use fastai to quickly train fully three-dimensional models on radiological data

-In contrast to fastai which uses Pydicom to read medical images, faimed3d uses SimpleITK, as it supports more image formats.
-Currently, faimed3d is built using the following versions of fastai, fastcore, nbdev, PyTorch, torchvision and SimpleITK
+## Classification

-```
-import fastai
-import pydicom
-import torch
-print('fastai:', fastai.__version__)
-print('pydicom:', pydicom.__version__)
-print('torch:', torch.__version__)
-print('SimpleITK: 2.0.2rc3 (ITK 5.1)')
-```
+```python
+from faimed3d.all import *
+```

-fastai: 2.2.5
-pydicom: 2.1.2
-torch: 1.7.0
-SimpleITK: 2.0.2rc3 (ITK 5.1)
+Load data in various medical formats (DICOM, NIfTI, NRRD) or even videos as simply as in fastai.

+```python
+d = pd.read_csv('../data/radiopaedia_cases.csv')
+dls = ImageDataLoaders3D.from_df(d,
+                                 item_tfms = Resize3D((20, 112, 112)),
+                                 batch_tfms = aug_transforms_3d(),
+                                 bs = 2, val_bs = 2)
+```
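
As a quick sanity check before training, the loader can be inspected; a minimal sketch, assuming faimed3d keeps fastai's `one_batch` API and the `from_df` convention that the first CSV column holds file paths and the second the labels:

```python
xb, yb = dls.one_batch()
print(xb.shape)  # expected: torch.Size([2, 1, 20, 112, 112]), i.e. batch, channel, depth, height, width
print(yb.shape)  # one label per batch item
```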

-## Example 3D classification
+Faimed3d provides multiple model architectures, pretrained on the [UCF101](https://paperswithcode.com/sota/action-recognition-in-videos-on-ucf101) dataset for action recognition, which can be used for transfer learning.

-```
-from faimed3d.all import *
-from torchvision.models.video import r3d_18
-```

+| Model           | 3-fold accuracy | duration/epoch | model size |
+|-----------------|-----------------|----------------|------------|
+| efficientnet b0 | 92.5 %          | 9M:35S         | 48.8 MB    |
+| efficientnet b1 | 90.1 %          | 13M:20S        | 80.5 MB    |
+| resnet 18       | 87.6 %          | 6M:57S         | 339.1 MB   |
+| resnet 50       | 94.8 %          | 12M:16S        | 561.2 MB   |
+| resnet 101      | 96.0 %          | 17M:20S        | 1,030 MB   |

+```python
+# slow
+learn = cnn_learner_3d(dls, efficientnet_b0)
+```

-d = pd.read_csv('../data/radiopaedia_cases.csv')

+```python
+# slow
+learn.lr_find()
+```

-`faimed3d` keeps track of the metadata until the items are concatenated as a batch.

-```
-dls = ImageDataLoaders3D.from_df(d,
-                                 item_tfms = ResizeCrop3D(crop_by = (0, 6, 6),
-                                                          resize_to = (20, 112, 112)),

+SuggestedLRs(lr_min=0.014454397559165954, lr_steep=6.309573450380412e-07)

+![png](docs/images/output_6_2.png)

+Click [here](../examples/3d_classification.md) for a more in-depth classification example.
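
Pulling the added snippets together, a complete classification run would look roughly like the sketch below. It assumes the `radiopaedia_cases.csv` layout used throughout the README and that the 3D learner supports fastai's standard `fine_tune` loop; the epoch counts are illustrative, not tuned.

```python
from faimed3d.all import *  # also brings pandas into scope as pd, as in the snippets above

d = pd.read_csv('../data/radiopaedia_cases.csv')
dls = ImageDataLoaders3D.from_df(d,
                                 item_tfms = Resize3D((20, 112, 112)),
                                 batch_tfms = aug_transforms_3d(),
                                 bs = 2, val_bs = 2)

# UCF101-pretrained backbone from the benchmark table
learn = cnn_learner_3d(dls, efficientnet_b0)

suggested = learn.lr_find()              # returns SuggestedLRs, as in the output above
learn.fine_tune(10, base_lr=suggested.lr_min, freeze_epochs=1)
```

`fine_tune` first trains only the new head for `freeze_epochs`, then unfreezes the pretrained backbone, which is usually what you want when starting from the UCF101 weights.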

## Segmentation

+```python
+dls = SegmentationDataLoaders3D.from_df(d,
+                                        item_tfms = Resize3D((20, 112, 112)),
+                                        batch_tfms = aug_transforms_3d(),
+                                        bs = 2, val_bs = 2)
+```

-Construct a learner similar to fastai, even transfer learning is possible using the pretrained resnet18 from torchvision.
+All models in faimed3d can be used as a backbone for U-Nets, even with pre-trained weights.

-```
-learn = cnn_learner_3d(dls, r3d_18)
+```python
+# slow
+learn = unet_learner_3d(dls, efficientnet_b0, n_out = 2)
```

-```
-#slow
+```python
+# slow
learn.lr_find()
```

@@ -60,10 +83,12 @@ learn.lr_find()

-SuggestedLRs(lr_min=6.918309954926372e-05, lr_steep=1.5848931980144698e-06)
+SuggestedLRs(lr_min=0.33113112449646, lr_steep=0.10000000149011612)

+![png](docs/images/output_12_2.png)
-![png](docs/images/output_9_2.png)

Click [here](../examples/3d_segmentation.md) for a more in-depth segmentation example.
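
Analogously, the segmentation snippets combine into the short end-to-end sketch below; it assumes the CSV provides one column of image paths and one of mask paths (the README only shows the file being read) and keeps `n_out = 2` from the diff above.

```python
from faimed3d.all import *

d = pd.read_csv('../data/radiopaedia_cases.csv')
dls = SegmentationDataLoaders3D.from_df(d,
                                        item_tfms = Resize3D((20, 112, 112)),
                                        batch_tfms = aug_transforms_3d(),
                                        bs = 2, val_bs = 2)

# 3D U-Net with a pretrained encoder; n_out = 2 separates background and foreground
learn = unet_learner_3d(dls, efficientnet_b0, n_out = 2)

suggested = learn.lr_find()
learn.fine_tune(10, base_lr=suggested.lr_min)  # illustrative epoch count
```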
@@ -9,7 +9,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
@@ -21,7 +21,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
@@ -55,7 +55,11 @@
},
{
"cell_type": "raw",
"metadata": {},
"metadata": {
"jupyter": {
"source_hidden": true
}
},
"source": [
"# deprecated\n",
"\n",
@@ -149,7 +153,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
@@ -212,7 +216,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 4,
"metadata": {},
"outputs": [
{
@@ -221,7 +225,7 @@
"torch.Size([10, 256, 1, 3, 3])"
]
},
"execution_count": null,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -232,7 +236,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 5,
"metadata": {},
"outputs": [
{
@@ -241,7 +245,7 @@
"torch.Size([10, 512, 1, 3, 3])"
]
},
"execution_count": null,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -252,7 +256,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
@@ -302,7 +306,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 7,
"metadata": {},
"outputs": [
{
@@ -326,7 +330,7 @@
")"
]
},
"execution_count": null,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -337,7 +341,41 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\u001b[0;31mSignature:\u001b[0m \u001b[0mbuild_backbone\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mbackbone\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0moutput_stride\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mnorm_layer\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mn_channels\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mDocstring:\u001b[0m <no docstring>\n",
"\u001b[0;31mSource:\u001b[0m \n",
"\u001b[0;32mdef\u001b[0m \u001b[0mbuild_backbone\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mbackbone\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0moutput_stride\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mnorm_layer\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mn_channels\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\n",
"\u001b[0;34m\u001b[0m \u001b[0mmodel\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mbackbone\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mn_channels\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mn_channels\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mnorm_layer\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mnorm_layer\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m#output_stride, BatchNorm)\u001b[0m\u001b[0;34m\u001b[0m\n",
"\u001b[0;34m\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0mforward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\n",
"\u001b[0;34m\u001b[0m \u001b[0mx1\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mmodel\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mstem\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\n",
"\u001b[0;34m\u001b[0m \u001b[0mx2\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mmodel\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mlayer1\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx1\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\n",
"\u001b[0;34m\u001b[0m \u001b[0mx3\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mmodel\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mlayer2\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx2\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\n",
"\u001b[0;34m\u001b[0m \u001b[0mx4\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mmodel\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mlayer3\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx3\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\n",
"\u001b[0;34m\u001b[0m \u001b[0mx5\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mmodel\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mlayer4\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx4\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\n",
"\u001b[0;34m\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mx1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mx2\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mx3\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mx4\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mx5\u001b[0m\u001b[0;34m\u001b[0m\n",
"\u001b[0;34m\u001b[0m \u001b[0mmodel\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mforward\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mforward\u001b[0m\u001b[0;34m\u001b[0m\n",
"\u001b[0;34m\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mmodel\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mFile:\u001b[0m ~/Documents/faimed3d/faimed3d/models/resnet.py\n",
"\u001b[0;31mType:\u001b[0m function\n"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"build_backbone??"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
Expand All @@ -364,7 +402,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 7,
"metadata": {},
"outputs": [
{
@@ -425,9 +463,21 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (fastai v2)",
"display_name": "fastai",
"language": "python",
"name": "fastai-v2"
"name": "fastai"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
}
},
"nbformat": 4,
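
The `build_backbone` helper surfaced in the notebook output above rewires a faimed3d encoder so its forward pass returns all intermediate feature maps, which is what a DeepLab-style decoder consumes (the shown body ignores `output_stride` and `norm_layer` for now, per the commented-out call). Below is a minimal self-contained sketch of the same pattern, with a hypothetical `TinyBackbone` standing in for the real faimed3d 3D ResNet:

```python
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    "Hypothetical stand-in for a faimed3d 3D ResNet: a stem plus four stages."
    def __init__(self, n_channels=1):
        super().__init__()
        self.stem   = nn.Conv3d(n_channels, 8, 3, stride=2, padding=1)
        self.layer1 = nn.Conv3d(8, 16, 3, stride=2, padding=1)
        self.layer2 = nn.Conv3d(16, 32, 3, stride=2, padding=1)
        self.layer3 = nn.Conv3d(32, 64, 3, stride=2, padding=1)
        self.layer4 = nn.Conv3d(64, 128, 3, stride=2, padding=1)

def build_backbone(backbone, n_channels=1, **kwargs):
    "Patch `forward` so the encoder returns every intermediate feature map."
    model = backbone(n_channels=n_channels, **kwargs)
    def forward(x):
        x1 = model.stem(x)
        x2 = model.layer1(x1)
        x3 = model.layer2(x2)
        x4 = model.layer3(x3)
        x5 = model.layer4(x4)
        return x1, x2, x3, x4, x5
    model.forward = forward
    return model

bb = build_backbone(TinyBackbone)
feats = bb.forward(torch.randn(1, 1, 16, 64, 64))
print([tuple(f.shape) for f in feats])  # five progressively downsampled 3D feature maps
```

Monkey-patching `forward` changes only what the module returns while leaving its weights untouched, so a pretrained encoder can be dropped into a U-Net or DeepLab head as-is.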
2 changes: 1 addition & 1 deletion docs/Gemfile.lock
@@ -263,7 +263,7 @@ DEPENDENCIES
jekyll (>= 3.7)
jekyll-remote-theme
kramdown (>= 2.3.0)
-nokogiri (< 1.11.1)
+nokogiri (< 1.10.9)

BUNDLED WITH
2.1.4
8 changes: 4 additions & 4 deletions docs/basics.html

Large diffs are not rendered by default.
