CellViT (#2)
* Add CellViT

* Create LICENSE for CellViT

* Create README.md for CellViT

* Add path to weights
megavaz authored Aug 20, 2024
1 parent d14c772 commit d31873c
Showing 10 changed files with 1,726 additions and 16 deletions.
43 changes: 28 additions & 15 deletions README.md
@@ -5,17 +5,20 @@

## Updates

-**08.09.2024**: We are excited to announce the release of Hibou-L under the Apache 2.0 license. You can find Hibou-L on Hugging Face 🤗 [here](https://huggingface.co/histai/hibou-L).
+* **08.19.2024**: We release CellViT-Hibou, a hybrid model that combines the CellViT and Hibou architectures, under the CC BY-NC-SA 4.0 license. CellViT-Hibou was trained on the PanNuke dataset for panoptic cell segmentation and can segment and classify both cells and tissues. Check the `segmentation_example.ipynb` notebook for an example of how to use the model; the weights can be downloaded from [Hugging Face](https://huggingface.co/histai/cellvit-hibou-l) 🤗. For more information, visit the original CellViT repository [here](https://github.com/TIO-IKIM/CellViT). Huge thanks to the authors of CellViT for their amazing work!
+
+* **08.09.2024**: We are excited to announce the release of Hibou-L under the Apache 2.0 license. You can find Hibou-L on Hugging Face 🤗 [here](https://huggingface.co/histai/hibou-L).

## Introduction

-This repository contains the code to run the Hibou-B model locally. For inquiries about accessing Hibou-L on CellDX, please contact us at [[email protected]](mailto:[email protected]).
+This repository contains the code to run the Hibou models locally. For inquiries about accessing Hibou-L on CellDX, please contact us at [[email protected]](mailto:[email protected]).

## Getting Started

### Using HuggingFace

-The easiest way to use the Hibou-B model is through the HuggingFace repository. Run the following code to get started:
+The easiest way to use the Hibou models is through the HuggingFace repository. Run the following code to get started:

```python
from transformers import AutoImageProcessor, AutoModel

processor = AutoImageProcessor.from_pretrained("histai/hibou-b", trust_remote_code=True)
model = AutoModel.from_pretrained("histai/hibou-b", trust_remote_code=True)
```

Or, for Hibou-L:

```python
from transformers import AutoImageProcessor, AutoModel

processor = AutoImageProcessor.from_pretrained("histai/hibou-L", trust_remote_code=True)
model = AutoModel.from_pretrained("histai/hibou-L", trust_remote_code=True)
```

We use a customized implementation of the DINOv2 architecture from the transformers library to add support for registers, which requires the `trust_remote_code=True` flag.
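
For a quick smoke test after loading, here is a minimal inference sketch (assuming the standard `transformers` DINOv2-style outputs; the image path is a placeholder):

```python
from PIL import Image
import torch

# Minimal sketch: embed one image with the processor/model loaded above.
# "patch.png" is a placeholder path; `pooler_output` assumes the standard
# transformers DINOv2-style output with a pooled CLS feature.
image = Image.open("patch.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
features = outputs.pooler_output  # shape: (1, hidden_dim)
print(features.shape)
```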

### Using the Model Directly
@@ -57,18 +69,19 @@

For more information, refer to the [example.ipynb](example.ipynb) notebook.
## Metrics
**Table: Linear probing benchmarks reporting top-1 accuracy.**

-*Metrics for Virchow and RudolfV are derived from the respective papers, as these models are not open-sourced.*
-
-| Dataset | Phikon | Kaiko-B8 | Virchow* | RudolfV* | Prov-GigaPath | Hibou-B | Hibou-L |
-|-----------|--------|----------|----------|----------|---------------|---------|---------|
-| CRC-100K | 0.917 | 0.949 | 0.968* | **0.973*** | 0.968 | 0.955 | 0.966 |
-| PCAM | 0.916 | 0.919 | 0.933* | 0.944* | **0.947** | 0.946 | 0.943 |
-| MHIST | 0.791 | 0.832 | 0.834* | 0.821* | 0.839 | 0.812 | **0.849** |
-| MSI-CRC | 0.750 | 0.786 | - | 0.755* | 0.771 | 0.779 | **0.797** |
-| MSI-STAD | 0.760 | 0.814 | - | 0.788* | 0.784 | 0.797 | **0.825** |
-| TIL-DET | 0.944 | **0.945** | - | 0.943* | 0.939 | 0.942 | 0.943 |
-| **AVG (1-3)** | 0.875 | 0.900 | 0.912 | 0.913 | 0.918 | 0.904 | **0.919** |
-| **AVG (1-6)** | 0.846 | 0.874 | - | 0.871 | 0.875 | 0.872 | **0.887** |
+\**Metrics for Virchow and RudolfV are derived from the respective papers.*
+
+| Dataset | Phikon | Kaiko-B8 | Virchow* | RudolfV* | Prov-GigaPath | H-optimus-0 | Hibou-B | Hibou-L |
+|-----------|--------|----------|----------|----------|---------------|-------------|---------|---------|
+| CRC-100K | 0.917 | 0.949 | 0.968* | **0.973*** | 0.968 | 0.970 | 0.955 | 0.966 |
+| PCAM | 0.916 | 0.919 | 0.933* | 0.944* | 0.947 | 0.942 | 0.946 | **0.953** |
+| MHIST | 0.791 | 0.832 | 0.834* | 0.821* | 0.839 | **0.861** | 0.812 | 0.858 |
+| MSI-CRC | 0.750 | 0.786 | - | 0.755* | 0.771 | 0.767 | 0.779 | **0.793** |
+| MSI-STAD | 0.760 | 0.814 | - | 0.788* | 0.784 | 0.797 | 0.797 | **0.829** |
+| TIL-DET | 0.944 | 0.945 | - | 0.943* | 0.939 | **0.948** | 0.942 | 0.942 |
+| **AVG (1-3)** | 0.875 | 0.900 | 0.912 | 0.913 | 0.918 | 0.924 | 0.904 | **0.926** |
+| **AVG (1-6)** | 0.846 | 0.874 | - | 0.871 | 0.875 | 0.881 | 0.872 | **0.890** |
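
For context, linear probing trains a lightweight classifier on frozen backbone embeddings and reports its accuracy. A minimal sketch of the protocol (file names and probe settings are illustrative assumptions, not the exact evaluation setup behind the table above):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative linear probe: fit a logistic-regression head on frozen
# embeddings and report top-1 accuracy. File names are placeholders.
X_train, y_train = np.load("train_embeddings.npy"), np.load("train_labels.npy")
X_test, y_test = np.load("test_embeddings.npy"), np.load("test_labels.npy")

probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print("top-1 accuracy:", probe.score(X_test, y_test))
```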



## License
28 changes: 28 additions & 0 deletions configs/dataset_config.yaml
@@ -0,0 +1,28 @@
tissue_types:
"Adrenal_gland": 0
"Bile-duct": 1
"Bladder": 2
"Breast": 3
"Cervix": 4
"Colon": 5
"Esophagus": 6
"HeadNeck": 7
"Kidney": 8
"Liver": 9
"Lung": 10
"Ovarian": 11
"Pancreatic": 12
"Prostate": 13
"Skin": 14
"Stomach": 15
"Testis": 16
"Thyroid": 17
"Uterus": 18

nuclei_types:
"Background": 0
"Neoplastic": 1
"Inflammatory": 2
"Connective": 3
"Dead": 4
"Epithelial": 5
1 change: 1 addition & 0 deletions hibou/__init__.py
@@ -1 +1,2 @@
from .models import build_model
from .models.cellvit.cellvit import CellViTHibou
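
This re-export makes the segmentation model importable from the package root. Since the constructor signature is not visible in this diff, a non-committal way to explore it (illustrative):

```python
# Illustrative: the re-export above enables this top-level import.
# The constructor signature is not shown in the diff, so inspect it
# rather than guessing arguments.
import inspect
from hibou import CellViTHibou

print(inspect.signature(CellViTHibou.__init__))
```
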
4 changes: 4 additions & 0 deletions hibou/models/__init__.py
@@ -20,6 +20,8 @@ def build_model(
qkv_bias=True,
proj_bias=True,
ffn_bias=True,
dropout_rate=0.0,
attention_dropout_rate=0.0,
num_register_tokens=4,
interpolate_offset=0,
interpolate_antialias=True,
@@ -36,6 +38,8 @@
num_register_tokens=num_register_tokens,
interpolate_offset=interpolate_offset,
interpolate_antialias=interpolate_antialias,
dropout_rate=dropout_rate,
attention_dropout_rate=attention_dropout_rate,
)
model = vision_transformer.__dict__[arch](**vit_kwargs)
if weights_path is not None:
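
The two new arguments appear to be passed straight through to the ViT configuration. A hypothetical call enabling them (any required arguments not visible in this truncated hunk are omitted and assumed to have defaults):

```python
from hibou import build_model

# Hypothetical: enable the dropout options added in this commit.
# Arguments not shown in the truncated diff are assumed to default.
model = build_model(
    dropout_rate=0.1,
    attention_dropout_rate=0.1,
)
```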
