---
tags:
- image-classification
- timm
- biology
- cancer
- owkin
- histology
library_name: timm
widget:
- src: >-
    https://github.com/owkin/HistoSSLscaling/raw/main/assets/example.tif
co2_eq_emissions:
  emissions: 14590
  source: "https://www.medrxiv.org/content/10.1101/2023.07.21.23292757v2"
  training_type: "pre-training"
  geographical_location: "Jean Zay cluster, France (~40 gCO₂eq/kWh)"
  hardware_used: "32 V100 32GB GPUs, 1216 GPU hours"
---
# Model card for vit_base_patch16_224.owkin_pancancer
A Vision Transformer (ViT) image classification model. \
Trained by Owkin on 40M pan-cancer histology tiles from TCGA.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats** (see the quick check after this list):
  - Params (M): 85.8
  - Image size: 224 x 224 x 3
- **Papers:**
  - Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling: https://www.medrxiv.org/content/10.1101/2023.07.21.23292757v2
- **Dataset:** TCGA: https://portal.gdc.cancer.gov/
- **Original:** https://github.com/owkin/HistoSSLscaling/
- **License:** https://github.com/owkin/HistoSSLscaling/blob/main/LICENSE.txt
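The stats above can be verified locally. The following is a minimal sketch (not from the upstream repository; it assumes the Hugging Face Hub weights are reachable and a recent `timm` is installed):
```python
import timm

# load the backbone from the hub (same identifier as in the usage section below)
model = timm.create_model(
    "hf-hub:1aurent/vit_base_patch16_224.owkin_pancancer",
    pretrained=True,
)

# parameter count should come out around 85.8M
n_params = sum(p.numel() for p in model.parameters())
print(f"params (M): {n_params / 1e6:.1f}")

# the resolved data config should report a (3, 224, 224) input size
data_config = timm.data.resolve_model_data_config(model)
print("input size:", data_config["input_size"])
```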
## Model Usage
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
# get example histology image
img = Image.open(
    urlopen(
        "https://github.com/owkin/HistoSSLscaling/raw/main/assets/example.tif"
    )
)
# load model from the hub
model = timm.create_model(
    model_name="hf-hub:1aurent/vit_base_patch16_224.owkin_pancancer",
    pretrained=True,
).eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
```
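Beyond the pooled output above, the timm ViT also exposes unpooled patch tokens through the standard `forward_features` / `forward_head` pattern. The sketch below reuses `img`, `model`, and `transforms` from the snippet above; the shapes in the comments assume the usual ViT-Base/16 layout at 224 x 224 and are indicative rather than taken from the upstream card:
```python
# reuses `img`, `model`, and `transforms` from the snippet above
x = transforms(img).unsqueeze(0)  # (1, 3, 224, 224)

# unpooled tokens: roughly (1, 197, 768) for ViT-Base/16 (class token + 14*14 patch tokens)
tokens = model.forward_features(x)

# pooled, pre-classifier embedding: (1, 768)
embedding = model.forward_head(tokens, pre_logits=True)
print(tokens.shape, embedding.shape)
```
For tile-level feature extraction, the pooled embedding is typically the vector passed to downstream slide-level aggregation models.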
## Citation
```bibtex
@article {Filiot2023.07.21.23292757,
author = {Alexandre Filiot and Ridouane Ghermi and Antoine Olivier and Paul Jacob and Lucas Fidon and Alice Mac Kain and Charlie Saillard and Jean-Baptiste Schiratti},
title = {Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling},
elocation-id = {2023.07.21.23292757},
year = {2023},
doi = {10.1101/2023.07.21.23292757},
publisher = {Cold Spring Harbor Laboratory Press},
URL = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757},
eprint = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757.full.pdf},
journal = {medRxiv}
}
``` |