---
tags:
- image-classification
- timm
- owkin
- biology
- cancer
- colon
library_name: timm
datasets:
- 1aurent/Kather-texture-2016
metrics:
- accuracy
pipeline_tag: image-classification
model-index:
- name: owkin_pancancer_ft_kather2016
  results:
  - task:
      type: image-classification
      name: Image Classification
    dataset:
      name: 1aurent/Kather-texture-2016
      type: image-classification
    metrics:
    - type: accuracy
      value: 0.984
      name: accuracy
      verified: false
widget:
- src: >-
    https://datasets-server.huggingface.co/assets/1aurent/Kather-texture-2016/--/default/train/0/image/image.jpg
  example_title: adipose
license: other
---

# Model card for vit_base_patch16_224.owkin_pancancer_ft_kather2016

A Vision Transformer (ViT) image classification model. \
Trained by Owkin on 40M pan-cancer histology tiles from TCGA. \
Fine-tuned on the Kather Texture 2016 dataset.

## Model Details

- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 85.8
  - Image size: 224 x 224 x 3
- **Papers:**
  - Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling: https://www.medrxiv.org/content/10.1101/2023.07.21.23292757v2
- **Pretrain Dataset:** TCGA: https://portal.gdc.cancer.gov/
- **Dataset:** Kather Texture 2016: https://huggingface.co/datasets/1aurent/Kather-texture-2016
- **Original:** https://github.com/owkin/HistoSSLscaling/
- **License:** https://github.com/owkin/HistoSSLscaling/blob/main/LICENSE.txt

## Model Usage

### Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import timm

# get example histology image
img = Image.open(
  urlopen(
    "https://datasets-server.huggingface.co/assets/1aurent/Kather-texture-2016/--/default/train/0/image/image.jpg"
  )
)

# load model from the hub
model = timm.create_model(
  model_name="hf-hub:1aurent/vit_base_patch16_224.owkin_pancancer_ft_kather2016",
  pretrained=True,
).eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1
```
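
The classifier returns raw logits over the texture classes. As a minimal follow-up sketch to the example above (assuming the fine-tuned classification head is loaded as shown), a softmax converts the logits into per-class probabilities:

```python
import torch

# turn the logits from the example above into class probabilities
probabilities = output.softmax(dim=1)

# indices of the most likely texture classes; the index-to-class-name
# mapping should be checked against the Kather Texture 2016 label
# ordering used for fine-tuning
top3_probabilities, top3_class_indices = torch.topk(probabilities, k=3)

print(top3_probabilities)
print(top3_class_indices)
```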

### Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

# get example histology image
img = Image.open(
  urlopen(
    "https://datasets-server.huggingface.co/assets/1aurent/Kather-texture-2016/--/default/train/0/image/image.jpg"
  )
)

# load model from the hub
model = timm.create_model(
  model_name="hf-hub:1aurent/vit_base_patch16_224.owkin_pancancer_ft_kather2016",
  pretrained=True,
  num_classes=0,
).eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor
```

## Citation

```bibtex
@article{Filiot2023.07.21.23292757,
  author = {Alexandre Filiot and Ridouane Ghermi and Antoine Olivier and Paul Jacob and Lucas Fidon and Alice Mac Kain and Charlie Saillard and Jean-Baptiste Schiratti},
  title = {Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling},
  elocation-id = {2023.07.21.23292757},
  year = {2023},
  doi = {10.1101/2023.07.21.23292757},
  publisher = {Cold Spring Harbor Laboratory Press},
  URL = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757},
  eprint = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757.full.pdf},
  journal = {medRxiv}
}
```