
Model card for vit_base_patch16_224.owkin_pancancer

A Vision Transformer (ViT) image classification model.
Trained by Owkin on 40 million pan-cancer histology tiles from TCGA-COAD.

A version using the transformers library is also available here: https://huggingface.co/owkin/phikon

Model Details

Model Usage

Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

# get an example histology image
# (placeholder URL -- substitute any RGB histology tile)
img = Image.open(urlopen("https://example.com/histology_tile.png"))

# load model from the hub
model = timm.create_model(
    "hf-hub:1aurent/vit_base_patch16_224.owkin_pancancer",
    pretrained=True,
    num_classes=0,  # remove the classifier head to get raw features
).eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

data = transforms(img).unsqueeze(0)  # input is a (batch_size, num_channels, img_size, img_size) shaped tensor
output = model(data)  # output is a (batch_size, num_features) shaped tensor
```
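Once tiles are embedded, a common downstream step is comparing tiles by the cosine similarity of their feature vectors. Below is a minimal sketch with NumPy; the random array stands in for real model outputs, and the tile count is illustrative (768 is the ViT-Base feature dimension):

```python
import numpy as np

def cosine_similarity_matrix(embeddings: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity for a (num_tiles, num_features) array."""
    # L2-normalize each embedding, guarding against zero vectors
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    normalized = embeddings / np.clip(norms, 1e-12, None)
    # similarity of unit vectors is just their dot product
    return normalized @ normalized.T

# illustrative stand-in for model outputs: 4 tiles, 768-dim ViT-Base features
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((4, 768)).astype(np.float32)

sim = cosine_similarity_matrix(embeddings)
print(sim.shape)  # (4, 4); diagonal entries are 1.0
```

In practice you would stack the `output` tensors from the model (converted via `.detach().numpy()`) instead of the random array.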


Citation

```bibtex
@article{Filiot2023.07.21.23292757,
  author       = {Alexandre Filiot and Ridouane Ghermi and Antoine Olivier and Paul Jacob and Lucas Fidon and Alice Mac Kain and Charlie Saillard and Jean-Baptiste Schiratti},
  title        = {Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling},
  elocation-id = {2023.07.21.23292757},
  year         = {2023},
  doi          = {10.1101/2023.07.21.23292757},
  publisher    = {Cold Spring Harbor Laboratory Press},
  url          = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757},
  eprint       = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757.full.pdf},
  journal      = {medRxiv}
}
```
Model size: 85.8M params
