---
tags:
  - medical
license: other
license_name: research-only-rail-m
---

We introduce Curia, a foundation model trained on the entire cross-sectional imaging output of a major hospital over several years—which to our knowledge is the largest such corpus of real-world data—encompassing 150,000 exams (130 TB). On a newly curated 19-task external validation benchmark, Curia accurately identifies organs, detects conditions like brain hemorrhages and myocardial infarctions, and predicts outcomes in tumor staging. Curia meets or surpasses the performance of radiologists and recent foundation models, and exhibits clinically significant emergent properties in cross-modality and low-data regimes.

Check out the research paper: https://arxiv.org/abs/2509.06830

## Loading the model

To load the model, use the `AutoModel` class from the Hugging Face `transformers` library.

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("raidium/curia")
```

You can also load the image preprocessor:

```python
from transformers import AutoImageProcessor

processor = AutoImageProcessor.from_pretrained("raidium/curia", trust_remote_code=True)
```

Then, to forward an image:

```python
import numpy as np

# Single axial slice in PL orientation, raw Hounsfield-range values (no windowing)
img = np.random.uniform(-1024, 1024, size=(256, 256))
model_input = processor(img)
features = model(**model_input)
```

The input image must follow this format:

- input: a NumPy array of shape `(H, W)` (a single slice)
- orientation:
  - PL for axial slices
  - IL for coronal slices
  - IP for sagittal slices
- CT: no windowing; raw Hounsfield units or a normalized image
- MRI: likewise, no windowing; raw values or a normalized image
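If your scans are stored as 3-D volumes, each slice must be reoriented before being passed to the processor. As an illustration, here is a minimal sketch of extracting an axial slice in PL orientation from a volume, under the assumption that the volume array is stored in RAS+ axis order (indices increasing toward Right, Anterior, Superior); the helper name, the volume layout, and the slice index are illustrative, not part of the Curia API:

```python
import numpy as np

def axial_slice_pl(vol_ras: np.ndarray, k: int) -> np.ndarray:
    """Return axial slice k of an (R, A, S)-ordered volume as a (P, L) array.

    Assumes vol_ras axes are ordered (Right, Anterior, Superior), with each
    index increasing toward that direction.
    """
    s = vol_ras[:, :, k]    # shape (R, A): axis 0 toward Right, axis 1 toward Anterior
    return s.T[::-1, ::-1]  # transpose to (A, R), flip both axes -> (P, L)

# Tiny synthetic volume of raw Hounsfield-like values (no windowing applied)
vol = np.arange(2 * 3 * 1, dtype=np.float32).reshape(2, 3, 1)
slice_pl = axial_slice_pl(vol, 0)
print(slice_pl.shape)  # (3, 2)
```

The same transpose-and-flip pattern, with the axes adjusted, yields IL coronal and IP sagittal slices; always verify your volume's actual axis order (e.g. from the NIfTI affine) before applying it.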

## License

The model is released under the [RESEARCH-ONLY RAIL-M license](https://huggingface.co/raidium/curia/blob/main/LICENSE).