This model crops hand radiographs to standardize the image input for bone age models. It uses a lightweight mobilenetv3_small_100 backbone and predicts the normalized xywh coordinates of the crop bounding box.

The model was trained and validated on 12,592 pediatric hand radiographs from the RSNA Pediatric Bone Age Challenge, with an 80%/20% train/validation split. On single-fold validation, the model achieved the following mean absolute errors (in normalized coordinates):

- x: 0.0152
- y: 0.0121
- w: 0.0261
- h: 0.0213
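
For intuition, on a hypothetical 1,500-pixel-wide radiograph, the x error of 0.0152 corresponds to roughly 0.0152 × 1500 ≈ 23 pixels.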

To use the model:

```python
import cv2
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("ianpan/bone-age-crop", trust_remote_code=True)
model = model.eval()

# load the radiograph as a single-channel grayscale image
# (replace ... with the path to your image)
img = cv2.imread(..., 0)
img_shape = torch.tensor([img.shape[:2]])

# resize and normalize to the model's expected input,
# then add batch and channel dimensions
x = model.preprocess(img)
x = torch.from_numpy(x).unsqueeze(0).unsqueeze(0)
x = x.float()

# if you do not provide img_shape,
# the model will return normalized coordinates
with torch.inference_mode():
    coords = model(x, img_shape)

# only 1 sample in batch
coords = coords[0].numpy()
# coords are already rescaled with img_shape;
# round to integer pixel indices before slicing
x, y, w, h = coords.round().astype(int)
cropped_img = img[y: y + h, x: x + w]
```
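
If you call the model without `img_shape`, the returned coordinates are normalized to [0, 1] and must be rescaled yourself. A minimal sketch, assuming x and w scale with image width and y and h with image height:

```python
with torch.inference_mode():
    norm_coords = model(x)[0].numpy()  # normalized xywh in [0, 1]

rows, cols = img.shape[:2]
nx, ny, nw, nh = norm_coords
# rescale: x/w by image width, y/h by image height (assumed convention)
px, pw = int(nx * cols), int(nw * cols)
py, ph = int(ny * rows), int(nh * rows)
cropped_img = img[py: py + ph, px: px + pw]
```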

If you have pydicom installed, you can also load a DICOM image directly:

```python
img = model.load_image_from_dicom(path_to_dicom)
```
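
Since `img_shape` is a batch of (rows, cols) pairs, multi-image batches also appear to work. A sketch, assuming `model.preprocess` resizes every image to the same fixed input size (`paths` here is a hypothetical list of image file paths):

```python
imgs = [cv2.imread(p, 0) for p in paths]
img_shapes = torch.tensor([im.shape[:2] for im in imgs])

# preprocess each image, then stack into a (N, 1, H, W) batch
batch = torch.stack(
    [torch.from_numpy(model.preprocess(im)) for im in imgs]
).unsqueeze(1).float()

with torch.inference_mode():
    coords = model(batch, img_shapes)

coords = coords.numpy().round().astype(int)
crops = [im[y: y + h, x: x + w] for im, (x, y, w, h) in zip(imgs, coords)]
```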