
Welcome to Medical Adapters Zoo (Med-Adpt Zoo)!

Med-Adpt Zoo Map πŸ˜πŸŠπŸ¦πŸ¦’πŸ¦¨πŸ¦œπŸ¦₯

Lung Nodule (CT)

Melanoma (Skin Photo)

OpticCup (Fundus Image)

OpticDisc (Fundus Image)

Thyroid Nodule (Ultrasound)

Aorta (Abdominal Image)

Esophagus (Abdominal Image)

Gallbladder (Abdominal Image)

Inferior Vena Cava (Abdominal Image)

Left Adrenal Gland (Abdominal Image)

Right Adrenal Gland (Abdominal Image)

Left Kidney (Abdominal Image)

Right Kidney (Abdominal Image)

Liver (Abdominal Image)

Pancreas (Abdominal Image)

Spleen (Abdominal Image)

Stomach (Abdominal Image)

Portal Vein and Splenic Vein (Abdominal Image)

Edematous Tissue (Brain Tumor mpMRI)

Enhancing Tumor (Brain Tumor mpMRI)

Necrotic Tumor Core (Brain Tumor mpMRI)

Inferior Alveolar Nerve (CBCT)

Instrument Clasper (Surgical Video)

Instrument Shaft (Surgical Video)

Instrument Wrist (Surgical Video)

Kidney Tumor (MRI)

Liver (Liver Tumor CE-MRI)

Tumor (Liver Tumor CE-MRI)

Mandible (X-Ray)

Retina Vessel (Fundus Image)

White Blood Cell (Microscope)

Download the adapters you need here

What

Here are pre-trained adapters that transfer SAM (Segment Anything Model) to segmenting various organs and lesions in medical images. Check our paper, Medical SAM Adapter, for the details.

Why

SAM (Segment Anything Model) is one of the most popular open models for image segmentation. Unfortunately, it does not perform well on medical images. An efficient remedy is adapters: small layers with only a few trainable parameters that are inserted into the pre-trained SAM model to fine-tune it for a target downstream task. Medical image segmentation spans many different organs, lesions, and abnormalities, so we train a separate adapter for each target and share them here for easy use by the community. A minimal sketch of the adapter idea follows.
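To make the mechanism concrete, here is a minimal, hypothetical sketch of a bottleneck adapter in PyTorch. The class name `Adapter`, the bottleneck width, and the residual placement are illustrative assumptions, not the exact Med-SA architecture; see the paper for the real design.

import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Illustrative bottleneck adapter (an assumption, not the exact Med-SA design)."""
    def __init__(self, dim, bottleneck_dim=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck_dim)  # project features down
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, dim)    # project back up

    def forward(self, x):
        # residual: the frozen backbone's features pass through unchanged,
        # plus a small learned correction
        return x + self.up(self.act(self.down(x)))

# example: adapt 768-dim ViT-B tokens; only ~100k parameters are trainable
adapter = Adapter(dim=768)
tokens = torch.randn(1, 196, 768)
print(adapter(tokens).shape)  # torch.Size([1, 196, 768])

Because only these few parameters are updated during fine-tuning, each task-specific checkpoint stays small and can be swapped on top of the same frozen SAM weights.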

Download the adapter trained for your target organ, lesion, or abnormality and plug it into SAM.

A single adapter turns SAM into a medical domain expert. Give it a try!

How to Use

  1. Download the code of our MedSAM-Adapter here.
  2. Download the weights of the original SAM model.
  3. Load the original model and our adapter for downstream tasks.
import os

import torch
import torchvision.transforms as transforms

import cfg
from utils import *  # provides get_network

# set your own configs
args = cfg.parse_args()
GPUdevice = torch.device('cuda', args.gpu_device)

# build the SAM architecture and switch it to inference mode
net = get_network(args, args.net, use_gpu=args.gpu, gpu_device=GPUdevice, distribution=args.distributed)
net.eval()

# load the original SAM weights, keeping only the keys whose shapes match the network
sam_weights = 'checkpoint/sam/sam_vit_b_01ec64.pth'
with open(sam_weights, "rb") as f:
    state_dict = torch.load(f)
new_state_dict = {k: v for k, v in state_dict.items()
                  if k in net.state_dict() and net.state_dict()[k].shape == v.shape}
net.load_state_dict(new_state_dict, strict=False)

# load the task-specific adapter checkpoint
checkpoint_file = 'OpticCup_Fundus_SAM_1024.pth'
assert os.path.exists(checkpoint_file)
loc = 'cuda:{}'.format(args.gpu_device)
checkpoint = torch.load(checkpoint_file, map_location=loc)

state_dict = checkpoint['state_dict']
if args.distributed != 'none':
    # DistributedDataParallel wraps the model, so checkpoint keys need the `module.` prefix
    from collections import OrderedDict
    new_state_dict = OrderedDict()
    for k, v in state_dict.items():
        new_state_dict['module.' + k] = v
else:
    new_state_dict = state_dict

# merge the adapter weights on top of the SAM weights
net.load_state_dict(new_state_dict, strict=False)
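Once the weights are merged, you can run segmentation. The following is a hypothetical inference sketch: it assumes `net` exposes the upstream SAM submodules (`image_encoder`, `prompt_encoder`, `mask_decoder`) with their standard signatures and a 1024x1024 input, as the checkpoint name suggests; the interface in the MedSAM-Adapter code may differ, so adapt it accordingly.

import torch.nn.functional as F
from PIL import Image

# hypothetical input file; use your own fundus image here
img = Image.open('fundus_example.png').convert('RGB')
x = transforms.Compose([
    transforms.Resize((1024, 1024)),  # the checkpoint name suggests 1024x1024 inputs
    transforms.ToTensor(),
])(img).unsqueeze(0).to(GPUdevice)

with torch.no_grad():
    embeddings = net.image_encoder(x)  # image features from the adapted encoder
    # no point/box prompts here; the prompt encoder still supplies embeddings
    sparse_emb, dense_emb = net.prompt_encoder(points=None, boxes=None, masks=None)
    low_res_masks, iou_pred = net.mask_decoder(
        image_embeddings=embeddings,
        image_pe=net.prompt_encoder.get_dense_pe(),
        sparse_prompt_embeddings=sparse_emb,
        dense_prompt_embeddings=dense_emb,
        multimask_output=False,
    )

# upsample the low-resolution logits and threshold to get a binary mask
masks = F.interpolate(low_res_masks, size=(1024, 1024), mode='bilinear', align_corners=False)
pred = (masks > 0).float()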

Authorship

Ziyue Wang (NUS): adapter training

Junde Wu (Oxford): project lead
