---
license: apache-2.0
pipeline_tag: image-segmentation
tags:
- medical
---
# Welcome to Medical Adapters Zoo (Med-Adpt Zoo)!

## Med-Adpt Zoo Map 🐘🐊🦍🦒🦨🦜🦥
- Lung Nodule (CT)
- Melanoma (Skin Photo)
- OpticCup (Fundus Image)
- OpticDisc (Fundus Image)
- Thyroid Nodule (Ultrasound)
Download the adapters you need here.
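If you prefer to fetch a checkpoint programmatically, here is a minimal sketch using `huggingface_hub`; the repository id below is a placeholder, and the filename follows the Optic Cup example used later in this card.

```python
from huggingface_hub import hf_hub_download

# Placeholder repo id: replace it with this model card's actual repository id on the Hub.
adapter_path = hf_hub_download(
    repo_id="<org>/<Med-Adpt-Zoo-repo>",
    filename="OpticCup_Fundus_SAM_1024.pth",
)
print(adapter_path)  # local cache path of the downloaded adapter checkpoint
```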
## What
These are pre-trained adapters that transfer SAM (Segment Anything Model) to the task of segmenting various organs and lesions in medical images. For details, check our paper: Medical SAM Adapter.
## Why
SAM (Segment Anything Model) is one of the most popular open models for image segmentation. Unfortunately, it does not perform well on medical images. An efficient way to address this is to use adapters: small layers with only a few parameters that are added to the pre-trained SAM model to fine-tune it for target downstream tasks. Medical image segmentation covers many different targets, including organs, lesions, and abnormalities, so we train a separate adapter for each target and share them here for easy use by the community.

Download an adapter for your target (organ, lesion, or abnormality) and effortlessly enhance SAM.

One adapter transfers your SAM into a medical domain expert. Give it a try!
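For intuition, the sketch below shows a generic bottleneck adapter added residually to a frozen backbone's features. This is an illustrative assumption about how such adapters typically work, not the exact Medical SAM Adapter architecture; see the paper for the actual design.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Generic bottleneck adapter: down-project, non-linearity, up-project,
    with a residual connection. Illustrative only."""

    def __init__(self, dim: int, reduction: int = 4):
        super().__init__()
        self.down = nn.Linear(dim, dim // reduction)
        self.act = nn.GELU()
        self.up = nn.Linear(dim // reduction, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

# During fine-tuning, the pre-trained SAM weights stay frozen and only the
# (comparatively tiny) adapter parameters are updated.
adapter = BottleneckAdapter(dim=768)  # 768 is a typical ViT embedding width; adjust as needed
print(f"Trainable adapter parameters: {sum(p.numel() for p in adapter.parameters())}")
```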
## How to Use
- Download the code of our MedSAM-Adapter here.
- Download the weights of the original SAM model.
- Load the original model and our adapter for your downstream task, as in the example below.
```python
import torch
import torchvision.transforms as transforms

import cfg
from utils import *

# set your own configs
args = cfg.parse_args()
GPUdevice = torch.device('cuda', args.gpu_device)

# load the original SAM model
net = get_network(args, args.net, use_gpu=args.gpu, gpu_device=GPUdevice, distribution=args.distributed)

# load the task-specific adapter weights
adapter_path = 'OpticCup_Fundus_SAM_1024.pth'
adapter = torch.load(adapter_path)['state_dict']

# copy each adapter tensor into the matching parameter of the model
for name, param in adapter.items():
    if name in net.state_dict():
        net.state_dict()[name].copy_(param)
```
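As an optional sanity check (a small sketch reusing the variables from the snippet above), you can confirm how many adapter tensors actually matched parameters in the model before running inference:

```python
# Count the adapter tensors that found a matching parameter in the model.
matched = [name for name in adapter if name in net.state_dict()]
print(f"Loaded {len(matched)} / {len(adapter)} adapter tensors into the model.")

net.eval()  # switch to inference mode before segmenting your images
```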
## Authorship
- Ziyue Wang (NUS): adapter training
- Junde Wu (Oxford): project lead