---
license: apache-2.0
pipeline_tag: image-segmentation
tags:
- medical
---

Welcome to Medical Adapters Zoo (Med-Adpt Zoo)!

## Med-Adpt Zoo Map 🐘🐊🦍🦒🦨🦜🦥

- Lung Nodule (CT)
- Melanoma (Skin Photo)
- OpticCup (Fundus Image)
- OpticDisc (Fundus Image)
- Thyroid Nodule (Ultrasound)

Download the adapters you need [here](https://huggingface.co/KidsWithTokens/Medical-Adapter-Zoo/tree/main).

## What
These are pre-trained adapters that transfer [SAM](https://segment-anything.com) (the Segment Anything Model) to segmenting various organs and lesions in medical images.
Check our paper, [Medical SAM Adapter](https://arxiv.org/abs/2304.12620), for details.

## Why

SAM (Segment Anything Model) is one of the most popular open models for image segmentation. Unfortunately, it does not perform well on medical images.
An efficient way to fix this is with adapters: small layers with few trainable parameters that are inserted into the pre-trained SAM model to fine-tune it for a target downstream task.
Medical image segmentation covers many different organs, lesions, and abnormalities as targets, so we train a separate adapter for each target and share them here for easy use by the community.
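
For intuition, a typical adapter is a small bottleneck block inserted into a frozen backbone: project the features down, apply a non-linearity, project back up, and add a residual connection. Below is a minimal, generic sketch of this idea in PyTorch; it is illustrative only, and the class name and `reduction` factor are hypothetical, not the exact Med-SA module (see the paper and repository for the real architecture).

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Generic bottleneck adapter (illustrative; not the exact Med-SA design)."""
    def __init__(self, dim: int, reduction: int = 4):
        super().__init__()
        self.down = nn.Linear(dim, dim // reduction)  # few trainable parameters
        self.act = nn.GELU()
        self.up = nn.Linear(dim // reduction, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # the residual connection preserves the frozen backbone's features
        return x + self.up(self.act(self.down(x)))

# During fine-tuning, the pre-trained SAM weights stay frozen and only the
# adapter parameters receive gradients, for example:
#   for p in sam.parameters():
#       p.requires_grad = False
#   for p in adapter.parameters():
#       p.requires_grad = True
```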

Download an adapter for your target disease, trained on the corresponding organ, lesion, or abnormality, and effortlessly enhance SAM.

A single adapter turns your SAM into a medical domain expert. Give it a try!

## How to Use

1. Download the code of our MedSAM-Adapter [here](https://github.com/KidsWithTokens/Medical-SAM-Adapter).
2. Download the weights of the original SAM model (a download sketch follows this list).
3. Load the original model and our adapter for downstream tasks.
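
For step 2, the official SAM checkpoints can be fetched with a short script. A minimal sketch, assuming the ViT-B checkpoint from the segment-anything release (pick the variant matching your config); the code block after it then covers step 3:

```python
import urllib.request

# official SAM checkpoint (ViT-B variant); swap in ViT-L or ViT-H as needed
url = "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth"
urllib.request.urlretrieve(url, "sam_vit_b_01ec64.pth")
```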

```python
import torch

import cfg
from utils import *  # provides get_network() in the Medical-SAM-Adapter repo

# set your own configs
args = cfg.parse_args()
GPUdevice = torch.device('cuda', args.gpu_device)

# load the original SAM model
net = get_network(args, args.net, use_gpu=args.gpu, gpu_device=GPUdevice, distribution=args.distributed)

# load the task-specific adapter checkpoint
adapter_path = 'OpticCup_Fundus_SAM_1024.pth'
adapter = torch.load(adapter_path, map_location=GPUdevice)['state_dict']

# copy each adapter parameter into the matching entry of SAM's state dict
sam_state = net.state_dict()
for name, param in adapter.items():
    if name in sam_state:
        sam_state[name].copy_(param)
```
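
If you prefer to fetch adapter checkpoints programmatically instead of downloading them by hand, here is a minimal sketch using the `huggingface_hub` library; the filename must match one of the files listed in this repository's file tree:

```python
from huggingface_hub import hf_hub_download

# download an adapter checkpoint from this repo
# (here, the optic cup adapter used in the example above)
adapter_path = hf_hub_download(
    repo_id="KidsWithTokens/Medical-Adapter-Zoo",
    filename="OpticCup_Fundus_SAM_1024.pth",
)
```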

## Authorship

Ziyue Wang (NUS): adapter training

Junde Wu (Oxford): project lead