Files changed (1): README.md (+78 −6)

README.md CHANGED
@@ -19,6 +19,58 @@ OpticDisc (Fundus Image)
 
 Thyroid Nodule (UltraSound)
 
+Aorta (Abdominal Image)
+
+Esophagus (Abdominal Image)
+
+Gallbladder (Abdominal Image)
+
+Inferior Vena Cava (Abdominal Image)
+
+Left Adrenal Gland (Abdominal Image)
+
+Right Adrenal Gland (Abdominal Image)
+
+Left Kidney (Abdominal Image)
+
+Right Kidney (Abdominal Image)
+
+Liver (Abdominal Image)
+
+Pancreas (Abdominal Image)
+
+Spleen (Abdominal Image)
+
+Stomach (Abdominal Image)
+
+Portal Vein and Splenic Vein (Abdominal Image)
+
+Edematous Tissue (Brain Tumor mpMRI)
+
+Enhancing Tumor (Brain Tumor mpMRI)
+
+Necrotic (Brain Tumor mpMRI)
+
+Inferior Alveolar Nerve (CBCT)
+
+Instrument Clasper (Surgical Video)
+
+Instrument Shaft (Surgical Video)
+
+Instrument Wrist (Surgical Video)
+
+Kidney Tumor (MRI)
+
+Liver (Liver Tumor CE-MRI)
+
+Tumor (Liver Tumor CE-MRI)
+
+Mandible (XRay)
+
+Retina Vessel (Fundus Image)
+
+White Blood Cell (MicroScope)
+
 Download the Adapters you need [here](https://huggingface.co/KidsWithTokens/Medical-Adapter-Zoo/tree/main)
 
 ## What
@@ -29,7 +81,7 @@ Check our paper: [Medical SAM Adapter](https://arxiv.org/abs/2304.12620) for the
 
 SAM (Segment Anything Model) is one of the most popular open models for image segmentation. Unfortunately, it does not perform well on medical images.
 An efficient way to solve this is to use adapters, i.e., a few layers with a small number of parameters added to the pre-trained SAM model to fine-tune it for the target downstream tasks.
-Medical image segmentation includes many different organs, lesions, abnormalities as the targets.
+Medical image segmentation includes many different organs, lesions, and abnormalities as the targets.
 So we are training a different adapter for each of the targets, and sharing them here for easy use in the community.
 
 Download an adapter for your target disease—trained on organs, lesions, and abnormalities—and effortlessly enhance SAM.
@@ -55,14 +107,34 @@ GPUdevice = torch.device('cuda', args.gpu_device)
 
 # load the original SAM model
 net = get_network(args, args.net, use_gpu=args.gpu, gpu_device=GPUdevice, distribution=args.distributed)
+net.eval()
+
+sam_weights = 'checkpoint/sam/sam_vit_b_01ec64.pth'  # the original SAM weights
+with open(sam_weights, "rb") as f:
+    state_dict = torch.load(f)
+# keep only the entries that exist in the model with a matching shape
+new_state_dict = {k: v for k, v in state_dict.items() if k in net.state_dict() and net.state_dict()[k].shape == v.shape}
+net.load_state_dict(new_state_dict, strict=False)
 
 # load task-specific adapter
 adapter_path = 'OpticCup_Fundus_SAM_1024.pth'
-adapter = torch.load(adapter_path)['state_dict']
-for name, param in adapter.items():
-    if name in adapter:
-        net.state_dict()[name].copy_(param)
-```
+checkpoint_file = os.path.join(adapter_path)
+assert os.path.exists(checkpoint_file)
+loc = 'cuda:{}'.format(args.gpu_device)
+checkpoint = torch.load(checkpoint_file, map_location=loc)
+
+state_dict = checkpoint['state_dict']
+if args.distributed != 'none':
+    from collections import OrderedDict
+    # add the `module.` prefix expected by the distributed wrapper
+    new_state_dict = OrderedDict()
+    for k, v in state_dict.items():
+        new_state_dict['module.' + k] = v
+else:
+    new_state_dict = state_dict
+
+net.load_state_dict(new_state_dict, strict=False)
+```
 
 ## Authorship
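The state-dict handling added in this diff boils down to two small transformations: filter a checkpoint down to the keys the model actually has (with matching shapes), and prepend `module.` for a distributed-wrapped model. A minimal sketch of just that logic, with stand-in "tensors" so it runs without PyTorch or the repository code (the function names here are illustrative, not from the repo):

```python
from collections import OrderedDict, namedtuple

# Stand-in for a tensor: only the shape matters for the loading logic.
T = namedtuple("T", "shape")

def filter_matching(state_dict, target_sd):
    """Keep entries whose key exists in the target model and whose shape
    matches, mirroring the partial load of the original SAM weights."""
    return {k: v for k, v in state_dict.items()
            if k in target_sd and target_sd[k].shape == v.shape}

def add_module_prefix(state_dict):
    """Prefix every key with 'module.', as a DataParallel/DDP-wrapped
    model expects."""
    return OrderedDict(("module." + k, v) for k, v in state_dict.items())

# Example: one key missing from the target, one with a shape mismatch.
ckpt   = {"enc.w": T((4, 4)), "enc.b": T((4,)), "head.w": T((2, 4))}
target = {"enc.w": T((4, 4)), "enc.b": T((8,))}

kept = filter_matching(ckpt, target)
print(sorted(kept))                   # ['enc.w']
print(list(add_module_prefix(kept)))  # ['module.enc.w']
```

Together with `load_state_dict(..., strict=False)`, the shape filter is what lets an adapter checkpoint trained with extra layers be merged into the base model without key-mismatch errors.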