Chanwoo Kim committed on
Commit b08d088
1 Parent(s): b2f2d74

update README and norm mean

Files changed (2)
  1. README.md +23 -1
  2. preprocessor_config.json +7 -7
README.md CHANGED
@@ -6,7 +6,29 @@ tags:
 - medical
 ---
 
-# MONET (Medical cONcept rETrieve)
+# MONET (Medical cONcept rETriever)
+
+## Description
+
+[MONET]() is a CLIP ViT-L/14 vision-language foundation model trained on 105,550 dermatological images paired with natural language descriptions from a large collection of medical literature. MONET can accurately annotate concepts across dermatology images as verified by board-certified dermatologists, competitively with
+supervised models built on previously concept-annotated dermatology datasets of clinical images. MONET enables AI transparency across the entire AI system development pipeline from building inherently interpretable models to dataset and model auditing.
+
+* [GitHub](https://github.com/suinleelab/MONET)
+* [Paper](https://github.com/suinleelab/MONET)
+
+## Citation
+
+```bibtex
+@article{kim2024transparent,
+  title={Transparent medical image AI via an image–text foundation model grounded in
+  medical literature},
+  author={Chanwoo Kim and Soham U. Gadgil and Alex J. DeGrave and Jesutofunmi A. Omiye and Zhuo Ran Cai and Roxana Daneshjou and Su-In Lee},
+  journal={Nature Medicine},
+  #pages={1--10},
+  year={2024},
+  publisher={Nature Publishing Group US New York}
+}
+```
 
 Disclaimer: The model card is taken and modified from the official CLIP repository, it can be found [here](https://github.com/openai/CLIP/blob/main/model-card.md).
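Since the new README introduces MONET as a CLIP ViT-L/14 checkpoint, inference should follow the standard Hugging Face CLIP workflow. A minimal zero-shot concept-scoring sketch, assuming a Hub id of `suinleelab/monet` (hypothetical; substitute this repository's actual id) and using illustrative concept labels and a placeholder image path:

```python
# Minimal zero-shot concept-scoring sketch for a CLIP-style checkpoint.
# "suinleelab/monet" is a hypothetical Hub id; substitute the real one.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model_id = "suinleelab/monet"  # assumption, not confirmed by this page
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

image = Image.open("lesion.jpg")  # placeholder dermatology image
concepts = ["erythema", "ulcer", "pigmented lesion"]  # illustrative labels

# The processor applies the resize/crop/normalization from preprocessor_config.json.
inputs = processor(text=concepts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity; softmax gives relative concept scores.
scores = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(concepts, scores[0].tolist())))
```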
 
preprocessor_config.json CHANGED
@@ -5,15 +5,15 @@
   "do_resize": true,
   "feature_extractor_type": "CLIPFeatureExtractor",
   "image_mean": [
-    0.48145466,
-    0.4578275,
-    0.40821073
   ],
   "image_std": [
-    0.26862954,
-    0.26130258,
-    0.27577711
   ],
   "resample": 3,
   "size": 224
-}
+    0.485,
+    0.456,
+    0.406
+    0.229,
+    0.224,
+    0.225
+}
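This change replaces OpenAI CLIP's normalization statistics with the standard ImageNet mean and std, so every image is normalized differently before it reaches the model. A small sketch of how the updated config values are applied at preprocessing time (the image path is a placeholder; `CLIPImageProcessor` is the current name for the `CLIPFeatureExtractor` referenced in the config):

```python
# Sketch: how the updated image_mean/image_std are applied during preprocessing.
from PIL import Image
from transformers import CLIPImageProcessor

# Values taken from the new preprocessor_config.json: the standard ImageNet statistics.
processor = CLIPImageProcessor(
    do_resize=True,
    size=224,
    resample=3,  # PIL bicubic resampling
    image_mean=[0.485, 0.456, 0.406],
    image_std=[0.229, 0.224, 0.225],
)

image = Image.open("example.jpg")  # placeholder path
pixel_values = processor(images=image, return_tensors="pt").pixel_values
# Each channel is rescaled to [0, 1], then normalized as (x - mean) / std.
print(pixel_values.shape)  # (1, 3, 224, 224)
```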