shengz committed on
Commit 4b70aae
1 Parent(s): 79add88

Update README.md

Files changed (1): README.md +4 -1
README.md CHANGED
@@ -13,7 +13,10 @@ library_name: open_clip
  [BiomedCLIP](https://aka.ms/biomedclip-paper) is a biomedical vision-language foundation model that is pretrained on [PMC-15M](https://aka.ms/biomedclip-paper) dataset using contrastive learning.
  It uses PubMedBERT as the text encoder and Vision Transformer as the image encoder, with domain-specific adaptations.
  It can perform various vision-language processing (VLP) tasks such as cross-modal retrieval, image classification, and visual question answering.
- BiomedCLIP establishes new state of the art in a wide range of standard datasets, and substantially outperforms prior VLP approaches.
+ BiomedCLIP establishes new state of the art in a wide range of standard datasets, and substantially outperforms prior VLP approaches:
+
+ ![](biomed-vlp-eval.svg)
+

  ## Citation
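
Since the README sets `library_name: open_clip`, the model is intended to be loaded through the open_clip library. Below is a minimal sketch of zero-shot image classification with BiomedCLIP via open_clip; the Hub repo id, image path, and candidate labels are illustrative assumptions, not taken from this diff.

```python
# Minimal sketch: zero-shot classification with BiomedCLIP via open_clip.
# The hub repo id, image path, and labels below are assumptions for illustration.
import torch
from PIL import Image
import open_clip

# Load the model, its preprocessing transform, and the tokenizer from the Hub.
repo = "hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224"  # assumed repo id
model, preprocess = open_clip.create_model_from_pretrained(repo)
tokenizer = open_clip.get_tokenizer(repo)
model.eval()

labels = ["chest X-ray", "brain MRI", "histopathology slide"]  # hypothetical labels
image = preprocess(Image.open("example.png")).unsqueeze(0)     # hypothetical image path
text = tokenizer([f"this is a photo of a {label}" for label in labels])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Cosine similarity between the image and each label prompt, softmaxed to probabilities.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.4f}")
```

The prompt template ("this is a photo of a ...") is one common CLIP-style choice, not necessarily the one used in the BiomedCLIP evaluations; domain-specific prompts may work better for biomedical images.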