shengz committed on
Commit 3976589
1 Parent(s): f2702ae

Update README.md

Files changed (1):
  1. README.md (+4 −1)
README.md CHANGED
```diff
@@ -10,7 +10,10 @@ library_name: open_clip
 
 # BiomedCLIP-PubMedBERT_256-vit_base_patch16_224
 
-[BiomedCLIP](https://aka.ms/biomedclip-paper) is a biomedical vision-language foundation model that is pretrained on [PMC-15M](https://aka.ms/biomedclip-paper) dataset using contrastive learning. It uses PubMedBERT as the text encoder and Vision Transformer as the image encoder, with domain-specific adaptations. It can perform various biomedical tasks such as cross-modal retrieval, image classification, and visual question answering. It achieves state-of-the-art results on several biomedical benchmarks, outperforming general-domain models and radiology-specific models.
+[BiomedCLIP](https://aka.ms/biomedclip-paper) is a biomedical vision-language foundation model that is pretrained on [PMC-15M](https://aka.ms/biomedclip-paper) dataset using contrastive learning.
+It uses PubMedBERT as the text encoder and Vision Transformer as the image encoder, with domain-specific adaptations.
+It can perform various vision-language processing (VLP) tasks such as cross-modal retrieval, image classification, and visual question answering.
+BiomedCLIP establishes new state of the art in a wide range of standard datasets, and substantially outperforms prior VLP approaches.
 
 ## Citation
 
```
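Since the README's front matter declares `library_name: open_clip`, a minimal sketch of zero-shot classification with this model may help illustrate the cross-modal tasks the updated description mentions. This is illustrative only and not part of the commit: it assumes the model is published at the hub id `microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224`, and the image path and label strings are hypothetical.

```python
# Illustrative sketch (not from this commit): zero-shot image classification
# with open_clip, assuming the hub id below is where the model is published.
import torch
from PIL import Image
import open_clip

HUB_ID = 'hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224'

# create_model_from_pretrained returns the model and its eval-time preprocessing.
model, preprocess = open_clip.create_model_from_pretrained(HUB_ID)
tokenizer = open_clip.get_tokenizer(HUB_ID)
model.eval()

# Hypothetical inputs: an example image path and candidate labels.
image = preprocess(Image.open('example_image.png')).unsqueeze(0)
labels = ['chest X-ray', 'histopathology slide', 'brain MRI']
texts = tokenizer([f'this is a photo of a {l}' for l in labels])

with torch.no_grad():
    # The forward pass returns normalized features and the learned temperature.
    image_features, text_features, logit_scale = model(image, texts)
    # Cosine-similarity logits scaled by the temperature, softmaxed over labels.
    probs = (logit_scale * image_features @ text_features.T).softmax(dim=-1)

for label, p in zip(labels, probs[0].tolist()):
    print(f'{label}: {p:.3f}')
```

The same encoders can back the other tasks listed above: the text and image features are in a shared embedding space, so cross-modal retrieval is a nearest-neighbor search over those features rather than a separate model.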