katielink committed on
Commit
ec870ea
1 Parent(s): fc91239

Add arxiv link


Will link the model to paper pages

Files changed (1)
  1. README.md +5 -3
README.md CHANGED
@@ -1,13 +1,15 @@
 
 
 
+ ---
+ {}
+ ---
  # Med-Flamingo-9B (CLIP ViT-L/14, Llama-7B)

  ![](logo.png)

- Med-Flamingo is a medical vision-language model with multimodal in-context learning abilities.
+ [Med-Flamingo](https://arxiv.org/abs/2307.15189) is a medical vision-language model with multimodal in-context learning abilities.

  This model is based on the OpenFlamingo-9B V1 model which uses the CLIP ViT-L/14 vision encoder and the Llama-7B language model as frozen backbones.

  Med-Flamingo was trained on paired and interleaved image-text from the medical literature.


- Check out our [git repo](https://github.com/snap-stanford/med-flamingo) for more details on setup & demo.
-
+ Check out our [git repo](https://github.com/snap-stanford/med-flamingo) for more details on setup & demo.
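
As a rough pointer for the setup & demo referenced in the README, here is a minimal sketch of how the frozen backbones described above (CLIP ViT-L/14 vision encoder, Llama-7B language model) might be assembled with OpenFlamingo's `create_model_and_transforms` and the Med-Flamingo weights pulled from the Hub. The Hub repo id `med-flamingo/med-flamingo`, the checkpoint filename `model.pt`, and the local Llama-7B path are assumptions, not taken from this commit; the linked git repo is the authoritative setup.

```python
# Minimal sketch (not the official demo): assemble the OpenFlamingo-9B V1
# backbones and load Med-Flamingo weights on top. The repo id, filename,
# and Llama-7B path below are assumptions; see
# https://github.com/snap-stanford/med-flamingo for the real setup & demo.
import torch
from huggingface_hub import hf_hub_download
from open_flamingo import create_model_and_transforms

# Frozen backbones: CLIP ViT-L/14 vision encoder + Llama-7B language model.
model, image_processor, tokenizer = create_model_and_transforms(
    clip_vision_encoder_path="ViT-L-14",
    clip_vision_encoder_pretrained="openai",
    lang_encoder_path="path/to/llama-7b",  # assumed local Llama-7B checkpoint
    tokenizer_path="path/to/llama-7b",     # assumed matching tokenizer path
    cross_attn_every_n_layers=4,           # OpenFlamingo-9B V1 setting
)

# Load the Med-Flamingo checkpoint (repo id and filename are assumptions).
checkpoint = hf_hub_download("med-flamingo/med-flamingo", "model.pt")
model.load_state_dict(torch.load(checkpoint, map_location="cpu"), strict=False)
model.eval()
```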