Update README.md
README.md
@@ -36,7 +36,7 @@ https://github.com/baaivision/EVA/tree/master/EVA-CLIP
 - To construct Merged-2B, we merged 1.6 billion samples from [LAION-2B](https://laion.ai/blog/laion-5b/) dataset with 0.4 billion samples from [COYO-700M](https://github.com/kakaobrain/coyo-dataset).
 
 - To our knowledge, EVA-CLIP series are the most performant open-sourced CLIP models at all scales, evaluated via zero-shot classification performance, especially on mainstream classification benchmarks such as ImageNet along with its variants.
-For more details about EVA-CLIP, please refer to our [paper
+For more details about EVA-CLIP, please refer to our [paper](http://arxiv.org/abs/2303.15389).
 
 ### Pretrained
 <div align="center">