---
license: apache-2.0
datasets:
- Ejafa/ye-pop
---

A ViT-B/32 CLIP model trained for 4 epochs on the [ye-pop](https://huggingface.co/datasets/Ejafa/ye-pop) dataset (491,520 images and [CogVLM](https://huggingface.co/THUDM/cogvlm-chat-hf)-generated detailed captions). Research artifact of [clip-synthetic-captions](https://github.com/nopperl/clip-synthetic-captions). Outperforms the CLIP model trained using the original alt-texts on the [DataComp benchmark suite](https://datacomp.ai) (38 image classification and retrieval tasks).
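
Below is a minimal usage sketch, assuming the checkpoint is published in OpenCLIP format; the `hf-hub:` repo id is a placeholder for this model's actual Hugging Face id.

```python
import torch
import open_clip
from PIL import Image

# Placeholder repo id -- substitute this model's actual Hugging Face id.
repo = "hf-hub:nopperl/clip-synthetic-captions-ye-pop"
model, preprocess = open_clip.create_model_from_pretrained(repo)
tokenizer = open_clip.get_tokenizer(repo)
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize, then take cosine similarity between the image and each caption.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # probabilities over the three candidate captions
```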

Note: likely not directly useful as it is severely undertrained.