nopperl and osanseviero committed
Commit
b08ec1c
1 Parent(s): 5724056

Specify right model card metadata (#1)


- Specify right model card metadata (145b19a02a9f2ee3e1fb2513412f914b71319bbb)


Co-authored-by: Omar Sanseviero <osanseviero@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +3 -0
README.md CHANGED
@@ -1,7 +1,10 @@
 ---
 license: apache-2.0
+tags:
+- llava
 datasets:
 - Ejafa/ye-pop
+pipeline_tag: image-text-to-text
 ---
 
 A ViT-B/32 CLIP model trained for 4 epochs on the [ye-pop](https://huggingface.co/datasets/Ejafa/ye-pop) dataset (491,520 images and [LLaVA 1.5](https://github.com/haotian-liu/LLaVA)-generated detailed captions). Research artifact of [clip-synthetic-captions](https://github.com/nopperl/clip-synthetic-captions). Outperforms the CLIP model trained using the original alt-texts on the [DataComp benchmark suite](https://datacomp.ai) (38 image classification and retrieval tasks).
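
Since the model card only describes the checkpoint, here is a minimal usage sketch. It assumes the weights are loadable through open_clip's `hf-hub:` support; the repo id used below is a hypothetical placeholder, not confirmed by this commit:

```python
# Minimal sketch (not from this commit): loading a Hub-hosted CLIP checkpoint
# with open_clip. The repo id "nopperl/clip-vit-b-32-ye-pop" is a hypothetical
# placeholder; substitute the actual repository this model card belongs to.
import torch
from PIL import Image
import open_clip

repo = "hf-hub:nopperl/clip-vit-b-32-ye-pop"  # hypothetical repo id
model, preprocess = open_clip.create_model_from_pretrained(repo)
tokenizer = open_clip.get_tokenizer(repo)

image = preprocess(Image.open("example.jpg")).unsqueeze(0)
text = tokenizer(["a detailed photo caption", "an unrelated caption"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize embeddings and compute zero-shot similarity scores.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # probability of each caption matching the image
```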