Jesse-marqo committed
Commit 44f4c65
Parent(s): 9a64d06

Update README.md
README.md CHANGED
@@ -20,7 +20,7 @@ metrics:
 [![GitHub](https://img.shields.io/badge/GitHub-black?logo=github)](https://github.com/marqo-ai/marqo-FashionCLIP)
 
 # Marqo-FashionCLIP Model Card
-
+Marqo-FashionCLIP and Marqo-FashionSigLIP outperform the previous state-of-the-art fashion CLIP models (see results below).
 Marqo-FashionCLIP leverages Generalised Contrastive Learning ([GCL](https://www.marqo.ai/blog/generalized-contrastive-learning-for-multi-modal-retrieval-and-ranking)) which allows the model to be trained on not just text descriptions but also categories, style, colors, materials, keywords and fine-details to provide highly relevant search results on fashion products.
 The model was fine-tuned from ViT-B-16 (laion2b_s34b_b88k).
 
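
Since the README describes the model as a CLIP checkpoint fine-tuned from ViT-B-16 (laion2b_s34b_b88k), a minimal text-to-image retrieval sketch is shown below. It assumes the checkpoint is published on the Hugging Face Hub under the ID `Marqo/marqo-fashionCLIP` and is loadable through `open_clip`; the image path and text queries are placeholders.

```python
import open_clip
import torch
from PIL import Image

# Load the model and preprocessing transforms from the Hugging Face Hub.
# Hub ID "Marqo/marqo-fashionCLIP" is assumed here for illustration.
model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:Marqo/marqo-fashionCLIP")
tokenizer = open_clip.get_tokenizer("hf-hub:Marqo/marqo-fashionCLIP")
model.eval()

# Encode one product image and a few candidate fashion descriptions (placeholders).
image = preprocess(Image.open("product.jpg")).unsqueeze(0)
texts = tokenizer(["a red floral summer dress", "black leather ankle boots"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(texts)
    # Normalise embeddings, then score each text against the image.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # probability-like scores over the candidate descriptions
```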