Jesse-marqo committed
Commit 8c782ee
1 Parent(s): 68ce23d

Update README.md

Files changed (1): README.md (+64, −0)
README.md CHANGED
@@ -23,6 +23,70 @@ configs:
  license: apache-2.0
  ---
  **Disclaimer**: We do not own this dataset. The Polyvore dataset is a public dataset that can be accessed through its [GitHub page](https://github.com/xthan/polyvore-dataset).
+
+ This dataset was used to evaluate Marqo-FashionCLIP and Marqo-FashionSigLIP; see details below.
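+
+ For orientation, the dataset can be pulled straight from the Hugging Face Hub with the `datasets` library. A minimal sketch: the repo defines named configs in its YAML front matter, so discover them first rather than assuming any particular name:
+
+ ```python
+ from datasets import load_dataset, get_dataset_config_names
+
+ # The repo's YAML front matter defines named configs; list them first
+ configs = get_dataset_config_names("Marqo/polyvore")
+ print(configs)
+
+ # Load the first config and inspect splits/columns before relying on any field
+ dataset = load_dataset("Marqo/polyvore", configs[0])
+ print(dataset)
+ ```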
+
+ # Marqo-FashionSigLIP Model Card
+ Marqo-FashionSigLIP leverages Generalised Contrastive Learning ([GCL](https://www.marqo.ai/blog/generalized-contrastive-learning-for-multi-modal-retrieval-and-ranking)), which allows the model to be trained not only on text descriptions but also on categories, styles, colors, materials, keywords, and fine details, providing highly relevant search results for fashion products.
+ The model was fine-tuned from ViT-B-16-SigLIP (webli).
+
+ **GitHub Page**: [Marqo-FashionCLIP](https://github.com/marqo-ai/marqo-FashionCLIP)
+
+ **Blog**: [Marqo Blog](https://www.marqo.ai/blog/search-model-for-fashion)
+
+ ## Usage
+ The model can be used seamlessly with [OpenCLIP](https://github.com/mlfoundations/open_clip):
+
+ ```python
+ import torch
+ from PIL import Image
+ import open_clip
+
+ # Load the model, preprocessing transforms, and tokenizer from the Hugging Face Hub
+ model, preprocess_train, preprocess_val = open_clip.create_model_and_transforms('hf-hub:Marqo/marqo-fashionSigLIP')
+ tokenizer = open_clip.get_tokenizer('hf-hub:Marqo/marqo-fashionSigLIP')
+
+ image = preprocess_val(Image.open("docs/fashion-hippo.png")).unsqueeze(0)
+ text = tokenizer(["a hat", "a t-shirt", "shoes"])
+
+ with torch.no_grad(), torch.cuda.amp.autocast():
+     # Encode both modalities, L2-normalise, then softmax over the scaled similarities
+     image_features = model.encode_image(image)
+     text_features = model.encode_text(text)
+     image_features /= image_features.norm(dim=-1, keepdim=True)
+     text_features /= text_features.norm(dim=-1, keepdim=True)
+
+     text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
+
+ print("Label probs:", text_probs)
+ ```
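+
+ The same encoders also support text-to-image retrieval over a set of product images, not just zero-shot classification. A minimal sketch along the lines of the snippet above; the file names and query text are placeholders:
+
+ ```python
+ import torch
+ from PIL import Image
+ import open_clip
+
+ model, _, preprocess_val = open_clip.create_model_and_transforms('hf-hub:Marqo/marqo-fashionSigLIP')
+ tokenizer = open_clip.get_tokenizer('hf-hub:Marqo/marqo-fashionSigLIP')
+
+ # Placeholder catalogue; swap in your own product images
+ paths = ["shirt.jpg", "dress.jpg", "sneakers.jpg"]
+ images = torch.stack([preprocess_val(Image.open(p)) for p in paths])
+ query = tokenizer(["a red floral summer dress"])
+
+ with torch.no_grad():
+     image_features = model.encode_image(images)
+     text_features = model.encode_text(query)
+     image_features /= image_features.norm(dim=-1, keepdim=True)
+     text_features /= text_features.norm(dim=-1, keepdim=True)
+
+ # Cosine similarity of the query against every image, best match first
+ scores = (text_features @ image_features.T).squeeze(0)
+ for rank, idx in enumerate(scores.argsort(descending=True).tolist(), start=1):
+     print(f"{rank}. {paths[idx]} ({scores[idx].item():.3f})")
+ ```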
+
+ ## Benchmark Results
+ Average evaluation results on 6 public multimodal fashion datasets ([Atlas](https://huggingface.co/datasets/Marqo/atlas), [DeepFashion (In-shop)](https://huggingface.co/datasets/Marqo/deepfashion-inshop), [DeepFashion (Multimodal)](https://huggingface.co/datasets/Marqo/deepfashion-multimodal), [Fashion200k](https://huggingface.co/datasets/Marqo/fashion200k), [KAGL](https://huggingface.co/datasets/Marqo/KAGL), and [Polyvore](https://huggingface.co/datasets/Marqo/polyvore)) are reported below:
+
+ **Text-To-Image (Averaged across 6 datasets)**
+ | Model                      | AvgRecall | Recall@1  | Recall@10 | MRR       |
+ |----------------------------|-----------|-----------|-----------|-----------|
+ | Marqo-FashionSigLIP        | **0.231** | **0.121** | **0.340** | **0.239** |
+ | FashionCLIP2.0             | 0.163     | 0.077     | 0.249     | 0.165     |
+ | OpenFashionCLIP            | 0.132     | 0.060     | 0.204     | 0.135     |
+ | ViT-B-16-laion2b_s34b_b88k | 0.174     | 0.088     | 0.261     | 0.180     |
+ | ViT-B-16-SigLIP-webli      | 0.212     | 0.111     | 0.314     | 0.214     |
+
+ **Category-To-Product (Averaged across 5 datasets)**
+ | Model                      | AvgP      | P@1       | P@10      | MRR       |
+ |----------------------------|-----------|-----------|-----------|-----------|
+ | Marqo-FashionSigLIP        | **0.737** | **0.758** | **0.716** | **0.812** |
+ | FashionCLIP2.0             | 0.684     | 0.681     | 0.686     | 0.741     |
+ | OpenFashionCLIP            | 0.646     | 0.653     | 0.639     | 0.720     |
+ | ViT-B-16-laion2b_s34b_b88k | 0.662     | 0.673     | 0.652     | 0.743     |
+ | ViT-B-16-SigLIP-webli      | 0.688     | 0.690     | 0.685     | 0.751     |
+
+ **Sub-Category-To-Product (Averaged across 4 datasets)**
+ | Model                      | AvgP      | P@1       | P@10      | MRR       |
+ |----------------------------|-----------|-----------|-----------|-----------|
+ | Marqo-FashionSigLIP        | **0.725** | **0.767** | **0.683** | **0.811** |
+ | FashionCLIP2.0             | 0.657     | 0.676     | 0.638     | 0.733     |
+ | OpenFashionCLIP            | 0.598     | 0.619     | 0.578     | 0.689     |
+ | ViT-B-16-laion2b_s34b_b88k | 0.638     | 0.651     | 0.624     | 0.712     |
+ | ViT-B-16-SigLIP-webli      | 0.643     | 0.643     | 0.643     | 0.726     |
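+
+ For reference, the Recall@K and MRR figures above follow the standard retrieval definitions. A minimal sketch of both metrics, assuming each query has exactly one ground-truth item (the benchmark's own evaluation code may handle multiple positives differently):
+
+ ```python
+ import torch
+
+ def recall_at_k(sim: torch.Tensor, targets: torch.Tensor, k: int) -> float:
+     """Fraction of queries whose ground-truth item ranks in the top k.
+
+     sim: [num_queries, num_items] similarity matrix
+     targets: [num_queries] index of the ground-truth item per query
+     """
+     topk = sim.topk(k, dim=-1).indices
+     return (topk == targets.unsqueeze(-1)).any(dim=-1).float().mean().item()
+
+ def mrr(sim: torch.Tensor, targets: torch.Tensor) -> float:
+     """Mean reciprocal rank of the ground-truth item."""
+     order = sim.argsort(dim=-1, descending=True)   # ranked item indices
+     match = order == targets.unsqueeze(-1)         # exactly one True per row
+     ranks = match.float().argmax(dim=-1) + 1       # 1-indexed rank of the hit
+     return (1.0 / ranks).mean().item()
+
+ # Toy check: 2 queries over 4 items
+ sim = torch.tensor([[0.9, 0.1, 0.3, 0.2],
+                     [0.2, 0.8, 0.4, 0.1]])
+ targets = torch.tensor([0, 2])
+ print(recall_at_k(sim, targets, k=1))  # 0.5: only the first query hits at rank 1
+ print(mrr(sim, targets))               # 0.75: ranks are 1 and 2
+ ```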
+
  When using the dataset, cite the original work.
  ```
  @inproceedings{han2017learning,