---
license: cc-by-nc-4.0
language:
- en
tags:
- fashion
- visual search
pretty_name: LAION Referred Visual Search Fashion
size_categories:
- 1M<n<10M
---
<div align="center">
<h1 align="center">LAION - Referred Visual Search - Fashion</h1>
Introduced in ***LRVS-Fashion: Extending Visual Search with Referring Instructions***
<a href="https://simon-lepage.github.io"><strong>Simon Lepage</strong></a>
<strong>Jérémie Mary</strong>
<a href=https://davidpicard.github.io><strong>David Picard</strong></a>
<a href=https://ailab.criteo.com>CRITEO AI Lab</a>
&
<a href=https://imagine-lab.enpc.fr>ENPC</a>
</div>
<p align="center">
<a href="https://arxiv.org/abs/2306.02928">
<img alt="ArXiV Badge" src="https://img.shields.io/badge/arXiv-2306.02928-b31b1b.svg">
</a>
</p>
<div align="center">
<div id=links>
**Useful Links**<br>
[Test set](https://zenodo.org/doi/10.5281/zenodo.11189942) —
[Benchmark Code](https://github.com/Simon-Lepage/LRVSF-Benchmark) —
[LRVS-F Leaderboard](https://huggingface.co/spaces/Slep/LRVSF-Leaderboard) —
[Demo](https://huggingface.co/spaces/Slep/CondViT-LRVSF-Demo)
</div>
</div>
## **Composition**
LAION-RVS-Fashion is composed of images from:
- **[LAION 2B EN](https://huggingface.co/datasets/laion/laion2B-en)**
- **[LAION 2B MULTI TRANSLATED](https://huggingface.co/datasets/laion/laion2B-multi-joined-translated-to-en)**
- **[LAION 1B NOLANG TRANSLATED](https://huggingface.co/datasets/laion/laion1B-nolang-joined-translated-to-en)**
These images have been grouped based on extracted product IDs. Each product in the training set has at least one simple image (isolated product) and one complex image (scene). We added categorical metadata and BLIP2 captions to each product. Please see the [samples](#samples) and refer to [our paper](https://arxiv.org/abs/2306.02928) for additional details.
|Split|Products|Distractors|
|-:|:-:|:-:|
|Train|272,457|-|
|Valid|400|99,541|
|Test|2,000|2,000,014|
**Total number of training images:** 841,718.
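As in other LAION subsets, rows reference images by URL rather than storing the pixels. Below is a minimal sketch of streaming the metadata with the `datasets` library and fetching a single image; the `"train"` split name and the streaming mode are assumptions, so check the dataset configuration before relying on them.

```python
import io

import requests
from datasets import load_dataset
from PIL import Image

# Assumption: the default configuration exposes a "train" split.
# Streaming avoids downloading all the metadata at once.
ds = load_dataset("Slep/LAION-RVS-Fashion", split="train", streaming=True)

row = next(iter(ds))
print(row["PRODUCT_ID"], row["TYPE"], row["CATEGORY"], row["blip2_caption1"])

# Images are referenced by URL; some links may have gone stale since collection.
resp = requests.get(row["URL"], timeout=10)
resp.raise_for_status()
image = Image.open(io.BytesIO(resp.content))
image.thumbnail((256, 256))
```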
## **Samples**
<table style='text-align:center'>
<tbody>
<tr>
<td></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/97969.0.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/97969.1.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/219924.0.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/219924.1.jpg" style="height:200px"></td>
</tr>
<tr>
<td><b>Categories</b></td>
<td colspan=2>Neck</td>
<td colspan=2>Lower Body</td>
</tr>
<tr>
<td><b>BLIP2 Captions</b></td>
<td colspan=2>a scarf with multi-coloured stripes</td>
<td colspan=2>stella pants - dark suede</td>
</tr>
<tr></tr>
<tr>
<td></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/72317.0.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/72317.1.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/108856.0.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/108856.1.jpg" style="height:200px"></td>
</tr>
<tr>
<td><b>Categories</b></td>
<td colspan=2>Feet</td>
<td colspan=2>Bags</td>
</tr>
<tr>
<td><b>BLIP2 Captions</b></td>
<td colspan=2>neon green patent leather heels with studs</td>
<td colspan=2>the burberry small leather bag is brown and leather</td>
</tr>
</tbody>
</table>
## **Attributes**
- **URL**, **WIDTH**, **HEIGHT**, **punsafe**, **pwatermark**, **language**: Original LAION fields. Please refer to their repository.
- **TEXT**: Text originally associated with the image.
- **ENG_TEXT**: English translation of TEXT for MULTI/NOLANG rows; copy of TEXT for EN.
- **TYPE**: SIMPLE (isolated products), COMPLEX (scenes), PARTIAL_COMPLEX (zoomed-in scenes).
- **PRODUCT_ID**: Product identifier, used to group images depicting the same product (see the sketch after this list).
- **INDEX_SRC**: ID of the parquet file originally storing this image.
- **CATEGORY**: Product category - `Bags, Feet, Hands, Head, Lower Body, Neck, Outwear, Upper Body, Waist, Whole Body` for products, and `NonClothing` for some distractors.
- **blip2_caption1, blip2_caption2**: [BLIP2-FlanT5XL](https://huggingface.co/Salesforce/blip2-flan-t5-xl)-generated captions.
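As an illustration of how these fields fit together, the sketch below groups a slice of the training rows into products using `PRODUCT_ID`, and splits each product's images into isolated views and scenes using `TYPE`. Field names come from the list above; the split name and streaming mode are assumptions.

```python
from collections import defaultdict
from itertools import islice

from datasets import load_dataset

# Assumption: the default configuration exposes a "train" split.
ds = load_dataset("Slep/LAION-RVS-Fashion", split="train", streaming=True)

products = defaultdict(lambda: {"simple": [], "complex": []})
for row in islice(ds, 1000):  # first 1000 rows only, for illustration
    # SIMPLE images are isolated products; COMPLEX / PARTIAL_COMPLEX are scenes.
    kind = "simple" if row["TYPE"] == "SIMPLE" else "complex"
    products[row["PRODUCT_ID"]][kind].append(row["URL"])

print(f"{len(products)} products collected from the first 1000 rows")
```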
We also release `bootstrap_IDs.pkl`, the file used to generate the bootstrapped results of the paper. `test_subsets` is composed of [product IDs](https://github.com/Simon-Lepage/CondViT-LRVSF/blob/b660d82b5775de417ba81ac846b6df004b31eb75/lrvsf/test/metrics.py#L229), while `dist_{N}_subsets` are [row indices](https://github.com/Simon-Lepage/CondViT-LRVSF/blob/b660d82b5775de417ba81ac846b6df004b31eb75/lrvsf/test/metrics.py#L248).
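A minimal sketch for inspecting `bootstrap_IDs.pkl`, assuming it is a standard pickle file; the exact structure of the stored object is not documented here, so print it after loading and cross-check with the linked benchmark code.

```python
import pickle

with open("bootstrap_IDs.pkl", "rb") as f:
    bootstrap = pickle.load(f)

# Expected entries (see links above): "test_subsets" holds product IDs,
# while "dist_{N}_subsets" hold row indices into the distractor set.
print(type(bootstrap))
```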
---
## Citing the dataset
To cite our work, please use the following BibTeX entry:
```bibtex
@article{lepage2023lrvsf,
title={LRVS-Fashion: Extending Visual Search with Referring Instructions},
author={Lepage, Simon and Mary, Jérémie and Picard, David},
journal={arXiv:2306.02928},
year={2023}
}
```