---
license: mit
language:
- en
tags:
- fashion
- visual search
pretty_name: LAION — Referred Visual Search — Fashion
size_categories:
- 1M<n<10M
---

## **Examples**

Example products with their categories and BLIP2 captions:

| Category | BLIP2 caption |
| --- | --- |
| Neck | a scarf with multi-coloured stripes |
| Lower Body | stella pants - dark suede |
| Feet | neon green patent leather heels with studs |
| Bags | the burberry small leather bag is brown and leather |

## **Attributes**

- **URL**, **WIDTH**, **HEIGHT**, **punsafe**, **pwatermark**, **language**: Original LAION fields. Please refer to their repository.
- **TEXT**: Text originally associated with the image.
- **ENG_TEXT**: English translation of TEXT for MULTI/NOLANG rows, copy of TEXT for EN rows.
- **TYPE**: SIMPLE (isolated products), COMPLEX (scenes), or PARTIAL_COMPLEX (zoomed-in scenes).
- **PRODUCT_ID**: Product identifier, used to group together images depicting the same product.
- **INDEX_SRC**: ID of the parquet file originally storing this image.
- **CATEGORY**: Product category: `Bags, Feet, Hands, Head, Lower Body, Neck, Outwear, Upper Body, Waist, Whole Body` for the products, and `NonClothing` for some distractors.
- **blip2_caption1, blip2_caption2**: [BLIP2-FlanT5XL](https://huggingface.co/Salesforce/blip2-flan-t5-xl)-generated captions.

We also release `bootstrap_IDs.pkl`, the file used to generate the bootstrapped results of the paper. `test_subsets` is composed of [product IDs](https://github.com/Simon-Lepage/CondViT-LRVSF/blob/b660d82b5775de417ba81ac846b6df004b31eb75/lrvsf/test/metrics.py#L229), while `dist_{N}_subsets` are [row indices](https://github.com/Simon-Lepage/CondViT-LRVSF/blob/b660d82b5775de417ba81ac846b6df004b31eb75/lrvsf/test/metrics.py#L248). A loading sketch is given after the citation below.

---

## Citing the dataset

To cite our work, please use the following BibTeX entry:

```
@article{lepage2023condvit,
    title={Weakly-Supervised Conditional Embedding for Referred Visual Search},
    author={Lepage, Simon and Mary, Jérémie and Picard, David},
    journal={arXiv:2306.02928},
    year={2023}
}
```
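
## Loading the bootstrap subsets

Below is a minimal sketch of how `bootstrap_IDs.pkl` and the metadata could be combined. It assumes the pickle deserializes to a dictionary whose `test_subsets` entry holds lists of `PRODUCT_ID`s and whose `dist_{N}_subsets` entries hold lists of row indices; the parquet file names used here are hypothetical placeholders. Refer to the linked `metrics.py` for the exact layout.

```python
import pickle

import pandas as pd

# Assumption: the pickle is a dict with "test_subsets" (lists of PRODUCT_IDs)
# and "dist_{N}_subsets" (lists of row indices into the distractor metadata).
with open("bootstrap_IDs.pkl", "rb") as f:
    bootstrap = pickle.load(f)

# Hypothetical metadata files; replace with the released parquet shards.
test_meta = pd.read_parquet("test_metadata.parquet")
dist_meta = pd.read_parquet("distractors_metadata.parquet")

# One bootstrap replicate of the test gallery, selected by PRODUCT_ID.
test_ids = bootstrap["test_subsets"][0]
test_subset = test_meta[test_meta["PRODUCT_ID"].isin(test_ids)]

# One bootstrap replicate of the distractor set, selected by row index.
# "dist_10K_subsets" is an illustrative value of N.
dist_rows = bootstrap["dist_10K_subsets"][0]
dist_subset = dist_meta.iloc[dist_rows]

print(len(test_subset), len(dist_subset))
```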