---
license: cc-by-nc-4.0
language:
  - en
tags:
  - fashion
  - visual search
pretty_name: LAION - Referred Visual Search - Fashion
size_categories:
  - 1M<n<10M
---

# LAION - Referred Visual Search - Fashion

Introduced in *LRVS-Fashion: Extending Visual Search with Referring Instructions*

Simon Lepage, Jérémie Mary, David Picard

CRITEO AI Lab & ENPC

arXiv: [2306.02928](https://arxiv.org/abs/2306.02928)

## Useful Links
- Test set
- Benchmark Code
- LRVS-F Leaderboard
- Demo

## Composition

LAION-RVS-Fashion is composed of images from:

- LAION 2B EN
- LAION 2B MULTI TRANSLATED
- LAION 1B NOLANG TRANSLATED

These images have been grouped based on extracted product IDs. Each product in the training set has at least one simple image (isolated product) and one complex image (scene). We added categorical metadata and BLIP2 captions to each product. Please see the samples below and refer to our paper for additional details.

| Split | Products | Distractors |
|-------|----------|-------------|
| Train | 272,457  | -           |
| Valid | 400      | 99,541      |
| Test  | 2,000    | 2,000,014   |

Total number of training images: 841,718.
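
As a quick orientation, here is a minimal sketch of how the training split could be streamed and grouped by product with the Hugging Face `datasets` library. The repository ID is taken from this page; the split name and loading configuration are assumptions and may need adjusting to the actual card.

```python
# Minimal sketch, assuming the dataset is exposed as "Slep/LAION-RVS-Fashion"
# with a "train" split; adjust the repo ID / split names if they differ.
from collections import defaultdict

from datasets import load_dataset

# Stream the split to avoid downloading millions of rows up front.
train = load_dataset("Slep/LAION-RVS-Fashion", split="train", streaming=True)

# Group the first rows by PRODUCT_ID to recover simple/complex image groups
# (see the Attributes section below for the field names).
products = defaultdict(list)
for i, row in enumerate(train):
    products[row["PRODUCT_ID"]].append(row["TYPE"])
    if i >= 999:
        break

print(f"Collected {len(products)} (partial) product groups from 1,000 rows")
```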

## Samples

Example categories and BLIP2 captions from the sample images:

| Category   | BLIP2 Caption                                       |
|------------|-----------------------------------------------------|
| Neck       | a scarf with multi-coloured stripes                 |
| Lower Body | stella pants - dark suede                           |
| Feet       | neon green patent leather heels with studs          |
| Bags       | the burberry small leather bag is brown and leather |

## Attributes

- `URL`, `WIDTH`, `HEIGHT`, `punsafe`, `pwatermark`, `language`: original LAION fields. Please refer to their repository.
- `TEXT`: text originally associated with the image.
- `ENG_TEXT`: translated version of `TEXT` for MULTI/NOLANG, copy of `TEXT` for EN.
- `TYPE`: `SIMPLE` (isolated products), `COMPLEX` (scenes), `PARTIAL_COMPLEX` (zoomed-in scenes).
- `PRODUCT_ID`: product identifier, used to group together images depicting the same product.
- `INDEX_SRC`: ID of the parquet file originally storing this image.
- `CATEGORY`: product category (Bags, Feet, Hands, Head, Lower Body, Neck, Outwear, Upper Body, Waist, Whole Body), plus NonClothing for some distractors.
- `blip2_caption1`, `blip2_caption2`: BLIP2-FlanT5XL-generated captions.
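
The sketch below illustrates one way these attributes could be used to build simple/complex training pairs from the metadata with pandas. The parquet filename and the `punsafe`/`pwatermark` thresholds are placeholders, not values prescribed by the dataset.

```python
# Minimal sketch, assuming the metadata has been downloaded as a local parquet
# file; "lrvsf_train.parquet" is a placeholder name, not the actual filename.
import pandas as pd

df = pd.read_parquet("lrvsf_train.parquet")

# Keep isolated product shots of bags, filtering with the original LAION scores
# (0.5 is an arbitrary example threshold, not a recommended value).
simple_bags = df[
    (df["TYPE"] == "SIMPLE")
    & (df["CATEGORY"] == "Bags")
    & (df["punsafe"] < 0.5)
    & (df["pwatermark"] < 0.5)
]

# Pair each isolated shot with the scenes depicting the same product.
scenes = df[df["TYPE"].isin(["COMPLEX", "PARTIAL_COMPLEX"])]
pairs = simple_bags.merge(scenes, on="PRODUCT_ID", suffixes=("_simple", "_complex"))
print(f"{len(pairs)} simple/complex pairs for the Bags category")
```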

We also release `bootstrap_IDs.pkl`, the file used to generate the bootstrapped results of the paper. `test_subsets` contains product IDs, while `dist_{N}_subsets` contain row indices.
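
A minimal sketch of reading `bootstrap_IDs.pkl`, assuming it unpickles to a dictionary keyed as described above; the exact structure and the available values of N may differ.

```python
# Minimal sketch, assuming bootstrap_IDs.pkl unpickles to a dict with the keys
# described above; "dist_10000_subsets" uses a hypothetical value of N.
import pickle

with open("bootstrap_IDs.pkl", "rb") as f:
    bootstrap = pickle.load(f)

test_subsets = bootstrap["test_subsets"]        # lists of product IDs
dist_subsets = bootstrap["dist_10000_subsets"]  # lists of row indices

print(f"{len(test_subsets)} bootstrapped test subsets")
```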


## Citing the dataset

To cite our work, please use the following BibTeX entry:

```bibtex
@article{lepage2023lrvsf,
  title={LRVS-Fashion: Extending Visual Search with Referring Instructions},
  author={Lepage, Simon and Mary, Jérémie and Picard, David},
  journal={arXiv:2306.02928},
  year={2023}
}
```