---
task_categories:
  - image-to-image
size_categories:
  - 10K<n<100K
---

This dataset was constructed for the paper *Towards Intelligent Design: A Self-Driven Framework for Collocated Clothing Synthesis Leveraging Fashion Styles and Textures*. You need to fill in the form to get the password for unzipping the files.

## Details

This dataset was constructed for the self-supervised collocated clothing synthesis (CCS) task. It encompasses 30,876 images, covering both upper and lower clothing items. We partition the dataset into training and test sets at a ratio of 4:1 for both the 'upper → lower' and 'lower → upper' settings. Within this dataset, certain fashion items recur. To ensure non-overlap between the training and test sets, we employ a graph-based method together with similarity comparison using learned perceptual image patch similarity (LPIPS). This guarantees that the source-domain images in any transformation direction do not intersect; for instance, in the 'upper → lower' direction, there is no intersection between the upper clothing images in the training set and those in the test set. All images in the dataset have a resolution of 256 × 256 pixels.
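As a minimal sketch (not part of the official release), the snippet below shows how one might use the `lpips` package to check that two images from different splits are not near-duplicates, in the spirit of the LPIPS-based comparison described above. The directory layout and file names are hypothetical.

```python
import torch
import lpips                      # pip install lpips
from PIL import Image
from torchvision import transforms

# LPIPS expects RGB tensors in [-1, 1] with shape (N, 3, H, W).
to_tensor = transforms.Compose([
    transforms.Resize((256, 256)),                        # dataset images are 256 x 256
    transforms.ToTensor(),                                # scales to [0, 1]
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),  # maps to [-1, 1]
])

def load(path: str) -> torch.Tensor:
    return to_tensor(Image.open(path).convert("RGB")).unsqueeze(0)

loss_fn = lpips.LPIPS(net="alex")  # AlexNet backbone, a common default

# Hypothetical paths into the 'upper -> lower' split.
train_upper = load("train/upper/000001.jpg")
test_upper = load("test/upper/000042.jpg")

with torch.no_grad():
    distance = loss_fn(train_upper, test_upper).item()

# A very small LPIPS distance suggests the two upper-clothing images are
# (near-)identical and should not appear in both the training and test sets.
print(f"LPIPS distance: {distance:.4f}")
```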

## Citation

If you use this dataset in your work, please cite it as:

```bibtex
@inproceedings{dong2024towards,
  title={Towards Intelligent Design: A Self-Driven Framework for Collocated Clothing Synthesis Leveraging Fashion Styles and Textures},
  author={Dong, Minglong and Zhou, Dongliang and Ma, Jianghong and Zhang, Haijun},
  booktitle={ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={3725--3729},
  year={2024},
  organization={IEEE}
}
```