---
dataset_info:
  features:
    - name: image
      dtype: image
    - name: text
      dtype: string
  splits:
    - name: test
      num_bytes: 1024849819
      num_examples: 10000
  download_size: 1018358664
  dataset_size: 1024849819
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
license: mit
language:
  - en
tags:
  - art
size_categories:
  - 10K<n<100K
task_categories:
  - visual-question-answering
  - question-answering
  - text-to-image
---

## Dataset Description

The Products-10k BLIP Captions dataset consists of 10,000 product images paired with automatically generated captions. The captions were produced with the BLIP (Bootstrapping Language-Image Pre-training) model. The dataset is intended to support image captioning, visual recognition, and product classification tasks.

## Dataset Summary

- Dataset Name: Products-10k
- Captioning Model: Salesforce/blip-image-captioning-large (a generation sketch follows this list)
- Number of Images: 10,000
- Image Formats: JPEG, PNG
- Captioning Prompt: "Photography of"
- Source: the images are drawn from a wide range of product categories
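
For reference, the following is a minimal sketch of how captions in this style can be produced with the `transformers` implementation of BLIP. The file name `product.jpg` is a placeholder, and generation settings such as `max_new_tokens` are assumptions, not the exact configuration used to build this dataset:

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Load the captioning model named in the summary above
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large")

# "product.jpg" is a placeholder path; images are converted to RGB as in this dataset
image = Image.open("product.jpg").convert("RGB")

# Conditional captioning: the prompt "Photography of" seeds the generated text
inputs = processor(image, "Photography of", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40)  # max_new_tokens is an assumed setting
print(processor.decode(output_ids[0], skip_special_tokens=True))
```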

## Dataset Structure

The dataset is structured as follows:

- `image`: contains the product images in RGB format.
- `text`: contains the generated caption for each product image.
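
To confirm this schema without pulling the full archive, the `datasets` builder metadata can be inspected directly; a small sketch (the expectation in the comment is paraphrased from the metadata above):

```python
from datasets import load_dataset_builder

# Fetch only the dataset metadata, not the ~1 GB of image data
builder = load_dataset_builder("VikramSingh178/Products-10k-BLIP-captions")

# Expect an Image feature for "image" and a string Value for "text"
print(builder.info.features)
```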

## Usage

You can load and use this dataset with the Hugging Face `datasets` library as follows:

```python
from datasets import load_dataset

dataset = load_dataset("VikramSingh178/Products-10k-BLIP-captions", split="test")

# Display an example
example = dataset[0]
image = example["image"]    # PIL image in RGB
caption = example["text"]   # BLIP-generated caption
image.show()
print("Caption:", caption)
```

## Citation

If you use this dataset, please cite the original Products-10K paper:

```bibtex
@article{bai2020products10k,
  author  = {Yalong Bai and Yuxiang Chen and Wei Yu and Linfang Wang and Wei Zhang},
  title   = {Products-10K: A Large-scale Product Recognition Dataset},
  journal = {arXiv preprint arXiv:2008.10545},
  year    = {2020},
  url     = {https://arxiv.org/abs/2008.10545}
}
```