---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: test
    num_bytes: 1024849819
    num_examples: 10000
  download_size: 1018358664
  dataset_size: 1024849819
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
license: mit
language:
- en
tags:
- art
size_categories:
- 1K<n<10K
task_categories:
- visual-question-answering
- question-answering
- text-to-image
---

## Dataset Description

The **Products-10k BLIP Captions** dataset consists of 10,000 images of various products, each paired with an automatically generated caption. The captions were produced with the BLIP (Bootstrapping Language-Image Pre-training) model. The dataset is intended to support tasks such as image captioning, visual recognition, and product classification.

## Dataset Summary

- **Dataset Name**: Products-10k
- **Captioning Model**: Salesforce/blip-image-captioning-large (see the sketch below for how such captions can be reproduced)
- **Number of Images**: 10,000
- **Image Formats**: JPEG, PNG
- **Captioning Prompt**: "Photography of"
- **Source**: Images drawn from the Products-10K product recognition dataset, spanning a variety of product categories.
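
The generation script is not included with this card. As a minimal sketch, captions in this style can be reproduced with the model and prompt listed above via the Hugging Face `transformers` API (the input file name and the generation settings such as `max_new_tokens` are assumptions, not the dataset author's exact configuration):

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Load the captioning model named in the summary above.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large")

image = Image.open("product.jpg").convert("RGB")  # hypothetical input file

# Conditional captioning: the prompt "Photography of" seeds the caption.
inputs = processor(image, "Photography of", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)  # generation settings are an assumption
print(processor.decode(out[0], skip_special_tokens=True))
```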

## Dataset Structure

The dataset is structured as follows:

- **image**: Contains the product images in RGB format.
- **text**: Contains the generated captions for each product image.
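
The schema and split size can be verified without downloading the data, using the standard `datasets` metadata API (a quick sketch):

```python
from datasets import load_dataset_builder

# Fetch only the dataset metadata, not the ~1 GB of image data.
builder = load_dataset_builder("VikramSingh178/Products-10k-BLIP-captions")
print(builder.info.features)                     # image / text columns
print(builder.info.splits["test"].num_examples)  # 10000
```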


## Usage

You can load and use this dataset with the Hugging Face `datasets` library as follows:

```python
from datasets import load_dataset

dataset = load_dataset("VikramSingh178/Products-10k-BLIP-captions", split="test")

# Display an example
example = dataset[0]
image = example["image"]
caption = example["text"]
image.show()
print("Caption:", caption)
```
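
Since the full download is roughly 1 GB (`download_size` in the metadata above), you can also stream examples instead of materializing the whole split on disk; this uses the standard `datasets` streaming mode:

```python
from datasets import load_dataset

# Iterate over examples without downloading the full archive up front.
stream = load_dataset("VikramSingh178/Products-10k-BLIP-captions", split="test", streaming=True)
for example in stream:
    print(example["text"])  # inspect the first caption
    break
```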



## Citation

The images originate from the Products-10K dataset; if you use this dataset, please cite the original paper:

```bibtex
@article{bai2020products10k,
  author  = {Yalong Bai and Yuxiang Chen and Wei Yu and Linfang Wang and Wei Zhang},
  title   = {Products-10K: A Large-scale Product Recognition Dataset},
  journal = {arXiv preprint arXiv:2008.10545},
  year    = {2020},
  url     = {https://arxiv.org/abs/2008.10545}
}
```