Update README.md

README.md
size_categories:
- 1K<n<10K
---

---
datasets:
- VikramSingh178/Products-10k_sample-BLIP-captions
---

## Dataset Description

The **Products-10k** dataset consists of images of various products paired with automatically generated captions. The captions were generated with the BLIP (Bootstrapping Language-Image Pre-training) model. The dataset is intended to support image captioning, visual recognition, and product classification tasks.

## Dataset Summary

- **Dataset Name**: Products-10k
- **Captioning Model**: Salesforce/blip-image-captioning-large
- **Number of Images**: 10,000
- **Image Formats**: JPEG, PNG
- **Captioning Prompt**: "Photography of" (see the sketch below)
- **Source**: The images are sourced from a variety of product categories.
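
The captioning setup above can be reproduced with the `transformers` library. The following is a minimal sketch, not the exact generation script used to build this dataset; `product.jpg` is a hypothetical local image path, and the `transformers` and `Pillow` packages are assumed to be installed:

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Load the same checkpoint listed in the Dataset Summary
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large")

# "product.jpg" is a placeholder path for any product image
image = Image.open("product.jpg").convert("RGB")

# Conditional captioning: the prompt "Photography of" seeds the caption
inputs = processor(image, text="Photography of", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```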

## Dataset Structure

The dataset is structured as follows:

- **image**: Contains the product images in RGB format.
- **text**: Contains the generated caption for each product image (see the schema check below).
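
To confirm this schema after loading, a quick check with the `datasets` library; the comments show the output one would expect given the structure described above:

```python
from datasets import load_dataset

ds = load_dataset("VikramSingh178/Products-10k_sample-BLIP-captions", split="train")

# Inspect the column types declared by the dataset
print(ds.features)  # expected: {'image': Image(...), 'text': Value('string')}
print(ds.num_rows)  # number of examples in the train split
```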

### Example Data

Here is an example of the data in this dataset:

| Image             | Caption                     |
|-------------------|-----------------------------|
| *(product image)* | Photography of a red shirt. |

## Usage

You can load and use this dataset with the Hugging Face `datasets` library as follows:

```python
from datasets import load_dataset

dataset = load_dataset("VikramSingh178/Products-10k_sample-BLIP-captions", split="train")

# Display an example
example = dataset[0]
image = example["image"]   # a PIL image
caption = example["text"]  # the generated caption
image.show()
print("Caption:", caption)
```
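
If you only want to look at a few examples without downloading the full dataset first, the `datasets` library also supports streaming; a minimal sketch:

```python
from datasets import load_dataset

# Iterate over examples as they are streamed from the Hub
stream = load_dataset(
    "VikramSingh178/Products-10k_sample-BLIP-captions",
    split="train",
    streaming=True,
)
for example in stream:
    print(example["text"])
    break  # stop after the first example
```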