---
task_categories:
- visual-question-answering
language:
- en
pretty_name: YTTB-VQA
size_categories:
- n<1K
license: cc-by-nc-4.0
---
# Dataset Card for YTTB-VQA

## Dataset Description

- **Homepage:** https://gordonhu608.github.io/bliva/
- **Repository:** https://github.com/mlpc-ucsd/BLIVA.git
- **Paper:** 
- **Point of Contact:** w1hu@ucsd.edu

### Dataset Summary

The YTTB-VQA dataset is a collection of 400 YouTube thumbnail question-answer pairs designed to evaluate the ability of models to perceive text embedded in images. It spans 11 categories, including technology, sports, entertainment, food, news, history, music, nature, cars, and education.

### Supported Tasks and Leaderboards

This dataset supports tasks such as visual question answering and image captioning.

### License
CC-By-NC-4.0

### Languages

The language of the data is primarily English.

## Getting Started

### Creating the dataset

Run the following command to download the images and create the dataset:

```sh
python3 create_dataset.py
```

You will find the images in `images_new` and the dataset in `youtube_new.json`.
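Once created, the JSON file can be inspected with pandas. The record below is a made-up example (the `video_id` and values are placeholders); the field names follow the Data Fields section of this card. A minimal sketch:

```python
import io
import json
import pandas as pd

# Hypothetical record in the shape of youtube_new.json; the video_id,
# question, and answer here are invented for illustration only.
sample_json = json.dumps([
    {
        "video_id": "abc123XYZ_0",
        "question": "What event is advertised in the thumbnail?",
        "video_classes": "sports",
        "answers": "the 2022 World Cup final",
        "video_link": "https://www.youtube.com/watch?v=abc123XYZ_0",
    }
])

# After running create_dataset.py, replace io.StringIO(sample_json)
# with the path "youtube_new.json" to load the real data.
df = pd.read_json(io.StringIO(sample_json))
print(df.columns.tolist())
```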

## Dataset Structure

### Data Instances

Each data instance pairs a YouTube thumbnail entry with a human-generated question; the question was submitted to BLIVA, and the corresponding ground-truth answer is recorded in the answers field.

### Data Fields

**video_id:** a unique string identifying a specific YouTube thumbnail image.<br>
**question:** a human-generated question about the thumbnail.<br>
**video_classes:** the category assigned to the YouTube thumbnail image.<br>
**answers:** the ground-truth answer to the question about the YouTube thumbnail image.<br>
**video_link:** the URL of the corresponding YouTube video.

### Data Splits

The data are unsplit.

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

During data collection, we randomly selected YouTube videos with text-rich thumbnails from different categories. We recorded the unique video ID for each video and obtained the high-resolution thumbnail from the URL `http://img.youtube.com/vi/YouTube-Video-ID/maxresdefault.jpg`.
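The URL construction above can be sketched as a small helper; the video ID passed in below is a made-up placeholder:

```python
# Build the high-resolution thumbnail URL for a given YouTube video ID,
# following the maxresdefault pattern described above.
def thumbnail_url(video_id: str) -> str:
    return f"http://img.youtube.com/vi/{video_id}/maxresdefault.jpg"

# Hypothetical video ID, for illustration only.
print(thumbnail_url("abc123XYZ_0"))
```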

### Annotations

#### Annotation process

We created the annotation file in JSON format with the following fields: "video_id," "question," "video_classes," "answers," and "video_link."

## Considerations for Using the Data

### Discussion of Biases

Although our dataset spans 11 categories, they are not evenly represented: for example, 18% of the dataset pertains to education, while only 2% is dedicated to news.
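One way to check the category balance yourself is to count the `video_classes` values. The counts below are invented stand-ins (18% and 2% of 400 records would be 72 and 8 instances, respectively); with the real data you would read each record's `video_classes` field from `youtube_new.json`:

```python
from collections import Counter

# Hypothetical category labels standing in for the video_classes field;
# the counts mirror the 18% education / 2% news split mentioned above.
classes = ["education"] * 72 + ["news"] * 8

counts = Counter(classes)
total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category}: {n} instances")
```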

### Acknowledgments

The YouTube thumbnails dataset is intended purely for academic research, not for commercial use. If you are a content creator and find your thumbnail image used inappropriately, please contact us directly at w1hu@ucsd.edu and we will remove the image immediately.