---
license: apache-2.0
task_categories:
- text-generation
- summarization
language:
- en
tags:
- Pretraining
- Interleaved
- Reasoning
size_categories:
- 1M<n<10M
---

# Multimodal-Textbook
<img src="./src/logo.png" alt="Image" style="width: 900px;">  

[![arXiv](https://img.shields.io/badge/arXiv-Paper-<COLOR>.svg)](https://arxiv.org/abs/2306.07209) [![Project](https://img.shields.io/badge/Project-Website-blue.svg)](https://multi-modal-self-instruct.github.io) [![GitHub](https://img.shields.io/badge/GitHub-Code-181717?logo=github)](https://github.com/DAMO-NLP-SG/multimodal_textbook/tree/master)




## Overview

This dataset accompanies ["2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining"](https://arxiv.org/pdf/2306.07209).
- It is a **pre-training corpus in interleaved image-text format**. Specifically, our multimodal textbook contains **6.5M keyframes** extracted from instructional videos, interleaved with 0.8B **ASR texts**.
- All images and text are extracted from online instructional videos (22,000 class hours) covering multiple fundamental subjects, e.g., mathematics, physics, and chemistry.
- Our textbook corpus provides more coherent context and richer knowledge for image-text alignment.
- Our code can be found at [Multimodal-Textbook](https://github.com/DAMO-NLP-SG/multimodal_textbook/tree/master).


  


<img src="./src/page_fig.png" alt="Image" style="width: 900px;">  




## Visualize Our Textbook   

Due to the large size of the full dataset (11GB of JSON files and 0.7TB of images), we sampled 100 samples with their corresponding images and stored them in the `example_data` folder: `./example_data/textbook_sample_100.json`.

Each sample is stored in dict format as follows:
```
[
{'images':  [keyframe1, None, keyframe2, None, keyframe3, None,.....],
 'texts':   [None,      asr1,  None,      asr2, None,     asr3,.....],
 'text_ocr_list':  [None, asr1+ocr1,  None, asr2+ocr2, None, asr3+ocr3,.....],
 'metadata': [...],
 'image_num': 15,
 'text_num': 425,
 'token_num': 9065},
 ....
]
```
Just like [OBELICS](https://github.com/huggingface/OBELICS), the "images" and "texts" lists are interleaved:
- The "images" list contains multiple keyframes and "None" values, where "None" means the current position holds text.
- The "texts" list contains multiple ASR texts; a "None" in the "texts" list marks the position of an image.
- "text_ocr_list": in addition to the ASR text, "text_ocr_list" also includes OCR text.
- "image_num", "text_num", "token_num": the number of images, the number of ASR text tokens, and the estimated total number of tokens in this sample, respectively.


To browse the dataset more conveniently, we provide a Jupyter notebook: `./llava/dataset/show_interleaved_dataset.ipynb`

```
cd llava/dataset
jupyter notebook show_interleaved_dataset.ipynb
```
In the notebook, you can see keyframes interleaved with text.




## Using Our Dataset
We provide the JSON file and the corresponding image folder for the textbook:
- JSON file: `multimodal_textbook.json` (610k samples, ~11GB)
- Image folder: `dataset_images_interval_7.tar.gz` (6.5M images, ~700GB)

Each sample contains approximately 10.7 images and 1,927 text tokens. After you download and unzip the image folder, you need to replace the image-path prefix (`/mnt/workspace/zwq_data/interleaved_dataset/`) in the JSON file with your own image folder path, e.g.:

```
"images": [
    "/mnt/workspace/zwq_data/interleaved_dataset/dataset_images_interval_7/-1uixJ1V-As/-1uixJ1V-As@0.0_10.0#1.jpg",
    null,
    "/mnt/workspace/zwq_data/interleaved_dataset/dataset_images_interval_7/-1uixJ1V-As/-1uixJ1V-As@10.0_55.0#6.jpg",
    null,
    ......
],
"texts": [
    null,
    " Hi everyone, and welcome to another lesson in our Eureka Tips for computers series.",
    null,
    " I'm actually trying to use the number line to find the sum for each. So to start I'm going to use the paint tool to demonstrate. Let's use the number line for four plus five. We're going to start at four then we're going to count up five. One two three four five. That equals nine. Now let's do three plus six for the next one.",
    ....
],
```
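
To apply this change in bulk, a minimal Python sketch along these lines could work (the `NEW_PREFIX` value and the output filename are placeholders, not part of the released dataset):

```
import json

OLD_PREFIX = "/mnt/workspace/zwq_data/interleaved_dataset/"
NEW_PREFIX = "/path/to/your/interleaved_dataset/"  # placeholder: your own unzipped folder

with open("multimodal_textbook.json", "r") as f:
    samples = json.load(f)

# Rewrite the prefix of every non-null image path.
for sample in samples:
    sample["images"] = [
        path.replace(OLD_PREFIX, NEW_PREFIX, 1) if path is not None else None
        for path in sample["images"]
    ]

# Hypothetical output filename; overwrite the original if you prefer.
with open("multimodal_textbook_local.json", "w") as f:
    json.dump(samples, f)
```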


### Naming Format for Keyframes

Each keyframe is named according to the rule:
`video id@start-time_end-time#keyframe-number.jpg`.
For example, the path and file name of a keyframe might be
`-1uixJ1V-As/-1uixJ1V-As@10.0_55.0#2.jpg`.

This means the image is extracted from the video `-1uixJ1V-As`; more specifically, it is the second keyframe (#2) in the video clip from 10.0 to 55.0 seconds. You can access the original video at [https://www.youtube.com/watch?v=-1uixJ1V-As](https://www.youtube.com/watch?v=-1uixJ1V-As).
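
If you need these fields programmatically, a small sketch along these lines should parse the naming scheme (the regular expression is ours, inferred from the rule above):

```
import re

# video_id@start_end#index.jpg, e.g. "-1uixJ1V-As@10.0_55.0#2.jpg"
PATTERN = re.compile(
    r"^(?P<video_id>.+)@(?P<start>[\d.]+)_(?P<end>[\d.]+)#(?P<idx>\d+)\.jpg$"
)

def parse_keyframe(filename: str) -> dict:
    m = PATTERN.match(filename)
    if m is None:
        raise ValueError(f"unexpected keyframe name: {filename}")
    return {
        "video_id": m.group("video_id"),
        "start_sec": float(m.group("start")),
        "end_sec": float(m.group("end")),
        "keyframe_index": int(m.group("idx")),
        "url": f"https://www.youtube.com/watch?v={m.group('video_id')}",
    }

print(parse_keyframe("-1uixJ1V-As@10.0_55.0#2.jpg"))
```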