---
license: cc
task_categories:
- image-to-text
task_ids:
- image-captioning
language:
- vi
size_categories:
- 100K<n<1M
pretty_name: Google WIT Vietnamese
---

# Google WIT Vietnamese

This repository contains data extracted from [Google WIT](https://github.com/google-research-datasets/wit/blob/main/DATA.md). All extracted data is for the Vietnamese language.

Given a data point `x` in the original dataset, with keys following the original `field_name` schema, the filtering criterion is
```python
criteria = lambda x: x.get("language", "") == "vi" and x.get("caption_reference_description", "")
```
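As a sketch, the same filter can be applied while streaming one of the original `.tsv.gz` files using only the standard library (`filter_vietnamese` is a hypothetical helper name; column names assume the original WIT schema):

```python
import csv
import gzip

# Same criterion as above: Vietnamese rows with a non-empty
# caption_reference_description.
criteria = lambda x: x.get("language", "") == "vi" and x.get("caption_reference_description", "")

def filter_vietnamese(tsv_gz_path):
    """Yield the rows of an original WIT .tsv.gz file that pass the filter."""
    with gzip.open(tsv_gz_path, "rt", encoding="utf-8", newline="") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            if criteria(row):
                yield row
```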

## Text-related details

All `.tsv.gz` files follow the original data files in file naming and structure.

### Train split

`wit_v1.train.*.tsv.gz`

Number of train rows in each file (excluding the header):
```
17690
17756
17810
17724
17619
17494
17624
17696
17777
17562
```
Total 176752

### Validation split

`wit_v1.val.*.tsv.gz`

Number of validation rows in each file (excluding the header):
```
292
273
275
320
306
```
Total 1466

### Test split

`wit_v1.test.*.tsv.gz`

Number of test rows in each file (excluding the header):
```
215
202
201
201
229
```
Total 1048

## Image-related details

### Image URL only

`*.image_url_list.txt` files are plain lists of the image URLs from the corresponding `*.tsv.gz` files.

Number of image URLs in each file (train, val, test, all):
```
157281
1271
900
159452
```
Google Research has ensured that the splits do not share any images.

### Downloaded Images

⚠ Please, for the love of the gods, read this section carefully.

`all.index.fmt_id.image_url_list.tsv` has three columns, left to right and without a header: `index`, `fmt_id`, `image_url`. It maps each `image_url` (from `all.image_url_list.txt`) to an `fmt_id`, and was used for downloading the images.

`fmt_id` is:
- used to name images (with proper image extensions) in `images/`
- `index` zero-padded to 6 digits
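For example, a hypothetical helper (not part of this repo) that derives `fmt_id` from `index`:

```python
def to_fmt_id(index: int) -> str:
    """Zero-pad the row index to 6 digits, e.g. 42 -> "000042"."""
    return f"{index:06d}"
```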

Downloading took less than 36 hours with:
- a 90 Mbps connection
- an Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz
- no asynchronous downloading

`fail.index.fmt_id.status.image_url_list.tsv` has four columns, left to right and without a header: `index`, `fmt_id`, `status`, `image_url`. It tracks the image URLs that were inaccessible during downloading.
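The download script itself is not included in this repo; a minimal sketch of the loop (hypothetical helper names `parse_mapping` and `download_one`; columns as described above) could look like:

```python
import csv
import os
import urllib.error
import urllib.request

def parse_mapping(tsv_path):
    """Read (index, fmt_id, image_url) rows from the headerless mapping tsv."""
    with open(tsv_path, newline="", encoding="utf-8") as f:
        return [tuple(row) for row in csv.reader(f, delimiter="\t")]

def download_one(fmt_id, url, out_dir="images"):
    """Fetch one image; return the HTTP status code on failure, None on success."""
    ext = os.path.splitext(url)[1] or ".jpg"   # fall back if the URL has no extension
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            data = resp.read()
    except urllib.error.HTTPError as e:
        return e.code                          # e.g. 404 -> recorded in the fail tsv
    with open(os.path.join(out_dir, fmt_id + ext), "wb") as f:
        f.write(data)
    return None
```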

3,367 image URLs returned HTTP 404 (recorded in `status`). In other words, about 97.89% of the images (156,085 of 159,452) were downloaded successfully.

The `images/` folder takes:
- 215 GB of disk space (uncompressed)
- 209 GB (compressed)

We used Pillow to open every downloaded image and verify that it is usable, and logged all faulty files in `corrupted_image_list.json`. There are fewer than 70 such files.
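The validation script is likewise not published; a minimal sketch of the per-image check with Pillow (hypothetical helper name `check_image`) might be:

```python
from PIL import Image

def check_image(path):
    """Return None if the image decodes cleanly, else the error message."""
    try:
        with Image.open(path) as im:
            im.load()           # force a full decode, not just the header read
        return None
    except Exception as e:      # truncated files, decompression-bomb limit, etc.
        return str(e)
```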

In `corrupted_image_list.json`, each item has the keys `file_name` and `error`. `file_name` is the `fmt_id` with its extension but without the `images/` prefix. The errors fall into two categories:
- the file exceeds Pillow's default pixel limit
- the file is truncated

To actually load those files, the following code changes Pillow's behavior:
```python
from PIL import Image, ImageFile

# For very big image files
Image.MAX_IMAGE_PIXELS = None

# For truncated image files
ImageFile.LOAD_TRUNCATED_IMAGES = True
```

To zip the `images/` folder:
```bash
zip -r images.zip images/
zip images.zip --out spanned_images.zip -s 40g
```
https://superuser.com/questions/336219/how-do-i-split-a-zip-file-into-multiple-segments

To unzip the `spanned_images.*` files:
```bash
zip -s 0 spanned_images.zip --out images.zip
unzip images.zip
```
https://unix.stackexchange.com/questions/40480/how-to-unzip-a-multipart-spanned-zip-on-linux