---
license: cc
task_categories:
  - image-to-text
task_ids:
  - image-captioning
language:
  - vi
size_categories:
  - 100M<n<1B
pretty_name: Google WIT Vietnamese
---

# Google WIT Vietnamese

This data repo contains data extracted from Google WIT. All of the extracted data is for the Vietnamese language.

Given a data point `x` in the original (OG) dataset, with keys following the OG `field_name` schema, the filtering criterion is:

```python
criteria = lambda x: x.get("language", "") == "vi" and x.get("caption_reference_description", "")
```
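As an illustration, here is a minimal sketch of applying this criterion to one OG shard with pandas. The shard file name is a placeholder, and the `fillna` call (to turn missing captions into empty strings) is an assumption:

```python
import pandas as pd

# Read one OG shard (tab-separated, gzip-compressed); the file name is a placeholder.
df = pd.read_csv("wit_v1.train.all-00000-of-00010.tsv.gz", sep="\t").fillna("")

criteria = lambda x: x.get("language", "") == "vi" and x.get("caption_reference_description", "")

# Keep only Vietnamese rows with a non-empty caption_reference_description.
df_vi = df[df.apply(lambda row: bool(criteria(row)), axis=1)]
print(len(df_vi))
```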

## Text-related details

All `.tsv.gz` files follow the OG data files in terms of file names and file structure.

### Train split

`wit_v1.train.*.tsv.gz`

Row count of each train file (not including the header):

```
17690
17756
17810
17724
17619
17494
17624
17696
17777
17562
```

Total: 176752

### Validation split

`wit_v1.val.*.tsv.gz`

Row count of each val file (not including the header):

```
292
273
275
320
306
```

Total: 1466

### Test split

`wit_v1.test.*.tsv.gz`

Row count of each test file (not including the header):

```
215
202
201
201
229
```

Total: 1048

## Image-related details

### Image URL only

The `*.image_url_list.txt` files are simply lists of the image URLs appearing in the corresponding `*.tsv.gz` files (see the sketch after the counts below).

Number of image URLs in each file:

- train: 157281
- val: 1271
- test: 900
- all: 159452
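A minimal sketch of how such a list could be rebuilt from the shards. The glob pattern, the pandas approach, and the deduplication are assumptions; dedup would explain why the train list has 157281 URLs while the train split has 176752 rows:

```python
import glob

import pandas as pd

urls = set()  # assumed dedup: several rows can share one image URL
for path in sorted(glob.glob("wit_v1.train.*.tsv.gz")):
    shard = pd.read_csv(path, sep="\t", usecols=["image_url"])
    urls.update(shard["image_url"].dropna())

with open("train.image_url_list.txt", "w") as f:
    f.write("\n".join(sorted(urls)) + "\n")
```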

Google Research has made sure that the splits do not share any identical images.

### Downloaded Images

⚠ Please for the love of the gods, read this section carefully.

In `all.index.fmt_id.image_url_list.tsv`, from left to right, without headers, the columns are `index`, `fmt_id`, `image_url`. The file maps each `image_url` (in `all.image_url_list.txt`) to a `fmt_id`; it is used for downloading images.

`fmt_id` is:

- used to name images (with proper image extensions) in `images/`
- the `index` zero-padded to 6 digits
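As an illustration (assuming plain zero-padding), index `42` maps to fmt_id `000042`, so a JPEG at that index would be saved as `images/000042.jpg`:

```python
def to_fmt_id(index: int) -> str:
    # Zero-pad the index to 6 digits, e.g. 42 -> "000042".
    return f"{index:06d}"
```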

Downloading took less than 36 hours with:

- a 90 Mbps connection
- an Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz
- no asynchronous requests

In `fail.index.fmt_id.status.image_url_list.tsv`, from left to right, without headers, the columns are `index`, `fmt_id`, `status`, `image_url`. The file tracks image URLs that were inaccessible during downloading.
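A minimal sketch of a download loop that would produce both files. The use of `requests`, the timeout, the `-1` status for connection-level failures, and the extension guessing are all assumptions, not the actual script:

```python
import mimetypes
import os

import requests

os.makedirs("images", exist_ok=True)

with open("all.index.fmt_id.image_url_list.tsv") as src, \
        open("fail.index.fmt_id.status.image_url_list.tsv", "w") as fail:
    for line in src:
        index, fmt_id, image_url = line.rstrip("\n").split("\t")
        try:
            resp = requests.get(image_url, timeout=30)
        except requests.RequestException:
            # Assumed convention for failures with no HTTP status.
            fail.write(f"{index}\t{fmt_id}\t-1\t{image_url}\n")
            continue
        if resp.status_code != 200:
            fail.write(f"{index}\t{fmt_id}\t{resp.status_code}\t{image_url}\n")
            continue
        # Pick an extension from the Content-Type header, defaulting to .jpg.
        mime = resp.headers.get("Content-Type", "").split(";")[0]
        ext = mimetypes.guess_extension(mime) or ".jpg"
        with open(os.path.join("images", f"{fmt_id}{ext}"), "wb") as out:
            out.write(resp.content)
```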

3367 image URLs returned a 404 status. In other words, we were able to download about 97.89% of the images.

The `images/` folder takes up:

- 215 GB uncompressed
- 209 GB compressed

We used Pillow to open every image and make sure the downloaded images are usable, and we logged all faulty files in `corrupted_image_list.json`. There are fewer than 70 such files.
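A minimal sketch of such a verification pass (the exact checks and error handling are assumptions):

```python
import json
from pathlib import Path

from PIL import Image

corrupted = []
for path in sorted(Path("images").iterdir()):
    try:
        with Image.open(path) as im:
            im.load()  # force a full decode to catch truncated files
    except Exception as err:
        corrupted.append({"file_name": path.name, "error": str(err)})

with open("corrupted_image_list.json", "w") as f:
    json.dump(corrupted, f, indent=2)
```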

In `corrupted_image_list.json`, each item has the keys `file_name` and `error`. `file_name` is the `fmt_id` plus its extension, without the `images/` prefix. The errors are of two kinds:

- files that exceed Pillow's default pixel limit
- files that are truncated

To actually load those files, the following code can be used to change Pillow's behavior:

```python
from PIL import Image, ImageFile

# For very big image files
Image.MAX_IMAGE_PIXELS = None

# For truncated image files
ImageFile.LOAD_TRUNCATED_IMAGES = True
```

To zip the `images/` folder:

```sh
zip -r images.zip images/
zip images.zip --out spanned_images.zip -s 40g
```

Reference: https://superuser.com/questions/336219/how-do-i-split-a-zip-file-into-multiple-segments

To unzip the `spanned_images.*` files:

```sh
zip -s 0 spanned_images.zip --out images.zip
unzip images.zip
```

Reference: https://unix.stackexchange.com/questions/40480/how-to-unzip-a-multipart-spanned-zip-on-linux