Dataset Card for TGIF

Dataset Summary

The Tumblr GIF (TGIF) dataset contains 100K animated GIFs and 120K sentences describing the visual content of those GIFs. The GIFs were collected from Tumblr, from randomly selected posts published between May and June of 2015; this release provides their URLs. The sentences were collected via crowdsourcing, with a carefully designed annotation interface that ensures a high-quality dataset. There is one sentence per animated GIF for the training and validation splits, and three sentences per GIF for the test split. The dataset is intended for evaluating animated GIF/video description techniques.
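As a quick arithmetic check of the headline numbers, the per-split GIF counts listed under Data Splits below, combined with the one-sentence (train/validation) versus three-sentence (test) policy described above, reproduce the ~100K GIF and ~120K sentence figures (the variable names here are illustrative):

```python
# GIF counts per split, as reported in the Data Splits table.
gifs = {"train": 80_000, "validation": 10_708, "test": 11_360}

# Sentences collected per GIF: one for train/validation, three for test.
sentences_per_gif = {"train": 1, "validation": 1, "test": 3}

total_gifs = sum(gifs.values())
total_sentences = sum(gifs[s] * sentences_per_gif[s] for s in gifs)

print(total_gifs)       # 102068 -> ~100K GIFs
print(total_sentences)  # 124788 -> ~120K sentences
```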


Languages

The captions in the dataset are in English.

Dataset Structure

Data Fields

Data Splits

            train    validation   test     Overall
# of GIFs   80,000   10,708       11,360   102,068


Annotations

Quoting the TGIF paper:
"We annotated animated GIFs with natural language descriptions using the crowdsourcing service CrowdFlower. We carefully designed our annotation task with various quality control mechanisms to ensure the sentences are both syntactically and semantically of high quality. A total of 931 workers participated in our annotation task. We allowed workers only from Australia, Canada, New Zealand, UK and USA in an effort to collect fluent descriptions from native English speakers. Figure 2 shows the instructions given to the workers. Each task showed 5 animated GIFs and asked the worker to describe each with one sentence. To promote language style diversity, each worker could rate no more than 800 images (0.7% of our corpus). We paid 0.02 USD per sentence; the entire crowdsourcing cost less than 4K USD. We provide details of our annotation task in the supplementary material."

Personal and Sensitive Information

Nothing specifically mentioned in the paper.

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Licensing Information

This dataset is provided to be used for approved non-commercial research purposes. No personally identifying information is available in this dataset.

Citation Information

@InProceedings{tgif-cvpr2016,
  author = {Li, Yuncheng and Song, Yale and Cao, Liangliang and Tetreault, Joel and Goldberg, Larry and Jaimes, Alejandro and Luo, Jiebo},
  title = "{TGIF: A New Dataset and Benchmark on Animated GIF Description}",
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2016}
}

Contributions

Thanks to @leot13 for adding this dataset.
