Dataset: bookcorpus


Dataset Card for "bookcorpus"

Table of Contents

Dataset Description

Dataset Summary

Books are a rich source of both fine-grained information (how a character, an object, or a scene looks) and high-level semantics (what someone is thinking or feeling, and how these states evolve through a story). This work aims to align books with their movie releases in order to provide rich descriptive explanations for visual content that go semantically far beyond the captions available in current datasets.

Supported Tasks

More Information Needed

Languages

More Information Needed

Dataset Structure

We show detailed information for the single configuration of the dataset, plain_text.

Data Instances

plain_text

  • Size of downloaded dataset files: 1124.87 MB
  • Size of the generated dataset: 4629.00 MB
  • Total amount of disk used: 5753.87 MB

An example of 'train' looks as follows.

{
    "text": "But I traded all my life for some lovin' and some gold"
}
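As a minimal sketch of what such an instance looks like in practice, the snippet below parses a raw JSON line mirroring the example above. The `raw` string is a hypothetical serialized record constructed for illustration; it assumes each instance is a flat JSON object whose only field is `text`.

```python
import json

# Hypothetical raw line mirroring the card's example 'train' instance;
# the single "text" field holds one line of book text.
raw = '{"text": "But I traded all my life for some lovin\' and some gold"}'
record = json.loads(raw)

assert set(record) == {"text"}  # the plain_text config exposes only a string field
print(record["text"])
```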

Data Fields

The data fields are the same among all splits.

plain_text

  • text: a string feature.

Data Splits Sample Size

name         train
plain_text   74004228
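A quick sanity check on the figures reported in this card: the snippet below assumes the card's "MB" means mebibytes (2**20 bytes) and derives the average amount of generated text per example from the sizes and split count above.

```python
# Back-of-envelope check using the figures reported in this card;
# assumes "MB" means mebibytes (2**20 bytes).
GENERATED_MB = 4629.00
NUM_EXAMPLES = 74_004_228

avg_bytes = GENERATED_MB * 2**20 / NUM_EXAMPLES
print(f"~{avg_bytes:.1f} bytes of generated text per example")

# The download and generated sizes should sum to the reported disk total.
assert abs(1124.87 + GENERATED_MB - 5753.87) < 1e-6
```

At roughly 65 bytes per example, each instance is a short line of text, consistent with the single-sentence example shown under Data Instances.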

Dataset Creation

Curation Rationale

More Information Needed

Source Data

More Information Needed

Annotations

More Information Needed

Personal and Sensitive Information

More Information Needed

Considerations for Using the Data

Social Impact of Dataset

More Information Needed

Discussion of Biases

More Information Needed

Other Known Limitations

More Information Needed

Additional Information

Dataset Curators

More Information Needed

Licensing Information

More Information Needed

Citation Information

@InProceedings{Zhu_2015_ICCV,
    title = {Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books},
    author = {Zhu, Yukun and Kiros, Ryan and Zemel, Rich and Salakhutdinov, Ruslan and Urtasun, Raquel and Torralba, Antonio and Fidler, Sanja},
    booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
    month = {December},
    year = {2015}
}
