

# Dataset Card for "reddit"

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

### Dataset Summary

This corpus contains preprocessed posts from Reddit (the Webis-TLDR-17 dataset). It consists of 3,848,330 posts with an average length of 270 words for the content and 28 words for the summary.

Features include the strings author, body, normalizedBody, content, summary, subreddit, and subreddit_id. The content field is used as the document and the summary field as its summary.
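As a minimal sketch of how these fields can be read with the `datasets` library (assuming the corpus is published on the Hugging Face Hub under the identifier `reddit` with a single `train` split):

```python
# Minimal sketch, not an official loading recipe: fetch the corpus with the
# `datasets` library and read the document/summary pair of one post.
# The dataset identifier "reddit" is an assumption about its Hub name.
from datasets import load_dataset

dataset = load_dataset("reddit", split="train")

example = dataset[0]
document = example["content"]  # the post body used as the input document
summary = example["summary"]   # the author-written TL;DR used as the target
print(document[:200])
print("TL;DR:", summary)
```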

### Supported Tasks

More Information Needed

### Languages

More Information Needed

## Dataset Structure

We show detailed information for the default configuration of the dataset.

### Data Instances

#### default

- **Size of downloaded dataset files:** 2996.31 MB
- **Size of the generated dataset:** 18063.11 MB
- **Total amount of disk used:** 21059.41 MB

An example of 'train' looks as follows.

```json
{
    "author": "me",
    "body": "<>",
    "content": "input document.",
    "id": "1",
    "normalizedBody": "",
    "subreddit": "machinelearning",
    "subreddit_id": "2",
    "summary": "output summary."
}
```

### Data Fields

The data fields are the same among all splits.

#### default

- `author`: a string feature.
- `body`: a string feature.
- `normalizedBody`: a string feature.
- `subreddit`: a string feature.
- `subreddit_id`: a string feature.
- `id`: a string feature.
- `content`: a string feature.
- `summary`: a string feature.
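For a summarization pipeline, only `content` and `summary` are typically needed. A hedged sketch of dropping the remaining columns (again assuming the `reddit` Hub identifier) might look like:

```python
# Sketch: keep only the two fields used for summarization and rename
# "content" to "document". The dataset identifier and the renaming
# convention are assumptions, not part of this card.
from datasets import load_dataset

ds = load_dataset("reddit", split="train")
ds = ds.remove_columns(
    [col for col in ds.column_names if col not in ("content", "summary")]
)
ds = ds.rename_column("content", "document")
print(ds.column_names)  # only the document/summary columns remain
```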

### Data Splits Sample Size

| name    |   train |
|:--------|--------:|
| default | 3848330 |
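Since only a `train` split is provided, a validation set has to be held out manually. One possible sketch, with an arbitrary split ratio and seed:

```python
# Sketch: carve a validation set out of the single "train" split.
# The 5% ratio and the seed are arbitrary example values.
from datasets import load_dataset

ds = load_dataset("reddit", split="train")
splits = ds.train_test_split(test_size=0.05, seed=42)
train_ds, validation_ds = splits["train"], splits["test"]
print(len(train_ds), len(validation_ds))
```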

## Dataset Creation

### Curation Rationale

More Information Needed

### Source Data

More Information Needed

### Annotations

More Information Needed

### Personal and Sensitive Information

More Information Needed

## Considerations for Using the Data

### Social Impact of Dataset

More Information Needed

### Discussion of Biases

More Information Needed

### Other Known Limitations

More Information Needed

## Additional Information

### Dataset Curators

More Information Needed

### Licensing Information

More Information Needed

### Citation Information


```bibtex
@inproceedings{volske-etal-2017-tl,
    title = "{TL};{DR}: Mining {R}eddit to Learn Automatic Summarization",
    author = {V{\"o}lske, Michael  and
      Potthast, Martin  and
      Syed, Shahbaz  and
      Stein, Benno},
    booktitle = "Proceedings of the Workshop on New Frontiers in Summarization",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/W17-4508",
    doi = "10.18653/v1/W17-4508",
    pages = "59--63",
    abstract = "Recent advances in automatic text summarization have used deep neural networks to generate high-quality abstractive summaries, but the performance of these models strongly depends on large amounts of suitable training data. We propose a new method for mining social media for author-provided summaries, taking advantage of the common practice of appending a {``}TL;DR{''} to long posts. A case study using a large Reddit crawl yields the Webis-TLDR-17 dataset, complementing existing corpora primarily from the news genre. Our technique is likely applicable to other social media sites and general web crawls.",
}
```

### Contributions

Thanks to @mariamabarham, @patrickvonplaten, @thomwolf for adding this dataset.