---
annotations_creators:
  - crowdsourced
language_creators:
  - crowdsourced
language:
  - en
license:
  - mit
multilinguality:
  - monolingual
pretty_name: Reddit TIFU
size_categories:
  - 100K<n<1M
source_datasets:
  - original
task_categories:
  - summarization
task_ids: []
paperswithcode_id: reddit-tifu
tags:
  - reddit-posts-summarization
dataset_info:
  - config_name: short
    features:
      - name: ups
        dtype: float32
      - name: num_comments
        dtype: float32
      - name: upvote_ratio
        dtype: float32
      - name: score
        dtype: float32
      - name: documents
        dtype: string
      - name: tldr
        dtype: string
      - name: title
        dtype: string
    splits:
      - name: train
        num_bytes: 137715925
        num_examples: 79740
    download_size: 670607856
    dataset_size: 137715925
  - config_name: long
    features:
      - name: ups
        dtype: float32
      - name: num_comments
        dtype: float32
      - name: upvote_ratio
        dtype: float32
      - name: score
        dtype: float32
      - name: documents
        dtype: string
      - name: tldr
        dtype: string
      - name: title
        dtype: string
    splits:
      - name: train
        num_bytes: 91984758
        num_examples: 42139
    download_size: 670607856
    dataset_size: 91984758
---

Dataset Card for "reddit_tifu"

Table of Contents

Dataset Description

Dataset Summary

Reddit dataset, where TIFU denotes the name of the subreddit /r/tifu. As defined in the publication, the "short" style uses the post title as the summary, and the "long" style uses the TL;DR line as the summary.

Features include:

  • documents: post text without the TL;DR.
  • tldr: the TL;DR line.
  • title: trimmed title without the TL;DR.
  • ups: upvotes.
  • score: score.
  • num_comments: number of comments.
  • upvote_ratio: upvote ratio.
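The mapping between the two configurations and their summary fields can be sketched as follows. This is a minimal illustration, not part of the dataset or the `datasets` library; the helper name `to_summarization_pair` is hypothetical.

```python
def to_summarization_pair(example: dict, config: str = "long") -> dict:
    """Map a reddit_tifu record to a (document, summary) pair.

    Per the card: the "short" config uses the post title as the summary,
    while the "long" config uses the TL;DR line.
    """
    if config not in ("short", "long"):
        raise ValueError(f"unknown config: {config}")
    summary = example["tldr"] if config == "long" else example["title"]
    return {"document": example["documents"], "summary": summary}


# Illustrative record shaped like the "short" example shown below.
example = {
    "documents": "i was on skype on my tablet as i went to the toilet ...",
    "tldr": "",
    "title": "forgetting to pull my underwear down before i pooped.",
}
pair = to_summarization_pair(example, config="short")
print(pair["summary"])  # prints the title, which the "short" config treats as the summary
```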

Supported Tasks and Leaderboards

More Information Needed

Languages

More Information Needed

Dataset Structure

Data Instances

long

  • Size of downloaded dataset files: 670.61 MB
  • Size of the generated dataset: 91.98 MB
  • Total amount of disk used: 762.59 MB

An example of 'train' looks as follows.

{'ups': 115.0,
 'num_comments': 23.0,
 'upvote_ratio': 0.88,
 'score': 115.0,
 'documents': 'this actually happened a couple of years ago. i grew up in germany where i went to a german secondary school that went from 5th to 13th grade (we still had 13 grades then, they have since changed that). my school was named after anne frank and we had a club that i was very active in from 9th grade on, which was dedicated to teaching incoming 5th graders about anne franks life, discrimination, anti-semitism, hitler, the third reich and that whole spiel. basically a day where the students\' classes are cancelled and instead we give them an interactive history and social studies class with lots of activities and games. \n\nthis was my last year at school and i already had a lot of experience doing these project days with the kids. i was running the thing with a friend, so it was just the two of us and 30-something 5th graders. we start off with a brief introduction and brainstorming: what do they know about anne frank and the third reich? you\'d be surprised how much they know. anyway after the brainstorming we do a few activities, and then we take a short break. after the break we split the class into two groups to make it easier to handle. one group watches a short movie about anne frank while the other gets a tour through our poster presentation that our student group has been perfecting over the years. then the groups switch. \n\ni\'m in the classroom to show my group the movie and i take attendance to make sure no one decided to run away during break. i\'m going down the list when i come to the name sandra (name changed). a kid with a boyish haircut and a somewhat deeper voice, wearing clothes from the boy\'s section at a big clothing chain in germany, pipes up. \n\nnow keep in mind, these are all 11 year olds, they are all pre-pubescent, their bodies are not yet showing any sex specific features one would be able to see while they are fully clothed (e.g. boobs, beards,...). 
this being a 5th grade in the rather conservative (for german standards) bavaria, i was confused. i looked down at the list again making sure i had read the name right. look back up at the kid. \n\nme: "you\'re sandra?"\n\nkid: "yep."\n\nme: "oh, sorry. *thinking the kid must be from somewhere where sandra is both a girl\'s and boy\'s name* where are you from? i\'ve only ever heard that as a girl\'s name before."\n\nthe class starts laughing. sandra gets really quiet. "i am a girl..." she says. some of the other students start saying that their parents made the same mistake when they met sandra. i feel so sorry and stupid. i get the class to calm down and finish taking attendance. we watch the movie in silence. after the movie, when we walked down to where the poster presentation took place i apologised to sandra. i felt so incredibly terrible, i still do to this day. throughout the rest of the day i heard lots of whispers about sandra. i tried to stop them whenever they came up, but there was no stopping the 5th grade gossip i had set in motion.\n\nsandra, if you\'re out there, i am so incredibly sorry for humiliating you in front of your class. i hope you are happy and healthy and continue to live your life the way you like. don\'t let anyone tell you you have to dress or act a certain way just because of the body parts you were born with. i\'m sorry if i made you feel like you were wrong for dressing and acting differently. i\'m sorry i probably made that day hell for you. i\'m sorry for my ignorance.',
 'tldr': 'confuse a 5th grade girl for a boy in front of half of her class. kids are mean. sorry sandra.**',
 'title': 'gender-stereotyping'}

short

  • Size of downloaded dataset files: 670.61 MB
  • Size of the generated dataset: 137.72 MB
  • Total amount of disk used: 808.32 MB

An example of 'train' looks as follows.

{'ups': 50.0,
 'num_comments': 13.0,
 'upvote_ratio': 0.77,
 'score': 50.0,
 'documents': "i was on skype on my tablet as i went to the toilet iming a friend. i don't multitask very well, so i forgot one of the most important things to do before pooping. i think the best part was when i realised and told my mate who just freaked out because i was talking to him on the john!",
 'tldr': '',
 'title': 'forgetting to pull my underwear down before i pooped.'}
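Note that the tldr field can be empty in the "short" config, as in the example above. A minimal sketch for keeping only records that carry a TL;DR (the helper name `has_tldr` is hypothetical):

```python
def has_tldr(example: dict) -> bool:
    """True if the record carries a non-empty TL;DR line."""
    return bool(example["tldr"].strip())


# Illustrative records; real ones also carry ups, score, etc.
records = [
    {"tldr": "", "title": "forgetting to pull my underwear down before i pooped."},
    {"tldr": "confuse a 5th grade girl for a boy ...", "title": "gender-stereotyping"},
]
with_tldr = [r for r in records if has_tldr(r)]
print(len(with_tldr))  # prints 1
```

If the data is loaded with the `datasets` library, the same predicate can be passed to `Dataset.filter`.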

Data Fields

The data fields are the same among all splits.

long

  • ups: a float32 feature.
  • num_comments: a float32 feature.
  • upvote_ratio: a float32 feature.
  • score: a float32 feature.
  • documents: a string feature.
  • tldr: a string feature.
  • title: a string feature.

short

  • ups: a float32 feature.
  • num_comments: a float32 feature.
  • upvote_ratio: a float32 feature.
  • score: a float32 feature.
  • documents: a string feature.
  • tldr: a string feature.
  • title: a string feature.

Data Splits

name    train
long    42139
short   79740

Dataset Creation

Curation Rationale

More Information Needed

Source Data

Initial Data Collection and Normalization

More Information Needed

Who are the source language producers?

More Information Needed

Annotations

Annotation process

More Information Needed

Who are the annotators?

More Information Needed

Personal and Sensitive Information

More Information Needed

Considerations for Using the Data

Social Impact of Dataset

More Information Needed

Discussion of Biases

More Information Needed

Other Known Limitations

More Information Needed

Additional Information

Dataset Curators

More Information Needed

Licensing Information

MIT License.

Citation Information

@misc{kim2018abstractive,
    title={Abstractive Summarization of Reddit Posts with Multi-level Memory Networks},
    author={Byeongchang Kim and Hyunwoo Kim and Gunhee Kim},
    year={2018},
    eprint={1811.00783},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

Contributions

Thanks to @patrickvonplaten, @thomwolf for adding this dataset.