Languages: English
Multilinguality: monolingual
Size Categories: 10M<n<100M
Language Creators: crowdsourced
Annotations Creators: lexyr
Source Datasets: original
License: cc-by-4.0

Dataset Card for ten-million-reddit-answers

Dataset Summary

This corpus contains ten million question-answer pairs, each labeled with its Reddit score and pre-packaged with the output of a basic sentiment predictor.

The data was procured from /r/AskReddit using SocialGrep.
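
A minimal loading sketch with the 🤗 datasets library is shown below. The repository ID "SocialGrep/ten-million-reddit-answers" is an assumption inferred from this card's title, and the trust_remote_code flag is only needed if the repo ships a custom loading script; adjust both to the actual repository.

```python
from datasets import load_dataset

# Repo ID is an assumption based on this card's title; adjust it if the actual
# Hub path differs. trust_remote_code=True may be required when the repo uses
# a custom loading script.
dataset = load_dataset(
    "SocialGrep/ten-million-reddit-answers",
    trust_remote_code=True,
)

print(dataset)              # inspect the available splits
print(dataset["train"][0])  # peek at one record, assuming a "train" split exists
```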

Languages

Mainly English.

Dataset Structure

Data Instances

A data point is a post or a comment. Because the two differ in structure, they are stored in two separate files, even though many of their fields are shared.
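
If both kinds of record are exposed through a single split, the 'type' field documented in the next section can be used to tell them apart. This is only a sketch: the repo ID, split name, and whether posts and comments are merged or shipped as separate configurations are assumptions.

```python
from datasets import load_dataset

# Assumes a single "train" split containing both posts and comments; the repo
# may instead expose them as separate files or configurations.
ds = load_dataset(
    "SocialGrep/ten-million-reddit-answers",
    split="train",
    trust_remote_code=True,
)

# 'type' is documented below as either 'post' or 'comment'.
posts = ds.filter(lambda row: row["type"] == "post")
comments = ds.filter(lambda row: row["type"] == "comment")

print(len(posts), "posts /", len(comments), "comments")
```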

Data Fields

  • 'type': the type of the data point. Can be 'post' or 'comment'.

  • 'id': the base-36 Reddit ID of the data point. Unique when combined with type.

  • 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.

  • 'subreddit.name': the human-readable name of the data point's host subreddit.

  • 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.

  • 'created_utc': a UTC timestamp for the data point.

  • 'permalink': a reference link to the data point on Reddit.

  • 'score': score of the data point on Reddit.

  • 'domain': (Post only) the domain of the data point's link.

  • 'url': (Post only) the destination of the data point's link, if any.

  • 'selftext': (Post only) the self-text of the data point, if any.

  • 'title': (Post only) the title of the post data point.

  • 'body': (Comment only) the body of the comment data point.

  • 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
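
As a quick illustration of how the comment-side fields above ('body', 'score', 'sentiment') might be used for exploratory analysis, the sketch below streams a small sample rather than downloading the full corpus. The repo ID, split name, and the exact value range of 'sentiment' are assumptions.

```python
from datasets import load_dataset

# Stream the dataset to avoid downloading all ten million records up front.
# Repo ID and split name are assumptions; adjust to the actual repository.
stream = load_dataset(
    "SocialGrep/ten-million-reddit-answers",
    split="train",
    streaming=True,
    trust_remote_code=True,
)

# Collect a small sample of comments with their scores and sentiment values.
sample = []
for row in stream:
    if row["type"] == "comment":
        sample.append((row["score"], row["sentiment"], (row["body"] or "")[:80]))
    if len(sample) >= 100:
        break

for score, sentiment, snippet in sample[:5]:
    print(f"score={score:>6}  sentiment={sentiment}  {snippet!r}")
```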

Dataset Creation

Curation Rationale

[Needs More Information]

Source Data

Initial Data Collection and Normalization

[Needs More Information]

Who are the source language producers?

[Needs More Information]

Annotations

Annotation process

[Needs More Information]

Who are the annotators?

[Needs More Information]

Personal and Sensitive Information

[Needs More Information]

Considerations for Using the Data

Social Impact of Dataset

[Needs More Information]

Discussion of Biases

[Needs More Information]

Other Known Limitations

[Needs More Information]

Additional Information

Dataset Curators

[Needs More Information]

Licensing Information

CC BY 4.0

Contributions

[Needs More Information]
