
Dataset Card for "REDDIT_submissions"

Dataset Summary

Submissions from 50 high-quality subreddits, extracted from the Reddit Pushshift data dumps (2006 to January 2023).

Supported Tasks

These submissions can be used for text generation and language modeling, as well as dialogue modeling.

Dataset Structure

Data Splits

Each split corresponds to a specific subreddit in the following list: "tifu", "explainlikeimfive", "WritingPrompts", "changemyview", "LifeProTips", "todayilearned", "science", "askscience", "ifyoulikeblank", "Foodforthought", "IWantToLearn", "bestof", "IAmA", "socialskills", "relationship_advice", "philosophy", "YouShouldKnow", "history", "books", "Showerthoughts", "personalfinance", "buildapc", "EatCheapAndHealthy", "boardgames", "malefashionadvice", "femalefashionadvice", "scifi", "Fantasy", "Games", "bodyweightfitness", "SkincareAddiction", "podcasts", "suggestmeabook", "AskHistorians", "gaming", "DIY", "mildlyinteresting", "sports", "space", "gadgets", "Documentaries", "GetMotivated", "UpliftingNews", "technology", "Fitness", "travel", "lifehacks", "Damnthatsinteresting", "gardening", "programming"
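Each split can be loaded on its own with the `datasets` library. A minimal sketch (the streaming flag is a suggestion to avoid downloading the full ~49 GB dump; only a few of the 50 split names are shown):

```python
EXAMPLE_SPLITS = ["tifu", "askscience", "WritingPrompts", "programming"]

def load_subreddit(split: str):
    """Stream one subreddit split instead of downloading the whole dataset."""
    from datasets import load_dataset  # requires the `datasets` package

    return load_dataset(
        "HuggingFaceGECLM/REDDIT_submissions", split=split, streaming=True
    )

if __name__ == "__main__":  # needs network access to the Hugging Face Hub
    first = next(iter(load_subreddit("askscience")))
    print(first["title"])
```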

Dataset Creation

Curation Rationale

All information fields have been cast to string, as their formats change over time from one dump to the next. A reduced set of keys has been kept: "allow_live_comments", "archived", "author", "author_fullname", "banned_by", "category", "content_categories", "contest_mode", "created_utc", "discussion_type", "distinguished", "domain", "edited", "gilded", "hidden", "hide_score", "id", "is_created_from_ads_ui", "is_crosspostable", "is_meta", "is_original_content", "is_reddit_media_domain", "is_robot_indexable", "is_self", "is_video", "locked", "media", "media_embed", "media_only", "name", "no_follow", "num_comments", "num_crossposts", "over_18", "parent_whitelist_status", "permalink", "pinned", "post_hint", "pwls", "quarantine", "removed_by", "removed_by_category", "retrieved_on", "score", "secure_media", "secure_media_embed", "selftext", "send_replies", "spoiler", "stickied", "subreddit", "subreddit_id", "subreddit_name_prefixed", "subreddit_subscribers", "subreddit_type", "suggested_sort", "title", "top_awarded_type", "total_awards_received", "treatment_tags", "upvote_ratio", "url", "url_overridden_by_dest", "view_count", "whitelist_status", "wls".
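Because every field is stored as a string, downstream code has to cast values back to usable types. A sketch over a hypothetical row (the field names come from the list above; the parsing rules are assumptions, not part of the dataset):

```python
from datetime import datetime, timezone

# A hypothetical row: every value arrives as a string.
row = {
    "created_utc": "1672531200",
    "score": "42",
    "over_18": "False",
    "media": "None",
}

def parse_submission(row):
    """Cast a handful of string fields back to typed Python values."""
    return {
        "created": datetime.fromtimestamp(int(row["created_utc"]), tz=timezone.utc),
        "score": int(row["score"]),
        "over_18": row["over_18"] == "True",
        "media": None if row["media"] in ("None", "") else row["media"],
    }

parsed = parse_submission(row)
print(parsed["created"].isoformat())  # 2023-01-01T00:00:00+00:00
```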

Source Data

The Reddit Pushshift data dumps come from a collection effort that crawls Reddit at regular intervals to extract and preserve all of its data.

Initial Data Collection and Normalization

See the paper.

Who are the source language producers?

Redditors are mostly young (65% under 30), male (70%), and American (50% of the site's users).

Personal and Sensitive Information

The data contains Redditors' usernames associated with their content.

Considerations for Using the Data

This dataset should be anonymized before any processing. Although the selected subreddits are considered higher quality, they can still reflect the expressions of bias and toxicity found elsewhere on the internet.
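One hedged way to do that anonymization: replace the username fields with a salted hash, so that authorship links survive across rows but real identities do not. The salt and the field choice here are illustrative assumptions, not part of the dataset:

```python
import hashlib

SALT = b"replace-with-a-secret-salt"  # placeholder; keep the real salt private

def anonymize(row):
    """Replace username fields with a salted hash of the original value."""
    out = dict(row)
    for key in ("author", "author_fullname"):
        value = out.get(key)
        if value:
            digest = hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()
            out[key] = f"user_{digest[:12]}"
    return out

row = {"author": "some_redditor", "selftext": "example post body"}
print(anonymize(row)["author"])  # deterministic pseudonym, e.g. user_<12 hex chars>
```

Hashing rather than deleting the field keeps per-author grouping possible; drop the fields entirely if no authorship analysis is needed.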


Thanks to @clefourrier for adding this dataset.
