---
license: odc-by
size_categories:
  - 100M<n<1B
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/*.parquet
---

Dataset Summary

This dataset is a filtered collection of Reddit posts and comments from 2024, prepared for research and educational purposes. It includes public web data from a wide range of subreddits, providing a snapshot of the discussions happening on the platform during this period. Email addresses and IP addresses found in posts and comments have been anonymized, preserving the privacy of individuals while maintaining the integrity and context of the data.

Supported Tasks and Leaderboards

The dataset may be used for a variety of natural language processing (NLP) tasks, including:

  • Text Classification: Classifying comments and posts into categories based on sentiment, topic, or subreddit (a short loading sketch follows this list).

  • Language Modeling: Training language models to understand and generate conversational text.

  • Sentiment Analysis: Analyzing the sentiment of comments and posts across different subreddits and topics.

  • Topic Modeling: Identifying and modeling topics discussed in the posts and comments.
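
As a rough illustration of the classification framing mentioned above, the sketch below streams the dataset and collects (text, subreddit) pairs to serve as inputs and labels. It assumes only the fields documented under Data Instances; the sample size is arbitrary.

from datasets import load_dataset

# Stream the dataset and collect (text, subreddit) pairs for a
# simple subreddit-classification setup.
stream = load_dataset("OpenCo7/UpVoteWeb", split="train", streaming=True)

examples = []
for row in stream:
    examples.append((row["text"], row["subreddit"]))
    if len(examples) >= 1000:  # small sample, for illustration only
        break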

Languages

The primary language of the dataset is English, as the majority of users post in English. However, posts in other languages may also be present, reflecting the diverse user base of the platform.
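
Because each record carries a language code and a confidence score, non-English text can be filtered at load time. A minimal sketch, assuming the language field holds ISO codes such as "en" and using an illustrative 0.9 confidence threshold:

from datasets import load_dataset

# Keep only records detected as English with high confidence.
# The 0.9 threshold is illustrative, not part of the dataset.
stream = load_dataset("OpenCo7/UpVoteWeb", split="train", streaming=True)
english = stream.filter(
    lambda row: row["language"] == "en" and row["language_score"] >= 0.9
)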

Dataset Structure

Data Instances

Each data instance represents a post or comment and includes the following fields:

  • id: A unique identifier for the comment or post.

  • parent_id: The identifier of the parent comment or post. The prefixes are defined as follows (see the parsing sketch after this list):

    • t5: subreddit

    • t3: post

    • t1: comment

  • text: The content of the comment or post, with email addresses and IP addresses anonymized.

  • url: The URL of the original thread on Reddit.

  • date: The timestamp of the comment or post in UTC.

  • language: The detected language of the text.

  • language_score: The confidence score of the language detection.

  • token_count: The number of tokens in the text, as determined by the GPT-2 tokenizer.

  • score: The score (upvotes minus downvotes) of the comment or post.

  • subreddit: The subreddit where the comment or post was made.

  • author: The username of the author of the comment or post.

  • media_urls: An array of links to any multimedia included in the comment or post.
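
A minimal sketch of how the parent_id prefixes could be interpreted when walking a thread; the helper function is ours, not part of the dataset:

# Map Reddit "thing" prefixes to the object type they reference.
PREFIX_TO_TYPE = {"t5": "subreddit", "t3": "post", "t1": "comment"}

def parse_parent_id(parent_id: str) -> tuple[str, str]:
    """Split a parent_id like 't3_abc123' into (type, bare id)."""
    prefix, _, bare_id = parent_id.partition("_")
    return PREFIX_TO_TYPE.get(prefix, "unknown"), bare_id

# Example: a comment whose parent is a post.
print(parse_parent_id("t3_abc123"))  # ('post', 'abc123')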

Data Fields

  • id: string

  • parent_id: string

  • text: string

  • url: string

  • date: string

  • language: string

  • language_score: float

  • token_count: int

  • score: int

  • subreddit: string

  • author: string

  • media_urls: array
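
For reference, this field list could be expressed as a datasets.Features schema roughly as follows; the exact numeric widths are assumptions, and the published Parquet schema is authoritative:

from datasets import Features, Sequence, Value

# Approximate schema matching the field list above; widths assumed.
features = Features({
    "id": Value("string"),
    "parent_id": Value("string"),
    "text": Value("string"),
    "url": Value("string"),
    "date": Value("string"),
    "language": Value("string"),
    "language_score": Value("float64"),
    "token_count": Value("int64"),
    "score": Value("int64"),
    "subreddit": Value("string"),
    "author": Value("string"),
    "media_urls": Sequence(Value("string")),
})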

Data Preprocessing

The dataset has undergone several preprocessing steps to ensure the quality and privacy of the data (a sketch of steps 1-3 follows the list):

  1. Personal Information Anonymization: Email addresses and IP addresses have been replaced with [EMAIL] and [IP] placeholders, respectively.

  2. Language Detection: Each text instance has been processed using FastText to detect its language and assign a confidence score.

  3. Tokenization: Text instances have been tokenized using the GPT-2 tokenizer to provide a token count.

  4. NSFW Filtering: The dataset has been filtered to exclude content marked as NSFW, utilizing the NSFW metadata provided by Reddit's moderation.
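
The exact pipeline code is not published, so the sketch below only illustrates how steps 1-3 could be reproduced: the regexes are stand-ins for the real anonymization patterns, language detection uses FastText's public lid.176.bin model, and token counts come from the Hugging Face GPT-2 tokenizer.

import re
import fasttext
from transformers import GPT2TokenizerFast

# 1. Anonymization: the patterns used for UpVoteWeb are not
#    published, so these regexes are illustrative stand-ins.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def anonymize(text: str) -> str:
    text = EMAIL_RE.sub("[EMAIL]", text)
    return IPV4_RE.sub("[IP]", text)

# 2. Language detection with FastText; assumes the public
#    lid.176.bin identification model has been downloaded.
lid_model = fasttext.load_model("lid.176.bin")

def detect_language(text: str) -> tuple[str, float]:
    # FastText rejects newlines, so flatten the text first.
    labels, scores = lid_model.predict(text.replace("\n", " "))
    return labels[0].replace("__label__", ""), float(scores[0])

# 3. Token counting with the GPT-2 tokenizer.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def count_tokens(text: str) -> int:
    return len(tokenizer.encode(text))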

Usage Example

Here is an example of how to load and use the dataset in Python:

from datasets import load_dataset

# Load the dataset as a stream (no full download required)
dataset = load_dataset("OpenCo7/UpVoteWeb", split="train", streaming=True)

# Inspect the first record
print(next(iter(dataset)))

Dataset Creation

Curation Rationale

The Reddit platform hosts public web content on a diverse range of topics, all presented in a conversational format. This has made it a resource for training some of the highest-profile LLMs to date. UpVoteWeb is a large, clean pretraining dataset built from this content for use in developing open-source models for research and educational purposes.

Source Data

This dataset is a filtered collection of posts and comments from Reddit in the year 2024.

Annotations

We augment the scraped data with the language, language_score, and token_count annotations. The language and language_score annotations are generated using FastText, and token_count is generated using the GPT-2 tokenizer.

Personal and Sensitive Information

The dataset has been processed to anonymize personal information, specifically email addresses and IP addresses, ensuring the privacy of individuals while maintaining the integrity and context of the data.

Considerations for Using the Data

Social Impact of Dataset

With the release of this dataset, we aim to make this development resource available to the community at large.

Discussion of Biases

Efforts were made to minimize the amount of NSFW and toxic content in the dataset by filtering at the URL level.

Additional Information

Licensing Information

The dataset is released under the Open Data Commons Attribution License (ODC-By) v1.0. Its availability is not an invitation to use any of the information for any illegal or unlawful purpose, or outside the scope of research or educational purposes.

Future Work

Grass is a network for the acquisition of public web data, and we plan to continue building high-quality, structured datasets for use in AI/ML research. Alongside new offerings, we will continue to improve UpVoteWeb in future iterations.

Citation Information

If you use this dataset in your research or project, please cite it as follows:

@dataset{UpVoteWeb,
  title = {UpVoteWeb-24-600M},
  year = {2024},
  publisher = {OpenCo},
  url = {https://huggingface.co/datasets/OpenCo7/UpVoteWeb}
}