Bittensor Subnet 13 Reddit Dataset

Data-universe: The finest collection of social media data the web has to offer

Dataset Summary

This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks. For more information about the dataset, please visit the official repository.

Supported Tasks

The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs. For example (a loading sketch follows this list):

  • Sentiment Analysis
  • Topic Modeling
  • Community Analysis
  • Content Categorization
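
As a minimal sketch, the dataset can be loaded with the Hugging Face datasets library. The repository ID comes from the citation section below; the split name "train" is an assumption, since the card defines no fixed splits. Streaming avoids downloading all ~30M rows up front:

    # Minimal loading sketch; repo ID from the citation section, split name assumed.
    from datasets import load_dataset

    ds = load_dataset("SAVE0x0/reddit_dataset_191", split="train", streaming=True)

    # Peek at a few records without materializing the full dataset.
    for i, row in enumerate(ds):
        print(row["communityName"], row["dataType"], row["text"][:80])
        if i >= 4:
            break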

Languages

Primary language: English. Because the data is collected by a decentralized network of miners, multilingual content may also appear.

Dataset Structure

Data Instances

Each instance represents a single Reddit post or comment, described by the fields listed below.

Data Fields

  • text (string): The main content of the Reddit post or comment.
  • label (string): Sentiment or topic category of the content.
  • dataType (string): Indicates whether the entry is a post or a comment.
  • communityName (string): The name of the subreddit where the content was posted.
  • datetime (string): The date when the post or comment was created.
  • username_encoded (string): An encoded version of the username to maintain user privacy.
  • url_encoded (string): An encoded version of any URLs included in the content.

Data Splits

This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
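
One possible approach (a sketch, not an official recipe) is to derive a time-based split from the datetime field. The cutoff date below is arbitrary, and note that a non-streaming load of ~30M rows is sizeable:

    # Time-based split sketch; ISO-formatted date strings compare correctly as text.
    from datasets import load_dataset

    ds = load_dataset("SAVE0x0/reddit_dataset_191", split="train")

    cutoff = "2024-11-01"  # arbitrary cutoff for illustration
    train = ds.filter(lambda row: row["datetime"] < cutoff)
    test = ds.filter(lambda row: row["datetime"] >= cutoff)
    print(len(train), len(test))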

Dataset Creation

Source Data

Data is collected from public posts and comments on Reddit, adhering to the platform's terms of service and API usage guidelines.

Personal and Sensitive Information

All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.

Considerations for Using the Data

Social Impact and Biases

Users should be aware of potential biases inherent in Reddit data, including demographic and content biases. This dataset reflects the content and opinions expressed on Reddit and should not be considered a representative sample of the general population.

Limitations

  • Data quality may vary due to the decentralized nature of collection and preprocessing.
  • The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
  • Temporal biases may exist due to real-time collection methods.
  • The dataset is limited to public subreddits and does not include private or restricted communities.

Additional Information

Licensing Information

The dataset is released under the MIT license. Use of this dataset is also subject to the Reddit Terms of Use.

Citation Information

If you use this dataset in your research, please cite it as follows:

@misc{SAVE0x02024datauniversereddit_dataset_191,
  title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
  author={SAVE0x0},
  year={2024},
  url={https://huggingface.co/datasets/SAVE0x0/reddit_dataset_191}
}

Contributions

To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.

Dataset Statistics

[This section is automatically updated]

  • Total Instances: 30647289
  • Date Range: 2019-07-24 to 2024-11-22
  • Last Updated: 2024-11-22

Data Distribution

  • Posts: 4.39%
  • Comments: 95.61%

Top 10 Subreddits

For full statistics, please refer to the reddit_stats.json file in the repository.

Rank  Subreddit            Percentage
1     r/AmItheAsshole      3.12%
2     r/politics           2.92%
3     r/AskReddit          2.78%
4     r/wallstreetbets     2.75%
5     r/teenagers          2.36%
6     r/NoStupidQuestions  2.17%
7     r/nfl                2.04%
8     r/pics               1.95%
9     r/mildlyinfuriating  1.93%
10    r/gaming             1.87%
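
The reddit_stats.json file referenced above can be fetched directly with huggingface_hub. This is a sketch; the file's internal structure is not documented on this card:

    # Fetch the full statistics file from the dataset repository.
    import json
    from huggingface_hub import hf_hub_download

    path = hf_hub_download(
        repo_id="SAVE0x0/reddit_dataset_191",
        filename="reddit_stats.json",
        repo_type="dataset",
    )
    with open(path) as f:
        stats = json.load(f)
    print(list(stats))  # inspect the top-level keys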

Update History

Date        New Instances  Total Instances
2024-11-22  1457320        1457320
2024-11-15  1687304        3144624
2024-11-08  1811665        4956289
2024-11-01  1916331        6872620
2024-10-25  1983331        8855951
2024-10-18  2116580        10972531
2024-10-11  2274761        13247292
2024-10-04  2517967        15765259
2024-09-23  10701          15775960
2024-09-30  1875509        17651469
2024-10-07  2297285        19948754
2024-10-14  2052163        22000917
2024-10-21  1898575        23899492
2024-10-28  1825347        25724839
2024-11-04  1766346        27491185
2024-11-11  1633187        29124372
2024-11-18  1522917        30647289