
Reddit Randomness Dataset

A dataset I created because I was curious about how "random" r/random really is. This data was collected by sending GET requests to https://www.reddit.com/r/random for a few hours on September 19th, 2021. I scraped a bit of metadata about the subreddits as well. randomness_12k_clean.csv reports the random subreddits as they happened and summary.csv lists some metadata about each subreddit.
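The scraper itself is not part of this dataset, but the collection scheme described above (a GET to r/random answered by a 302 redirect naming the chosen subreddit) could be sketched roughly like this. The function name and regex here are illustrative, not the actual scraper code:

```python
import re

# Hypothetical sketch: r/random answers a GET (sent with redirects disabled)
# with a 302 whose Location header points at the randomly chosen subreddit.
def parse_subreddit(location_header):
    """Extract the subreddit name from a redirect Location header."""
    match = re.search(r"/r/([^/]+)", location_header)
    return match.group(1) if match else None

# e.g. a redirect Location of https://www.reddit.com/r/nsfwanimegifs/
# yields the subreddit name "nsfwanimegifs"
```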

The Data

randomness_12k_clean.csv

This file serves as a record of the 12,055 successful results I got from r/random. Each row represents one result.

Fields

  • subreddit: The name of the subreddit that the scraper received from r/random (string)
  • response_code: HTTP response code the scraper received when it sent a GET request to /r/random (int, always 302)
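A minimal sketch of reading this file with the schema above. The rows here are a made-up in-memory sample standing in for randomness_12k_clean.csv; with the real file you would pass its path to `csv.DictReader` instead:

```python
import csv
import io
from collections import Counter

# Illustrative rows in the schema of randomness_12k_clean.csv
# (the real file has 12,055 rows; these example values are made up).
sample = io.StringIO(
    "subreddit,response_code\n"
    "nsfwanimegifs,302\n"
    "AskReddit,302\n"
    "nsfwanimegifs,302\n"
)
rows = list(csv.DictReader(sample))

# Every successful result should be a 302 redirect.
assert all(row["response_code"] == "302" for row in rows)

# Counting appearances per subreddit is how summary.csv's
# appearances column can be derived from this file.
appearances = Counter(row["subreddit"] for row in rows)
```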

summary.csv

As the name suggests, this file summarizes randomness_12k_clean.csv into the information that I cared about when I analyzed this data. Each row represents one of the 3,679 unique subreddits and includes some stats about the subreddit as well as the number of times it appears in the results.

Fields

  • subreddit: The name of the subreddit (string, unique)
  • subscribers: How many subscribers the subreddit had (int, max of 99,886)
  • current_users: How many users accessed the subreddit in the past 15 minutes (int, max of 999)
  • creation_date: Date that the subreddit was created (YYYY-MM-DD or Error:PrivateSub or Error:Banned)
  • date_accessed: Date that I collected the values in subscribers and current_users (YYYY-MM-DD)
  • time_accessed_UTC: Time that I collected the values in subscribers and current_users, reported in UTC+0 (HH:MM:SS)
  • appearances: How many times the subreddit shows up in randomness_12k_clean.csv (int, max of 9)
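Two properties implied by the description above can be checked directly: subreddit is unique per row, and the appearances column sums to the row count of randomness_12k_clean.csv (12,055 in the real data). A sketch against a made-up two-row sample:

```python
import csv
import io

# Illustrative rows in the schema of summary.csv (values are made up).
sample = io.StringIO(
    "subreddit,subscribers,current_users,creation_date,"
    "date_accessed,time_accessed_UTC,appearances\n"
    "AskReddit,52000,120,2008-01-25,2021-09-19,14:03:22,3\n"
    "nsfwanimegifs,9100,14,2015-06-02,2021-09-19,14:05:10,2\n"
)
rows = list(csv.DictReader(sample))

# subreddit should be unique in this file.
names = [r["subreddit"] for r in rows]
assert len(names) == len(set(names))

# appearances should sum to the total number of results
# (12,055 in the real data; 5 in this toy sample).
total = sum(int(r["appearances"]) for r in rows)
```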

Missing Values and Quirks

In summary.csv there are three missing creation_date values. After I collected the subscriber and current-user counts, I went back about a week later to collect each subreddit's creation date. In that week, three subreddits had been banned or taken private, so I filled in their creation_date with a descriptive string instead.

  • SomethingWasWrong (Error:PrivateSub)
  • HannahowoOnlyfans (Error:Banned)
  • JanetGuzman (Error:Banned)
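Because of these sentinels, creation_date cannot be parsed as a date unconditionally. One way to handle it, sketched against made-up sample rows (the helper name `parse_creation_date` is illustrative):

```python
import csv
import io
from datetime import date, datetime

# Illustrative rows in the schema of summary.csv; creation_date holds
# either a YYYY-MM-DD date or one of the Error:* sentinel strings.
sample = io.StringIO(
    "subreddit,subscribers,current_users,creation_date,"
    "date_accessed,time_accessed_UTC,appearances\n"
    "AskReddit,52000,120,2008-01-25,2021-09-19,14:03:22,3\n"
    "JanetGuzman,1400,5,Error:Banned,2021-09-19,14:05:10,1\n"
)

def parse_creation_date(value):
    """Return a date, or None for the Error:PrivateSub / Error:Banned sentinels."""
    if value.startswith("Error:"):
        return None
    return datetime.strptime(value, "%Y-%m-%d").date()

rows = list(csv.DictReader(sample))
dates = [parse_creation_date(r["creation_date"]) for r in rows]
```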

I think there are a few NSFW subreddits in the results, even though I only queried r/random and not r/randnsfw. As a simple example, searching the data for "nsfw" shows that I got the subreddit r/nsfwanimegifs twice.
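The search mentioned above is just a case-insensitive substring match over the subreddit names; a sketch with a made-up list of names:

```python
# Case-insensitive substring search over subreddit names, as used to
# spot the r/nsfwanimegifs results mentioned above (names are made up).
subreddits = ["AskReddit", "nsfwanimegifs", "woodworking", "nsfwanimegifs"]
hits = [name for name in subreddits if "nsfw" in name.lower()]
```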

License

This dataset is made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0/. Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/.
