Languages: English
Size Categories: 10K<n<100K
Tags: Not-For-All-Audiences

Dump of Uncyclopedia as of 2024-02-13

This repo contains:

  • main: a parquet file with just the latest revisions, slightly cleaned and converted to markdown
  • cleaned: a parquet file including all revisions and the markdown-converted latest revision
  • scripts: scripts to parse the raw XML dump into a Hugging Face datasets-compatible dataset, plus the cleaning notebook that was used
  • source: the raw XML dump and other files generated by mediawiki-dump-generator

DO NOT USE GIT LFS TO DOWNLOAD THE FILES. USE HUGGINGFACE-CLI INSTEAD.
Cloning with git-lfs will pull the files into the index for non-active branches, and into both the index and the working directory for the active branch. This essentially means downloading approximately 70 GB even if you only need the main branch.
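For reference, a minimal download sketch using the huggingface_hub Python API; the repo id below is a placeholder, substitute this dataset's actual id:

```python
# A minimal sketch using the huggingface_hub Python API.
# NOTE: "user/uncyclopedia" is a placeholder repo id, not this dataset's real one.
from huggingface_hub import snapshot_download

# Fetch only the files on the main branch (the latest-revisions parquet),
# avoiding the ~70 GB that a full git-lfs clone would pull in.
local_path = snapshot_download(
    repo_id="user/uncyclopedia",
    repo_type="dataset",
    revision="main",
)
print(local_path)
```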

The raw dump is over 50 GB and has been split into parts; they can be recombined using cat.
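For example, a minimal Python sketch of the recombination, assuming the parts follow a common `file.part000`-style naming scheme (placeholder names; check the actual filenames in the source branch):

```python
# Equivalent to: cat uncyclopedia.xml.part* > uncyclopedia.xml
# NOTE: the part names below are assumed, not taken from this repo.
import glob
import shutil

parts = sorted(glob.glob("uncyclopedia.xml.part*"))  # lexicographic order
with open("uncyclopedia.xml", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream each part into the output file
```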

The converted text is about 35M tokens; however, it is advised that you perform more thorough cleaning of the data before use.
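As a starting point, a minimal cleaning sketch with pandas, assuming the parquet exposes a text column (placeholder column and file names; inspect the actual schema first):

```python
# A minimal sketch, not the cleaning notebook shipped in the scripts branch.
# NOTE: "uncyclopedia.parquet" and the "text" column are assumptions.
import pandas as pd

df = pd.read_parquet("uncyclopedia.parquet")
df = df[df["text"].str.len() > 200]  # drop stubs, redirects, near-empty pages
print(len(df), "articles kept after the length filter")
```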

Uncyclopedia contains a lot of "edgy"/racist content.
