---
task_categories:
- summarization
tags:
- nlp
size_categories:
- 1M<n<10M
---

## Data Manipulation

| outcome | share of data |
|--------------|---------------|
| untouched | >=2% |
| manipulated | <=98% |

For each manipulation function, a new random number is drawn for each line (in this case, a Wikipedia article). At least 2% of the data is left untouched to teach the model not to "overreact" on clean text.

*Please consider 98/100 × 1/100, which is 0.0098, i.e. 0.98%.*

The chance of manipulation for each function is listed below; a sketch of these functions appears at the end of this card.

| chance of manipulation for each function | % |
|------------------------------------------|---|
| `delete_word` | 99.999% (97.99% overall) |
| `delete_characters` | 99.999% (97.99% overall) |
| `insert_characters` | 99.999% (97.99% overall) |
| `replace_characters` | 99.999% (97.99% overall) |
| `swap_characters_case` | 99.999% (97.99% overall) |

## Purpose

The primary objective of the Wikipedia Corpus is to serve as a comprehensive and reliable resource for training and evaluating spell-checking models. By leveraging the vast amount of text data from Wikipedia, this dataset offers a diverse range of language patterns and realistic spelling errors, allowing researchers and developers to build spell-checking algorithms that can handle a wide variety of texts.

## Dataset Details

The Persian Wikipedia Corpus is a collection of text documents extracted from the Persian (Farsi) Wikipedia. It includes articles on a wide range of topics, covering many domains and genres, and is curated and preprocessed to ensure quality and consistency.

To facilitate spell-checking tasks, the corpus provides both the correct version of each text and a corresponding misspelled version. This enables training and evaluating spell checkers that accurately detect and correct spelling errors.
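To make the manipulation pipeline concrete, below is a minimal Python sketch of how such functions could work. Only the five function names come from the table above; the function bodies, the Persian character pool, the `corrupt` driver, and its parameters are assumptions for illustration, since the card does not include the actual implementation.

```python
import random

# Assumed pool of Persian characters used for insertions/replacements.
PERSIAN_CHARS = "ابپتثجچحخدذرزژسشصضطظعغفقکگلمنوهی"

def delete_word(text: str) -> str:
    """Remove one randomly chosen word."""
    words = text.split()
    if len(words) > 1:
        words.pop(random.randrange(len(words)))
    return " ".join(words)

def delete_characters(text: str) -> str:
    """Remove one randomly chosen character."""
    if len(text) > 1:
        i = random.randrange(len(text))
        text = text[:i] + text[i + 1:]
    return text

def insert_characters(text: str) -> str:
    """Insert a random character at a random position."""
    i = random.randrange(len(text) + 1)
    return text[:i] + random.choice(PERSIAN_CHARS) + text[i:]

def replace_characters(text: str) -> str:
    """Replace one randomly chosen character with a random one."""
    if text:
        i = random.randrange(len(text))
        text = text[:i] + random.choice(PERSIAN_CHARS) + text[i + 1:]
    return text

def swap_characters_case(text: str) -> str:
    """Swap the case of one character (a no-op for Persian script,
    but meaningful for embedded Latin text)."""
    if text:
        i = random.randrange(len(text))
        text = text[:i] + text[i].swapcase() + text[i + 1:]
    return text

def corrupt(line: str, untouched_rate: float = 0.02) -> str:
    """Draw a fresh random number per line; leave ~2% of lines untouched,
    then apply each function with the per-function chance from the table."""
    if random.random() < untouched_rate:
        return line
    for fn in (delete_word, delete_characters, insert_characters,
               replace_characters, swap_characters_case):
        if random.random() < 0.99999:
            line = fn(line)
    return line
```

Under these assumptions, an aligned clean/noisy training pair for the spell checker could be produced as `(line, corrupt(line))` for each article.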