---
license: cc-by-sa-3.0
---
This is a Wikipedia dataset containing each article's most recent revision as of 31-12-2020.

WikiMedia routinely publishes dumps of Wikipedia, each containing the revision history of every article. We first identify the relevant revision before extracting the article text: for each year, we select each page's most recent revision as of December 31st. Consequently, some revisions in our datasets date back several years before the target date, since those pages had not been edited in the interim. While the inclusion of older revisions might initially appear problematic, these are the versions of the pages that existed on Wikipedia at the cutoff date, and their content was considered current at that time. This approach ensures that our training datasets reflect the most up-to-date information available on Wikipedia at each year's end, providing a realistic snapshot of knowledge for that specific point in time.
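The selection rule above can be sketched as follows. This is an illustrative snippet, not the code used to build the dataset; the `(timestamp, text)` pair format for revisions is a hypothetical simplification of the dump's revision records.

```python
from datetime import datetime

def latest_revision_before(revisions, cutoff):
    """Return the most recent revision at or before the cutoff date.

    `revisions` is a list of (timestamp, text) pairs -- a hypothetical
    simplification of a page's revision history from the dump.
    Returns None if no revision predates the cutoff.
    """
    eligible = [r for r in revisions if r[0] <= cutoff]
    return max(eligible, key=lambda r: r[0]) if eligible else None

# End-of-2020 cutoff, matching this dataset's target date.
cutoff = datetime(2020, 12, 31, 23, 59, 59)
revisions = [
    (datetime(2015, 6, 1), "old text"),    # page untouched for years
    (datetime(2020, 11, 5), "2020 text"),  # latest edit before cutoff
    (datetime(2021, 2, 1), "too new"),     # after cutoff, excluded
]
print(latest_revision_before(revisions, cutoff)[1])  # -> 2020 text
```

Note that a page last edited in 2015 would still be selected here, which is exactly the intended behaviour: the 2015 text is what a Wikipedia reader saw at the end of 2020.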

Once each revision has been identified, we clean the page using the code from [wiki-dump-reader](https://github.com/CyberZHG/wiki-dump-reader/tree/master), which parses the page and outputs clean text. During the cleaning phase a number of unwanted features and attributes are removed: file links, emphasis markup, comments, indents, HTML tags, references, etc.
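To give a sense of what this cleaning step does, here is a minimal regex-based sketch of the kinds of transformations involved. It is not the wiki-dump-reader implementation, just an illustration of stripping the markup categories listed above:

```python
import re

def clean_wikitext(text):
    """Illustrative wikitext cleaner (NOT the actual wiki-dump-reader code).

    Strips the markup categories mentioned above: comments, references,
    file links, HTML tags, emphasis, internal link syntax, and indents.
    """
    text = re.sub(r"<!--.*?-->", "", text, flags=re.DOTALL)            # comments
    text = re.sub(r"<ref[^>]*>.*?</ref>", "", text, flags=re.DOTALL)   # references
    text = re.sub(r"<ref[^>]*/>", "", text)                            # self-closing refs
    text = re.sub(r"\[\[File:[^\]]*\]\]", "", text)                    # file links
    text = re.sub(r"<[^>]+>", "", text)                                # remaining HTML tags
    text = re.sub(r"'{2,}", "", text)                                  # bold/italic emphasis
    text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]*)\]\]", r"\1", text)      # keep link labels
    text = re.sub(r"^[:;]+", "", text, flags=re.MULTILINE)             # indents
    return text.strip()

print(clean_wikitext("'''Oxford''' is a [[city]] in [[England|UK]].<ref>src</ref>"))
# -> Oxford is a city in UK.
```

The real library handles many more cases (templates, tables, nested markup), so this sketch should not be used as a substitute for it.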

Please refer to and cite the following paper when using this dataset in any downstream applications:

```bibtex
@inproceedings{drinkall-tima-2024,
    title = "Time Machine GPT",
    author = "Drinkall, Felix and Zohren, Stefan and Pierrehumbert, Janet",
    booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
    month = jun,
    year = "2024",
    publisher = "Association for Computational Linguistics"
}
```