---
annotations_creators:
- no-annotation
language_creators:
- found
languages:
- fr-FR
licenses:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: Wikitext-fr
size_categories:
- unknown
source_datasets:
- original
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---

# Dataset Card for Wikitext-fr

## Table of Contents
- [Dataset Card for Wikitext-fr](#dataset-card-for-wikitext-fr)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Repository:** [https://github.com/AntoineSimoulin/gpt-fr](https://github.com/AntoineSimoulin/gpt-fr)
- **Paper:** [https://aclanthology.org/2021.jeptalnrecital-taln.24.pdf](https://aclanthology.org/2021.jeptalnrecital-taln.24.pdf)

### Dataset Summary

The Wikitext-fr language modeling dataset consists of over 70 million tokens extracted from the French Wikipedia articles classified as "quality articles" or "good articles". It is designed to mirror the English WikiText benchmark introduced by Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher in [Pointer Sentinel Mixture Models](https://arxiv.org/abs/1609.07843) (2016). The dataset is available under the [Creative Commons Attribution-ShareAlike License](https://creativecommons.org/licenses/by-sa/4.0/).

### Supported Tasks and Leaderboards

- `language-modeling`: The dataset can be used to evaluate the generative abilities of a model. Success on this task is typically measured by achieving a *low* perplexity. The [GPT-fr model](https://huggingface.co/asi/gpt-fr-cased-base) currently achieves a perplexity of 12.9 on this benchmark; a sketch of such an evaluation is given below.
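
As a minimal sketch with the `datasets` and `transformers` libraries (not the paper's exact protocol: the `wikitext-72` configuration name is an assumption, and perplexity is approximated per paragraph rather than over one continuous token stream):

```
# Perplexity sketch. Assumptions: `wikitext-72` config name and
# per-paragraph scoring (the paper's exact protocol may differ).
import math
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

test_set = load_dataset("asi/wikitext_fr", "wikitext-72", split="test")
tokenizer = AutoTokenizer.from_pretrained("asi/gpt-fr-cased-base")
model = AutoModelForCausalLM.from_pretrained("asi/gpt-fr-cased-base")
model.eval()

total_nll, total_tokens = 0.0, 0
for example in test_set:
    enc = tokenizer(example["paragraph"], return_tensors="pt",
                    truncation=True, max_length=model.config.n_positions)
    n = enc["input_ids"].size(1)
    if n < 2:  # skip empty or single-token paragraphs
        continue
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss  # mean NLL per predicted token
    total_nll += loss.item() * n
    total_tokens += n

print(f"Perplexity: {math.exp(total_nll / total_tokens):.1f}")
```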

### Languages

The dataset is in French.

## Dataset Structure

### Data Instances

The dataset consists of aggregated paragraphs from Wikipedia articles.

```
{
  'paragraph': ...,
  ...
}
```
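
For a quick look at an instance, the dataset can be loaded with the `datasets` library; the `wikitext-72` configuration name below is an assumption:

```
# Load the test split and inspect one instance (config name assumed).
from datasets import load_dataset

dataset = load_dataset("asi/wikitext_fr", "wikitext-72", split="test")
print(dataset[0]["paragraph"][:200])  # first 200 characters of the paragraph
```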


### Data Fields

- `paragraph`: a paragraph from the original Wikipedia article.

### Data Splits

The dataset is split into train, validation, and test sets. Two training sets of different sizes are provided, containing roughly 35 million and 72 million tokens respectively.

|                              | Train (35M) | Train (72M) | Valid | Test |
| ---------------------------- | ----------- | ----------- | ----- | ---- |
| Number of documents          | 2 126       | 5 902       | 60    | 60   |
| Number of tokens (thousands) | 35 166      | 72 961      | 896   | 897  |
| Vocabulary size              | 137 589     | 205 403     |       |      |
| Out-of-vocabulary rate       | 0.8%        | 1.2%        |       |      |
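
The two training configurations can be loaded separately; `wikitext-35` and `wikitext-72` are assumed configuration names matching the table above:

```
# Print split sizes for both assumed configurations.
from datasets import load_dataset

for config in ("wikitext-35", "wikitext-72"):
    ds = load_dataset("asi/wikitext_fr", config)
    print(config, {split: ds[split].num_rows for split in ds})
```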


## Dataset Creation

### Curation Rationale

The dataset was created to evaluate French language models with criteria similar to those used for English.

### Source Data

The Wikitext-fr language modeling dataset consists of over 70 million tokens extracted from the French Wikipedia articles classified as "quality articles" or "good articles".
We did not apply any specific pre-processing, since Transformer models typically rely on their own dedicated tokenization.

#### Initial Data Collection and Normalization

We used the Wikipedia API to collect the articles, since cleaning Wikipedia articles extracted from dumps is not a trivial task. A sketch of such a collection step is given below.
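
This is an illustrative sketch using the `requests` library against the public MediaWiki API; the exact queries used to build the dataset are not documented here, so the category name and parameters below are assumptions:

```
# Illustrative collection sketch (assumed category and parameters;
# not the exact pipeline used to build the dataset).
import requests

API = "https://fr.wikipedia.org/w/api.php"

def quality_article_titles(limit=10):
    """List pages in the French 'quality articles' category."""
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": "Catégorie:Article de qualité",
        "cmlimit": limit,
        "format": "json",
    }
    data = requests.get(API, params=params).json()
    return [m["title"] for m in data["query"]["categorymembers"]]

def plain_text(title):
    """Fetch the plain-text extract of one article."""
    params = {
        "action": "query",
        "prop": "extracts",
        "explaintext": 1,
        "titles": title,
        "format": "json",
    }
    pages = requests.get(API, params=params).json()["query"]["pages"]
    return next(iter(pages.values())).get("extract", "")

for title in quality_article_titles(limit=3):
    print(title, "->", len(plain_text(title)), "characters")
```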

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

### Licensing Information

The dataset is available under the [Creative Commons Attribution-ShareAlike License](https://creativecommons.org/licenses/by-sa/4.0/).

### Citation Information

```
@inproceedings{simoulin:hal-03265900,
  TITLE = {{Un mod{\`e}le Transformer G{\'e}n{\'e}ratif Pr{\'e}-entrain{\'e} pour le \_\_\_\_\_\_ fran{\c c}ais}},
  AUTHOR = {Simoulin, Antoine and Crabb{\'e}, Benoit},
  URL = {https://hal.archives-ouvertes.fr/hal-03265900},
  BOOKTITLE = {{Traitement Automatique des Langues Naturelles}},
  ADDRESS = {Lille, France},
  EDITOR = {Denis, Pascal and Grabar, Natalia and Fraisse, Amel and Cardon, R{\'e}mi and Jacquemin, Bernard and Kergosien, Eric and Balvet, Antonio},
  PUBLISHER = {{ATALA}},
  PAGES = {246-255},
  YEAR = {2021},
  KEYWORDS = {fran{\c c}ais ; GPT ; G{\'e}n{\'e}ratif ; Transformer ; Pr{\'e}-entra{\^i}n{\'e}},
  PDF = {https://hal.archives-ouvertes.fr/hal-03265900/file/7.pdf},
  HAL_ID = {hal-03265900},
  HAL_VERSION = {v1},
}
```

### Contributions

Thanks to [@AntoineSimoulin](https://github.com/AntoineSimoulin) for adding this dataset.