---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
languages:
- uk-UA
licenses:
- cc-by-3.0
multilinguality:
- monolingual
- translation
pretty_name: Ukrainian Wikipedia edits
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- other
task_ids: []
---

# Ukrainian Wikipedia Edits

### Dataset summary

A corpus of sentence edits extracted from the Ukrainian Wikipedia revision history.

Edits were filtered by edit distance and sentence length. This makes them suitable for pre-training grammatical error correction (GEC) or spell-checking models.
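A filter of this kind can be sketched as follows. This is an illustrative stdlib re-implementation, not the actual filtering code; the thresholds (`min_len`, `max_len`, `min_ratio`) are assumptions chosen for the example.

```python
# Hypothetical sentence-pair filter in the spirit of the one described above.
# Thresholds are illustrative assumptions, not the dataset's actual values.
from difflib import SequenceMatcher

def keep_edit(src: str, tgt: str,
              min_len: int = 3, max_len: int = 120,
              min_ratio: float = 0.7) -> bool:
    """Keep a pair if both sentences have a plausible token length and
    the edit is small relative to sentence length (likely a correction,
    not a full rewrite)."""
    src_tokens, tgt_tokens = src.split(), tgt.split()
    if not (min_len <= len(src_tokens) <= max_len):
        return False
    if not (min_len <= len(tgt_tokens) <= max_len):
        return False
    if src == tgt:  # no change at all
        return False
    # Similarity ratio in [0, 1]; a high ratio means a small edit distance.
    ratio = SequenceMatcher(None, src, tgt).ratio()
    return ratio >= min_ratio

# A one-letter spelling fix passes; a complete rewrite does not.
print(keep_edit("Це було неправельно написане речення .",
                "Це було неправильно написане речення ."))  # True
print(keep_edit("Перше речення про одне .",
                "Зовсім інший текст про інше тут ."))        # False
```

Ranking by the similarity ratio rather than raw edit distance keeps the length normalization simple: long sentences are allowed proportionally larger edits.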

### Supported Tasks and Leaderboards

* Ukrainian grammatical error correction (GEC) - see [UA-GEC](https://github.com/grammarly/ua-gec)
* Ukrainian spelling correction

### Languages

Ukrainian

## Dataset Structure

### Data Fields

* `src` - sentence before the edit
* `tgt` - sentence after the edit

### Data Splits

* `full/train` contains all the data
* `tiny/train` contains a sample of 5,000 examples

## Dataset Creation

The latest full Ukrainian Wikipedia dump available as of 2022-04-30 was used.

It was processed with the [wikiedits](https://github.com/snukky/wikiedits) tool and custom scripts.

### Source Data

#### Initial Data Collection and Normalization

Wikipedia

#### Who are the source language producers?

Wikipedia writers

### Annotations

#### Annotation process

Annotations were inferred by comparing two subsequent page revisions.
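The comparison step can be sketched as below. This is a minimal illustration of aligning two revisions of the same page, not the actual wikiedits pipeline; revisions are assumed to be pre-split into sentences.

```python
# Illustrative sketch: pair up changed sentences between two subsequent
# revisions of a page. Not the actual extraction code.
from difflib import SequenceMatcher

def sentence_pairs(old_rev: list, new_rev: list) -> list:
    """Align two revisions sentence-by-sentence and return the pairs
    where a sentence was replaced (candidate src/tgt edits)."""
    pairs = []
    matcher = SequenceMatcher(None, old_rev, new_rev)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        # 'replace' blocks with the same number of sentences on both
        # sides yield clean one-to-one src/tgt candidates.
        if tag == "replace" and (i2 - i1) == (j2 - j1):
            pairs.extend(zip(old_rev[i1:i2], new_rev[j1:j2]))
    return pairs

old = ["Перше речення .", "Друге речення з помилкую .", "Третє речення ."]
new = ["Перше речення .", "Друге речення з помилкою .", "Третє речення ."]
print(sentence_pairs(old, new))
# [('Друге речення з помилкую .', 'Друге речення з помилкою .')]
```

Insertions, deletions, and many-to-many replacements are skipped here; only one-to-one substitutions produce candidate pairs, which are then subject to the edit-distance and length filters described in the summary.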

#### Who are the annotators?

People who edit Wikipedia pages.

### Personal and Sensitive Information

None.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

The data is noisy. In addition to GEC and spelling edits, it contains a sizable share of factual changes and vandalism.

More task-specific filters could help.

## Additional Information

### Dataset Curators

[Oleksiy Syvokon](https://github.com/asivokon)

### Licensing Information

CC-BY-3.0

### Citation Information

```
@inproceedings{wiked2014,
    author    = {Roman Grundkiewicz and Marcin Junczys-Dowmunt},
    title     = {The WikEd Error Corpus: A Corpus of Corrective Wikipedia Edits and its Application to Grammatical Error Correction},
    booktitle = {Advances in Natural Language Processing -- Lecture Notes in Computer Science},
    editor    = {Adam Przepiórkowski and Maciej Ogrodniczuk},
    publisher = {Springer},
    year      = {2014},
    volume    = {8686},
    pages     = {478--490},
    url       = {http://emjotde.github.io/publications/pdf/mjd.poltal2014.draft.pdf}
}
```

### Contributions

[@snukky](https://github.com/snukky) created tools for dataset processing.

[@asivokon](https://github.com/asivokon) generated this dataset.