NikitaMartynov committed on
Commit
5f80e1a
1 Parent(s): 95ca45b

first commit

README.md CHANGED
@@ -1,49 +1,258 @@
  ---
- task_categories:
- - text-generation
  language:
  - ru
  size_categories:
- - 1K<n<10K
  ---

- # Spellcheck Benchmark

- ## Dataset Description

- Spellcheck Benchmark is a collection of datasets dedicated to spelling correction problem for Russian language.
-
- **GitHub** - <link>

- **Paper** - <link>

- ## Dataset Summary

  Spellcheck Benchmark includes four datasets, each of which consists of pairs of sentences in Russian language.
  Each pair embodies sentence, which may contain spelling errors, and its corresponding correction.
- Datasets were gathered from various sources and domains including social networks, internet blogs, github commits, medical anamnesis, literature, news, reviews and more.

- All datasets were passed through two-stage manual labeling pipeline. The correction of a sentence is defined by an agreement of at least two human annotators.
- Manual labeling scheme accounts for jargonisms, collocations and common language, hence in some cases it encourages annotators not to amend a word
- in favor of preserving style of a text.

- ## Supported Tasks and Leaderboards

- - Automatic spelling correction

  ## Dataset Structure

  ### Data Instances

- ...

  ### Data Splits

- The benchmark consists of the following splits that reflect corresponding dataset:

- - **MultidomainGoldSet**: collected and annotated by SalutDevices Team;
- - **RUSpellRU**: by Sorokin et al.(2016);
- - **MedSpellCheck**: ...;
- - **GithubTypos**: ...;

- You can load dataset of your interest by passing # TODO

  ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - crowdsourced
  language:
  - ru
+ license:
+ - apache-2.0
+ multilinguality:
+ - monolingual
  size_categories:
+ - 1K<n<10K
+ task_categories:
+ - text-generation
+ task_ids:
+ - automatic-spelling-correction
+ pretty_name: Russian Spellcheck Benchmark
+ language_bcp47:
+ - ru-RU
+ tags:
+ - spellcheck
+ - russian
  ---

+ # Dataset Card for Russian Spellcheck Benchmark

+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Annotations](#annotations)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+ - [Contributions](#contributions)

+ ## Dataset Description

+ - **Repository:** # TODO: insert link to SpellKit may be?
+ - **Paper:** # TODO: insert paper to Dialog / EMNLP paper
+ - **Point of Contact:** nikita.martynov.98@list.ru

+ ### Dataset Summary

  Spellcheck Benchmark includes four datasets, each of which consists of pairs of sentences in the Russian language.
  Each pair consists of a sentence that may contain spelling errors and its corresponding correction.
+ The datasets were gathered from various sources and domains, including social networks, internet blogs, GitHub commits, medical anamnesis, literature, news, reviews and more.
+
+ All datasets were passed through a two-stage manual labeling pipeline.
+ The correction of a sentence is defined by the agreement of at least two human annotators.
+ The labeling scheme accounts for jargon, collocations and colloquial language, so in some cases it encourages
+ annotators not to amend a word in order to preserve the original style of the text.
+
+ ### Supported Tasks and Leaderboards
+
+ - **Task:** automatic spelling correction.
+ - **Metrics:** the evaluation metrics described in https://www.dialog-21.ru/media/3427/sorokinaaetal.pdf; a simplified sanity-check metric is sketched below.
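+
+ A minimal, illustrative check (not the official metric from the paper above) is sentence-level exact match between a model's outputs and the gold corrections; the variable names below are hypothetical:
+
+ ```python
+ from typing import List
+
+ def exact_match(predictions: List[str], corrections: List[str]) -> float:
+     """Share of sentences whose predicted correction equals the gold correction."""
+     assert len(predictions) == len(corrections)
+     return sum(p.strip() == c.strip() for p, c in zip(predictions, corrections)) / len(corrections)
+
+ # score = exact_match(model_outputs, [row["correction"] for row in test_rows])
+ ```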
 
+ ### Languages
+
+ Russian.

  ## Dataset Structure

  ### Data Instances

+ #### RUSpellRU
+
+ - **Size of downloaded dataset files:** # TODO
+ - **Size of the generated dataset:** # TODO
+ - **Total amount of disk used:** # TODO
+
+ An example of "train" / "test" looks as follows:
+ ```
+ {
+     "source": "очень классная тетка ктобы что не говорил.",
+     "correction": "очень классная тетка кто бы что ни говорил",
+ }
+ ```
+
+ #### MultidomainGold
+
+ - **Size of downloaded dataset files:** # TODO
+ - **Size of the generated dataset:** # TODO
+ - **Total amount of disk used:** # TODO
+
+ An example of "test" looks as follows:
+ ```
+ {
+     "source": "Ну что могу сказать... Я заказала 2 вязанных платья: за 1000 руб (у др продавца) и это ща 1200. Это платье- голимая синтетика (в том платье в составе была шерсть). Это платье как очень плохая резинка. На свои параметры (83-60-85) я заказала С . Пока одевала/снимала - оно в горловине растянулось. Помимо этого в этом платье я выгляжу ну очень тоской. У меня вес 43 кг на 165 см роста. Кстати, продавец отправлял платье очень долго. Я пыталась отказаться от заказа, но он постоянно отклонял мой запрос. В общем не советую.",
+     "correction": "Ну что могу сказать... Я заказала 2 вязаных платья: за 1000 руб (у др продавца) и это ща 1200. Это платье- голимая синтетика (в том платье в составе была шерсть). Это платье как очень плохая резинка. На свои параметры (83-60-85) я заказала С . Пока надевала/снимала - оно в горловине растянулось. Помимо этого в этом платье я выгляжу ну очень доской. У меня вес 43 кг на 165 см роста. Кстати, продавец отправлял платье очень долго. Я пыталась отказаться от заказа, но он постоянно отклонял мой запрос. В общем не советую.",
+     "domain": "reviews",
+ }
+ ```
+
+ #### MedSpellcheck
+
+ - **Size of downloaded dataset files:** # TODO
+ - **Size of the generated dataset:** # TODO
+ - **Total amount of disk used:** # TODO
+
+ An example of "test" looks as follows:
+ ```
+ {
+     # TO DO
+ }
+ ```
+
+ #### GitHubTypoCorpusRu
+
+ - **Size of downloaded dataset files:** # TODO
+ - **Size of the generated dataset:** # TODO
+ - **Total amount of disk used:** # TODO
+
+ An example of "test" looks as follows:
+ ```
+ {
+     "source": "## Запросы и ответа содержат заголовки",
+     "correction": "## Запросы и ответы содержат заголовки",
+ }
+ ```
+
+ ### Data Fields
+
+ #### RUSpellRU
+
+ - `source`: a `string` feature
+ - `correction`: a `string` feature
+
+ #### MultidomainGold
+
+ - `source`: a `string` feature
+ - `correction`: a `string` feature
+ - `domain`: a `string` feature
+
+ #### MedSpellcheck
+
+ - `source`: a `string` feature
+ - `correction`: a `string` feature
+
+ #### GitHubTypoCorpusRu
+
+ - `source`: a `string` feature
+ - `correction`: a `string` feature
+

  ### Data Splits

+ #### RUSpellRU
+
+ | |train|test|
+ |---|---:|---:|
+ |RUSpellRU|2000|2008|
+
+ #### MultidomainGold
+
+ | |train|test|
+ |---|---:|---:|
+ |web|386|756|
+ |news|361|245|
+ |social_media|430|200|
+ |reviews|584|586|
+ |subtitles|1810|1810|
+ |strategic_documents|-|250|
+ |literature|-|260|
+
+ #### MedSpellcheck
+
+ | |test|
+ |---|---:|
+ |MedSpellcheck|2000|
+
+ #### GitHubTypoCorpusRu
+
+ | |test|
+ |---|---:|
+ |GitHubTypoCorpusRu|1136|
+
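+ Each dataset of the benchmark is exposed as a separate configuration. A minimal loading sketch (the repository id below is a placeholder for this dataset's id on the Hugging Face Hub):
+
+ ```python
+ from datasets import load_dataset
+
+ # "RUSpellRU" can be swapped for "MultidomainGold", "MedSpellchecker" or "GitHubTypoCorpusRu";
+ # RUSpellRU and MultidomainGold also provide a "train" split.
+ data = load_dataset("<repo_id>", "RUSpellRU")  # <repo_id>: placeholder for this dataset's Hub id
+ print(data["test"][0])  # {"source": "...", "correction": "..."}
+ ```
+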
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ [More Information Needed]

 
data/GitHubTypoCorpusRu/test.json ADDED
The diff for this file is too large to render. See raw diff
 
data/MultidomainGold/literature/test.json ADDED
The diff for this file is too large to render. See raw diff
 
data/MultidomainGold/news/test.json ADDED
The diff for this file is too large to render. See raw diff
 
data/MultidomainGold/news/train.json ADDED
The diff for this file is too large to render. See raw diff
 
data/MultidomainGold/reviews/test.json ADDED
The diff for this file is too large to render. See raw diff
 
data/MultidomainGold/reviews/train.json ADDED
The diff for this file is too large to render. See raw diff
 
data/MultidomainGold/social_media/test.json ADDED
The diff for this file is too large to render. See raw diff
 
data/MultidomainGold/social_media/train.json ADDED
The diff for this file is too large to render. See raw diff
 
data/MultidomainGold/strategic_documents/test.json ADDED
The diff for this file is too large to render. See raw diff
 
data/MultidomainGold/subtitles/test.json ADDED
The diff for this file is too large to render. See raw diff
 
data/MultidomainGold/subtitles/train.json ADDED
The diff for this file is too large to render. See raw diff
 
data/MultidomainGold/test.json ADDED
The diff for this file is too large to render. See raw diff
 
data/MultidomainGold/train.json ADDED
The diff for this file is too large to render. See raw diff
 
data/MultidomainGold/web/test.json ADDED
The diff for this file is too large to render. See raw diff
 
data/MultidomainGold/web/train.json ADDED
The diff for this file is too large to render. See raw diff
 
data/RUSpellRU/test.json ADDED
The diff for this file is too large to render. See raw diff
 
data/RUSpellRU/train.json ADDED
The diff for this file is too large to render. See raw diff
 
russian_spellcheck_benchmark.py ADDED
@@ -0,0 +1,217 @@
+ # coding=utf-8
+ # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """The Russian Spellcheck Benchmark"""
+
+ import os
+ import json
+ import pandas as pd
+ from typing import List, Dict, Optional
+
+ import datasets
+
+
+ _RUSSIAN_SPELLCHECK_BENCHMARK_DESCRIPTION = """
+ Russian Spellcheck Benchmark is a new benchmark for spelling correction in the Russian language.
+ It includes four datasets, each of which consists of pairs of sentences in Russian.
+ Each pair consists of a sentence that may contain spelling errors and its corresponding correction.
+ Datasets were gathered from various sources and domains including social networks, internet blogs, GitHub commits,
+ medical anamnesis, literature, news, reviews and more.
+ """
+
+ _MULTIDOMAIN_GOLD_DESCRIPTION = """
+ MultidomainGold is a dataset of 3500 sentence pairs
+ dedicated to the problem of automatic spelling correction in the Russian language.
+ The dataset is gathered from seven different domains including news, Russian classic literature,
+ social media texts, open web, strategic documents, subtitles and reviews.
+ It has been passed through a two-stage manual labeling process with native speakers as annotators
+ to correct spelling violations while preserving the original style of the text.
+ """
+
+ _GITHUB_TYPO_CORPUS_RU_DESCRIPTION = """
+ GitHubTypoCorpusRu is a manually labeled part of the GitHub Typo Corpus https://arxiv.org/abs/1911.12893.
+ The sentences tagged "ru" have been extracted from the GitHub Typo Corpus
+ and passed through manual labeling to ensure that the corresponding corrections are right.
+ """
+
+ _RUSPELLRU_DESCRIPTION = """
+ RUSpellRU is the first benchmark on the task of automatic spelling correction for Russian,
+ introduced in https://www.dialog-21.ru/media/3427/sorokinaaetal.pdf.
+ Original sentences are drawn from the social media domain and labeled by
+ human annotators.
+ """
+
+ _MEDSPELLCHECK_DESCRIPTION = """
+ The dataset is taken from the GitHub repository of the eponymous project https://github.com/DmitryPogrebnoy/MedSpellChecker.
+ Original sentences are taken from anonymized medical anamnesis and passed through
+ a two-stage manual labeling pipeline.
+ """
+
+ _RUSSIAN_SPELLCHECK_BENCHMARK_CITATION = """ # TODO: add citation"""
+
+ _MULTIDOMAIN_GOLD_CITATION = """ # TODO: add citation from Dialog"""
+
+ _GITHUB_TYPO_CORPUS_RU_CITATION = """
+ @article{DBLP:journals/corr/abs-1911-12893,
+   author     = {Masato Hagiwara and
+                 Masato Mita},
+   title      = {GitHub Typo Corpus: {A} Large-Scale Multilingual Dataset of Misspellings
+                 and Grammatical Errors},
+   journal    = {CoRR},
+   volume     = {abs/1911.12893},
+   year       = {2019},
+   url        = {http://arxiv.org/abs/1911.12893},
+   eprinttype = {arXiv},
+   eprint     = {1911.12893},
+   timestamp  = {Wed, 08 Jan 2020 15:28:22 +0100},
+   biburl     = {https://dblp.org/rec/journals/corr/abs-1911-12893.bib},
+   bibsource  = {dblp computer science bibliography, https://dblp.org}
+ }
+ """
+
+ _RUSPELLRU_CITATION = """
+ @inproceedings{Shavrina2016SpellRuevalT,
+   title={SpellRuEval: the First Competition on Automatic Spelling Correction for Russian},
+   author={Tatiana Shavrina and others},
+   year={2016}
+ }
+ """
+
+ _LICENSE = "apache-2.0"
+
+
+ class RussianSpellcheckBenchmarkConfig(datasets.BuilderConfig):
+     """BuilderConfig for RussianSpellcheckBenchmark."""
+
+     def __init__(
+         self,
+         data_urls: Dict[str, str],
+         features: List[str],
+         citation: str,
+         **kwargs):
+         """BuilderConfig for RussianSpellcheckBenchmark.
+         Args:
+             features: *list[string]*, list of the features that will appear in the
+                 feature dict. Should not include "label".
+             data_urls: *dict[string]*, urls to download the data files from.
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(RussianSpellcheckBenchmarkConfig, self).__init__(version=datasets.Version("0.0.1"), **kwargs)
+         self.data_urls = data_urls
+         self.features = features
+         self.citation = citation
+
+
+ class RussianSpellcheckBenchmark(datasets.GeneratorBasedBuilder):
+     """Russian Spellcheck Benchmark."""
+
+     BUILDER_CONFIGS = [
+         RussianSpellcheckBenchmarkConfig(
+             name="GitHubTypoCorpusRu",
+             description=_GITHUB_TYPO_CORPUS_RU_DESCRIPTION,
+             data_urls={
+                 "test": "data/GitHubTypoCorpusRu/test.json",
+             },
+             features=["source", "correction"],
+             citation=_GITHUB_TYPO_CORPUS_RU_CITATION,
+         ),
+         RussianSpellcheckBenchmarkConfig(
+             name="MedSpellchecker",
+             description=_MEDSPELLCHECK_DESCRIPTION,
+             data_urls={
+                 "test": "data/MedSpellchecker/test.json",
+             },
+             features=["source", "correction"],
+             citation="",
+         ),
+         RussianSpellcheckBenchmarkConfig(
+             name="MultidomainGold",
+             description=_MULTIDOMAIN_GOLD_DESCRIPTION,
+             data_urls={
+                 "train": "data/MultidomainGold/train.json",
+                 "test": "data/MultidomainGold/test.json",
+             },
+             features=["source", "correction", "domain"],
+             citation=_MULTIDOMAIN_GOLD_CITATION,
+         ),
+         RussianSpellcheckBenchmarkConfig(
+             name="RUSpellRU",
+             description=_RUSPELLRU_DESCRIPTION,
+             data_urls={
+                 "test": "data/RUSpellRU/test.json",
+                 "train": "data/RUSpellRU/train.json",
+             },
+             features=["source", "correction"],
+             citation=_RUSPELLRU_CITATION,
+         ),
+     ]
+
+     def _info(self) -> datasets.DatasetInfo:
+         features = {
+             "source": datasets.Value("string"),
+             "correction": datasets.Value("string"),
+         }
+         if self.config.name == "MultidomainGold":
+             features["domain"] = datasets.Value("string")
+
+         return datasets.DatasetInfo(
+             features=datasets.Features(features),
+             description=_RUSSIAN_SPELLCHECK_BENCHMARK_DESCRIPTION + self.config.description,
+             license=_LICENSE,
+             citation=self.config.citation + "\n" + _RUSSIAN_SPELLCHECK_BENCHMARK_CITATION,
+         )
+
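+     # GitHubTypoCorpusRu and MedSpellchecker ship only a test split,
+     # while MultidomainGold and RUSpellRU provide both train and test splits.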
+     def _split_generators(
+         self, dl_manager: datasets.DownloadManager
+     ) -> List[datasets.SplitGenerator]:
+         urls_to_download = self.config.data_urls
+         downloaded_files = dl_manager.download_and_extract(urls_to_download)
+         if self.config.name == "GitHubTypoCorpusRu" or \
+                 self.config.name == "MedSpellchecker":
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TEST,
+                     gen_kwargs={
+                         "data_file": downloaded_files["test"],
+                         "split": datasets.Split.TEST,
+                     },
+                 )
+             ]
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "data_file": downloaded_files["train"],
+                     "split": datasets.Split.TRAIN,
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "data_file": downloaded_files["test"],
+                     "split": datasets.Split.TEST,
+                 },
+             )
+         ]
+
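+     # Data files are in JSON Lines format: one JSON object per line,
+     # holding the feature keys declared for the config (e.g. "source", "correction").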
+     def _generate_examples(self, data_file, split):
+         with open(data_file, encoding="utf-8") as f:
+             key = 0
+             for line in f:
+                 row = json.loads(line)
+                 example = {feature: row[feature] for feature in self.config.features}
+                 yield key, example
+                 key += 1