---
{Juan Alberto López Cavallotti, Jan 6, 2023}
---

# Dataset Card for Multilingual Grammar Error Correction

## Dataset Description

- **Homepage:** https://juancavallotti.com
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Juan Alberto López Cavallotti

### Dataset Summary

This dataset can be used to train a transformer model (we used T5) to correct grammar errors in simple sentences written in English, Spanish, French, or German.
It was developed as a component of the [Squidgies](https://squidgies.app/) platform.

### Supported Tasks and Leaderboards

* **Grammar Error Correction:** by prepending the prefix *fix grammar:* to the prompt (see the usage sketch below).
* **Language Detection:** by prepending the prefix *language:* to the prompt.
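
A minimal usage sketch with the Hugging Face `transformers` library, assuming a T5 checkpoint fine-tuned on this dataset; the model ID below is a placeholder, not a published checkpoint:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Placeholder ID: substitute a T5 checkpoint fine-tuned on this dataset.
model_name = "your-org/t5-multilingual-gec"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

def run_task(prefix: str, sentence: str) -> str:
    # Both tasks share the same seq2seq interface; only the prefix changes.
    inputs = tokenizer(f"{prefix} {sentence}", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(run_task("fix grammar:", "He go to school every days."))
print(run_task("language:", "Je mange une pomme."))
```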

### Languages

* English
* Spanish
* French
* German

## Dataset Structure

### Data Instances

The dataset contains the following number of instances per language:
* German: 32,282 sentences
* English: 51,393 sentences
* Spanish: 67,672 sentences
* French: 67,157 sentences

### Data Fields

* `lang`: The language of the sentence.
* `sentence`: The original sentence.
* `modified`: The corrupted sentence.
* `transformation`: The primary transformation used by the synthetic data generator.
* `sec_transformation`: The secondary transformation (if any) used by the synthetic data generator.
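
A quick way to inspect these fields with the `datasets` library; the repository ID below is assumed from this card and may need adjusting:

```python
from datasets import load_dataset

# Assumed repository ID for this dataset; adjust if it differs.
ds = load_dataset("juancavallotti/multilingual-gec", split="train")

# Each record pairs a corrupted sentence with its original form.
example = ds[0]
print(example["lang"], "|", example["modified"], "->", example["sentence"])
print(example["transformation"], example["sec_transformation"])
```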

### Data Splits

* `train`: There isn't a specific split defined. I recommend holding out 1,000 sentences sampled randomly from each language and evaluating with the SacreBLEU metric (a sketch follows below).
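
A minimal sketch of that evaluation recipe, assuming the same repository ID as above and the `evaluate` library; the identity "predictions" are placeholders for real model output:

```python
import evaluate
from datasets import load_dataset

# Assumed repository ID; adjust if it differs.
ds = load_dataset("juancavallotti/multilingual-gec", split="train").shuffle(seed=42)

# Hold out up to 1,000 sentences per language for evaluation.
counts = {}
eval_indices = []
for i, lang in enumerate(ds["lang"]):
    if counts.get(lang, 0) < 1000:
        counts[lang] = counts.get(lang, 0) + 1
        eval_indices.append(i)
eval_set = ds.select(eval_indices)

sacrebleu = evaluate.load("sacrebleu")
# Placeholder predictions: a real run would decode these from your model.
predictions = eval_set["modified"]
references = [[s] for s in eval_set["sentence"]]
print(sacrebleu.compute(predictions=predictions, references=references)["score"])
```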

## Dataset Creation

### Curation Rationale

This dataset was generated synthetically by a script, drawing on information about common grammar errors gathered from around the internet.
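
The generation script itself is not published here; purely as a hypothetical illustration, a rule-based corruptor of this kind might look like the following (the word list and transformation label are invented for the example):

```python
import random

# Hypothetical sketch: one toy English corruption rule of the kind a
# synthetic generator might apply. The real script is not published here.
AGREEMENT_SWAPS = {"goes": "go", "has": "have", "does": "do"}

def corrupt(sentence: str) -> tuple[str, str]:
    """Return (modified_sentence, transformation_label)."""
    words = sentence.split()
    candidates = [i for i, w in enumerate(words) if w in AGREEMENT_SWAPS]
    if not candidates:
        return sentence, "none"
    i = random.choice(candidates)
    words[i] = AGREEMENT_SWAPS[words[i]]
    return " ".join(words), "subject-verb-agreement"

print(corrupt("She goes to the market."))
# ('She go to the market.', 'subject-verb-agreement')
```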

### Source Data

#### Initial Data Collection and Normalization

The source grammatical sentences come from various open-source datasets, such as Tatoeba.

#### Who are the source language producers?

* Juan Alberto López Cavallotti

### Annotations

#### Annotation process

The annotation is automatic and is produced by the generation script.

#### Who are the annotators?

* Data generation script by Juan Alberto López Cavallotti

### Other Known Limitations

The dataset doesn't cover every possible grammar error, but it serves as a starting point that produces fair results.

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

This dataset is distributed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).

### Citation Information

Please cite this original dataset and its author, **Juan Alberto López Cavallotti**.

### Contributions

* Juan Alberto López Cavallotti