Update README.md
README.md
CHANGED
@@ -1,14 +1,29 @@
-Dataset Summary
-
-c4_200m is a collection of 185 million sentence pairs generated from the cleaned English dataset from C4. This dataset can be used in grammatical error correction (GEC) tasks.
-
-As discussed before, this dataset contains 100k train and 25k test sentence pairs. Each article has these two attributes: input and output. Here is a sample of dataset:
-
-{
-  "input": "Bitcoin is for $7,094 this morning, which CoinDesk says."
-  "output": "Bitcoin goes for $7,094 this morning, according to CoinDesk."
-}
+---
+language:
+- en
+source_datasets:
+- allenai/c4
+task_categories:
+- text-generation
+pretty_name: C4 200M Grammatical Error Correction Dataset
+tags:
+- grammatical-error-correction
+---
+# C4 200M
+# Dataset Summary
+
+C4 200M Sample Dataset adopted from https://huggingface.co/datasets/liweili/c4_200m
+
+C4_200m is a collection of 185 million sentence pairs generated from the cleaned English dataset from C4. This dataset can be used in grammatical error correction (GEC) tasks.
+
+The corruption edits and scripts used to synthesize this dataset are referenced from: [C4_200M Synthetic Dataset](https://github.com/google-research-datasets/C4_200M-synthetic-dataset-for-grammatical-error-correction)
+
+# Description
+As discussed before, this dataset contains 185 million sentence pairs. Each example has these two attributes: `input` and `output`. Here is a sample of the dataset:
+```
+{
+  "input": "Bitcoin is for $7,094 this morning, which CoinDesk says.",
+  "output": "Bitcoin goes for $7,094 this morning, according to CoinDesk."
+}
+```
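The `input`/`output` pair above maps directly onto a sequence-to-sequence GEC setup: the ungrammatical sentence is the source and the corrected sentence is the target. A minimal sketch of preparing one record for training, assuming only the field names shown in the sample (the `to_gec_pair` helper and the `grammar:` task prefix are illustrative, not part of the dataset):

```python
# One record in the format shown on the dataset card; the field names
# ("input", "output") come from the sample above. The helper and the
# "grammar:" prefix below are hypothetical, for illustration only.
sample = {
    "input": "Bitcoin is for $7,094 this morning, which CoinDesk says.",
    "output": "Bitcoin goes for $7,094 this morning, according to CoinDesk.",
}

def to_gec_pair(record):
    """Turn a record into a (source, target) pair for seq2seq GEC training."""
    source = "grammar: " + record["input"]  # ungrammatical sentence with a task prefix
    target = record["output"]               # corrected sentence
    return source, target

source, target = to_gec_pair(sample)
print(source)
print(target)
```

The task prefix follows the common T5-style convention of prepending a short task tag to the source; other seq2seq models can use the raw `input` string directly.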