waboucay committed on
Commit
2d98a42
1 Parent(s): 552cb9e

Add loader file + model card

Files changed (2)
  1. README.md +177 -0
  2. turk_corpus.py +120 -0
README.md ADDED
@@ -0,0 +1,177 @@
+ ---
+ language:
+ - en
+ task_categories:
+ - text2text-generation
+ ---
+
+ # Turk Corpus
+
+ <!-- Provide a quick summary of the dataset. -->
+
+ Hugging Face implementation of the Turk corpus for sentence simplification, gathered by Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen and Chris Callison-Burch.
+
+ /!\ I am not one of the creators of the dataset; I just needed an HF version of it and uploaded it. I encourage you to read the paper introducing the dataset: [Optimizing Statistical Machine Translation for Text Simplification](https://aclanthology.org/Q16-1029/) (2016).
+
+ <!-- ## Dataset Details
+
+ ### Dataset Description -->
+
+ <!-- Provide a longer summary of what this dataset is. -->
+
+
+
+ <!-- - **Curated by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+
+ ### Dataset Sources [optional] -->
+
+ <!-- Provide the basic links for the dataset. -->
+
+ <!-- - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed] -->
+
+ ## Uses
+
+ This dataset can be used to evaluate sentence simplification models.
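+
+ A minimal loading sketch (an illustration, not part of the original card; it assumes the `datasets` library, and recent versions require `trust_remote_code=True` for datasets that ship a loading script):
+
+ ```
+ from datasets import load_dataset
+
+ # The loader exposes a "validation" split (the tune set) and a "test" split.
+ ds = load_dataset("waboucay/turk_corpus", trust_remote_code=True)
+
+ example = ds["test"][0]
+ print(example["complex"])      # original (complex) sentence
+ print(len(example["simple"]))  # number of reference simplifications (the 8 Turker rewrites)
+ ```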
+
+ <!-- ### Direct Use -->
+
+ <!-- This section describes suitable use cases for the dataset. -->
+
+ <!-- [More Information Needed]
+
+ ### Out-of-Scope Use -->
+
+ <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
+
+ <!-- [More Information Needed] -->
+
+ ## Dataset Structure
+
+ <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
+
+ - **Size of the generated dataset:** 2.4 MB
+
+ An example from the 'test' split looks as follows.
+ ```
+ {
+   'complex': 'One side of the armed conflicts is composed mainly of the Sudanese military and the Janjaweed , a Sudanese militia group recruited mostly from the Afro-Arab Abbala tribes of the northern Rizeigat region in Sudan .',
+   'simple': [
+     'One side of the armed conflicts is made of Sudanese military and the Janjaweed , a Sudanese militia recruited from the Afro-Arab Abbala tribes of the northern Rizeigat region in Sudan .',
+     'One side of the armed conflicts is composed mainly of the Sudanese military and the Janjaweed, a Sudanese militia group recruited mostly from the Afro-Arab Abbala tribes of the northern Rizeigat regime in Sudan.',
+     'One side of the armed conflicts is made up mostly of the Sudanese military and the Janjaweed, a Sudanese militia group whose recruits mostly come from the Afro-Arab Abbala tribes from the northern Rizeigat region in Sudan.',
+     'One side of the armed conflicts is composed mainly of the Sudanese military and the Janjaweed , a Sudanese militia group recruited mostly from the Afro-Arab Abbala tribes in Sudan .',
+     'One side of the armed conflicts is composed mainly of the Sudanese military and the Janjaweed , a Sudanese militia group recruited mostly from the Afro-Arab Abbala tribes of the northern Rizeigat region in Sudan .',
+     'One side of the armed conflicts consist of the Sudanese military and the Sudanese militia group Janjaweed.',
+     'The Sudanese military and the Janjaweed make up one of the armed conflicts, mostly from the Afro-Arab Abbal tribes in Sudan.',
+     'One side of the armed conflicts is mainly Sudanese military and the Janjaweed, which recruited from the Afro-Arab Abbala tribes.'
+   ]
+ }
+ ```
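+
+ Since every complex sentence is paired with multiple reference simplifications, the corpus lends itself to multi-reference metrics such as SARI. A hedged sketch (assuming the `evaluate` library and its `sari` metric; the predictions below are placeholders for a real system's outputs):
+
+ ```
+ import evaluate
+ from datasets import load_dataset
+
+ ds = load_dataset("waboucay/turk_corpus", trust_remote_code=True)
+ sari = evaluate.load("sari")
+
+ sources = [ex["complex"] for ex in ds["test"]]
+ references = [ex["simple"] for ex in ds["test"]]  # list of reference lists
+ predictions = sources  # placeholder: copies the input; replace with model outputs
+
+ print(sari.compute(sources=sources, predictions=predictions, references=references))
+ ```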
+
+ <!-- ## Dataset Creation
+
+ ### Curation Rationale
+
+ <!-- Motivation for the creation of this dataset. -->
+
+ <!-- [More Information Needed]
+
+ ### Source Data
+
+ <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
+
+ <!-- #### Data Collection and Processing
+
+ <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
+
+ <!-- [More Information Needed]
+
+ #### Who are the source data producers?
+
+ <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
+
+ <!-- [More Information Needed]
+
+ ### Annotations [optional]
+
+ <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
+
+ <!-- #### Annotation process
+
+ <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
+
+ <!-- [More Information Needed]
+
+ #### Who are the annotators?
+
+ <!-- This section describes the people or systems who created the annotations. -->
+
+ <!-- [More Information Needed]
+
+ #### Personal and Sensitive Information
+
+ <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
+
+ <!-- [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ <!-- [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ <!-- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. -->
+
+ ## Citation
+
+ <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ ```
+ @article{xu-etal-2016-optimizing,
+     title = "Optimizing Statistical Machine Translation for Text Simplification",
+     author = "Xu, Wei and Napoles, Courtney and Pavlick, Ellie and Chen, Quanze and Callison-Burch, Chris",
+     editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
+     journal = "Transactions of the Association for Computational Linguistics",
+     volume = "4",
+     year = "2016",
+     address = "Cambridge, MA",
+     publisher = "MIT Press",
+     url = "https://aclanthology.org/Q16-1029",
+     doi = "10.1162/tacl_a_00107",
+     pages = "401--415",
+ }
+ ```
+
+ **ACL:**
+
+ Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing Statistical Machine Translation for Text Simplification. Transactions of the Association for Computational Linguistics, 4:401–415.
+
+ <!-- ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
+
+ <!-- [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Dataset Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Dataset Card Contact
+
+ [More Information Needed] -->
turk_corpus.py ADDED
@@ -0,0 +1,120 @@
+ # Lint as: python3
+ import csv
+ import json
+ import os
+
+ import datasets
+
+
+ logger = datasets.logging.get_logger(__name__)
+
+
+ _CITATION = """
+ @article{Xu-EtAl:2016:TACL,
+   author  = {Wei Xu and Courtney Napoles and Ellie Pavlick and Quanze Chen and Chris Callison-Burch},
+   title   = {Optimizing Statistical Machine Translation for Text Simplification},
+   journal = {Transactions of the Association for Computational Linguistics},
+   volume  = {4},
+   year    = {2016},
+   url     = {https://cocoxu.github.io/publications/tacl2016-smt-simplification.pdf},
+   pages   = {401--415}
+ }
+ """
+
+ _DESCRIPTION = """Corpus of sentences gathered from Wikipedia and simplifications proposed by Amazon MTurk workers.
+
+ Data gathered by Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen and Chris Callison-Burch."""
+
+ _URLS = {
+     "tune": "https://huggingface.co/datasets/waboucay/turk_corpus/raw/main/tune.8turkers.organized.tsv",
+     "test": "https://huggingface.co/datasets/waboucay/turk_corpus/raw/main/test.8turkers.organized.tsv",
+ }
+ _TUNE_FILE = "tune.json"
+ _TEST_FILE = "test.json"
+
+
+ class TurkCorpusConfig(datasets.BuilderConfig):
+     """BuilderConfig for the Turk Corpus dataset."""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for the Turk Corpus dataset.
+
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(TurkCorpusConfig, self).__init__(**kwargs)
+
+
+ class TurkCorpus(datasets.GeneratorBasedBuilder):
+     VERSION = datasets.Version("1.0.0", "")
+     BUILDER_CONFIG_CLASS = TurkCorpusConfig
+     BUILDER_CONFIGS = [
+         TurkCorpusConfig(
+             name="turk_corpus",
+             version=datasets.Version("1.0.0", ""),
+             description=_DESCRIPTION,
+         )
+     ]
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "complex": datasets.Value("string"),
+                 "simple": datasets.Sequence(datasets.Value("string")),
+             }
+         )
+
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             citation=_CITATION,
+             features=features,
+             supervised_keys=None,
+             homepage="https://github.com/cocoxu/simplification/tree/master",
+         )
+
+     def _split_generators(self, dl_manager):
+         dl_files = dl_manager.download(_URLS)
+
+         # The intermediate JSON files are written alongside the downloaded TSV files in the cache.
+         tune_path = os.path.join(os.path.dirname(dl_files["test"]), _TUNE_FILE)
+         test_path = os.path.join(os.path.dirname(dl_files["test"]), _TEST_FILE)
+
+         tune_data_path = os.path.abspath(dl_files["tune"])
+         test_data_path = os.path.abspath(dl_files["test"])
+
+         with open(tune_data_path, encoding="utf-8") as tune_tsv, open(test_data_path, encoding="utf-8") as test_tsv, \
+                 open(tune_path, "w", encoding="utf-8") as tune_json, open(test_path, "w", encoding="utf-8") as test_json:
+
+             tune_reader = csv.reader(tune_tsv, delimiter="\t")
+             test_reader = csv.reader(test_tsv, delimiter="\t")
+
+             # Column 1 holds the complex sentence; the remaining columns hold the reference simplifications.
+             tune_examples = []
+             for line in tune_reader:
+                 tune_examples.append({"complex": line[1], "simple": line[2:]})
+             json.dump(tune_examples, tune_json)
+
+             test_examples = []
+             for line in test_reader:
+                 test_examples.append({"complex": line[1], "simple": line[2:]})
+             json.dump(test_examples, test_json)
+
+         data_files = {
+             "tune": tune_path,
+             "test": test_path,
+         }
+
+         return [
+             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": data_files["tune"]}),
+             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": data_files["test"]}),
+         ]
+
+     def _generate_examples(self, filepath):
+         """This function returns the examples in the raw (text) form."""
+
+         with open(filepath, encoding="utf-8") as f:
+             data = json.load(f)
+             for guid, obj in enumerate(data):
+                 yield guid, {
+                     "complex": obj["complex"],
+                     "simple": obj["simple"],
+                 }
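+
+
+ # A minimal local smoke test (illustrative only, not part of the loader):
+ #
+ #     from datasets import load_dataset
+ #     ds = load_dataset("./turk_corpus.py")   # or "waboucay/turk_corpus" once pushed
+ #     print(ds)                               # expect "validation" and "test" splits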