aaronmueller committed on
Commit
91a19aa
1 Parent(s): 2d273b1
Files changed (2)
  1. README.md +146 -0
  2. syntactic_transformations.py +203 -0
README.md ADDED
@@ -0,0 +1,146 @@
+ ---
+ annotations_creators:
+ - no-annotation
+ language_creators:
+ - found
+ languages:
+ - en
+ - de
+ licenses:
+ - mit
+ multilinguality:
+ - multilingual
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - original
+ task_categories:
+ - syntactic-evaluation
+ task_ids:
+ - syntactic-transformations
+ ---
+
+ # Dataset Card for syntactic_transformations
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [Needs More Information]
+ - **Repository:** https://github.com/sebschu/multilingual-transformations
+ - **Paper:** [Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models](https://aclanthology.org/2022.findings-acl.106/)
+ - **Leaderboard:** [Needs More Information]
+ - **Point of Contact:** [Aaron Mueller](mailto:amueller@jhu.edu)
+
+ ### Dataset Summary
+
+ This repository contains the syntactic transformation datasets used in [Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models](https://aclanthology.org/2022.findings-acl.106/). They cover English and German question formation and passivization, and also include training and evaluation data for zero-shot cross-lingual transfer.
+
+ ### Supported Tasks and Leaderboards
+
+ [Needs More Information]
+
+ ### Languages
+
+ English and German.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A typical data point consists of a source sequence ("src"), a target sequence ("tgt"), and a task prefix ("prefix"). The prefix indicates whether the source sequence should be kept the same in the target (the "decl:" prefix) or transformed into a question or passive (the "quest:" and "passiv:" prefixes, respectively). An example follows:
+
+ {"src": "the yak has entertained the walruses that have amused the newt.",
+  "tgt": "has the yak entertained the walruses that have amused the newt?",
+  "prefix": "quest: "}
+
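+ Examples can be inspected with the `datasets` library. A minimal sketch, assuming the loading script below is published under the hub id `aaronmueller/syntactic_transformations` (the id is an assumption; substitute the actual repository id):
+
+ ```python
+ from datasets import load_dataset
+
+ # "question-en" is one of the six configurations defined in the
+ # loading script; the hub id is an assumption.
+ ds = load_dataset("aaronmueller/syntactic_transformations", "question-en")
+ example = ds["train"][0]
+ # Each example is a flat dict of three strings; prepend the prefix to
+ # the source to form the model input.
+ print(example["prefix"] + example["src"], "->", example["tgt"])
+ ```
+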
+ ### Data Fields
+
+ - src: the original source sequence.
+ - tgt: the transformed target sequence.
+ - prefix: indicates which transformation to perform to map from the source to the target sequence.
+
+ ### Data Splits
+
+ The datasets are split into training, dev, test, and gen ("generalization") sets. The training sets are for fine-tuning the model. The dev and test sets are for evaluating model abilities on in-domain transformations. The generalization sets are for evaluating the inductive biases of the model.
+
+ NOTE: For the zero-shot cross-lingual transfer datasets, the generalization sets are split into in-domain and out-of-domain syntactic structures. For in-domain transformations, use "gen_rc_o" for question formation or "gen_pp_o" for passivization. For out-of-domain transformations, use "gen_rc_s" for question formation or "gen_pp_s" for passivization.
+
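+ In the loading script below, the single "gen" file surfaces as a `generalization` split, while the zero-shot configurations surface `generalization_s` and `generalization_o` splits built from the "gen_*_s" and "gen_*_o" files above. A short sketch of selecting them (the hub id is again an assumption):
+
+ ```python
+ from datasets import load_dataset
+
+ # Zero-shot configs expose two generalization splits; the hub id
+ # "aaronmueller/syntactic_transformations" is an assumption.
+ gen_o = load_dataset("aaronmueller/syntactic_transformations",
+                      "question-en_de", split="generalization_o")
+ gen_s = load_dataset("aaronmueller/syntactic_transformations",
+                      "question-en_de", split="generalization_s")
+ ```
+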
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [Needs More Information]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [Needs More Information]
+
+ #### Who are the source language producers?
+
+ [Needs More Information]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [Needs More Information]
+
+ #### Who are the annotators?
+
+ [Needs More Information]
+
+ ### Personal and Sensitive Information
+
+ [Needs More Information]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [Needs More Information]
+
+ ### Discussion of Biases
+
+ [Needs More Information]
+
+ ### Other Known Limitations
+
+ [Needs More Information]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [Needs More Information]
+
+ ### Licensing Information
+
+ [Needs More Information]
+
+ ### Citation Information
+
+ [Needs More Information]
syntactic_transformations.py ADDED
@@ -0,0 +1,203 @@
+ import datasets
+ import json
+
+ logger = datasets.logging.get_logger(__name__)
+
+ _CITATION = """\
+ @inproceedings{mueller-etal-2022-coloring,
+     title = "Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models",
+     author = "Mueller, Aaron  and
+       Frank, Robert  and
+       Linzen, Tal  and
+       Wang, Luheng  and
+       Schuster, Sebastian",
+     booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
+     month = may,
+     year = "2022",
+     address = "Dublin, Ireland",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2022.findings-acl.106",
+     doi = "10.18653/v1/2022.findings-acl.106",
+     pages = "1352--1368",
+ }
+ """
+
+ _DESCRIPTION = """\
+ This is the dataset used for Coloring the Blank Slate:
+ Pre-training Imparts a Hierarchical Inductive Bias to
+ Sequence-to-sequence Models.
+ """
+
+
+ class SyntacticTransformationsConfig(datasets.BuilderConfig):
+     def __init__(self, description, features, data_url, citation, url, **kwargs):
+         super().__init__(version=datasets.Version("1.18.3"), **kwargs)
+         self.description = description
+         self.text_features = features
+         self.citation = citation
+         self.data_url = data_url
+         self.url = url
+
+
+ class SyntacticTransformations(datasets.GeneratorBasedBuilder):
+     standard_features = datasets.Features(
+         {
+             "src": datasets.Value("string"),
+             "tgt": datasets.Value("string"),
+             "prefix": datasets.Value("string"),
+         }
+     )
+
+     BUILDER_CONFIGS = [
+         SyntacticTransformationsConfig(
+             name="passiv-en-nps",
+             description="English passivization transformations.",
+             features=standard_features,
+             data_url="https://raw.githubusercontent.com/sebschu/multilingual-transformations/main/data/passiv_en_nps/",
+             url="https://github.com/sebschu/multilingual-transformations/",
+             citation=_CITATION,
+         ),
+         SyntacticTransformationsConfig(
+             name="passiv-de-nps",
+             description="German passivization transformations.",
+             features=standard_features,
+             data_url="https://raw.githubusercontent.com/sebschu/multilingual-transformations/main/data/passiv_de_nps/",
+             url="https://github.com/sebschu/multilingual-transformations/",
+             citation=_CITATION,
+         ),
+         SyntacticTransformationsConfig(
+             name="question-en",
+             description="English question formation transformations.",
+             features=standard_features,
+             data_url="https://raw.githubusercontent.com/sebschu/multilingual-transformations/main/data/question_have-havent_en/",
+             url="https://github.com/sebschu/multilingual-transformations/",
+             citation=_CITATION,
+         ),
+         SyntacticTransformationsConfig(
+             name="question-de",
+             description="German question formation transformations.",
+             features=standard_features,
+             data_url="https://raw.githubusercontent.com/sebschu/multilingual-transformations/main/data/question_have-can_withquest_de/",
+             url="https://github.com/sebschu/multilingual-transformations/",
+             citation=_CITATION,
+         ),
+         SyntacticTransformationsConfig(
+             name="passiv-en_de-nps",
+             description="Zero-shot English-to-German passivization transformations.",
+             features=standard_features,
+             data_url="https://raw.githubusercontent.com/sebschu/multilingual-transformations/main/data/passiv_en-de_nps/",
+             url="https://github.com/sebschu/multilingual-transformations/",
+             citation=_CITATION,
+         ),
+         SyntacticTransformationsConfig(
+             name="question-en_de",
+             description="Zero-shot English-to-German question formation transformations.",
+             features=standard_features,
+             data_url="https://raw.githubusercontent.com/sebschu/multilingual-transformations/main/data/question_have-can_de/",
+             url="https://github.com/sebschu/multilingual-transformations/",
+             citation=_CITATION,
+         ),
+     ]
+
+     def _split_generators(self, dl_manager):
+         if self.config.name == "passiv-en-nps":
+             template = "passiv_en_nps.{}.json"
+             _URLS = {
+                 "train": self.config.data_url + template.format("train"),
+                 "dev": self.config.data_url + template.format("dev"),
+                 "test": self.config.data_url + template.format("test"),
+                 "gen": self.config.data_url + template.format("gen"),
+             }
+         elif self.config.name == "passiv-de-nps":
+             template = "passiv_de_nps.{}.json"
+             _URLS = {
+                 "train": self.config.data_url + template.format("train"),
+                 "dev": self.config.data_url + template.format("dev"),
+                 "test": self.config.data_url + template.format("test"),
+                 "gen": self.config.data_url + template.format("gen"),
+             }
+         elif self.config.name == "question-en":
+             template = "question_have.{}.json"
+             _URLS = {
+                 "train": self.config.data_url + template.format("train"),
+                 "dev": self.config.data_url + template.format("dev"),
+                 "test": self.config.data_url + template.format("test"),
+                 "gen": self.config.data_url + template.format("gen"),
+             }
+         elif self.config.name == "question-de":
+             template = "question_have_can.de.{}.json"
+             _URLS = {
+                 "train": self.config.data_url + template.format("train"),
+                 "dev": self.config.data_url + template.format("dev"),
+                 "test": self.config.data_url + template.format("test"),
+                 "gen": self.config.data_url + template.format("gen"),
+             }
+         elif self.config.name == "question-en_de":
+             # Zero-shot transfer: English training file, German dev/test/gen files.
+             template = "question_have_can.de.{}.json"
+             _URLS = {
+                 "train": self.config.data_url + "question_have_can.en-de.train.json",
+                 "dev": self.config.data_url + template.format("dev"),
+                 "test": self.config.data_url + template.format("test"),
+                 "gen_rc_s": self.config.data_url + template.format("gen_rc_s"),
+                 "gen_rc_o": self.config.data_url + template.format("gen_rc_o"),
+             }
+         elif self.config.name == "passiv-en_de-nps":
+             template = "passiv_de_nps.{}.json"
+             _URLS = {
+                 "train": self.config.data_url + "passiv_en-de_nps.train.json",
+                 "dev": self.config.data_url + template.format("dev"),
+                 "test": self.config.data_url + template.format("test"),
+                 "gen_pp_s": self.config.data_url + template.format("gen_pp_s"),
+                 "gen_pp_o": self.config.data_url + template.format("gen_pp_o"),
+             }
+
+         data_files = dl_manager.download(_URLS)
+
+         if "en_de" not in self.config.name:
+             return [
+                 datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": data_files["train"]}),
+                 datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": data_files["dev"]}),
+                 datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": data_files["test"]}),
+                 datasets.SplitGenerator(name=datasets.NamedSplit("generalization"), gen_kwargs={"filepath": data_files["gen"]}),
+             ]
+         else:
+             gen_s = "gen_pp_s" if "passiv" in self.config.name else "gen_rc_s"
+             gen_o = "gen_pp_o" if "passiv" in self.config.name else "gen_rc_o"
+             return [
+                 datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": data_files["train"]}),
+                 datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": data_files["dev"]}),
+                 datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": data_files["test"]}),
+                 datasets.SplitGenerator(name=datasets.NamedSplit("generalization_s"), gen_kwargs={"filepath": data_files[gen_s]}),
+                 datasets.SplitGenerator(name=datasets.NamedSplit("generalization_o"), gen_kwargs={"filepath": data_files[gen_o]}),
+             ]
+
+     def _info(self):
+         features = {text_feature: datasets.Value("string") for text_feature in self.config.text_features.keys()}
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(features),
+             homepage=self.config.url,
+             citation=_CITATION,
+         )
+
+     def _generate_examples(self, filepath):
+         """Yields the examples in raw (text) form."""
+         logger.info("generating examples from = %s", filepath)
+         with open(filepath, encoding="utf-8") as f:
+             for id_, line in enumerate(f):
+                 example = json.loads(line)
+                 # Each line of the JSON-lines file holds a "translation"
+                 # dict with the "src", "tgt", and "prefix" fields.
+                 src = example["translation"]["src"]
+                 tgt = example["translation"]["tgt"]
+                 prefix = example["translation"]["prefix"]
+
+                 yield id_, {
+                     "src": src,
+                     "tgt": tgt,
+                     "prefix": prefix,
+                 }