Sebastian Gehrmann committed on
Commit
fd57884
1 Parent(s): 0683d73

data card.

Files changed (1)
  1. README.md +464 -107
README.md CHANGED
@@ -1,45 +1,82 @@
1
  ---
2
- title: 'BiSECT'
3
- type: 'Split and Rephrase'
4
- motivation: 'Why is the dataset part of GEM?'
5
  ---
6
 
7
- ## Table of Contents
8
-
9
- [Leave this blank, we autogenerate this section and overwrite content]
10
 
11
  ## Dataset Description
12
 
13
- - **Homepage:** [https://github.com/mounicam/BiSECT/](https://github.com/mounicam/BiSECT/)
14
- - **Repository:** [https://github.com/mounicam/BiSECT/](https://github.com/mounicam/BiSECT/)
15
- - **Paper:** [https://aclanthology.org/2021.emnlp-main.500/](https://aclanthology.org/2021.emnlp-main.500/)
16
- - **Points of Contact:** [Joongwon Kim](mailto:jkim0118@seas.upenn.edu), [Mounica Maddela](mailto:mmaddela3@gatech.edu), [Reno Kriz](mailto:rkriz1@jh.edu)
17
 
18
- ### Dataset and Task Summary
19
- This dataset captures the ‘Split and Rephrase’ task, which involves taking long, complex sentences and splitting them into shorter, simpler, and potentially rephrased meaning-equivalent sentences.
20
 
21
- **BiSECT** was created via bilingual pivoting using subsets of the OPUS dataset ([Tiedemann and Nygaard, 2004](https://aclanthology.org/L04-1174/)). It spans multiple domains, from web crawl to government documents. The data released here is in English, but data for other European languages are also available upon request.
22
 
23
- Compared to previous resources for this task, the resulting dataset was found to contain examples with higher quality, as well as splits that require more significant modifications.
24
 
25
- ### Why is this dataset part of GEM?
 
26
 
27
- **BiSECT** is the largest available corpus for the Split and Rephrase task. In addition, it has been shown that **BiSECT** is of higher quality than previous Split and Rephrase corpora, contains a wider variety of splitting operations, and is also available in four languages.
 
28
 
29
- ### Languages
30
- **BiSECT** is available in English (en-US), French, Spanish, German.
31
 
32
- ## Meta Information
33
 
34
- ### Dataset Curators
35
 
36
- BiSECT was developed by researchers at the University of Pennsylvania and Georgia Institute of Technology. This work is supported in part by the NSF awards IIS-2055699, ODNI and IARPA via the BETTER program (contract 19051600004), and the DARPA KAIROS Program (contract FA8750-19-2-1004).
 
 
37
 
38
- ### Licensing Information
39
 
40
- The dataset is not licensed by itself, and the source Opus data consists solely of publicly available parallel corpora.
 
 
41
 
42
- ### Citation Information
43
  ```
44
  @inproceedings{kim-etal-2021-bisect,
45
  title = "{B}i{SECT}: Learning to Split and Rephrase Sentences with Bitexts",
@@ -57,39 +94,94 @@ The dataset is not licensed by itself, and the source Opus data consists solely
57
  pages = "6193--6209"
58
  }
59
  ```
60
- This work also evaluates on the HSplit-Wiki evaluation set, first introduced in the papers below.
61
- ```
62
- @article{Xu-EtAl:2016:TACL,
63
- author = {Wei Xu and Courtney Napoles and Ellie Pavlick and Quanze Chen and Chris Callison-Burch},
64
- title = {Optimizing Statistical Machine Translation for Text Simplification},
65
- journal = {Transactions of the Association for Computational Linguistics},
66
- volume = {4},
67
- year = {2016},
68
- pages = {401--415}
69
- },
70
- @inproceedings{sulem-etal-2018-bleu,
71
- title = "{BLEU} is Not Suitable for the Evaluation of Text Simplification",
72
- author = "Sulem, Elior and
73
- Abend, Omri and
74
- Rappoport, Ari",
75
- booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
76
- month = oct # "-" # nov,
77
- year = "2018",
78
- address = "Brussels, Belgium",
79
- publisher = "Association for Computational Linguistics",
80
- url = "https://aclanthology.org/D18-1081",
81
- doi = "10.18653/v1/D18-1081",
82
- pages = "738--744"
83
- }
84
- ```
85
 
86
- ### Leaderboard
87
- There is currently no leaderboard for this task.
88
 
89
- ## Dataset Structure
90
 
91
- ### Data Instances
92
- Example of an instance:
93
  ```
94
  {
95
  "gem_id": "bisect-train-0",
@@ -98,97 +190,362 @@ Example of an instance:
98
  }
99
  ```
100
 
101
- ### Data Fields
102
- The fields are the same across all splits.
103
- - `gem_id` - (string) a unique identifier for the instance
104
- - `source_sentence` - (string) sentence to be simplified
105
- - `target_sentence` - (string) simplified text that was split and rephrased
106
 
 
 
 
107
 
108
- ### Data Statistics
109
- |dataset |train |validation |test |
110
- |--------:|:-----:|:---------:|:---:|
111
- |BiSECT-en|928,440| 9,079|583 |
112
- |BiSECT-de|184,638| 864|735 |
113
- |BiSECT-es|282,944| 3,638|3,081|
114
- |BiSECT-fr|491,035| 2,400|1,036|
115
- |HSplit |-- |-- |359 |
116
- |Challenge Set|-- |-- |1,798|
117
 
118
- ## Dataset Creation
119
 
120
- ### Curation Rationale
121
 
122
- **BiSECT** was constructed to satisfy the need for a Split and Rephrase corpus that is both large-scale and high-quality. Most previous Split and Rephrase corpora ([HSplit-Wiki](https://www.aclweb.org/anthology/D18-1081), [Cont-Benchmark](https://www.aclweb.org/anthology/2020.emnlp-main.91), and [Wiki-Benchmark](https://www.aclweb.org/anthology/2020.emnlp-main.91)) were manually written at a small scale and focused on evaluation, while the one corpus of comparable size, [WikiSplit](https://www.aclweb.org/anthology/D18-1080), contains significant errors in around 25% of its pairs. This is because Wikipedia editors are often not only splitting a sentence but also simultaneously modifying it for other purposes, which changes the original meaning.
123
 
124
- ### Communicative Goal
125
126
  The goal of Split and Rephrase is to break down longer sentences into multiple shorter sentences, which has downstream applications for many NLP tasks, including machine translation and dependency parsing.
127
 
128
- ### Source Data
129
 
130
- #### Initial Data Collection and Normalization
 
 
131
 
132
- The construction of the **BiSECT** corpus relies on leveraging the sentence-level alignments from [*OPUS*](http://www.lrec-conf.org/proceedings/lrec2004/pdf/320.pdf), a collection of bilingual parallel corpora over many language pairs. Given a target language *A*, this work extracts all 1-2 and 2-1 sentence alignments from parallel corpora between *A* and a set of foreign languages ***B***.
133
 
134
- Next, the foreign sentences are translated into English using Google Translate's [Web API service](https://pypi.org/project/googletrans/) to obtain sentence alignments between a single long sentence $l$ and two corresponding split sentences $s= (s_1, s_2)$, both in the desired language.
135
 
136
- To remove noise, the authors remove pairs where $l$ contains a token with a punctuation after the first two and before the last two alphabetic characters, as well as where $l$ contains more than one unconnected component in its dependency tree, generated via [SpaCy](https://spacy.io).
137
 
138
- #### Who are the source language producers?
 
 
139
 
140
- Opus corpora are from a variety of sources. The **BiSECT** English training set contains pairs extracted from five datasets: *CCAligned*, parallel English-French documents from common crawl; *Europarl*, an English-French dataset from European Parliament; *10^9 FR-EN*, an English-French newswire corpus; *ParaCrawl*, a multilingual web crawl dataset; and *UN*, multilingual translated UN documents. The **BiSECT** English test set contains pairs extracted from two additional datasets: *EMEA*, an English-French parallel corpus made out of PDF documents from the European Medicines Agency; and *JRC-Acquis*, a multilingual collection of European Union legislative text. Details about the French, Spanish, and German versions can be found in the paper.
141
 
142
- ### Annotations
 
 
143
 
144
- #### Annotation process
145
 
146
- The training data was automatically extracted, so no annotators were needed. For the English test set, the authors manually selected 583 high-quality sentence splits from 1000 random source-target pairs from the *EMEA* and *JRC-Acquis* corpora.
 
 
147
 
148
- #### Who are the annotators?
149
 
150
- None.
 
 
151
 
152
- ### Personal and Sensitive Information
153
 
154
- Since this data is collected from [*OPUS*](http://www.lrec-conf.org/proceedings/lrec2004/pdf/320.pdf), all pairs are already in the public domain.
 
 
155
 
156
- ## Changes to the Original Dataset for GEM
157
 
158
- The original **BiSECT** training, validation, and test splits are maintained in each language to ensure a fair comparison. Note that the original **BiSECT** English test set was created by manually selecting 583 high-quality Split and Rephrase instances from 1000 random source-target pairs sampled from the *EMEA* and *JRC-Acquis* corpora from [*OPUS*](http://www.lrec-conf.org/proceedings/lrec2004/pdf/320.pdf).
 
 
159
 
160
- As the first English challenge set, we include the *HSPLIT-Wiki* test set, containing 359 pairs. Each complex sentence has four reference splits; to ensure replicability, we follow the BiSECT paper and present only the references from [HSplit2-full](https://github.com/eliorsulem/HSplit-corpus/blob/master/HSplit/HSplit2_full).
161
 
162
- ### Special Test Sets
163
 
164
- In addition to the two evaluation sets used in the original **BiSECT** paper, we also introduce a second English challenge set. For this, we initially consider all 7,293 pairs from the *EMEA* and *JRC-Acquis* corpora. From there, we classify each pair using the classification algorithm from Section 4.2 of the original **BiSECT** paper. The three classes are as follows:
165
 
166
- 1) **Direct Insertion**: when a long sentence *l* contains two independent clauses and requires only minor changes in order to make a fluent and meaning-preserving split *s*.
167
- 2) **Changes near Split**, when *l* contains one independent and one dependent clause, but modifications are restricted to the region where *l* is split.
168
- 3) **Changes across Sentences**, where major changes are required throughout *l* in order to create a fluent split *s*.
169
 
170
- We keep only pairs labeled as Type 3, and after filtering out pairs with significant length differences (signaling potential content addition/deletion), we present a second challenge set of 1,798 pairs.
171
 
172
- ## Considerations for Using the Data
173
 
174
- ### Social Impact of the Dataset
175
- Understanding long and complex sentences is challenging for both humans and NLP models. The **BiSECT** dataset helps facilitate more research on Split and Rephrase as a task within itself, as well as how it can benefit downstream NLP applications.
176
 
177
- ### Impact on Underserved Communities
178
- The data as provided in GEMv2 is in English, French, Spanish, and German, languages with abundant existing resources. However, the dataset creation process introduced in the original paper provides a framework for leveraging bilingual corpora from any language pair found within [*OPUS*](http://www.lrec-conf.org/proceedings/lrec2004/pdf/320.pdf).
179
 
180
  ### Discussion of Biases
181
 
182
- The *Opus* corpora used are from a limited set of relatively formal domains, so it is possible that high performance on the BiSECT test set may not transfer to more informal text.
183
 
184
- ### Other Known Limitations
185
 
186
- The creation of English **BiSECT** relies on translating non-English text back to English. While machine translation systems tend to perform well on high-resource languages, there is still a non-negligible chance that these systems make errors; through a manual evaluation of a subset of **BiSECT**, it was found that 15% of pairs contained significant errors, while an additional 22% contained minor adequacy/fluency errors. This problem is slightly exacerbated when creating German **BiSECT** (22% significant errors, 24% minor errors), and these numbers would likely grow if lower-resource languages were used.
187
 
188
- ## Getting started with in-depth research on the task
189
 
190
- The dataset can be downloaded from the [original repository](https://github.com/mounicam/BiSECT) by the authors.
 
 
191
 
192
- The [original **BiSECT** paper](https://aclanthology.org/2021.emnlp-main.500/) proposes several transformer-based models that can be used as baselines, which also compares against [Copy512](https://www.aclweb.org/anthology/P18-2114), an LSTM-based model and the previous state-of-the-art.
193
 
194
- The common metric used for automatic evaluation of Split and Rephrase, and sentence simplification more generally is [SARI](https://www.aclweb.org/anthology/Q15-1021). The **BiSECT** paper also evaluates using [BERTScore](https://openreview.net/forum?id=SkeHuCVFDr). Note that automatic evaluations tend to not correlate well with human judgments, so a human evaluation for quality is generally expected for publication. The original **BiSECT** paper provides templates for collecting quality annotations from Amazon Mechanical Turk.
 
1
  ---
2
+ annotations_creators:
3
+ - none
4
+ language_creators:
5
+ - unknown
6
+ languages:
7
+ - unknown
8
+ licenses:
9
+ - other
10
+ multilinguality:
11
+ - unknown
12
+ pretty_name: BiSECT
13
+ size_categories:
14
+ - unknown
15
+ source_datasets:
16
+ - original
17
+ task_categories:
18
+ - simplification
19
+ task_ids:
20
+ - unknown
21
  ---
22
 
23
+ # Dataset Card for GEM/BiSECT
 
 
24
 
25
  ## Dataset Description
26
 
27
+ - **Homepage:** https://github.com/mounicam/BiSECT
28
+ - **Repository:** https://github.com/mounicam/BiSECT/tree/main/bisect
29
+ - **Paper:** https://aclanthology.org/2021.emnlp-main.500/
30
+ - **Leaderboard:** N/A
31
+ - **Point of Contact:** Joongwon Kim, Mounica Maddela, Reno Kriz
32
+
33
+ ### Link to Main Data Card
34
+
35
+ You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/BiSECT).
36
 
37
+ ### Dataset Summary
 
38
 
39
+ This dataset is composed of 1 million complex sentences, with the task of splitting and simplifying them while retaining the full meaning. Compared to other simplification corpora, BiSECT requires more significant edits. BiSECT offers splits in English, German, French, and Spanish.
40
 
41
+ You can load the dataset via:
42
+ ```
43
+ import datasets
44
+ data = datasets.load_dataset('GEM/BiSECT')
45
+ ```
46
+ The data loader can be found [here](https://huggingface.co/datasets/GEM/BiSECT).
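To make the structure concrete, here is a minimal illustrative sketch (not from the original card) that loads the dataset as above and prints the fields of one training example; the field and split names follow the Data Fields and Data Splits sections below.

```
# Minimal sketch: load the dataset as shown above and inspect one training pair.
from datasets import load_dataset

data = load_dataset("GEM/BiSECT")

example = data["train"][0]
print(example["gem_id"])           # e.g. "bisect-train-0"
print(example["source_sentence"])  # long, complex input sentence
print(example["target_sentence"])  # split-and-rephrased output
```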
47
 
48
+ #### website
49
+ [Link](https://github.com/mounicam/BiSECT)
50
 
51
+ #### paper
52
+ [Link](https://aclanthology.org/2021.emnlp-main.500/)
53
 
54
+ ## Dataset Overview
 
55
 
56
+ ### Where to find the Data and its Documentation
57
 
58
+ #### Webpage
59
 
60
+ <!-- info: What is the webpage for the dataset (if it exists)? -->
61
+ <!-- scope: telescope -->
62
+ [Link](https://github.com/mounicam/BiSECT)
63
 
64
+ #### Download
65
 
66
+ <!-- info: What is the link to where the original dataset is hosted? -->
67
+ <!-- scope: telescope -->
68
+ [Link](https://github.com/mounicam/BiSECT/tree/main/bisect)
69
 
70
+ #### Paper
71
+
72
+ <!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
73
+ <!-- scope: telescope -->
74
+ [Link](https://aclanthology.org/2021.emnlp-main.500/)
75
+
76
+ #### BibTex
77
+
78
+ <!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
79
+ <!-- scope: microscope -->
80
  ```
81
  @inproceedings{kim-etal-2021-bisect,
82
  title = "{B}i{SECT}: Learning to Split and Rephrase Sentences with Bitexts",
 
94
  pages = "6193--6209"
95
  }
96
  ```
97
 
98
+ #### Contact Name
99
+
100
+ <!-- quick -->
101
+ <!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
102
+ <!-- scope: periscope -->
103
+ Joongwon Kim, Mounica Maddela, Reno Kriz
104
+
105
+ #### Contact Email
106
+
107
+ <!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
108
+ <!-- scope: periscope -->
109
+ jkim0118@seas.upenn.edu, mmaddela3@gatech.edu, rkriz1@jh.edu
110
+
111
+ #### Has a Leaderboard?
112
+
113
+ <!-- info: Does the dataset have an active leaderboard? -->
114
+ <!-- scope: telescope -->
115
+ no
116
+
117
+
118
+ ### Languages and Intended Use
119
+
120
+ #### Multilingual?
121
+
122
+ <!-- quick -->
123
+ <!-- info: Is the dataset multilingual? -->
124
+ <!-- scope: telescope -->
125
+ yes
126
+
127
+ #### Covered Languages
128
+
129
+ <!-- quick -->
130
+ <!-- info: What languages/dialects are covered in the dataset? -->
131
+ <!-- scope: telescope -->
132
+ `English`, `German`, `French`, `Spanish, Castilian`
133
+
134
+ #### License
135
+
136
+ <!-- quick -->
137
+ <!-- info: What is the license of the dataset? -->
138
+ <!-- scope: telescope -->
139
+ other: Other license
140
+
141
+ #### Intended Use
142
+
143
+ <!-- info: What is the intended use of the dataset? -->
144
+ <!-- scope: microscope -->
145
+ Split and Rephrase.
146
+
147
+ #### Add. License Info
148
+
149
+ <!-- info: What is the 'other' license of the dataset? -->
150
+ <!-- scope: periscope -->
151
+ The dataset is not licensed by itself, and the source OPUS data consists solely of publicly available parallel corpora.
152
+
153
+ #### Primary Task
154
+
155
+ <!-- info: What primary task does the dataset support? -->
156
+ <!-- scope: telescope -->
157
+ Simplification
158
+
159
+ #### Communicative Goal
160
+
161
+ <!-- quick -->
162
+ <!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
163
+ <!-- scope: periscope -->
164
+ To rewrite a long, complex sentence into shorter, readable, meaning-equivalent sentences.
165
+
166
+
167
+ ### Credit
168
+
169
+
170
 
171
+ ### Dataset Structure
172
 
173
+ #### Data Fields
174
+
175
+ <!-- info: List and describe the fields present in the dataset. -->
176
+ <!-- scope: telescope -->
177
+ - `gem_id` (string): a unique identifier for the instance
178
+ - `source_sentence` (string): sentence to be simplified
179
+ - `target_sentence` (string): simplified text that was split and rephrased
180
+
181
+ #### Example Instance
182
+
183
+ <!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
184
+ <!-- scope: periscope -->
185
  ```
186
  {
187
  "gem_id": "bisect-train-0",
 
190
  }
191
  ```
192
 
193
+ #### Data Splits
194
+
195
+ <!-- info: Describe and name the splits in the dataset if there are more than one. -->
196
+ <!-- scope: periscope -->
197
+ For the main English BiSECT dataset, the splits are as follows:
+
+ 1. Train (n=928,440)
+ 2. Validation (n=9,079)
+ 3. Test (n=583)
+
+ Additional challenge sets were derived from the data presented in the paper; please refer to the challenge set sections. The train/validation/test splits for the other languages are: German (184,638/864/735), Spanish (282,944/3,638/3,081), and French (491,035/2,400/1,036).
198
+
199
+ #### Splitting Criteria
200
+
201
+ <!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
202
+ <!-- scope: microscope -->
203
+ While all training data were derived from subsets of the OPUS corpora, different source subsets were used for training versus validation and testing. The training set draws more heavily on web-crawl data, whereas the validation and test sets come from EMEA and EU texts. Details can be found in the BiSECT paper.
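As a quick sanity check, the split sizes listed above can be recomputed with the data loader; this sketch assumes the split names `train`, `validation`, and `test` from this card.

```
# Print the number of examples per split and compare with the counts quoted above.
from datasets import load_dataset

data = load_dataset("GEM/BiSECT")
for split_name, split in data.items():
    print(split_name, len(split))
# Expected for the main English data: train=928440, validation=9079, test=583
```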
204
+
205
+
206
+
207
+ ## Dataset in GEM
208
+
209
+ ### Rationale for Inclusion in GEM
210
+
211
+ #### Why is the Dataset in GEM?
212
+
213
+ <!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
214
+ <!-- scope: microscope -->
215
+ Understanding long and complex sentences is challenging for both humans and NLP models. The BiSECT dataset helps facilitate more research on Split and Rephrase as a task within itself, as well as how it can benefit downstream NLP applications.
216
+
217
+ #### Similar Datasets
218
+
219
+ <!-- info: Do other datasets for the high level task exist? -->
220
+ <!-- scope: telescope -->
221
+ yes
222
+
223
+ #### Unique Language Coverage
224
+
225
+ <!-- info: Does this dataset cover other languages than other datasets for the same task? -->
226
+ <!-- scope: periscope -->
227
+ yes
228
+
229
+ #### Difference from other GEM datasets
230
+
231
+ <!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
232
+ <!-- scope: microscope -->
233
+ BiSECT is the largest available corpus for the Split and Rephrase task. In addition, it has been shown that BiSECT is of higher quality than previous Split and Rephrase corpora and contains a wider variety of splitting operations.
234
+
235
+ Most previous Split and Rephrase corpora (HSplit-Wiki, Cont-Benchmark, and Wiki-Benchmark) were manually written at a small scale and focused on evaluation, while the one corpus of comparable size, WikiSplit, contains significant errors in around 25% of its pairs. This is because Wikipedia editors are often not only splitting a sentence but also simultaneously modifying it for other purposes, which changes the original meaning.
236
+
237
+
238
+ ### GEM-Specific Curation
239
+
240
+ #### Modified for GEM?
241
+
242
+ <!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
243
+ <!-- scope: telescope -->
244
+ yes
245
+
246
+ #### GEM Modifications
247
+
248
+ <!-- info: What changes have been made to the original dataset? -->
249
+ <!-- scope: periscope -->
250
+ `data points added`
251
+
252
+ #### Modification Details
253
+
254
+ <!-- info: For each of these changes, describe them in more detail and provide the intended purpose of the modification -->
255
+ <!-- scope: microscope -->
256
+ The original BiSECT training, validation, and test splits are maintained to ensure a fair comparison. Note that the original BiSECT test set was created by manually selecting 583 high-quality Split and Rephrase instances from 1000 random source-target pairs sampled from the EMEA and JRC-Acquis corpora from OPUS.
257
+
258
+ As the first challenge set, we include the HSPLIT-Wiki test set, containing 359 pairs. Each complex sentence has four reference splits; to ensure replicability, we follow the BiSECT paper and present only the references from HSplit2-full.
259
+
260
+ In addition to the two evaluation sets used in the original BiSECT paper, we also introduce a second challenge set. For this, we initially consider all 7,293 pairs from the EMEA and JRC-Acquis corpora. From there, we classify each pair using the classification algorithm from Section 4.2 of the original BiSECT paper. The three classes are as follows:
261
+
262
+ 1. Direct Insertion: when a long sentence l contains two independent clauses and requires only minor changes in order to make a fluent and meaning-preserving split s.
263
+ 2. Changes near Split: when l contains one independent and one dependent clause, but modifications are restricted to the region where l is split.
264
+ 3. Changes across Sentences: where major changes are required throughout l in order to create a fluent split s.
265
+ We keep only pairs labeled as Type 3, and after filtering out pairs with significant length differences (signaling potential content addition/deletion), we present a second challenge set of 1,798 pairs.
266
+
267
+ #### Additional Splits?
268
+
269
+ <!-- info: Does GEM provide additional splits to the dataset? -->
270
+ <!-- scope: telescope -->
271
+ no
272
+
273
+
274
+ ### Getting Started with the Task
275
+
276
+ #### Pointers to Resources
277
 
278
+ <!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
279
+ <!-- scope: microscope -->
280
+ The dataset can be downloaded from the original repository by the authors.
281
 
282
+ The original BiSECT paper proposes several transformer-based models that can be used as baselines; it also compares against Copy512, an LSTM-based model and the previous state of the art.
283
 
284
+ The metric most commonly used for automatic evaluation of Split and Rephrase, and of sentence simplification more generally, is SARI. The BiSECT paper also evaluates using BERTScore. Note that automatic evaluations tend not to correlate well with human judgments, so a human evaluation of quality is generally expected for publication. The original BiSECT paper provides templates for collecting quality annotations from Amazon Mechanical Turk.
285
 
 
286
 
 
287
 
288
+ ## Previous Results
289
 
290
+ ### Previous Results
291
+
292
+ #### Measured Model Abilities
293
+
294
+ <!-- info: What aspect of model ability can be measured with this dataset? -->
295
+ <!-- scope: telescope -->
296
+ Text comprehension (needed to generate meaning-equivalent output) and notions of complexity (what is more 'readable' in terms of syntactic structure, lexical choice, punctuation).
297
+
298
+ #### Metrics
299
+
300
+ <!-- info: What metrics are typically used for this task? -->
301
+ <!-- scope: periscope -->
302
+ `Other: Other Metrics`, `BERT-Score`
303
+
304
+ #### Other Metrics
305
+
306
+ <!-- info: Definitions of other metrics -->
307
+ <!-- scope: periscope -->
308
+ SARI is a metric used for evaluating automatic text simplification systems. The metric compares the predicted simplified sentences against the reference and the source sentences. It explicitly measures the goodness of words that are added, deleted and kept by the system.
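As an illustration only, SARI can be computed with the Hugging Face `evaluate` package; its `sari` metric implements the standard definition, while the extended, paraphrase-aware variant used in the BiSECT paper is not included there.

```
# Toy SARI computation: one source, one system output, and a list of references per source.
import evaluate

sari = evaluate.load("sari")
sources = ["He was born in London and he studied physics at Cambridge."]
predictions = ["He was born in London. He studied physics at Cambridge."]
references = [["He was born in London. He studied physics at Cambridge."]]

print(sari.compute(sources=sources, predictions=predictions, references=references))
# -> {'sari': ...}, higher is better
```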
309
+
310
+ #### Proposed Evaluation
311
+
312
+ <!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
313
+ <!-- scope: microscope -->
314
+ Existing automatic metrics, such as BLEU (Papineni et al., 2002) and SAMSA (Sulem et al., 2018), are not optimal for the Split and Rephrase task, as they rely on lexical overlap between the output and the target (or source) and underestimate the splitting capability of models that rephrase often.
+
+ As such, the dataset creators focused on BERTScore (Zhang et al., 2020) and SARI (Xu et al., 2016). BERTScore captures meaning preservation and fluency well (Scialom et al., 2021). SARI provides three separate F1/precision scores that explicitly measure the correctness of inserted, kept, and deleted n-grams when compared to both the source and the target. The authors used an extended version of SARI that considers lexical paraphrases of the reference.
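For the meaning-preservation side, a comparable sketch with BERTScore via the same `evaluate` package; the language flag and default model are illustrative choices, not necessarily the exact setup from the paper.

```
# Toy BERTScore computation between system outputs and references.
import evaluate

bertscore = evaluate.load("bertscore")
predictions = ["He was born in London. He studied physics at Cambridge."]
references = ["He was born in London. He studied physics at Cambridge."]

scores = bertscore.compute(predictions=predictions, references=references, lang="en")
print(scores["precision"], scores["recall"], scores["f1"])
```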
325
+
326
+ #### Previous results available?
327
+
328
+ <!-- info: Are previous results available? -->
329
+ <!-- scope: telescope -->
330
+ yes
331
+
332
+
333
+
334
+ ## Dataset Curation
335
+
336
+ ### Original Curation
337
+
338
+ #### Original Curation Rationale
339
+
340
+ <!-- info: Original curation rationale -->
341
+ <!-- scope: telescope -->
342
+ BiSECT was constructed to satisfy the need for a Split and Rephrase corpus that is both large-scale and high-quality. Most previous Split and Rephrase corpora (HSplit-Wiki, Cont-Benchmark, and Wiki-Benchmark) were manually written at a small scale and focused on evaluation, while the one corpus of comparable size, WikiSplit, contains significant errors in around 25% of its pairs. This is because Wikipedia editors are often not only splitting a sentence but also simultaneously modifying it for other purposes, which changes the original meaning.
343
+
344
+ #### Communicative Goal
345
+
346
+ <!-- info: What was the communicative goal? -->
347
+ <!-- scope: periscope -->
348
  The goal of Split and Rephrase is to break down longer sentences into multiple shorter sentences, which has downstream applications for many NLP tasks, including machine translation and dependency parsing.
349
 
350
+ #### Sourced from Different Sources
351
 
352
+ <!-- info: Is the dataset aggregated from different data sources? -->
353
+ <!-- scope: telescope -->
354
+ no
355
 
 
356
 
357
+ ### Language Data
358
 
359
+ #### How was Language Data Obtained?
360
 
361
+ <!-- info: How was the language data obtained? -->
362
+ <!-- scope: telescope -->
363
+ `Found`
364
 
365
+ #### Where was it found?
366
 
367
+ <!-- info: If found, where from? -->
368
+ <!-- scope: telescope -->
369
+ `Other`
370
 
371
+ #### Language Producers
372
 
373
+ <!-- info: What further information do we have on the language producers? -->
374
+ <!-- scope: microscope -->
375
+ N/A.
376
 
377
+ #### Topics Covered
378
 
379
+ <!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
380
+ <!-- scope: periscope -->
381
+ There is a range of topics spanning domains such as web crawl and government documents (European Parliament, United Nations, EMEA).
382
 
383
+ #### Data Validation
384
 
385
+ <!-- info: Was the text validated by a different worker or a data curator? -->
386
+ <!-- scope: telescope -->
387
+ validated by data curator
388
 
389
+ #### Data Preprocessing
390
 
391
+ <!-- info: How was the text data pre-processed? (Enter N/A if the text was not pre-processed) -->
392
+ <!-- scope: microscope -->
393
+ The construction of the BiSECT corpus relies on leveraging the sentence-level alignments from OPUS, a collection of bilingual parallel corpora over many language pairs. Given a target language A, this work extracts all 1-2 and 2-1 sentence alignments from parallel corpora between A and a set of foreign languages B.
394
 
395
+ Next, the foreign sentences are translated into English using Google Translate’s Web API service to obtain sentence alignments between a single long sentence and two corresponding split sentences, both in the desired language.
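As a rough illustration of the pivoting step (not the authors' pipeline), the `googletrans` wrapper around the Google Translate web API can be used to turn the foreign side of a 1-2 alignment into an English split; note this is an unofficial client whose behavior changes over time.

```
# Sketch of bilingual pivoting for one alignment: an English long sentence aligned to two French sentences.
# Translating the two French sentences back into English yields the "split" side of an English pair.
from googletrans import Translator

translator = Translator()
french_split = [
    "Il est né à Londres.",
    "Il a étudié la physique à Cambridge.",
]
english_split = [translator.translate(s, src="fr", dest="en").text for s in french_split]
print(english_split)
```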
396
 
397
+ The authors further filtered the data in a hybrid fashion.
398
 
399
+ #### Was Data Filtered?
400
 
401
+ <!-- info: Were text instances selected or filtered? -->
402
+ <!-- scope: telescope -->
403
+ hybrid
404
 
405
+ #### Filter Criteria
406
 
407
+ <!-- info: What were the selection criteria? -->
408
+ <!-- scope: microscope -->
409
+ To remove noise, the authors remove pairs where the single long sentence (l) contains a token with a punctuation mark after the first two and before the last two alphabetic characters. They also remove instances where l contains more than one unconnected component in its dependency tree, generated via SpaCy.
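An illustrative sketch of these two filters (hypothetical helper names; it approximates the description above rather than reproducing the authors' code, and treats a parse with more than one root as having more than one unconnected component):

```
# Rough re-implementation sketch of the two noise filters described above.
import spacy

nlp = spacy.load("en_core_web_sm")  # any spaCy pipeline with a dependency parser

def has_inner_punctuation(sentence: str) -> bool:
    """True if some token has punctuation after its first two and before its last two alphabetic characters."""
    for token in sentence.split():
        alpha_positions = [i for i, ch in enumerate(token) if ch.isalpha()]
        if len(alpha_positions) < 4:
            continue
        inner = token[alpha_positions[1] + 1 : alpha_positions[-2]]
        if any(not ch.isalnum() for ch in inner):
            return True
    return False

def has_disconnected_parse(sentence: str) -> bool:
    """True if the dependency parse has more than one root, i.e. more than one unconnected component."""
    doc = nlp(sentence)
    roots = [tok for tok in doc if tok.dep_ == "ROOT"]
    return len(roots) > 1

def keep_long_sentence(l: str) -> bool:
    # A pair is kept only if the long sentence passes both filters.
    return not has_inner_punctuation(l) and not has_disconnected_parse(l)
```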
410
+
411
+
412
+ ### Structured Annotations
413
+
414
+ #### Additional Annotations?
415
+
416
+ <!-- quick -->
417
+ <!-- info: Does the dataset have additional annotations for each instance? -->
418
+ <!-- scope: telescope -->
419
+ none
420
+
421
+ #### Annotation Service?
422
+
423
+ <!-- info: Was an annotation service used? -->
424
+ <!-- scope: telescope -->
425
+ no
426
+
427
+
428
+ ### Consent
429
+
430
+ #### Any Consent Policy?
431
+
432
+ <!-- info: Was there a consent policy involved when gathering the data? -->
433
+ <!-- scope: telescope -->
434
+ no
435
+
436
+ #### Justification for Using the Data
437
+
438
+ <!-- info: If not, what is the justification for reusing the data? -->
439
+ <!-- scope: microscope -->
440
+ Since this data is collected from OPUS, all instances are already in the public domain.
441
+
442
+
443
+ ### Private Identifying Information (PII)
444
+
445
+ #### Contains PII?
446
 
447
+ <!-- quick -->
448
+ <!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
449
+ <!-- scope: telescope -->
450
+ unlikely
451
+
452
+ #### Categories of PII
453
+
454
+ <!-- info: What categories of PII are present or suspected in the data? -->
455
+ <!-- scope: periscope -->
456
+ `generic PII`
457
+
458
+ #### Any PII Identification?
459
+
460
+ <!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? -->
461
+ <!-- scope: periscope -->
462
+ no identification
463
+
464
+
465
+ ### Maintenance
466
+
467
+ #### Any Maintenance Plan?
468
+
469
+ <!-- info: Does the original dataset have a maintenance plan? -->
470
+ <!-- scope: telescope -->
471
+ no
472
+
473
+
474
+
475
+ ## Broader Social Context
476
+
477
+ ### Previous Work on the Social Impact of the Dataset
478
+
479
+ #### Usage of Models based on the Data
480
+
481
+ <!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
482
+ <!-- scope: telescope -->
483
+ no
484
+
485
+
486
+ ### Impact on Under-Served Communities
487
+
488
+ #### Addresses needs of underserved Communities?
489
+
490
+ <!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
491
+ <!-- scope: telescope -->
492
+ yes
493
+
494
+ #### Details on how Dataset Addresses the Needs
495
+
496
+ <!-- info: Describe how this dataset addresses the needs of underserved communities. -->
497
+ <!-- scope: microscope -->
498
+ The data as provided in GEMv2 is in English, which is a language with abundant existing resources. However, the original paper also provides Split and Rephrase pairs for French, Spanish, and German, while providing a framework for leveraging bilingual corpora from any language pair found within OPUS.
499
 
 
 
500
 
501
  ### Discussion of Biases
502
 
503
+ #### Any Documented Social Biases?
504
+
505
+ <!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
506
+ <!-- scope: telescope -->
507
+ no
508
+
509
+ #### Are the Language Producers Representative of the Language?
510
+
511
+ <!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
512
+ <!-- scope: periscope -->
513
+ The language produced in the dataset is limited to what is captured in the subset of the OPUS corpora that was used, which might not represent the full distribution of speakers worldwide. For example, the corpora used are from a limited set of relatively formal domains, so it is possible that high performance on the BiSECT test set may not transfer to more informal text.
514
+
515
+
516
+
517
+ ## Considerations for Using the Data
518
+
519
+ ### PII Risks and Liability
520
+
521
+ #### Potential PII Risk
522
+
523
+ <!-- info: Considering your answers to the PII part of the Data Curation Section, describe any potential privacy risks to the data subjects and creators when using the dataset. -->
524
+ <!-- scope: microscope -->
525
+ Since this data is collected from OPUS, all pairs are already in the public domain.
526
+
527
+
528
+ ### Licenses
529
+
530
+ #### Copyright Restrictions on the Dataset
531
+
532
+ <!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
533
+ <!-- scope: periscope -->
534
+ `public domain`
535
+
536
+ #### Copyright Restrictions on the Language Data
537
+
538
+ <!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
539
+ <!-- scope: periscope -->
540
+ `public domain`
541
 
 
542
 
543
+ ### Known Technical Limitations
544
 
545
+ #### Technical Limitations
546
 
547
+ <!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
548
+ <!-- scope: microscope -->
549
+ The creation of English BiSECT relies on translating non-English text back to English. While machine translation systems tend to perform well on high-resource languages, there is still a non-negligible chance that these systems make errors; through a manual evaluation of a subset of BiSECT, it was found that 15% of pairs contained significant errors, while an additional 22% contained minor adequacy/fluency errors. This problem is slightly exacerbated when creating German BiSECT (22% significant errors, 24% minor errors), and these numbers would likely grow if lower-resource languages were used.
550
 
 
551