Sebastian Gehrmann committed on
Commit
833d627
1 Parent(s): e36120a

data card.

Files changed (1)
  1. README.md +311 -124
README.md CHANGED
@@ -1,21 +1,85 @@
1
- ## Dataset Overview
2
 
3
- ### Where to find the data and its documentation
4
 
5
- #### What is the webpage for the dataset (if it exists)?
6
 
7
- https://huggingface.co/datasets/GEM/SciDuet
8
 
9
- #### What is the link to where the original dataset is hosted?
 
 
10
 
11
- https://github.com/IBM/document2slides/tree/main/SciDuet-ACL
12
 
13
- #### What is the link to the paper describing the dataset (open access preferred)?
 
 
14
 
15
- https://aclanthology.org/2021.naacl-main.111/
16
 
17
- #### Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex.
 
 
18
 
 
 
 
 
19
  ```
20
  @inproceedings{sun-etal-2021-d2s,
21
  title = "{D}2{S}: Document-to-Slide Generation Via Query-Based Text Summarization",
@@ -34,242 +98,365 @@ https://aclanthology.org/2021.naacl-main.111/
34
  pages = "1405--1418",
35
  abstract = "Presentations are critical for communication in all areas of our lives, yet the creation of slide decks is often tedious and time-consuming. There has been limited research aiming to automate the document-to-slides generation process and all face a critical challenge: no publicly available dataset for training and benchmarking. In this work, we first contribute a new dataset, SciDuet, consisting of pairs of papers and their corresponding slides decks from recent years{'} NLP and ML conferences (e.g., ACL). Secondly, we present D2S, a novel system that tackles the document-to-slides task with a two-step approach: 1) Use slide titles to retrieve relevant and engaging text, figures, and tables; 2) Summarize the retrieved context into bullet points with long-form question answering. Our evaluation suggests that long-form QA outperforms state-of-the-art summarization baselines on both automated ROUGE metrics and qualitative human evaluation.",
36
  }
37
- ```
 
 
 
 
 
 
38
 
39
- #### Does the dataset have an active leaderboard?
40
 
41
- no
42
 
43
- ### Languages and Intended Use
44
 
45
- #### Is the dataset multilingual?
 
 
 
46
 
47
- no
48
 
49
- #### What languages/dialects are covered in the dataset?
 
 
 
50
 
51
- English
52
 
53
- #### What is the license of the dataset?
 
 
 
54
 
55
- apache-2.0: Apache License 2.0
56
 
57
- #### What is the intended use of the dataset?
 
 
58
 
59
- Promote research on the task of document-to-slides generation
60
 
61
- #### What primary task does the dataset support?
 
 
62
 
63
- Text-to-Slide
64
 
65
- ### Credit
66
 
67
- #### In what kind of organization did the dataset curation happen?
68
 
69
- industry
 
 
70
 
71
- #### Name the organization(s).
72
 
73
- IBM Research
 
 
74
 
75
- #### Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s).
76
 
77
- Edward Sun, Yufang Hou, Dakuo Wang, Yunfeng Zhang, Nancy Wang
 
 
78
 
79
- #### Who funded the data creation?
80
 
81
- IBM Research
 
 
82
 
83
- #### Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM.
84
 
85
- Yufang Hou (IBM Research), Dakuo Wang (IBM Research)
 
 
86
 
87
- ### Structure
88
 
89
- #### How were the labels chosen?
90
 
91
- The original papers and slides (both in PDF format) are carefully processed by a combination of PDF/image-processing toolkits. The text contents from multiple slides that correspond to the same slide title are merged.
92
 
93
- #### Describe and name the splits in the dataset if there are more than one.
 
 
94
 
95
- Training, validation and testing data contain 136, 55, and 81 papers from ACL Anthology and their corresponding slides, respectively.
96
 
97
- #### Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.
 
 
98
 
 
 
 
 
99
  The dataset integrated into GEM is the ACL portion of the whole dataset described in https://aclanthology.org/2021.naacl-main.111.pdf. It contains the full Dev and Test sets and a portion of the Train set.
100
- Note that although we cannot release the whole training dataset due to copyright issues, researchers can still use our released data procurement code to generate the training dataset from the online ICML/NeurIPS anthologies.
 
 
 
 
 
 
101
 
102
- ## Dataset in GEM
103
 
104
- ### Rationale
 
 
105
 
106
- #### What does this dataset contribute toward better generation evaluation and why is it part of GEM?
107
 
108
- SciDuet is the first publicly available dataset for the challenging task of document-to-slides generation, which requires a model to "understand" long-form text, select appropriate content, and generate key points.
 
 
109
 
110
- #### Do other datasets for the high level task exist?
111
 
112
- no
 
 
113
 
114
- #### What aspect of model ability can be measured with this dataset?
115
 
116
- content selection, long-form text understanding and generation
117
 
118
- ### GEM Additional Curation
119
 
120
- #### Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data?
 
 
121
 
122
- no
123
 
124
- #### Does GEM provide additional splits to the dataset?
 
 
125
 
126
- no
127
 
128
- ### Getting Started
129
 
130
- ## Previous Results
131
 
132
- ### Previous Results
133
 
134
- #### What aspect of model ability can be measured with this dataset?
135
 
136
- content selection, long-form text understanding and key points generation
137
 
138
- #### What metrics are typically used for this task?
139
 
140
- ROUGE
141
 
142
- #### List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task.
 
 
143
 
 
 
 
 
 
 
 
 
 
 
144
  Automatic Evaluation Metric: ROUGE
145
  Human Evaluation: (Readability, Informativeness, Consistency)
146
  1) Readability: The generated slide content is coherent, concise, and grammatically correct;
147
  2) Informativeness: The generated slide provides sufficient and necessary information that corresponds to the given slide title, regardless of its similarity to the original slide;
148
- 3) Consistency: The generated slide content is similar to the original author’s reference slide.
149
 
150
- #### Are previous results available?
151
 
152
- yes
 
 
153
 
154
- #### What evaluation approaches have others used?
155
 
156
- ROUGE + Human Evaluation
 
 
157
 
158
- #### What are the most relevant previous results for this task/dataset?
159
 
 
 
160
  Paper "D2S: Document-to-Slide Generation Via Query-Based
161
- Text Summarization" reports 20.47, 5.26 and 19.08 for ROUGE-1, ROUGE-2 and ROUGE-L (f-score).
 
162
 
163
- ## Dataset Curation
164
 
165
- ### Original Curation
166
 
167
- #### Original curation rationale
168
 
169
- Provide a benchmark dataset for the document-to-slides task.
170
 
171
- #### Is the dataset aggregated from different data sources?
 
 
172
 
173
- no
174
 
175
- ### Language Data
 
 
176
 
177
- #### How was the language data obtained?
178
 
179
- Other
180
 
181
- #### Was the text validated by a different worker or a data curator?
182
 
183
- not validated
 
 
184
 
185
- #### How was the text data pre-processed? (Enter N/A if the text was not pre-processed)
186
 
 
 
 
 
 
 
 
 
187
  Text on papers was extracted through Grobid. Figures and captions were extracted through pdffigures. Text on slides was extracted through the IBM Watson Discovery package and OCR by pytesseract. Figures and tables that appear on slides and papers were linked through multiscale template matching by OpenCV. Further dataset
188
  cleaning was performed with standard string-based
189
- heuristics on sentence building, equation and floating caption removal, and duplicate line deletion.
 
 
190
 
191
- #### Were text instances selected or filtered?
 
 
192
 
193
- algorithmically
194
 
195
- #### What were the selection criteria?
 
 
196
 
197
- the slide context text should not contain additional formatting information such as "*** University"
198
 
199
- ### Structured Annotations
200
 
201
- #### Does the dataset have additional annotations for each instance?
202
 
203
- none
 
 
 
204
 
205
- #### Was an annotation service used?
206
 
207
- no
 
 
208
 
209
- ### Consent
210
 
211
- #### Was there a consent policy involved when gathering the data?
212
 
213
- yes
214
 
215
- #### What was the consent policy?
 
 
216
 
 
 
 
 
217
  The original dataset was open-sourced under Apache-2.0.
218
- Some of the original dataset creators are part of the GEM v2 dataset infrastructure team and take care of integrating this dataset into GEM.
219
 
220
- ### Private Identifying Information (PII)
221
 
222
- #### Does the source language data likely contain Personal Identifying Information about the data creators or subjects?
223
 
224
- yes/very likely
225
 
226
- #### What categories of PII are present or suspected in the data?
227
 
228
- generic PII
 
 
229
 
230
- #### Did the curators use any automatic/manual method to identify PII in the dataset?
231
 
232
- no identification
233
 
234
- ### Maintenance
235
 
236
- #### Does the original dataset have a maintenance plan?
 
 
237
 
238
- no
239
 
240
- ## Broader Social Context
241
 
242
- ### Previous Work on the Social Impact of the Dataset
243
 
244
- #### Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems?
 
 
245
 
246
- no
247
 
248
- ### Impact on Under-Served Communities
249
 
250
- #### Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models).
251
 
252
- no
253
 
254
- ### Discussion of Biases
255
 
256
- #### Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group.
257
 
258
- unsure
259
 
260
- ## Considerations for Using the Data
261
 
262
- ### PII Risks and Liability
 
 
263
 
264
- ### Licenses
265
 
266
- #### Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset?
 
 
267
 
268
- non-commercial use only
269
 
270
- #### Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data?
271
 
272
- research use only
273
 
274
- ### Known Technical Limitations
275
 
1
+ ---
2
+ annotations_creators:
3
+ - none
4
+ language_creators:
5
+ - unknown
6
+ languages:
7
+ - unknown
8
+ licenses:
9
+ - apache-2.0
10
+ multilinguality:
11
+ - unknown
12
+ pretty_name: SciDuet
13
+ size_categories:
14
+ - unknown
15
+ source_datasets:
16
+ - original
17
+ task_categories:
18
+ - text-to-slide
19
+ task_ids:
20
+ - unknown
21
+ ---
22
+
23
+ # Dataset Card for GEM/SciDuet
24
+
25
+ ## Dataset Description
26
+
27
+ - **Homepage:** https://huggingface.co/datasets/GEM/SciDuet
28
+ - **Repository:** https://github.com/IBM/document2slides/tree/main/SciDuet-ACL
29
+ - **Paper:** https://aclanthology.org/2021.naacl-main.111/
30
+ - **Leaderboard:** N/A
31
+ - **Point of Contact:** N/A
32
+
33
+ ### Link to Main Data Card
34
+
35
+ You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/SciDuet).
36
+
37
+ ### Dataset Summary
38
+
39
+ This dataset supports the document-to-slide generation task where a model has to generate presentation slide content from the text of a document.
40
+
41
+ You can load the dataset via:
42
+ ```
43
+ import datasets
44
+ data = datasets.load_dataset('GEM/SciDuet')
45
+ ```
46
+ The data loader can be found [here](https://huggingface.co/datasets/GEM/SciDuet).
47
+
48
+ #### website
49
+ [Huggingface](https://huggingface.co/datasets/GEM/SciDuet)
50
+
51
+ #### paper
52
+ [ACL Anthology](https://aclanthology.org/2021.naacl-main.111/)
53
+
54
+ #### authors
55
+ Edward Sun, Yufang Hou, Dakuo Wang, Yunfeng Zhang, Nancy Wang
56
 
57
+ ## Dataset Overview
58
 
59
+ ### Where to find the Data and its Documentation
60
 
61
+ #### Webpage
62
 
63
+ <!-- info: What is the webpage for the dataset (if it exists)? -->
64
+ <!-- scope: telescope -->
65
+ [Huggingface](https://huggingface.co/datasets/GEM/SciDuet)
66
 
67
+ #### Download
68
 
69
+ <!-- info: What is the link to where the original dataset is hosted? -->
70
+ <!-- scope: telescope -->
71
+ [Github](https://github.com/IBM/document2slides/tree/main/SciDuet-ACL)
72
 
73
+ #### Paper
74
 
75
+ <!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
76
+ <!-- scope: telescope -->
77
+ [ACL Anthology](https://aclanthology.org/2021.naacl-main.111/)
78
 
79
+ #### BibTex
80
+
81
+ <!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
82
+ <!-- scope: microscope -->
83
  ```
84
  @inproceedings{sun-etal-2021-d2s,
85
  title = "{D}2{S}: Document-to-Slide Generation Via Query-Based Text Summarization",
98
  pages = "1405--1418",
99
  abstract = "Presentations are critical for communication in all areas of our lives, yet the creation of slide decks is often tedious and time-consuming. There has been limited research aiming to automate the document-to-slides generation process and all face a critical challenge: no publicly available dataset for training and benchmarking. In this work, we first contribute a new dataset, SciDuet, consisting of pairs of papers and their corresponding slides decks from recent years{'} NLP and ML conferences (e.g., ACL). Secondly, we present D2S, a novel system that tackles the document-to-slides task with a two-step approach: 1) Use slide titles to retrieve relevant and engaging text, figures, and tables; 2) Summarize the retrieved context into bullet points with long-form question answering. Our evaluation suggests that long-form QA outperforms state-of-the-art summarization baselines on both automated ROUGE metrics and qualitative human evaluation.",
100
  }
101
+ ```
102
+
103
+ #### Has a Leaderboard?
104
+
105
+ <!-- info: Does the dataset have an active leaderboard? -->
106
+ <!-- scope: telescope -->
107
+ no
108
 
 
109
 
110
+ ### Languages and Intended Use
111
 
112
+ #### Multilingual?
113
 
114
+ <!-- quick -->
115
+ <!-- info: Is the dataset multilingual? -->
116
+ <!-- scope: telescope -->
117
+ no
118
 
119
+ #### Covered Languages
120
 
121
+ <!-- quick -->
122
+ <!-- info: What languages/dialects are covered in the dataset? -->
123
+ <!-- scope: telescope -->
124
+ `English`
125
 
126
+ #### License
127
 
128
+ <!-- quick -->
129
+ <!-- info: What is the license of the dataset? -->
130
+ <!-- scope: telescope -->
131
+ apache-2.0: Apache License 2.0
132
 
133
+ #### Intended Use
134
 
135
+ <!-- info: What is the intended use of the dataset? -->
136
+ <!-- scope: microscope -->
137
+ Promote research on the task of document-to-slides generation
138
 
139
+ #### Primary Task
140
 
141
+ <!-- info: What primary task does the dataset support? -->
142
+ <!-- scope: telescope -->
143
+ Text-to-Slide
144
 
 
145
 
146
+ ### Credit
147
 
148
+ #### Curation Organization Type(s)
149
 
150
+ <!-- info: In what kind of organization did the dataset curation happen? -->
151
+ <!-- scope: telescope -->
152
+ `industry`
153
 
154
+ #### Curation Organization(s)
155
 
156
+ <!-- info: Name the organization(s). -->
157
+ <!-- scope: periscope -->
158
+ IBM Research
159
 
160
+ #### Dataset Creators
161
 
162
+ <!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
163
+ <!-- scope: microscope -->
164
+ Edward Sun, Yufang Hou, Dakuo Wang, Yunfeng Zhang, Nancy Wang
165
 
166
+ #### Funding
167
 
168
+ <!-- info: Who funded the data creation? -->
169
+ <!-- scope: microscope -->
170
+ IBM Research
171
 
172
+ #### Who added the Dataset to GEM?
173
 
174
+ <!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
175
+ <!-- scope: microscope -->
176
+ Yufang Hou (IBM Research), Dakuo Wang (IBM Research)
177
 
 
178
 
179
+ ### Dataset Structure
180
 
181
+ #### How were labels chosen?
182
 
183
+ <!-- info: How were the labels chosen? -->
184
+ <!-- scope: microscope -->
185
+ The original papers and slides (both in PDF format) are carefully processed by a combination of PDF/image-processing toolkits. The text contents from multiple slides that correspond to the same slide title are merged.
186
 
187
+ #### Data Splits
188
 
189
+ <!-- info: Describe and name the splits in the dataset if there are more than one. -->
190
+ <!-- scope: periscope -->
191
+ Training, validation and testing data contain 136, 55, and 81 papers from ACL Anthology and their corresponding slides, respectively.
192
 
193
+ #### Splitting Criteria
194
+
195
+ <!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
196
+ <!-- scope: microscope -->
197
  The dataset integrated into GEM is the ACL portion of the whole dataset described in https://aclanthology.org/2021.naacl-main.111.pdf. It contains the full Dev and Test sets and a portion of the Train set.
198
+ Note that although we cannot release the whole training dataset due to copyright issues, researchers can still use our released data procurement code to generate the training dataset from the online ICML/NeurIPS anthologies.
199
+
200
+
201
+
202
+ ## Dataset in GEM
203
+
204
+ ### Rationale for Inclusion in GEM
205
 
206
+ #### Why is the Dataset in GEM?
207
 
208
+ <!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
209
+ <!-- scope: microscope -->
210
+ SciDuet is the first publicly available dataset for the challenging task of document-to-slides generation, which requires a model to "understand" long-form text, select appropriate content, and generate key points.
211
 
212
+ #### Similar Datasets
213
 
214
+ <!-- info: Do other datasets for the high level task exist? -->
215
+ <!-- scope: telescope -->
216
+ no
217
 
218
+ #### Ability that the Dataset measures
219
 
220
+ <!-- info: What aspect of model ability can be measured with this dataset? -->
221
+ <!-- scope: periscope -->
222
+ content selection, long-form text understanding and generation
223
 
 
224
 
225
+ ### GEM-Specific Curation
226
 
227
+ #### Modified for GEM?
228
 
229
+ <!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
230
+ <!-- scope: telescope -->
231
+ no
232
 
233
+ #### Additional Splits?
234
 
235
+ <!-- info: Does GEM provide additional splits to the dataset? -->
236
+ <!-- scope: telescope -->
237
+ no
238
 
 
239
 
240
+ ### Getting Started with the Task
241
 
 
242
 
 
243
 
 
244
 
245
+ ## Previous Results
246
 
247
+ ### Previous Results
248
 
249
+ #### Measured Model Abilities
250
 
251
+ <!-- info: What aspect of model ability can be measured with this dataset? -->
252
+ <!-- scope: telescope -->
253
+ content selection, long-form text understanding and key points generation
254
 
255
+ #### Metrics
256
+
257
+ <!-- info: What metrics are typically used for this task? -->
258
+ <!-- scope: periscope -->
259
+ `ROUGE`
260
+
261
+ #### Proposed Evaluation
262
+
263
+ <!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
264
+ <!-- scope: microscope -->
265
  Automatic Evaluation Metric: ROUGE
266
  Human Evaluation: (Readability, Informativeness, Consistency)
267
  1) Readability: The generated slide content is coherent, concise, and grammatically correct;
268
  2) Informativeness: The generated slide provides sufficient and necessary information that corresponds to the given slide title, regardless of its similarity to the original slide;
269
+ 3) Consistency: The generated slide content is similar to the original author’s reference slide.
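For intuition, ROUGE-1 F1 reduces to clipped unigram overlap between a generated slide and its reference. The sketch below is illustrative only; reported scores use a full ROUGE implementation with stemming and the ROUGE-2/ROUGE-L variants, and `rouge1_f` is a hypothetical helper name, not part of the dataset's tooling:

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    """Simplified ROUGE-1 F1: clipped unigram overlap between two texts."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # each unigram counted at most min(cand, ref) times
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, a candidate sharing two of three unigrams with a same-length reference scores an F1 of 2/3 under this simplified definition.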
270
 
271
+ #### Previous results available?
272
 
273
+ <!-- info: Are previous results available? -->
274
+ <!-- scope: telescope -->
275
+ yes
276
 
277
+ #### Other Evaluation Approaches
278
 
279
+ <!-- info: What evaluation approaches have others used? -->
280
+ <!-- scope: periscope -->
281
+ ROUGE + Human Evaluation
282
 
283
+ #### Relevant Previous Results
284
 
285
+ <!-- info: What are the most relevant previous results for this task/dataset? -->
286
+ <!-- scope: microscope -->
287
  Paper "D2S: Document-to-Slide Generation Via Query-Based
288
+ Text Summarization" reports 20.47, 5.26 and 19.08 for ROUGE-1, ROUGE-2 and ROUGE-L (f-score).
289
+
290
 
 
291
 
292
+ ## Dataset Curation
293
 
294
+ ### Original Curation
295
 
296
+ #### Original Curation Rationale
297
 
298
+ <!-- info: Original curation rationale -->
299
+ <!-- scope: telescope -->
300
+ Provide a benchmark dataset for the document-to-slides task.
301
 
302
+ #### Sourced from Different Sources
303
 
304
+ <!-- info: Is the dataset aggregated from different data sources? -->
305
+ <!-- scope: telescope -->
306
+ no
307
 
 
308
 
309
+ ### Language Data
310
 
311
+ #### How was Language Data Obtained?
312
 
313
+ <!-- info: How was the language data obtained? -->
314
+ <!-- scope: telescope -->
315
+ `Other`
316
 
317
+ #### Data Validation
318
 
319
+ <!-- info: Was the text validated by a different worker or a data curator? -->
320
+ <!-- scope: telescope -->
321
+ not validated
322
+
323
+ #### Data Preprocessing
324
+
325
+ <!-- info: How was the text data pre-processed? (Enter N/A if the text was not pre-processed) -->
326
+ <!-- scope: microscope -->
327
  Text on papers was extracted through Grobid. Figures and captions were extracted through pdffigures. Text on slides was extracted through the IBM Watson Discovery package and OCR by pytesseract. Figures and tables that appear on slides and papers were linked through multiscale template matching by OpenCV. Further dataset
328
  cleaning was performed with standard string-based
329
+ heuristics on sentence building, equation and floating caption removal, and duplicate line deletion.
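The figure/table linking step above relies on multiscale template matching, for which the authors used OpenCV. As a rough illustration of the idea only (not the authors' code; `best_match` and `ncc` are hypothetical helpers), the pure-NumPy toy below resizes a template to several scales and keeps the position with the highest normalized cross-correlation:

```python
import numpy as np

def ncc(patch: np.ndarray, tmpl: np.ndarray) -> float:
    """Normalized cross-correlation between a patch and an equal-sized template."""
    a = patch - patch.mean()
    b = tmpl - tmpl.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def best_match(image: np.ndarray, tmpl: np.ndarray, scales=(0.5, 1.0, 2.0)):
    """Slide the template over the image at several scales; return (score, x, y, scale)."""
    best = (-1.0, 0, 0, 1.0)
    for s in scales:
        h = max(1, int(tmpl.shape[0] * s))
        w = max(1, int(tmpl.shape[1] * s))
        if h > image.shape[0] or w > image.shape[1]:
            continue
        # nearest-neighbour resize of the template to the current scale
        rows = np.arange(h) * tmpl.shape[0] // h
        cols = np.arange(w) * tmpl.shape[1] // w
        t = tmpl[np.ix_(rows, cols)]
        for y in range(image.shape[0] - h + 1):
            for x in range(image.shape[1] - w + 1):
                score = ncc(image[y:y + h, x:x + w], t)
                if score > best[0]:
                    best = (score, x, y, s)
    return best
```

A real pipeline would use OpenCV's optimized matching rather than this quadratic scan, but the principle (scan, score, keep the best scale and offset) is the same.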
330
+
331
+ #### Was Data Filtered?
332
 
333
+ <!-- info: Were text instances selected or filtered? -->
334
+ <!-- scope: telescope -->
335
+ algorithmically
336
 
337
+ #### Filter Criteria
338
 
339
+ <!-- info: What were the selection criteria? -->
340
+ <!-- scope: microscope -->
341
+ the slide context text should not contain additional formatting information such as "*** University"
342
 
 
343
 
344
+ ### Structured Annotations
345
 
346
+ #### Additional Annotations?
347
 
348
+ <!-- quick -->
349
+ <!-- info: Does the dataset have additional annotations for each instance? -->
350
+ <!-- scope: telescope -->
351
+ none
352
 
353
+ #### Annotation Service?
354
 
355
+ <!-- info: Was an annotation service used? -->
356
+ <!-- scope: telescope -->
357
+ no
358
 
 
359
 
360
+ ### Consent
361
 
362
+ #### Any Consent Policy?
363
 
364
+ <!-- info: Was there a consent policy involved when gathering the data? -->
365
+ <!-- scope: telescope -->
366
+ yes
367
 
368
+ #### Consent Policy Details
369
+
370
+ <!-- info: What was the consent policy? -->
371
+ <!-- scope: microscope -->
372
  The original dataset was open-sourced under Apache-2.0.
373
+ Some of the original dataset creators are part of the GEM v2 dataset infrastructure team and take care of integrating this dataset into GEM.
374
+
375
+
376
+ ### Private Identifying Information (PII)
377
+
378
+ #### Contains PII?
379
+
380
+ <!-- quick -->
381
+ <!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
382
+ <!-- scope: telescope -->
383
+ yes/very likely
384
+
385
+ #### Categories of PII
386
+
387
+ <!-- info: What categories of PII are present or suspected in the data? -->
388
+ <!-- scope: periscope -->
389
+ `generic PII`
390
+
391
+ #### Any PII Identification?
392
+
393
+ <!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? -->
394
+ <!-- scope: periscope -->
395
+ no identification
396
+
397
+
398
+ ### Maintenance
399
+
400
+ #### Any Maintenance Plan?
401
+
402
+ <!-- info: Does the original dataset have a maintenance plan? -->
403
+ <!-- scope: telescope -->
404
+ no
405
+
406
 
 
407
 
408
+ ## Broader Social Context
409
 
410
+ ### Previous Work on the Social Impact of the Dataset
411
 
412
+ #### Usage of Models based on the Data
413
 
414
+ <!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
415
+ <!-- scope: telescope -->
416
+ no
417
 
 
418
 
419
+ ### Impact on Under-Served Communities
420
 
421
+ #### Addresses needs of underserved Communities?
422
 
423
+ <!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
424
+ <!-- scope: telescope -->
425
+ no
426
 
 
427
 
428
+ ### Discussion of Biases
429
 
430
+ #### Any Documented Social Biases?
431
 
432
+ <!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
433
+ <!-- scope: telescope -->
434
+ unsure
435
 
 
436
 
 
437
 
438
+ ## Considerations for Using the Data
439
 
440
+ ### PII Risks and Liability
441
 
 
442
 
 
443
 
444
+ ### Licenses
445
 
446
+ #### Copyright Restrictions on the Dataset
447
 
448
+ <!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
449
+ <!-- scope: periscope -->
450
+ `non-commercial use only`
451
 
452
+ #### Copyright Restrictions on the Language Data
453
 
454
+ <!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
455
+ <!-- scope: periscope -->
456
+ `research use only`
457
 
 
458
 
459
+ ### Known Technical Limitations
460
 
 
461
 
 
462