Commit d7dc860 (0 parents), committed by system (HF staff)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0
Files changed (4):
  1. .gitattributes +27 -0
  2. README.md +230 -0
  3. dummy/glucose/0.0.0/dummy_data.zip +3 -0
  4. glucose.py +161 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,230 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - crowdsourced
+ languages:
+ - en
+ licenses:
+ - cc-by-nc-4-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - extended|other-ROC-stories
+ task_categories:
+ - sequence-modeling
+ task_ids:
+ - sequence-modeling-other-common-sense-inference
+ ---
+
+ # Dataset Card for GLUCOSE
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **[Repository](https://github.com/TevenLeScao/glucose)**
+ - **[Paper](https://arxiv.org/abs/2009.07758)**
+ - **Point of Contact: [glucose@elementalcognition.com](mailto:glucose@elementalcognition.com)**
+
+ ### Dataset Summary
+
+ GLUCOSE (GeneraLized and COntextualized Story Explanations) is a novel conceptual framework and dataset for commonsense reasoning. Given a short story and a sentence X in the story, GLUCOSE captures ten dimensions of causal explanation related to X. These dimensions, inspired by human cognitive psychology, cover often-implicit causes and effects of X, including events, location, possession, and other attributes.
+
+ ### Supported Tasks and Leaderboards
+
+ Commonsense inference of:
+ 1. Causes of an event
+ 2. Emotions motivating an event
+ 3. Locations enabling an event
+ 4. Possession states enabling an event
+ 5. Other attributes enabling an event
+ 6. Consequences of an event
+ 7. Emotions caused by an event
+ 8. Changes in location caused by an event
+ 9. Changes in possession caused by an event
+ 10. Other attributes that may be changed by an event
+
+ ### Languages
+
+ English, monolingual.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Dimensions that were not filled in for a given annotation appear as `"escaped"`.
+
+ ```
+ {
+   "experiment_id": "e56c7c3e-4660-40fb-80d0-052d566d676a__4",
+   "story_id": "e56c7c3e-4660-40fb-80d0-052d566d676a",
+   "worker_id": 19,
+   "submission_time_normalized": "20190930",
+   "worker_quality_assessment": 3,
+   "selected_sentence_index": 4,
+   "story": "It was bedtime at our house. Two of the three kids hit the pillow and fall asleep. The third is a trouble maker. For two hours he continues to get out of bed and want to play. Finally he becomes tired and falls asleep.",
+   "selected_sentence": "Finally he becomes tired and falls asleep.",
+   "1_specificNL": "The third kid continues to get out of bed and wants to play >Causes/Enables> The kid finally becomes tired and falls asleep",
+   "1_specificStructured": "{The third kid}_[subject] {continues}_[verb] {to }_[preposition1] {get out of bed}_[object1] {and wants to play}_[object2] >Causes/Enables> {The kid}_[subject] {finally becomes}_[verb] {tired}_[object1] {and falls asleep}_[object2]",
+   "1_generalNL": "Someone_A doesn't want to go to sleep >Causes/Enables> Someone_A finally falls asleep",
+   "1_generalStructured": "{Someone_A}_[subject] {doesn't want}_[verb] {to }_[preposition1] {go to sleep}_[object1] >Causes/Enables> {Someone_A}_[subject] {finally falls}_[verb] {asleep}_[object1]",
+   "2_specificNL": "escaped",
+   "2_specificStructured": "escaped",
+   "2_generalNL": "escaped",
+   "2_generalStructured": "escaped",
+   "3_specificNL": "The third kid is in bed >Enables> The kid finally becomes tired and falls asleep",
+   "3_specificStructured": "{The third kid}_[subject] {is}_[verb] {in}_[preposition] {bed}_[object] >Enables> {The kid}_[subject] {finally becomes}_[verb] {tired}_[object1] {and falls asleep}_[object2]",
+   "3_generalNL": "Someone_A is in bed >Enables> Someone_A falls asleep",
+   "3_generalStructured": "{Someone_A}_[subject] {is}_[verb] {in}_[preposition] {bed}_[object] >Enables> {Someone_A}_[subject] {falls}_[verb] {asleep}_[object1]",
+   "4_specificNL": "escaped",
+   "4_specificStructured": "escaped",
+   "4_generalNL": "escaped",
+   "4_generalStructured": "escaped",
+   "5_specificNL": "escaped",
+   "5_specificStructured": "escaped",
+   "5_generalNL": "escaped",
+   "5_generalStructured": "escaped",
+   "6_specificNL": "escaped",
+   "6_specificStructured": "escaped",
+   "6_generalNL": "escaped",
+   "6_generalStructured": "escaped",
+   "7_specificNL": "escaped",
+   "7_specificStructured": "escaped",
+   "7_generalNL": "escaped",
+   "7_generalStructured": "escaped",
+   "8_specificNL": "escaped",
+   "8_specificStructured": "escaped",
+   "8_generalNL": "escaped",
+   "8_generalStructured": "escaped",
+   "9_specificNL": "escaped",
+   "9_specificStructured": "escaped",
+   "9_generalNL": "escaped",
+   "9_generalStructured": "escaped",
+   "10_specificNL": "escaped",
+   "10_specificStructured": "escaped",
+   "10_generalNL": "escaped",
+   "10_generalStructured": "escaped",
+   "number_filled_in": 7
+ }
+ ```
+
+ ### Data Fields
+
+ - __experiment_id__: a randomly generated alphanumeric sequence for a given story, with the sentence index appended at the end after two underscores. Example: cbee2b5a-f2f9-4bca-9630-6825b1e36c13__0
+
+ - __story_id__: a random alphanumeric identifier for the story. Example: e56c7c3e-4660-40fb-80d0-052d566d676a
+
+ - __worker_id__: each worker has a unique identification number. Example: 21
+
+ - __submission_time_normalized__: the time of submission in the format YYYYMMDD. Example: 20200115
+
+ - __worker_quality_assessment__: rating for the worker on the assignment in the row. Example: 2
+
+ - __selected_sentence_index__: the index of a given sentence in a story. Example: 0
+
+ - __story__: the full text of the ROC story that was used for the HIT. Example: It was bedtime at our house. Two of the three kids hit the pillow and fall asleep. The third is a trouble maker. For two hours he continues to get out of bed and want to play. Finally he becomes tired and falls asleep.
+
+ - __selected_sentence__: the sentence from the story that is being annotated. Example: It was bedtime at our house.
+
+ - __[1-10]\_[specific/general][NL/Structured]__: the primary data collected. It provides the commonsense knowledge about the related stories and the general rules about the world derived from the specific statements. For each of the ten relationships, there are four columns. The specific columns give the specific statements from the story; the general columns give the corresponding generalization. The NL columns are formatted in natural language, whereas the structured columns mark the slots used to fill in the data. Example:
+   - __1_specificNL__: "The school has a football team >Causes/Enables> The football game was last weekend"
+   - __1_specificStructured__: "{The school }\_[subject] {has }\_[verb] {a football team }\_[object1] >Causes/Enables> {The football game }\_[subject] {was last weekend }\_[verb]"
+   - __1_generalNL__: "Somewhere_A (that is a school ) has Something_A (that is a sports team ) >Causes/Enables> The game was last weekend"
+   - __1_generalStructured__: "{Somewhere_A ||that is a school ||}\_[subject] {has }\_[verb] {Something_A ||that is a sports team ||}\_[object1] >Causes/Enables> {The game }\_[subject] {was last weekend }\_[verb]"
+
+ - __number\_filled\_in__: number of dimensions filled in for the assignment. Example: 4
+
+ ### Data Splits
+
+ - Train split: 65,521 examples
+ - Test split: 500 examples, released without worker ids and ratings, number filled in, and structured text
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ When humans read or listen, they make implicit commonsense inferences that frame their understanding of what happened and why. As a step toward AI systems that can build similar mental models, we introduce GLUCOSE, a large-scale dataset of implicit commonsense causal knowledge, encoded as causal mini-theories about the world, each grounded in a narrative context.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ Initial text from ROCStories.
+
+ #### Who are the source language producers?
+
+ Amazon Mechanical Turk workers.
+
+ ### Annotations
+
+ #### Annotation process
+
+ To enable developing models that can build mental models of narratives, we aimed to crowdsource a large, quality-monitored dataset. Beyond the scalability benefits, using crowd workers (as opposed to a small set of expert annotators) ensures diversity of thought, thus broadening coverage of a common-sense knowledge resource. The annotation task is complex: it requires annotators to understand different causal dimensions in a variety of contexts and to come up with generalized theories beyond the story context. For strict quality control, we designed a three-stage knowledge acquisition pipeline for crowdsourcing the GLUCOSE dataset on the Amazon Mechanical Turk Platform. The workers first go through a qualification test where they must score at least 90% on 10 multiple-choice questions on select GLUCOSE dimensions. Next, qualified workers can work on the main GLUCOSE data collection task: given a story S and a story sentence X, they are asked to fill in (allowing for non-applicable) all ten GLUCOSE dimensions, getting step-by-step guidance from the GLUCOSE data acquisition interface. To ensure data consistency, the same workers answer all dimensions for an S, X pair. Finally, the submissions are reviewed by an expert who rates each worker on a scale from 0 to 3, and provides feedback on how to improve. Our final UIs are the result of more than six rounds of pilot studies, iteratively improving the interaction elements, functionality, dimension definitions, instructions, and examples.
+
+ #### Who are the annotators?
+
+ Amazon Mechanical Turk workers, with feedback from an expert.
+
+ ### Personal and Sensitive Information
+
+ No personal or sensitive information.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ Nasrin Mostafazadeh, Aditya Kalyanpur, Lori Moon, David Buchanan, Lauren Berkowitz, Or Biran, and Jennifer Chu-Carroll, from Elemental Cognition
+
+ ### Licensing Information
+
+ Creative Commons Attribution-NonCommercial 4.0 International Public License
+
+ ### Citation Information
+
+ ```
+ @inproceedings{mostafazadeh2020glucose,
+     title={GLUCOSE: GeneraLized and COntextualized Story Explanations},
+     author={Nasrin Mostafazadeh and Aditya Kalyanpur and Lori Moon and David Buchanan and Lauren Berkowitz and Or Biran and Jennifer Chu-Carroll},
+     year={2020},
+     booktitle={The Conference on Empirical Methods in Natural Language Processing},
+     publisher={Association for Computational Linguistics}
+ }
+ ```
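The `_specificStructured` and `_generalStructured` fields shown above use a slot notation, `{surface text}_[slot]`, around a relation connective such as `>Causes/Enables>` or `>Enables>`. A minimal sketch of how such a string could be parsed into slots (the helper names here are illustrative, not part of the dataset's tooling):

```python
import re

# Matches one "{surface text}_[slot_name]" chunk of the structured notation.
SLOT = re.compile(r"\{(?P<text>[^{}]*)\}_\[(?P<slot>[^\[\]]+)\]")


def parse_structured(expr):
    """Split a structured GLUCOSE annotation into (antecedent, relation, consequent),
    where each side is a list of (slot, text) pairs."""
    # The relation connective is the first ">...>"-delimited token between the clauses.
    match = re.search(r">\s*([^>]+?)\s*>", expr)
    if match is None:
        raise ValueError("no relation connective found")
    relation = match.group(1)
    left, right = expr[: match.start()], expr[match.end():]

    def parse_side(side):
        return [(m.group("slot"), m.group("text").strip()) for m in SLOT.finditer(side)]

    return parse_side(left), relation, parse_side(right)


example = (
    "{The school }_[subject] {has }_[verb] {a football team }_[object1] "
    ">Causes/Enables> {The football game }_[subject] {was last weekend }_[verb]"
)
antecedent, relation, consequent = parse_structured(example)
```

Running this on the example from the Data Fields section yields the relation `Causes/Enables` with three antecedent slots and two consequent slots.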
dummy/glucose/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:548af3c57738302cdc1ef010d5954805df41ecaee6c3094873c8a8d6b5a88ccd
+ size 3759
glucose.py ADDED
@@ -0,0 +1,161 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """GLUCOSE (GeneraLized and COntextualized Story Explanations) is a novel conceptual framework and dataset for commonsense reasoning. Given a short story and a sentence X in the story, GLUCOSE captures ten dimensions of causal explanation related to X. These dimensions, inspired by human cognitive psychology, cover often-implicit causes and effects of X, including events, location, possession, and other attributes."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import csv
+ import os
+
+ import datasets
+
+
+ # Find for instance the citation on arxiv or on the dataset repo/website
+ _CITATION = """\
+ @inproceedings{mostafazadeh2020glucose,
+     title={GLUCOSE: GeneraLized and COntextualized Story Explanations},
+     author={Nasrin Mostafazadeh and Aditya Kalyanpur and Lori Moon and David Buchanan and Lauren Berkowitz and Or Biran and Jennifer Chu-Carroll},
+     year={2020},
+     booktitle={The Conference on Empirical Methods in Natural Language Processing},
+     publisher={Association for Computational Linguistics}
+ }
+ """
+
+ # You can copy an official description
+ _DESCRIPTION = """\
+ When humans read or listen, they make implicit commonsense inferences that frame their understanding of what happened and why. As a step toward AI systems that can build similar mental models, we introduce GLUCOSE, a large-scale dataset of implicit commonsense causal knowledge, encoded as causal mini-theories about the world, each grounded in a narrative context.
+ """
+
+ _HOMEPAGE = "https://github.com/ElementalCognition/glucose"
+
+ _LICENSE = "Creative Commons Attribution-NonCommercial 4.0 International Public License"
+
+ _URLs = {
+     "glucose": {
+         "test": "https://raw.githubusercontent.com/ElementalCognition/glucose/master/test/test_set_no_answers.csv",
+         "train": "https://github.com/TevenLeScao/glucose/blob/master/GLUCOSE_training_data.zip?raw=true",
+     }
+ }
+
+
+ class Glucose(datasets.GeneratorBasedBuilder):
+     """GLUCOSE (GeneraLized and COntextualized Story Explanations), a conceptual framework and dataset for commonsense reasoning."""
+
+     VERSION = datasets.Version("1.1.0")
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="glucose", description="Main dataset"),
+     ]
+
+     def _info(self):
+         feature_dict = {
+             "experiment_id": datasets.Value("string"),
+             "story_id": datasets.Value("string"),
+             # The train set contains a single numeric worker ID per row
+             "worker_id": datasets.Value("int64"),
+             # The test set contains several worker IDs in one string
+             "worker_ids": datasets.Value("string"),
+             "submission_time_normalized": datasets.Value("string"),
+             "worker_quality_assessment": datasets.Value("int64"),
+             "selected_sentence_index": datasets.Value("int64"),
+             "story": datasets.Value("string"),
+             "selected_sentence": datasets.Value("string"),
+             "number_filled_in": datasets.Value("int64"),
+         }
+         # Four columns per causal dimension: specific/general x NL/structured
+         for i in range(1, 11):
+             feature_dict[f"{i}_specificNL"] = datasets.Value("string")
+             feature_dict[f"{i}_specificStructured"] = datasets.Value("string")
+             feature_dict[f"{i}_generalNL"] = datasets.Value("string")
+             feature_dict[f"{i}_generalStructured"] = datasets.Value("string")
+         features = datasets.Features(feature_dict)
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         train_url = _URLs[self.config.name]["train"]
+         test_url = _URLs[self.config.name]["test"]
+         train_data = dl_manager.download_and_extract(train_url)
+         test_data = dl_manager.download_and_extract(test_url)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": os.path.join(train_data, "GLUCOSE_training_data_final.csv"),
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={"filepath": test_data, "split": "test"},
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         with open(filepath, encoding="utf8") as f:
+             data = csv.reader(f)
+             next(data)  # skip the header row
+             for id_, row in enumerate(data):
+                 if split == "train":
+                     yield id_, train_dict_from_row(row)
+                 else:
+                     yield id_, test_dict_from_row(row)
+
+
+ def train_dict_from_row(row):
+     return_dict = {
+         "experiment_id": row[0],
+         "story_id": row[1],
+         "worker_id": row[2],
+         "worker_ids": "",
+         "submission_time_normalized": row[3],
+         "worker_quality_assessment": row[4],
+         "selected_sentence_index": row[5],
+         "story": row[6],
+         "selected_sentence": row[7],
+         "number_filled_in": row[48],
+     }
+     for i in range(1, 11):
+         return_dict[f"{i}_specificNL"] = row[4 * i + 4]
+         return_dict[f"{i}_specificStructured"] = row[4 * i + 5]
+         return_dict[f"{i}_generalNL"] = row[4 * i + 6]
+         return_dict[f"{i}_generalStructured"] = row[4 * i + 7]
+     return return_dict
+
+
+ def test_dict_from_row(row):
+     return_dict = {
+         "experiment_id": "",
+         "story_id": row[0],
+         "worker_id": -1,
+         "worker_ids": row[3],
+         "submission_time_normalized": "",
+         "worker_quality_assessment": -1,
+         "selected_sentence_index": -1,
+         "story": row[1],
+         "selected_sentence": row[2],
+         "number_filled_in": -1,
+     }
+     for i in range(1, 11):
+         return_dict[f"{i}_specificNL"] = row[2 * i + 2]
+         return_dict[f"{i}_generalNL"] = row[2 * i + 3]
+         return_dict[f"{i}_specificStructured"] = ""
+         return_dict[f"{i}_generalStructured"] = ""
+     return return_dict
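The row-to-column arithmetic in `train_dict_from_row` assumes a fixed train CSV layout: eight metadata columns, then four columns per dimension (specific NL, specific structured, general NL, general structured) for dimensions 1-10, with `number_filled_in` last at index 48. A small self-contained check of that indexing against a synthetic row (the column contents are made up for illustration):

```python
# Synthetic train row: 8 metadata columns, 10 dimensions x 4 columns, then number_filled_in.
meta = ["exp_1", "story_1", "7", "20190930", "3", "4", "story text", "sentence text"]
dims = [
    f"d{i}_{kind}"
    for i in range(1, 11)
    for kind in ("specificNL", "specificStructured", "generalNL", "generalStructured")
]
train_row = meta + dims + ["7"]  # 49 columns in total


def train_dims(row):
    """Mirror of the index arithmetic used by glucose.py's train_dict_from_row."""
    out = {}
    for i in range(1, 11):
        out[f"{i}_specificNL"] = row[4 * i + 4]
        out[f"{i}_specificStructured"] = row[4 * i + 5]
        out[f"{i}_generalNL"] = row[4 * i + 6]
        out[f"{i}_generalStructured"] = row[4 * i + 7]
    return out


parsed = train_dims(train_row)
```

With this layout, dimension 1's specific NL sits at index 8 (right after the metadata) and dimension 10's general structured form at index 47, leaving index 48 for `number_filled_in`; the test file uses the analogous two-columns-per-dimension scheme (`2 * i + 2`, `2 * i + 3`).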