system (HF staff) committed
Commit 19093ca (0 parents)

Update files from the datasets library (from 1.11.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.11.0

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +190 -0
  3. dataset_infos.json +1 -0
  4. dummy/1.1.0/dummy_data.zip +3 -0
  5. time_dial.py +116 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,190 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ languages:
+ - en
+ licenses:
+ - cc-by-nc-sa-4-0
+ multilinguality:
+ - monolingual
+ pretty_name: 'TimeDial: Temporal Commonsense Reasoning in Dialog'
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ task_ids:
+ - multi-label-classification
+ - text-classification-other-dialog-act-classification
+ paperswithcode_id: timedial
+ ---
+
+ # Dataset Card for TimeDial: Temporal Commonsense Reasoning in Dialog
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [TimeDial](https://github.com/google-research-datasets/timedial)
+ - **Paper:** [TimeDial: Temporal Commonsense Reasoning in Dialog](https://arxiv.org/abs/2106.04571)
+ - **Point of Contact:** [Please create an issue in the official repository](https://github.com/google-research-datasets/timedial)
+
+ ### Dataset Summary
+
+ TimeDial presents a crowdsourced English challenge set for temporal commonsense reasoning, formulated as a multiple-choice cloze task with around 1.5k carefully curated dialogs. The dataset is derived from DailyDialog ([Li et al., 2017](https://www.aclweb.org/anthology/I17-1099/)), a multi-turn dialog corpus.
+
+ In order to establish strong baselines and inform future model development, the authors conducted extensive experiments with state-of-the-art LMs. While humans can easily answer these questions (97.8%), the best T5 model variant struggles on this challenge set (73%). Moreover, their qualitative error analyses show that the models often rely on shallow, spurious features (particularly text matching) instead of truly reasoning over the context.
+
+ Detailed experiments and analyses can be found in the [paper](https://arxiv.org/pdf/2106.04571.pdf).
+
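+ To make the task format concrete, here is a minimal sketch (not the authors' code) of how one example's fields combine into a four-way multiple-choice cloze instance; the field names follow the [Data Fields](#data-fields) section below:
+
+ ```python
+ def to_choices(example):
+     """Fill <MASK> with each candidate span; exactly two options are correct."""
+     context = "\n".join(example["conversation"])
+     options = [
+         (example["correct1"], True),
+         (example["correct2"], True),
+         (example["incorrect1"], False),
+         (example["incorrect2"], False),
+     ]
+     return [(context.replace("<MASK>", span.strip()), label) for span, label in options]
+ ```
+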
+ ### Supported Tasks and Leaderboards
+
+ To be updated soon.
+
+ ### Languages
+
+ The dataset is in English only.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ ```
+ {
+     "id": 1,
+     "conversation": [
+         "A: We need to take the accounts system offline to carry out the upgrade . But don't worry , it won't cause too much inconvenience . We're going to do it over the weekend .",
+         "B: How long will the system be down for ?",
+         "A: We'll be taking everything offline in about two hours ' time . It'll be down for a minimum of twelve hours . If everything goes according to plan , it should be up again by 6 pm on Saturday .",
+         "B: That's fine . We've allowed <MASK> to be on the safe side ."
+     ],
+     "correct1": "forty-eight hours",
+     "correct2": "50 hours ",
+     "incorrect1": "two hours ",
+     "incorrect1_rule": "Rule 1",
+     "incorrect2": "12 days ",
+     "incorrect2_rule": "Rule 2"
+ }
+ ```
+ ### Data Fields
+
+ - "id": Unique identifier, as an integer
+ - "conversation": Dialog context with a <MASK> span, as a list of strings (one per turn)
+ - "correct1": Original <MASK> span, as a string
+ - "correct2": Additional correct option provided by annotators, as a string
+ - "incorrect1": Incorrect option #1 provided by annotators, as a string
+ - "incorrect1_rule": One of phrase matching ("Rule 1"), numeral matching ("Rule 2"), or open ended ("Rule 3"), as a string
+ - "incorrect2": Incorrect option #2 provided by annotators, as a string
+ - "incorrect2_rule": One of phrase matching ("Rule 1"), numeral matching ("Rule 2"), or open ended ("Rule 3"), as a string
+
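+ For quick inspection, the fields above can be accessed with the `datasets` library (a minimal sketch; it assumes the dataset resolves under the `time_dial` name):
+
+ ```python
+ from datasets import load_dataset
+
+ # TimeDial ships a single test split (see Data Splits below).
+ dataset = load_dataset("time_dial", split="test")
+
+ example = dataset[0]
+ print(example["conversation"])  # list of turns; one turn contains <MASK>
+ print(example["correct1"], "|", example["correct2"])
+ print(example["incorrect1"], f'({example["incorrect1_rule"]})')
+ ```
+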
+ ### Data Splits
+
+ The TimeDial dataset consists of a test set only, with 1,104 dialog instances, each carrying 2 correct and 2 incorrect options. Summary statistics:
+
+ |                       | Avg. |
+ |-----------------------|------|
+ | Turns per Dialog      | 11.7 |
+ | Words per Turn        | 16.5 |
+ | Time Spans per Dialog | 3    |
+
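+ The split size can be checked directly once the dataset is loaded (a small sketch; note that instances correspond to masked spans rather than dialogs, so per-instance statistics need not match the per-dialog numbers above exactly):
+
+ ```python
+ from datasets import load_dataset
+
+ test = load_dataset("time_dial", split="test")
+ print(len(test))  # number of cloze instances in the test split
+ avg_turns = sum(len(ex["conversation"]) for ex in test) / len(test)
+ print(f"Avg. turns per instance: {avg_turns:.1f}")
+ ```
+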
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ Although previous works have studied temporal reasoning in natural language, they have either focused on specific time-related concepts in isolation, such as temporal ordering and relation extraction, or dealt with limited context, such as single-sentence-based question answering and natural language inference.
+
+ In this work, the authors present the first systematic study of temporal commonsense reasoning in a multi-turn dialog setting. The task involves complex reasoning over temporal expressions, requiring operations like comparison and arithmetic as well as commonsense and world knowledge.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The TimeDial dataset is derived from DailyDialog (Li et al., 2017), a multi-turn dialog corpus containing over 13K English dialogs. Dialogs in this corpus consist of turn-taking between two people on topics spanning 10 broad categories, ranging from daily life to financial topics.
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ The data collection process involves two steps: (1) identifying dialogs that are rich in temporal expressions, and (2) asking human annotators to provide correct and incorrect options for cloze instances derived from these dialogs. In more detail:
+
+ 1) Temporal expression identification: Here, they select dialogs that are rich in temporal information, in order to focus on the complex temporal reasoning that arises in natural dialogs. Temporal expressions are automatically identified with SUTime, an off-the-shelf temporal expression detector. They keep only the dialogs with more than 3 temporal expressions and at least one expression that contains numerals, like “two weeks” (as opposed to non-numeric spans like “summer”, “right now”, and “later”). In an initial experiment, they observed that language models can often correctly predict these non-numeric temporal phrases.
+
+ 2) Human-annotated options: Next, they mask spans in the dialogs. For a dialog, they mask out each temporal expression that contains numerals, each resulting in a cloze question that is then sent for human annotation. This resulted in 1,526 instances for annotation. For each masked span in each dialog, they obtain human annotations to derive a fixed set of correct and incorrect options given the context. Concretely, given a masked dialog and a seed correct answer (i.e., the original text) for the masked span, the annotators were asked to (1) come up with an alternative correct answer that makes sense in the dialog and adheres to commonsense, and (2) formulate two incorrect answers that have no possibility of making sense in the dialog context. All time expressions in the context are highlighted to make it easier for annotators to select reasonable time expressions.
+
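+ As a purely illustrative sketch of step (2), the masking operation might look like this (the real pipeline uses SUTime for detection; the regex below is a hypothetical stand-in):
+
+ ```python
+ import re
+
+ # Hypothetical stand-in for SUTime: match simple numeric duration spans.
+ NUMERIC_SPAN = re.compile(
+     r"\b(?:\d+|one|two|three|twelve|fifty)\s+(?:hours?|days?|weeks?|months?|years?)\b",
+     re.IGNORECASE,
+ )
+
+ def make_cloze_instances(turns):
+     """Yield one cloze instance per numeric temporal span, as in TimeDial."""
+     for i, turn in enumerate(turns):
+         for match in NUMERIC_SPAN.finditer(turn):
+             masked = turn[: match.start()] + "<MASK>" + turn[match.end():]
+             yield {
+                 "conversation": turns[:i] + [masked] + turns[i + 1:],
+                 "correct1": match.group(0),  # original span seeds the annotation
+             }
+ ```
+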
+ #### Who are the annotators?
+
+ The annotators are English linguists.
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ The TimeDial dataset is licensed under CC BY-NC-SA 4.0.
+
+ ### Citation Information
+
+ ```
+ @inproceedings{qin-etal-2021-timedial,
+     title = "{TimeDial: Temporal Commonsense Reasoning in Dialog}",
+     author = "Qin, Lianhui and Gupta, Aditya and Upadhyay, Shyam and He, Luheng and Choi, Yejin and Faruqui, Manaal",
+     booktitle = "Proc. of ACL",
+     year = "2021"
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "TimeDial presents a crowdsourced English challenge set, for temporal commonsense reasoning, formulated\nas a multiple choice cloze task with around 1.5k carefully curated dialogs. The dataset is derived from\nthe DailyDialog (Li et al., 2017), which is a multi-turn dialog corpus.\n\nIn order to establish strong baselines and provide information on future model development, we\nconducted extensive experiments with state-of-the-art LMs. While humans can easily answer these\nquestions (97.8%), the best T5 model variant struggles on this challenge set (73%). Moreover, our\nqualitative error analyses show that the models often rely on shallow, spurious features (particularly text\nmatching), instead of truly doing reasoning over the context.\n", "citation": "@inproceedings{qin-etal-2021-timedial,\n title = \"{TimeDial: Temporal Commonsense Reasoning in Dialog}\",\n author = \"Qin, Lianhui and Gupta, Aditya and Upadhyay, Shyam and He, Luheng and Choi, Yejin and Faruqui, Manaal\",\n booktitle = \"Proc. of ACL\",\n year = \"2021\"\n}\n", "homepage": "https://github.com/google-research-datasets/timedial", "license": "TimeDial dataset is licensed under CC BY-NC-SA 4.0", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "conversation": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "correct1": {"dtype": "string", "id": null, "_type": "Value"}, "correct2": {"dtype": "string", "id": null, "_type": "Value"}, "incorrect1": {"dtype": "string", "id": null, "_type": "Value"}, "incorrect1_rule": {"dtype": "string", "id": null, "_type": "Value"}, "incorrect2": {"dtype": "string", "id": null, "_type": "Value"}, "incorrect2_rule": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "time_dial", "config_name": "default", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 1449879, "num_examples": 1446, "dataset_name": "time_dial"}}, "download_checksums": {"https://raw.githubusercontent.com/google-research-datasets/TimeDial/main/test.json": {"num_bytes": 1613806, "checksum": "771126fcbb7441fce4a6f3a1fce4a1b3c0ebaaa24ed5b443a7fa1d9723745481"}}, "download_size": 1613806, "post_processing_size": null, "dataset_size": 1449879, "size_in_bytes": 3063685}}
dummy/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e50e91a429dd9ca7615b89ab5c2b3752bef4cc01b51701904a2b2174e097a816
+ size 2058
time_dial.py ADDED
@@ -0,0 +1,116 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Temporal Commonsense Reasoning in Dialog"""
+
+
+ import json
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{qin-etal-2021-timedial,
+     title = "{TimeDial: Temporal Commonsense Reasoning in Dialog}",
+     author = "Qin, Lianhui and Gupta, Aditya and Upadhyay, Shyam and He, Luheng and Choi, Yejin and Faruqui, Manaal",
+     booktitle = "Proc. of ACL",
+     year = "2021"
+ }
+ """
+
+ _DESCRIPTION = """\
+ TimeDial presents a crowdsourced English challenge set, for temporal commonsense reasoning, formulated
+ as a multiple choice cloze task with around 1.5k carefully curated dialogs. The dataset is derived from
+ the DailyDialog (Li et al., 2017), which is a multi-turn dialog corpus.
+
+ In order to establish strong baselines and provide information on future model development, we
+ conducted extensive experiments with state-of-the-art LMs. While humans can easily answer these
+ questions (97.8%), the best T5 model variant struggles on this challenge set (73%). Moreover, our
+ qualitative error analyses show that the models often rely on shallow, spurious features (particularly text
+ matching), instead of truly doing reasoning over the context.
+ """
+
+ _HOMEPAGE = "https://github.com/google-research-datasets/timedial"
+
+ _LICENSE = "TimeDial dataset is licensed under CC BY-NC-SA 4.0"
+
+ _URL = "https://raw.githubusercontent.com/google-research-datasets/TimeDial/main/test.json"
+
+
+ class TimeDial(datasets.GeneratorBasedBuilder):
+     """Temporal Commonsense Reasoning in Dialog"""
+
+     VERSION = datasets.Version("1.1.0")
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "id": datasets.Value("int32"),
+                 "conversation": datasets.features.Sequence(datasets.Value("string")),
+                 "correct1": datasets.Value("string"),
+                 "correct2": datasets.Value("string"),
+                 "incorrect1": datasets.Value("string"),
+                 "incorrect1_rule": datasets.Value("string"),
+                 "incorrect2": datasets.Value("string"),
+                 "incorrect2_rule": datasets.Value("string"),
+             }
+         )
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types.
+             features=features,
+             # If there's a common (input, target) tuple from the features,
+             # specify them here. They'll be used if as_supervised=True in
+             # builder.as_dataset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation.
+             homepage=_HOMEPAGE,
+             # License for the dataset if available.
+             license=_LICENSE,
+             # Citation for the dataset.
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 # These kwargs will be passed to _generate_examples.
+                 gen_kwargs={"filepath": dl_manager.download_and_extract(_URL), "split": "test"},
+             ),
+         ]
+
+     def _generate_examples(
+         self, filepath, split  # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
+     ):
+         """Yields examples as (key, example) tuples."""
+
+         with open(filepath, encoding="utf-8") as f:
+             # test.json holds a list of dialog instances; the running index
+             # serves as the unique example key.
+             rows = json.load(f)
+             for glob_id, data in enumerate(rows):
+                 yield glob_id, {
+                     "id": data["id"],
+                     "conversation": data["conversation"],
+                     "correct1": data["correct1"],
+                     "correct2": data["correct2"],
+                     "incorrect1": data["incorrect1"],
+                     "incorrect1_rule": data["incorrect1_rule"],
+                     "incorrect2": data["incorrect2"],
+                     "incorrect2_rule": data["incorrect2_rule"],
+                 }
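
A script like this can be exercised end to end with a minimal sketch (assuming the file is saved locally as `time_dial.py`; datasets 1.x also resolves the canonical `time_dial` name on the Hub):

```python
from datasets import load_dataset

# Load via the local script; the builder downloads test.json from GitHub.
ds = load_dataset("./time_dial.py", split="test")
print(ds[0]["id"], "|", ds[0]["correct1"])
```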