Commit 0ff6ace, committed by system (HF staff)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +169 -0
  3. dataset_infos.json +1 -0
  4. dummy/0.0.0/dummy_data.zip +3 -0
  5. zest.py +117 -0
.gitattributes ADDED
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
languages:
- en
licenses:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
- structure-prediction
task_ids:
- closed-domain-qa
- extractive-qa
- question-answering-other-yes-no-qa
- structure-prediction-other-output-structure
---

# Dataset Card for "ZEST: ZEroShot learning from Task descriptions"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://allenai.org/data/zest
- **Repository:** https://github.com/allenai/zest
- **Paper:** https://arxiv.org/abs/2011.08115
- **Leaderboard:** https://leaderboard.allenai.org/zest/submissions/public
- **Point of Contact:**

### Dataset Summary

ZEST tests whether NLP systems can perform unseen tasks in a zero-shot way, given a natural language description of
the task. It is an instantiation of our proposed framework "learning from task descriptions". The tasks include
classification, typed entity extraction and relationship extraction, and each task is paired with 20 different
annotated (input, output) examples. ZEST's structure allows us to systematically test whether models can generalize
in five different ways.

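The dataset can be loaded with the `datasets` library. A minimal sketch, assuming the `zest` loading script in this repository is used:

```
from datasets import load_dataset

# Downloads and prepares all three splits via the zest loading script.
zest = load_dataset("zest")

print(zest)              # split names and sizes
print(zest["train"][0])  # one flattened (task, example) row
```
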
### Supported Tasks and Leaderboards

A [leaderboard](https://leaderboard.allenai.org/zest/submissions/public) is included, reporting results for each of
the four generalization types outlined in the paper. It uses the novel acceptability metrics proposed by the
authors.

### Languages

The dataset is in English.

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

Each row is a single (task, example) pair with the following fields (names and types taken from the dataset metadata):

- `task_id` (`string`): identifier of the task the example belongs to.
- `question` (`string`): the task description, phrased as a natural language question.
- `generalization_type` (`string`): which generalization type the task tests.
- `derives_from` (list of `string`): ids of the tasks this task was derived from.
- `domain` (`string`): the domain of the task.
- `context` (`string`): the input text for the example.
- `answer` (list of `string`): the gold answer(s) for the example; empty in the unlabeled test split.
- `all_answers` (list of `string`): all annotated answers for the example; empty in the unlabeled test split.

### Data Splits

| Split      | Examples |
| ---------- | -------: |
| train      |   10,766 |
| validation |    2,280 |
| test       |   11,980 |

Answers in the test split are withheld for the leaderboard.

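A quick sanity check of the fields and split sizes, continuing from the loading sketch above:

```
from datasets import load_dataset

zest = load_dataset("zest")

# Split sizes should match the table above.
for name, split in zest.items():
    print(name, split.num_rows)

# Field access on a single labeled row.
row = zest["validation"][0]
print(row["question"])
print(row["answer"])
```
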
## Dataset Creation

### Curation Rationale

To evaluate the ability of a model to generalize to unseen tasks based only on a task description, in a zero-shot
manner.

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

Amazon Mechanical Turk crowd workers.

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

Amazon Mechanical Turk crowd workers.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

The dataset emphasizes a model's ability to generalize to unseen tasks with only a natural language description of
the task. The long-term vision of this type of evaluation is to facilitate the creation of models that can perform
arbitrary tasks with only a prompt from a non-technical user. This could broaden what users can ask a system such
as a chatbot to do for them, but it is unclear how restrictions could be put in place to prevent users from
prompting a system to perform unethical tasks.

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

This dataset is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).

### Citation Information

```
@inproceedings{weller-etal-2020-learning,
    title = "Learning from Task Descriptions",
    author = "Weller, Orion  and
      Lourie, Nicholas  and
      Gardner, Matt  and
      Peters, Matthew",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.emnlp-main.105",
    pages = "1361--1375",
    abstract = "Typically, machine learning systems solve new tasks by training on thousands of examples. In contrast, humans can solve new tasks by reading some instructions, with perhaps an example or two. To take a step toward closing this gap, we introduce a framework for developing NLP systems that solve new tasks after reading their descriptions, synthesizing prior work in this area. We instantiate this framework with a new English language dataset, ZEST, structured for task-oriented evaluation on unseen tasks. Formulating task descriptions as questions, we ensure each is general enough to apply to many possible inputs, thus comprehensively evaluating a model{'}s ability to solve each task. Moreover, the dataset{'}s structure tests specific types of systematic generalization. We find that the state-of-the-art T5 model achieves a score of 12% on ZEST, leaving a significant challenge for NLP researchers.",
}
```

dataset_infos.json ADDED
{"default": {"description": "ZEST tests whether NLP systems can perform unseen tasks in a zero-shot way, given a natural language description of\nthe task. It is an instantiation of our proposed framework \"learning from task descriptions\". The tasks include\nclassification, typed entity extraction and relationship extraction, and each task is paired with 20 different\nannotated (input, output) examples. ZEST's structure allows us to systematically test whether models can generalize\nin five different ways.\n", "citation": "@inproceedings{weller-etal-2020-learning,\n title = \"Learning from Task Descriptions\",\n author = \"Weller, Orion and\n Lourie, Nicholas and\n Gardner, Matt and\n Peters, Matthew\",\n booktitle = \"Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)\",\n month = nov,\n year = \"2020\",\n address = \"Online\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/2020.emnlp-main.105\",\n pages = \"1361--1375\",\n abstract = \"Typically, machine learning systems solve new tasks by training on thousands of examples. In contrast, humans can solve new tasks by reading some instructions, with perhaps an example or two. To take a step toward closing this gap, we introduce a framework for developing NLP systems that solve new tasks after reading their descriptions, synthesizing prior work in this area. We instantiate this framework with a new English language dataset, ZEST, structured for task-oriented evaluation on unseen tasks. Formulating task descriptions as questions, we ensure each is general enough to apply to many possible inputs, thus comprehensively evaluating a model{'}s ability to solve each task. Moreover, the dataset{'}s structure tests specific types of systematic generalization. We find that the state-of-the-art T5 model achieves a score of 12{\\%} on ZEST, leaving a significant challenge for NLP researchers.\",\n}\n", "homepage": "https://allenai.org/data/zest", "license": "", "features": {"task_id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "generalization_type": {"dtype": "string", "id": null, "_type": "Value"}, "derives_from": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "domain": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "all_answers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "zest", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 9588987, "num_examples": 10766, "dataset_name": "zest"}, "validation": {"name": "validation", "num_bytes": 2056804, "num_examples": 2280, "dataset_name": "zest"}, "test": {"name": "test", "num_bytes": 9280845, "num_examples": 11980, "dataset_name": "zest"}}, "download_checksums": {"https://ai2-datasets.s3-us-west-2.amazonaws.com/zest/zest.zip": {"num_bytes": 5796188, "checksum": "91b8e41470281e774034b2f2a42a5cb36a8ff4f7d17517123d51208aa9af795f"}}, "download_size": 5796188, "post_processing_size": null, "dataset_size": 20926636, "size_in_bytes": 26722824}}
dummy/0.0.0/dummy_data.zip ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:c06e069a35cb78eebc7af2254cde3ddcdde1e16c8ff549705855b448e5caec27
size 74157
zest.py ADDED
# coding=utf-8
# Copyright 2020 HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Lint as: python3
"""ZEST: ZEroShot learning from Task descriptions"""

from __future__ import absolute_import, division, print_function

import json
import os

import datasets


_DESCRIPTION = """\
ZEST tests whether NLP systems can perform unseen tasks in a zero-shot way, given a natural language description of
the task. It is an instantiation of our proposed framework "learning from task descriptions". The tasks include
classification, typed entity extraction and relationship extraction, and each task is paired with 20 different
annotated (input, output) examples. ZEST's structure allows us to systematically test whether models can generalize
in five different ways.
"""

_CITATION = """\
@inproceedings{weller-etal-2020-learning,
    title = "Learning from Task Descriptions",
    author = "Weller, Orion  and
      Lourie, Nicholas  and
      Gardner, Matt  and
      Peters, Matthew",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.emnlp-main.105",
    pages = "1361--1375",
    abstract = "Typically, machine learning systems solve new tasks by training on thousands of examples. In contrast, humans can solve new tasks by reading some instructions, with perhaps an example or two. To take a step toward closing this gap, we introduce a framework for developing NLP systems that solve new tasks after reading their descriptions, synthesizing prior work in this area. We instantiate this framework with a new English language dataset, ZEST, structured for task-oriented evaluation on unseen tasks. Formulating task descriptions as questions, we ensure each is general enough to apply to many possible inputs, thus comprehensively evaluating a model{'}s ability to solve each task. Moreover, the dataset{'}s structure tests specific types of systematic generalization. We find that the state-of-the-art T5 model achieves a score of 12% on ZEST, leaving a significant challenge for NLP researchers.",
}
"""

_DOWNLOAD_URL = "https://ai2-datasets.s3-us-west-2.amazonaws.com/zest/zest.zip"
_WEBPAGE = "https://allenai.org/data/zest"


class Zest(datasets.GeneratorBasedBuilder):
    """ZEST: ZEroShot learning from Task descriptions"""

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "task_id": datasets.Value("string"),
                    "question": datasets.Value("string"),
                    "generalization_type": datasets.Value("string"),
                    "derives_from": datasets.Sequence(datasets.Value("string")),
                    "domain": datasets.Value("string"),
                    "context": datasets.Value("string"),
                    "answer": datasets.Sequence(datasets.Value("string")),
                    "all_answers": datasets.Sequence(datasets.Value("string")),
                }
            ),
            homepage=_WEBPAGE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        path = dl_manager.download_and_extract(_DOWNLOAD_URL)
        path = os.path.join(path, "zest")

        train_path = os.path.join(path, "train.jsonl")
        validation_path = os.path.join(path, "dev.jsonl")
        # Test answers are withheld for the leaderboard; the file ships without labels.
        test_path = os.path.join(path, "test_unanswered.jsonl")

        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_path}),
            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": validation_path}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": test_path, "is_labeled": False}),
        ]

    def _generate_examples(self, filepath, is_labeled=True):
        """Generate ZEST examples, flattening each task into one row per annotated example."""
        counter = 0
        with open(filepath, "r", encoding="utf-8") as f:
            for line in f:
                task = json.loads(line)
                # Task-level metadata shared by all of the task's examples.
                base_dict = {
                    "task_id": task["id"],
                    "question": task["question"],
                    "generalization_type": task["type"]["generalization_type"] if is_labeled else None,
                    "derives_from": task["type"]["derives_from"] if is_labeled else [],
                    "domain": task["type"]["domain"] if is_labeled else None,
                }

                for example in task["examples"]:
                    answer = example["answer"] if is_labeled else []
                    # Normalize string answers to single-element lists.
                    if isinstance(answer, str):
                        answer = [answer]
                    yield counter, dict(
                        context=example["context"],
                        answer=answer,
                        all_answers=example["all_answers"] if is_labeled else [],
                        **base_dict,
                    )
                    counter += 1
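
To make the flattening above concrete, here is a self-contained sketch that processes a toy task record in the same JSONL shape (all field values are made up for illustration):

```
import json

# A toy task record in the zest JSONL format (values are illustrative only).
line = json.dumps({
    "id": "task-0001",
    "question": "What birds can fly?",
    "type": {"generalization_type": "base", "derives_from": [], "domain": "birds"},
    "examples": [
        {"context": "Penguins cannot fly.", "answer": "no", "all_answers": ["no"]},
        {"context": "Sparrows fly south in winter.", "answer": "yes", "all_answers": ["yes"]},
    ],
})

task = json.loads(line)
# Each task is flattened into one row per (context, answer) example,
# mirroring what the builder's _generate_examples does.
for example in task["examples"]:
    answer = example["answer"]
    if isinstance(answer, str):  # string answers are normalized to lists
        answer = [answer]
    print(task["id"], example["context"], answer)
```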