system (HF staff) committed
Commit aee3e40 (0 parents)

Update files from the datasets library (from 1.5.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.5.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,172 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ languages:
+ - en
+ licenses:
+ - cc-by-nc-4.0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - original
+ task_categories:
+ - question-answering
+ task_ids:
+ - open-domain-qa
+ ---
+ 
+ # Dataset Card for Cryptonite
+ 
+ ## Table of Contents
+ - [Dataset Card for Cryptonite](#dataset-card-for-cryptonite)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+     - [Dataset Summary](#dataset-summary)
+     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+     - [Languages](#languages)
+   - [Dataset Structure](#dataset-structure)
+     - [Data Instances](#data-instances)
+     - [Data Fields](#data-fields)
+     - [Data Splits](#data-splits)
+   - [Dataset Creation](#dataset-creation)
+     - [Curation Rationale](#curation-rationale)
+     - [Source Data](#source-data)
+       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+       - [Who are the source language producers?](#who-are-the-source-language-producers)
+     - [Annotations](#annotations)
+       - [Annotation process](#annotation-process)
+       - [Who are the annotators?](#who-are-the-annotators)
+     - [Personal and Sensitive Information](#personal-and-sensitive-information)
+   - [Considerations for Using the Data](#considerations-for-using-the-data)
+     - [Social Impact of Dataset](#social-impact-of-dataset)
+     - [Discussion of Biases](#discussion-of-biases)
+     - [Other Known Limitations](#other-known-limitations)
+   - [Additional Information](#additional-information)
+     - [Dataset Curators](#dataset-curators)
+     - [Licensing Information](#licensing-information)
+     - [Citation Information](#citation-information)
+     - [Contributions](#contributions)
+ 
+ ## Dataset Description
+ 
+ - **Homepage:** [Github](https://github.com/aviaefrat/cryptonite)
+ - **Repository:** [Github](https://github.com/aviaefrat/cryptonite)
+ - **Paper:** [Arxiv](https://arxiv.org/pdf/2103.01242.pdf)
+ - **Leaderboard:**
+ - **Point of Contact:** [Twitter](https://twitter.com/AviaEfrat)
+ 
+ ### Dataset Summary
+ 
+ Current NLP datasets targeting ambiguity can be solved by a native speaker with relative ease. We present Cryptonite, a large-scale dataset based on cryptic crosswords, which is both linguistically complex and naturally sourced. Each example in Cryptonite is a cryptic clue, a short phrase or sentence with a misleading surface reading, whose solving requires disambiguating semantic, syntactic, and phonetic wordplays, as well as world knowledge. Cryptic clues pose a challenge even for experienced solvers, though top-tier experts can solve them with almost 100% accuracy. Cryptonite is a challenging task for current models; fine-tuning T5-Large on 470k cryptic clues achieves only 7.6% accuracy, on par with the accuracy of a rule-based clue solver (8.6%).
+ 
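+ The accuracy figures above refer to producing the exact answer string for each clue. As a minimal sketch of such an evaluation (`predict` below is a hypothetical clue solver standing in for any model, not part of this dataset):
+ 
+ ```python
+ def exact_match_accuracy(examples, predict):
+     # Fraction of clues whose predicted answer matches the gold answer exactly
+     # (case-insensitive, surrounding whitespace ignored).
+     correct = sum(
+         predict(ex["clue"]).strip().lower() == ex["answer"].strip().lower()
+         for ex in examples
+     )
+     return correct / len(examples)
+ ```
+ 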
+ ### Supported Tasks and Leaderboards
+ 
+ [More Information Needed]
+ 
+ ### Languages
+ 
+ English
+ 
+ ## Dataset Structure
+ 
+ ### Data Instances
+ 
+ This is one example from the train set.
+ 
+ ```python
+ {
+   'clue': 'make progress socially in stated region (5)',
+   'answer': 'climb',
+   'date': 971654400000,
+   'enumeration': '(5)',
+   'id': 'Times-31523-6across',
+   'publisher': 'Times',
+   'quick': False
+ }
+ ```
+ 
+ ### Data Fields
+ 
+ - `clue`: a string representing the clue provided for the crossword
+ - `answer`: a string representing the answer to the clue
+ - `enumeration`: a string representing the enumeration of the answer, i.e. the length of each word in it, e.g. `(5)` for the five-letter answer `climb`
+ - `publisher`: a string representing the publisher of the crossword
+ - `date`: an int64 representing the UNIX timestamp (in milliseconds) of the crossword's publication date; see the loading sketch after this list
+ - `quick`: a bool representing whether the crossword is a quick one (a crossword aimed at beginners, easier to solve)
+ - `id`: a string uniquely identifying a given example in the dataset
+ 
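+ A minimal loading-and-inspection sketch, assuming the `datasets` library is installed and the dataset is available under the identifier `cryptonite` (printed values are illustrative):
+ 
+ ```python
+ from datetime import datetime, timezone
+ 
+ from datasets import load_dataset
+ 
+ dataset = load_dataset("cryptonite")
+ 
+ example = dataset["train"][0]
+ print(example["clue"], "->", example["answer"])
+ 
+ # `date` holds a UNIX timestamp in milliseconds,
+ # e.g. 971654400000 -> 2000-10-16 (UTC).
+ print(datetime.fromtimestamp(example["date"] / 1000, tz=timezone.utc).date())
+ ```
+ 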
+ ### Data Splits
+ 
+ Train (470,804 examples), validation (26,156 examples), test (26,157 examples).
+ 
+ ## Dataset Creation
+ 
+ ### Curation Rationale
+ 
+ The clues are collected from cryptic crosswords published by the Times and the Telegraph.
+ 
+ ### Source Data
+ 
+ #### Initial Data Collection and Normalization
+ 
+ [More Information Needed]
+ 
+ #### Who are the source language producers?
+ 
+ [More Information Needed]
+ 
+ ### Annotations
+ 
+ #### Annotation process
+ 
+ [More Information Needed]
+ 
+ #### Who are the annotators?
+ 
+ [More Information Needed]
+ 
+ ### Personal and Sensitive Information
+ 
+ [More Information Needed]
+ 
+ ## Considerations for Using the Data
+ 
+ ### Social Impact of Dataset
+ 
+ [More Information Needed]
+ 
+ ### Discussion of Biases
+ 
+ [More Information Needed]
+ 
+ ### Other Known Limitations
+ 
+ [More Information Needed]
+ 
+ ## Additional Information
+ 
+ ### Dataset Curators
+ 
+ Avia Efrat, Uri Shaham, Dan Kilman, Omer Levy
+ 
+ ### Licensing Information
+ 
+ `cc-by-nc-4.0`
+ 
+ ### Citation Information
+ 
+ ```
+ @misc{efrat2021cryptonite,
+       title={Cryptonite: A Cryptic Crossword Benchmark for Extreme Ambiguity in Language},
+       author={Avia Efrat and Uri Shaham and Dan Kilman and Omer Levy},
+       year={2021},
+       eprint={2103.01242},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL}
+ }
+ ```
+ 
+ ### Contributions
+ 
+ Thanks to [@theo-m](https://github.com/theo-m) for adding this dataset.
cryptonite.py ADDED
@@ -0,0 +1,131 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ 
+ from __future__ import absolute_import, division, print_function
+ 
+ import json
+ import os
+ 
+ import datasets
+ 
+ 
+ _CITATION = """\
+ @misc{efrat2021cryptonite,
+       title={Cryptonite: A Cryptic Crossword Benchmark for Extreme Ambiguity in Language},
+       author={Avia Efrat and Uri Shaham and Dan Kilman and Omer Levy},
+       year={2021},
+       eprint={2103.01242},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL}
+ }
+ """
+ 
+ _DESCRIPTION = """\
+ Cryptonite: A Cryptic Crossword Benchmark for Extreme Ambiguity in Language
+ Current NLP datasets targeting ambiguity can be solved by a native speaker with relative ease. We present Cryptonite,
+ a large-scale dataset based on cryptic crosswords, which is both linguistically complex and naturally sourced. Each
+ example in Cryptonite is a cryptic clue, a short phrase or sentence with a misleading surface reading, whose solving
+ requires disambiguating semantic, syntactic, and phonetic wordplays, as well as world knowledge. Cryptic clues pose a
+ challenge even for experienced solvers, though top-tier experts can solve them with almost 100% accuracy. Cryptonite
+ is a challenging task for current models; fine-tuning T5-Large on 470k cryptic clues achieves only 7.6% accuracy, on
+ par with the accuracy of a rule-based clue solver (8.6%).
+ """
+ 
+ _HOMEPAGE = "https://github.com/aviaefrat/cryptonite"
+ 
+ _LICENSE = "cc-by-nc-4.0"
+ 
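+ # "?raw=true" makes GitHub serve the raw zip archive instead of the HTML blob page.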
+ _URL = "https://github.com/aviaefrat/cryptonite/blob/main/data/cryptonite-official-split.zip?raw=true"
+ 
+ 
+ class Cryptonite(datasets.GeneratorBasedBuilder):
+ 
+     VERSION = datasets.Version("1.1.0")
+ 
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="cryptonite", version=VERSION),
+     ]
+ 
+     def _info(self):
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types
+             features=datasets.Features(
+                 {
+                     "clue": datasets.Value("string"),
+                     "answer": datasets.Value("string"),
+                     "enumeration": datasets.Value("string"),
+                     "publisher": datasets.Value("string"),
+                     "date": datasets.Value("int64"),
+                     "quick": datasets.Value("bool"),
+                     "id": datasets.Value("string"),
+                 }
+             ),
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+ 
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         data_dir = dl_manager.download_and_extract(_URL)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, "cryptonite-official-split/cryptonite-train.jsonl"),
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, "cryptonite-official-split/cryptonite-val.jsonl"),
+                     "split": "val",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, "cryptonite-official-split/cryptonite-test.jsonl"),
+                     "split": "test",
+                 },
+             ),
+         ]
+ 
+     def _generate_examples(self, filepath, split):
+         """Yields examples."""
+ 
+         with open(filepath, encoding="utf-8") as f:
+             for id_, row in enumerate(f):
+                 data = json.loads(row)
+ 
+                 publisher = data["publisher"]
+                 crossword_id = data["crossword_id"]
+                 number = data["number"]
+                 orientation = data["orientation"]
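+                 # Compose a stable, human-readable example id, e.g. "Times-31523-6across".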
+                 d_id = f"{publisher}-{crossword_id}-{number}{orientation}"
+ 
+                 yield id_, {
+                     "clue": data["clue"],
+                     "answer": data["answer"],
+                     "enumeration": data["enumeration"],
+                     "publisher": publisher,
+                     "date": data["date"],
+                     "quick": data["quick"],
+                     "id": d_id,
+                 }
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "We study negotiation dialogues where two agents, a buyer and a seller,\nnegotiate over the price of an time for sale. We collected a dataset of more\nthan 6K negotiation dialogues over multiple categories of products scraped from Craigslist.\nOur goal is to develop an agent that negotiates with humans through such conversations.\nThe challenge is to handle both the negotiation strategy and the rich language for bargaining.\n", "citation": "@misc{he2018decoupling,\n title={Decoupling Strategy and Generation in Negotiation Dialogues},\n author={He He and Derek Chen and Anusha Balakrishnan and Percy Liang},\n year={2018},\n eprint={1808.09637},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://stanfordnlp.github.io/cocoa/", "license": "", "features": {"agent_info": {"feature": {"Bottomline": {"dtype": "string", "id": null, "_type": "Value"}, "Role": {"dtype": "string", "id": null, "_type": "Value"}, "Target": {"dtype": "float32", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "agent_turn": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "dialogue_acts": {"feature": {"intent": {"dtype": "string", "id": null, "_type": "Value"}, "price": {"dtype": "float32", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "utterance": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "items": {"feature": {"Category": {"dtype": "string", "id": null, "_type": "Value"}, "Images": {"dtype": "string", "id": null, "_type": "Value"}, "Price": {"dtype": "float32", "id": null, "_type": "Value"}, "Description": {"dtype": "string", "id": null, "_type": "Value"}, "Title": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "craigslist_bargains", "config_name": "default", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 8538836, "num_examples": 5247, "dataset_name": "craigslist_bargains"}, "test": {"name": "test", "num_bytes": 1353933, "num_examples": 838, "dataset_name": "craigslist_bargains"}, "validation": {"name": "validation", "num_bytes": 966032, "num_examples": 597, "dataset_name": "craigslist_bargains"}}, "download_checksums": {"https://worksheets.codalab.org/rest/bundles/0xd34bbbc5fb3b4fccbd19e10756ca8dd7/contents/blob/parsed.json": {"num_bytes": 20148723, "checksum": "34033ff87565b9fc9eb0efe867e9d3e32456dbe1528cd1683f94a84b09f66ace"}, "https://worksheets.codalab.org/rest/bundles/0x15c4160b43d44ee3a8386cca98da138c/contents/blob/parsed.json": {"num_bytes": 2287054, "checksum": "03b35dc18bd90d87dac46893ac4db8ab3eed51786d192975be68d3bab38e306e"}, "https://worksheets.codalab.org/rest/bundles/0x54d325bbcfb2463583995725ed8ca42b/contents/blob/": {"num_bytes": 2937841, "checksum": "c802f15f80ea3066d429375393319d7234daacbd6a26a6ad5afd0ad78a2f7736"}}, "download_size": 25373618, "post_processing_size": null, "dataset_size": 10858801, "size_in_bytes": 36232419}, "cryptonite": {"description": "Cryptonite: A Cryptic Crossword Benchmark for Extreme Ambiguity in Language\nCurrent NLP datasets targeting ambiguity can be solved by a native speaker with relative ease. We present Cryptonite, \na large-scale dataset based on cryptic crosswords, which is both linguistically complex and naturally sourced. Each \nexample in Cryptonite is a cryptic clue, a short phrase or sentence with a misleading surface reading, whose solving \nrequires disambiguating semantic, syntactic, and phonetic wordplays, as well as world knowledge. Cryptic clues pose a \nchallenge even for experienced solvers, though top-tier experts can solve them with almost 100% accuracy. Cryptonite \nis a challenging task for current models; fine-tuning T5-Large on 470k cryptic clues achieves only 7.6% accuracy, on \npar with the accuracy of a rule-based clue solver (8.6%).\n", "citation": "@misc{efrat2021cryptonite,\n title={Cryptonite: A Cryptic Crossword Benchmark for Extreme Ambiguity in Language}, \n author={Avia Efrat and Uri Shaham and Dan Kilman and Omer Levy},\n year={2021},\n eprint={2103.01242},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://github.com/aviaefrat/cryptonite", "license": "", "features": {"clue": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "enumeration": {"dtype": "string", "id": null, "_type": "Value"}, "publisher": {"dtype": "string", "id": null, "_type": "Value"}, "date": {"dtype": "int64", "id": null, "_type": "Value"}, "quick": {"dtype": "bool", "id": null, "_type": "Value"}, "id": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "new_dataset", "config_name": "cryptonite", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 52228597, "num_examples": 470804, "dataset_name": "new_dataset"}, "validation": {"name": "validation", "num_bytes": 2901768, "num_examples": 26156, "dataset_name": "new_dataset"}, "test": {"name": "test", "num_bytes": 2908275, "num_examples": 26157, "dataset_name": "new_dataset"}}, "download_checksums": {"https://github.com/aviaefrat/cryptonite/blob/main/data/cryptonite-official-split.zip?raw=true": {"num_bytes": 21615952, "checksum": "c0022977effc68b3f0e72bfe639263d5aaaa36f11287f3ec018e8db42dadb410"}}, "download_size": 21615952, "post_processing_size": null, "dataset_size": 58038640, "size_in_bytes": 79654592}}
dummy/cryptonite/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3d1e3c2fdabb0467e69dbe08feffa9835b72378b53d400861056a9c068a7158a
+ size 2793