system (HF staff) committed on
Commit
0eea40c
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,167 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - mit
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - structure-prediction
+ task_ids:
+ - structure-prediction-other-acronym-identification
+ ---
+
+ # Dataset Card for Acronym Identification
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks](#supported-tasks-and-leaderboards)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Annotations](#annotations)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://sites.google.com/view/sdu-aaai21/shared-task
+ - **Repository:** https://github.com/amirveyseh/AAAI-21-SDU-shared-task-1-AI
+ - **Paper:** https://arxiv.org/pdf/2010.14678v1.pdf
+ - **Leaderboard:** https://competitions.codalab.org/competitions/26609
+ - **Point of Contact:** [More Information Needed]
+
+ ### Dataset Summary
+
+ [More Information Needed]
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ [More Information Needed]
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A sample from the training set is provided below:
+
+ ```
+ {'id': 'TR-0',
+ 'labels': [4, 4, 4, 4, 0, 2, 2, 4, 1, 4, 4, 4, 4, 4, 4, 4, 4, 4],
+ 'tokens': ['What',
+ 'is',
+ 'here',
+ 'called',
+ 'controlled',
+ 'natural',
+ 'language',
+ '(',
+ 'CNL',
+ ')',
+ 'has',
+ 'traditionally',
+ 'been',
+ 'given',
+ 'many',
+ 'different',
+ 'names',
+ '.']}
+ ```
+
+ Please note that for sentences in the test set only `id` and `tokens` are available; the `labels` field can be ignored for the test set.
+ Labels in the test set are all `O`.
+
+ ### Data Fields
+
+ [More Information Needed]
+
+ ### Data Splits
+
+ [More Information Needed]
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ [More Information Needed]
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ [More Information Needed]
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ [More Information Needed]
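A note on the README's "Data Instances" sample: the integer `labels` are `ClassLabel` ids for the tag set `["B-long", "B-short", "I-long", "I-short", "O"]` defined by the loading script added in this commit. The sketch below (illustrative only, not part of the commit) shows how to decode them back to BIO tags; it assumes the `datasets` library is installed and that the dataset is available on the Hub as `acronym_identification`.

```
# Illustrative sketch: decode ClassLabel ids from the "Data Instances" sample
# into BIO tag strings. Assumes `pip install datasets`.
from datasets import load_dataset

ds = load_dataset("acronym_identification")              # train / validation / test
labels_feature = ds["train"].features["labels"].feature  # ClassLabel with 5 names

example = ds["train"][0]  # e.g. the 'TR-0' sentence shown in the card
for token, label_id in zip(example["tokens"], example["labels"]):
    print(token, labels_feature.int2str(label_id))
# This yields e.g. "controlled B-long", "natural I-long", "language I-long",
# "CNL B-short", and "O" for the remaining tokens of the sample.
```

Index 4 corresponds to `O`, which is why it dominates the label list for ordinary tokens.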
acronym_identification.py ADDED
@@ -0,0 +1,93 @@
+ # coding=utf-8
+ # Copyright 2020 HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ import json
+
+ import datasets
+
+
+ _DESCRIPTION = """\
+ Acronym identification training and development sets for the acronym identification task at SDU@AAAI-21.
+ """
+ _HOMEPAGE_URL = "https://github.com/amirveyseh/AAAI-21-SDU-shared-task-1-AI"
+ _CITATION = """\
+ @inproceedings{veyseh-et-al-2020-what,
+ title={{What Does This Acronym Mean? Introducing a New Dataset for Acronym Identification and Disambiguation}},
+ author={Amir Pouran Ben Veyseh and Franck Dernoncourt and Quan Hung Tran and Thien Huu Nguyen},
+ year={2020},
+ booktitle={Proceedings of COLING},
+ link={https://arxiv.org/pdf/2010.14678v1.pdf}
+ }
+ """
+
+ _TRAIN_URL = "https://raw.githubusercontent.com/amirveyseh/AAAI-21-SDU-shared-task-1-AI/master/dataset/train.json"
+ _VALID_URL = "https://raw.githubusercontent.com/amirveyseh/AAAI-21-SDU-shared-task-1-AI/master/dataset/dev.json"
+ _TEST_URL = "https://raw.githubusercontent.com/amirveyseh/AAAI-21-SDU-shared-task-1-AI/master/dataset/test.json"
+
+
+ class AcronymIdentification(datasets.GeneratorBasedBuilder):
+     VERSION = datasets.Version("1.0.0")
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "tokens": datasets.Sequence(datasets.Value("string")),
+                     "labels": datasets.Sequence(
+                         datasets.ClassLabel(names=["B-long", "B-short", "I-long", "I-short", "O"])
+                     ),
+                 },
+             ),
+             supervised_keys=None,
+             homepage=_HOMEPAGE_URL,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         train_path = dl_manager.download_and_extract(_TRAIN_URL)
+         valid_path = dl_manager.download_and_extract(_VALID_URL)
+         test_path = dl_manager.download_and_extract(_TEST_URL)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"datapath": train_path, "datatype": "train"},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={"datapath": valid_path, "datatype": "valid"},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={"datapath": test_path, "datatype": "test"},
+             ),
+         ]
+
+     def _generate_examples(self, datapath, datatype):
+         with open(datapath, encoding="utf-8") as f:
+             data = json.load(f)
+
+         for sentence_counter, d in enumerate(data):
+             resp = {
+                 "id": d["id"],
+                 "tokens": d["tokens"],
+             }
+             if datatype != "test":
+                 resp["labels"] = d["labels"]
+             else:
+                 resp["labels"] = ["O"] * len(d["tokens"])
+             yield sentence_counter, resp
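For readers skimming `_generate_examples` above: each downloaded JSON file is a list of records carrying `id`, `tokens`, and (for train/dev) string BIO `labels`, which the `ClassLabel` feature then encodes to integers; for the test split the script regenerates `labels` as `"O"`. The sketch below is a hedged illustration only: the record values are invented, and the local-checkout path is an assumption rather than part of this commit.

```
# Hypothetical record shape, inferred from _generate_examples (values invented):
record = {
    "id": "TR-0",
    "tokens": ["What", "is", "CNL", "?"],
    "labels": ["O", "O", "B-short", "O"],  # string BIO tags in train/dev;
                                           # test labels are regenerated as "O"
}

# Loading the builder script from a local checkout of this repo (assumed path):
from datasets import load_dataset

ds = load_dataset("./acronym_identification.py")
print(ds["test"][0]["labels"])  # all 4s, i.e. class "O", filled in by the loader
```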
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "Acronym identification training and development sets for the acronym identification task at SDU@AAAI-21.\n", "citation": "@inproceedings{veyseh-et-al-2020-what,\n title={{What Does This Acronym Mean? Introducing a New Dataset for Acronym Identification and Disambiguation}},\n author={Amir Pouran Ben Veyseh and Franck Dernoncourt and Quan Hung Tran and Thien Huu Nguyen},\n year={2020},\n booktitle={Proceedings of COLING},\n link={https://arxiv.org/pdf/2010.14678v1.pdf}\n}\n", "homepage": "https://github.com/amirveyseh/AAAI-21-SDU-shared-task-1-AI", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "labels": {"feature": {"num_classes": 5, "names": ["B-long", "B-short", "I-long", "I-short", "O"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "acronym_identification", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 7792803, "num_examples": 14006, "dataset_name": "acronym_identification"}, "validation": {"name": "validation", "num_bytes": 952705, "num_examples": 1717, "dataset_name": "acronym_identification"}, "test": {"name": "test", "num_bytes": 987728, "num_examples": 1750, "dataset_name": "acronym_identification"}}, "download_checksums": {"https://raw.githubusercontent.com/amirveyseh/AAAI-21-SDU-shared-task-1-AI/master/dataset/train.json": {"num_bytes": 7134043, "checksum": "2a48182187235167e8cbfa71e13c5c9882c4cabdefd2148edace2a50ccd8bbcd"}, "https://raw.githubusercontent.com/amirveyseh/AAAI-21-SDU-shared-task-1-AI/master/dataset/dev.json": {"num_bytes": 873517, "checksum": "950000511ddab850170c85ae99c7ceb775e8bed6846482e06e47a8f99b16f8c2"}, "https://raw.githubusercontent.com/amirveyseh/AAAI-21-SDU-shared-task-1-AI/master/dataset/test.json": {"num_bytes": 548904, "checksum": "5a37584eaa56ac23ffef23de7109d07bac6a19928eda96184348d89a01c82671"}}, "download_size": 8556464, "post_processing_size": null, "dataset_size": 9733236, "size_in_bytes": 18289700}}
dummy/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:db001dad94be116f45b7bbc6314be703dd038032fe2409af2b642f89c5e69380
+ size 2307