parquet-converter committed on
Commit
2c0a7b9
1 Parent(s): 32c401c

Update parquet files

README.md DELETED
@@ -1,341 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language:
- - en
- language_creators:
- - found
- license:
- - other
- multilinguality:
- - monolingual
- paperswithcode_id: acronym-identification
- pretty_name: 'SemEval2018Task7: Semantic Relation Extraction and Classification in Scientific Papers'
- size_categories:
- - 1K<n<10K
- source_datasets: []
- tags:
- - Relation classification
- - Relation extraction
- - Scientific papers
- - Research papers
- task_categories:
- - text-classification
- task_ids:
- - entity-linking-classification
- train-eval-index:
- - col_mapping:
-     labels: tags
-     tokens: tokens
-   config: default
-   splits:
-     eval_split: test
-   task: text-classification
-   task_id: entity_extraction
- ---
- # Dataset Card for SemEval2018Task7
- 
- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
- 
- ## Dataset Description
- 
- - **Homepage:** [https://lipn.univ-paris13.fr/~gabor/semeval2018task7/](https://lipn.univ-paris13.fr/~gabor/semeval2018task7/)
- - **Repository:** [https://github.com/gkata/SemEval2018Task7/tree/testing](https://github.com/gkata/SemEval2018Task7/tree/testing)
- - **Paper:** [SemEval-2018 Task 7: Semantic Relation Extraction and Classification in Scientific Papers](https://aclanthology.org/S18-1111/)
- - **Leaderboard:** [https://competitions.codalab.org/competitions/17422#learn_the_details-overview](https://competitions.codalab.org/competitions/17422#learn_the_details-overview)
- - **Size of downloaded dataset files:** 1.93 MB
- 
- ### Dataset Summary
- 
- SemEval2018Task7 covers SemEval-2018 Task 7, the shared task on semantic relation extraction and classification in scientific paper abstracts.
- The challenge focuses on domain-specific semantic relations and includes three different subtasks. The subtasks were designed so as to compare and quantify the effect of different pre-processing steps on the relation classification results. The task is relevant for a broad range of researchers working on extracting specialized knowledge from domain corpora, for example but not limited to scientific or bio-medical information extraction. The task attracted a total of 32 participants, with 158 submissions across different scenarios.
- 
- The three subtasks are:
- 
- - Subtask 1.1: Relation classification on clean data
-   - In the training data, semantic relations are manually annotated between entities.
-   - In the test data, only entity annotations and unlabeled relation instances are given.
-   - Given a scientific publication, the task is to predict the semantic relation between the entities.
- 
- - Subtask 1.2: Relation classification on noisy data
-   - Entity occurrences are automatically annotated in both the training and the test data.
-   - The task is to predict the semantic relation between the entities.
- 
- - Subtask 2: Metrics for the extraction and classification scenario
-   - Evaluation of relation extraction
-   - Evaluation of relation classification
- 
- The relation types are USAGE, RESULT, MODEL-FEATURE, PART_WHOLE, TOPIC, and COMPARE.
- 
- The following example shows a text snippet with the information provided in the test data:
- Korean, a \<entity id="H01-1041.10">verb final language\</entity> with \<entity id="H01-1041.11">overt case markers\</entity> (...)
- - A relation instance is identified by the unique identifiers of the entities in the pair, e.g. (H01-1041.10, H01-1041.11)
- - The information to be predicted is the relation class label: MODEL-FEATURE(H01-1041.10, H01-1041.11).
- For details, see the paper https://aclanthology.org/S18-1111/.
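- 
- In the accompanying relations files, each relation instance occupies one line: the class label, followed by the argument entity ids in parentheses, with an optional third REVERSE argument marking a reversed relation (this is what the `reverse` field described below reflects). A hypothetical two-line illustration, reconstructed from the loading script's parsing logic:
- 
- ```
- USAGE(H01-1041.8,H01-1041.9)
- MODEL-FEATURE(H01-1041.10,H01-1041.11,REVERSE)
- ```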
- 
- ### Supported Tasks and Leaderboards
- 
- - **Tasks:** Relation extraction and classification in scientific papers
- - **Leaderboards:** [https://competitions.codalab.org/competitions/17422#learn_the_details-overview](https://competitions.codalab.org/competitions/17422#learn_the_details-overview)
- 
- ### Languages
- 
- The language in the dataset is English.
- 
- ## Dataset Structure
- 
- ### Data Instances
- 
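- The examples below can be reproduced with the `datasets` library; a minimal sketch (the repository id `Basvoju/SemEval2018Task7` is taken from this repository's metadata, and the config names from its loading script):
- 
- ```python
- from datasets import load_dataset
- 
- # "Subtask_1_1" is relation classification on clean data;
- # use "Subtask_1_2" for the noisy-data configuration.
- ds = load_dataset("Basvoju/SemEval2018Task7", "Subtask_1_1")
- 
- print(ds["train"][0]["id"])        # e.g. "H01-1041"
- print(ds["train"][0]["relation"])  # its annotated relation instances
- ```
- 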
- #### subtask_1.1
- - **Size of downloaded dataset files:** 714 KB
- 
- An example of 'train' looks as follows:
- ```python
- {'id': 'H01-1041',
-  'title': 'Interlingua-Based Broad-Coverage Korean-to-English Translation in CCLINC',
-  'abstract': 'At MIT Lincoln Laboratory, we have been developing a Korean-to-English machine translation system CCLINC (Common Coalition Language System at Lincoln Laboratory) . The CCLINC Korean-to-English translation system consists of two core modules , language understanding and generation modules mediated by a language neutral meaning representation called a semantic frame . The key features of the system include: (i) Robust efficient parsing of Korean (a verb final language with overt case markers , relatively free word order , and frequent omissions of arguments ). (ii) High quality translation via word sense disambiguation and accurate word order generation of the target language . (iii) Rapid system development and porting to new domains via knowledge-based automated acquisition of grammars . Having been trained on Korean newspaper articles on missiles and chemical biological warfare, the system produces the translation output sufficient for content understanding of the original document.',
-  'entities': [{'id': 'H01-1041.1', 'char_start': 54, 'char_end': 97},
-   {'id': 'H01-1041.2', 'char_start': 99, 'char_end': 161},
-   {'id': 'H01-1041.3', 'char_start': 169, 'char_end': 211},
-   {'id': 'H01-1041.4', 'char_start': 229, 'char_end': 240},
-   {'id': 'H01-1041.5', 'char_start': 244, 'char_end': 288},
-   {'id': 'H01-1041.6', 'char_start': 304, 'char_end': 342},
-   {'id': 'H01-1041.7', 'char_start': 353, 'char_end': 366},
-   {'id': 'H01-1041.8', 'char_start': 431, 'char_end': 437},
-   {'id': 'H01-1041.9', 'char_start': 442, 'char_end': 447},
-   {'id': 'H01-1041.10', 'char_start': 452, 'char_end': 470},
-   {'id': 'H01-1041.11', 'char_start': 477, 'char_end': 494},
-   {'id': 'H01-1041.12', 'char_start': 509, 'char_end': 523},
-   {'id': 'H01-1041.13', 'char_start': 553, 'char_end': 561},
-   {'id': 'H01-1041.14', 'char_start': 584, 'char_end': 594},
-   {'id': 'H01-1041.15', 'char_start': 600, 'char_end': 624},
-   {'id': 'H01-1041.16', 'char_start': 639, 'char_end': 659},
-   {'id': 'H01-1041.17', 'char_start': 668, 'char_end': 682},
-   {'id': 'H01-1041.18', 'char_start': 692, 'char_end': 715},
-   {'id': 'H01-1041.19', 'char_start': 736, 'char_end': 742},
-   {'id': 'H01-1041.20', 'char_start': 748, 'char_end': 796},
-   {'id': 'H01-1041.21', 'char_start': 823, 'char_end': 847},
-   {'id': 'H01-1041.22', 'char_start': 918, 'char_end': 935},
-   {'id': 'H01-1041.23', 'char_start': 981, 'char_end': 997}],
-  'relation': [{'label': 3, 'arg1': 'H01-1041.3', 'arg2': 'H01-1041.4', 'reverse': True},
-   {'label': 0, 'arg1': 'H01-1041.8', 'arg2': 'H01-1041.9', 'reverse': False},
-   {'label': 2, 'arg1': 'H01-1041.10', 'arg2': 'H01-1041.11', 'reverse': True},
-   {'label': 0, 'arg1': 'H01-1041.14', 'arg2': 'H01-1041.15', 'reverse': True}]}
- ```
- #### subtask_1.2
- - **Size of downloaded dataset files:** 1.00 MB
- 
- An example of 'train' looks as follows:
- ```python
- {'id': 'L08-1450',
-  'title': '\nA LAF/GrAF based Encoding Scheme for underspecified Representations of syntactic Annotations.\n',
-  'abstract': 'Data models and encoding formats for syntactically annotated text corpora need to deal with syntactic ambiguity; underspecified representations are particularly well suited for the representation of ambiguousdata because they allow for high informational efficiency. We discuss the issue of being informationally efficient, and the trade-off between efficient encoding of linguistic annotations and complete documentation of linguistic analyses. The main topic of this article is adata model and an encoding scheme based on LAF/GrAF ( Ide and Romary, 2006 ; Ide and Suderman, 2007 ) which provides a flexible framework for encoding underspecified representations. We show how a set of dependency structures and a set of TiGer graphs ( Brants et al., 2002 ) representing the readings of an ambiguous sentence can be encoded, and we discuss basic issues in querying corpora which are encoded using the framework presented here.\n',
-  'entities': [{'id': 'L08-1450.4', 'char_start': 0, 'char_end': 3},
-   {'id': 'L08-1450.5', 'char_start': 5, 'char_end': 10},
-   {'id': 'L08-1450.6', 'char_start': 25, 'char_end': 31},
-   {'id': 'L08-1450.7', 'char_start': 61, 'char_end': 64},
-   {'id': 'L08-1450.8', 'char_start': 66, 'char_end': 72},
-   {'id': 'L08-1450.9', 'char_start': 82, 'char_end': 85},
-   {'id': 'L08-1450.10', 'char_start': 92, 'char_end': 100},
-   {'id': 'L08-1450.11', 'char_start': 102, 'char_end': 110},
-   {'id': 'L08-1450.12', 'char_start': 128, 'char_end': 142},
-   {'id': 'L08-1450.13', 'char_start': 181, 'char_end': 194},
-   {'id': 'L08-1450.14', 'char_start': 208, 'char_end': 211},
-   {'id': 'L08-1450.15', 'char_start': 255, 'char_end': 264},
-   {'id': 'L08-1450.16', 'char_start': 282, 'char_end': 286},
-   {'id': 'L08-1450.17', 'char_start': 408, 'char_end': 420},
-   {'id': 'L08-1450.18', 'char_start': 425, 'char_end': 443},
-   {'id': 'L08-1450.19', 'char_start': 450, 'char_end': 453},
-   {'id': 'L08-1450.20', 'char_start': 455, 'char_end': 459},
-   {'id': 'L08-1450.21', 'char_start': 481, 'char_end': 484},
-   {'id': 'L08-1450.22', 'char_start': 486, 'char_end': 490},
-   {'id': 'L08-1450.23', 'char_start': 508, 'char_end': 513},
-   {'id': 'L08-1450.24', 'char_start': 515, 'char_end': 519},
-   {'id': 'L08-1450.25', 'char_start': 535, 'char_end': 537},
-   {'id': 'L08-1450.26', 'char_start': 559, 'char_end': 561},
-   {'id': 'L08-1450.27', 'char_start': 591, 'char_end': 598},
-   {'id': 'L08-1450.28', 'char_start': 611, 'char_end': 619},
-   {'id': 'L08-1450.29', 'char_start': 649, 'char_end': 663},
-   {'id': 'L08-1450.30', 'char_start': 687, 'char_end': 707},
-   {'id': 'L08-1450.31', 'char_start': 722, 'char_end': 726},
-   {'id': 'L08-1450.32', 'char_start': 801, 'char_end': 808},
-   {'id': 'L08-1450.33', 'char_start': 841, 'char_end': 845},
-   {'id': 'L08-1450.34', 'char_start': 847, 'char_end': 852},
-   {'id': 'L08-1450.35', 'char_start': 857, 'char_end': 864},
-   {'id': 'L08-1450.36', 'char_start': 866, 'char_end': 872},
-   {'id': 'L08-1450.37', 'char_start': 902, 'char_end': 910},
-   {'id': 'L08-1450.1', 'char_start': 12, 'char_end': 16},
-   {'id': 'L08-1450.2', 'char_start': 27, 'char_end': 32},
-   {'id': 'L08-1450.3', 'char_start': 72, 'char_end': 80}],
-  'relation': [{'label': 1, 'arg1': 'L08-1450.12', 'arg2': 'L08-1450.13', 'reverse': False},
-   {'label': 5, 'arg1': 'L08-1450.17', 'arg2': 'L08-1450.18', 'reverse': False},
-   {'label': 1, 'arg1': 'L08-1450.28', 'arg2': 'L08-1450.29', 'reverse': False},
-   {'label': 3, 'arg1': 'L08-1450.30', 'arg2': 'L08-1450.32', 'reverse': False},
-   {'label': 3, 'arg1': 'L08-1450.34', 'arg2': 'L08-1450.35', 'reverse': False},
-   {'label': 3, 'arg1': 'L08-1450.36', 'arg2': 'L08-1450.37', 'reverse': True}]}
- ```
- 
- 
- ### Data Fields
- 
- #### subtask_1_1
- - `id`: the instance id of this abstract, a `string` feature.
- - `title`: the title of this abstract, a `string` feature.
- - `abstract`: the abstract from the scientific paper, a `string` feature.
- - `entities`: the key-phrase entity mentions in this text, a `list` of dicts.
-   - `id`: the unique id of this entity, a `string` feature.
-   - `char_start`: the 0-based character index where the entity starts, an `int` feature.
-   - `char_end`: the 0-based character index just past the end of the entity (exclusive), an `int` feature.
- - `relation`: the relation instances annotated between the key phrases, a `list` of dicts.
-   - `label`: the relation class of this instance, a classification label.
-   - `arg1`: the entity id of the first key phrase, a `string` feature.
-   - `arg2`: the entity id of the related key phrase, a `string` feature.
-   - `reverse`: `True` if the relation is marked REVERSE in the source data, i.e. it holds in the reverse direction of the argument order, otherwise `False`, a `bool` feature.
- 
- ```python
- RELATIONS = {"": 0, "USAGE": 1, "RESULT": 2, "MODEL-FEATURE": 3, "PART_WHOLE": 4, "TOPIC": 5, "COMPARE": 6}
- ```
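- 
- A minimal sketch of decoding the integer `label` values back to relation names through the `ClassLabel` feature (assuming `ds` was loaded as in the sketch above):
- 
- ```python
- # "relation" is a list-of-dicts feature, so index into it to reach the ClassLabel.
- label_feature = ds["train"].features["relation"][0]["label"]
- 
- first_relation = ds["train"][0]["relation"][0]
- print(label_feature.int2str(first_relation["label"]))  # e.g. "MODEL-FEATURE"
- ```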
- 
- #### subtask_1_2
- - `id`: the instance id of this abstract, a `string` feature.
- - `title`: the title of this abstract, a `string` feature.
- - `abstract`: the abstract from the scientific paper, a `string` feature.
- - `entities`: the key-phrase entity mentions in this text, a `list` of dicts.
-   - `id`: the unique id of this entity, a `string` feature.
-   - `char_start`: the 0-based character index where the entity starts, an `int` feature.
-   - `char_end`: the 0-based character index just past the end of the entity (exclusive), an `int` feature.
- - `relation`: the relation instances annotated between the key phrases, a `list` of dicts.
-   - `label`: the relation class of this instance, a classification label.
-   - `arg1`: the entity id of the first key phrase, a `string` feature.
-   - `arg2`: the entity id of the related key phrase, a `string` feature.
-   - `reverse`: `True` if the relation is marked REVERSE in the source data, i.e. it holds in the reverse direction of the argument order, otherwise `False`, a `bool` feature.
- 
- ```python
- RELATIONS = {"": 0, "USAGE": 1, "RESULT": 2, "MODEL-FEATURE": 3, "PART_WHOLE": 4, "TOPIC": 5, "COMPARE": 6}
- ```
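- 
- `char_start`/`char_end` are character offsets into the text the entity was annotated in; a minimal sketch of recovering entity surface forms (note: judging by the loading script, abstract entities index into `abstract` while title entities index into `title`, an assumption worth verifying on your copy of the data):
- 
- ```python
- example = ds["train"][0]
- 
- for entity in example["entities"]:
-     # Slicing the abstract is correct for abstract entities; title entities
-     # would need example["title"] instead.
-     span = example["abstract"][entity["char_start"]:entity["char_end"]]
-     print(entity["id"], repr(span))
- ```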
- 
- 
- ### Data Splits
- 
- |             |           | Train | Test |
- |-------------|-----------|-------|------|
- | subtask_1_1 | text      | 2807  | 3326 |
- |             | relations | 1228  | 1248 |
- | subtask_1_2 | text      | 1196  | 1193 |
- |             | relations | 335   | 355  |
- 
- ## Dataset Creation
- 
- ### Curation Rationale
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ### Source Data
- 
- #### Initial Data Collection and Normalization
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- #### Who are the source language producers?
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ### Annotations
- 
- #### Annotation process
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- #### Who are the annotators?
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ### Personal and Sensitive Information
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ## Considerations for Using the Data
- 
- ### Social Impact of Dataset
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ### Discussion of Biases
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ### Other Known Limitations
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ## Additional Information
- 
- ### Dataset Curators
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ### Licensing Information
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ### Citation Information
- 
- ```
- @inproceedings{gabor-etal-2018-semeval,
-     title = "{S}em{E}val-2018 Task 7: Semantic Relation Extraction and Classification in Scientific Papers",
-     author = {G{\'a}bor, Kata and
-       Buscaldi, Davide and
-       Schumann, Anne-Kathrin and
-       QasemiZadeh, Behrang and
-       Zargayouna, Ha{\"\i}fa and
-       Charnois, Thierry},
-     booktitle = "Proceedings of the 12th International Workshop on Semantic Evaluation",
-     month = jun,
-     year = "2018",
-     address = "New Orleans, Louisiana",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/S18-1111",
-     doi = "10.18653/v1/S18-1111",
-     pages = "679--688",
-     abstract = "This paper describes the first task on semantic relation extraction and classification in scientific paper abstracts at SemEval 2018. The challenge focuses on domain-specific semantic relations and includes three different subtasks. The subtasks were designed so as to compare and quantify the effect of different pre-processing steps on the relation classification results. We expect the task to be relevant for a broad range of researchers working on extracting specialized knowledge from domain corpora, for example but not limited to scientific or bio-medical information extraction. The task attracted a total of 32 participants, with 158 submissions across different scenarios.",
- }
- ```
- 
- ### Contributions
- 
- Thanks to [@basvoju](https://github.com/basvoju) for adding this dataset.
SemEval2018Task7.py DELETED
@@ -1,308 +0,0 @@
- # I am trying to understand the following code. Do not use this for any purpose, as I do not support it.
- # Use the original source from https://huggingface.co/datasets/DFKI-SLT/science_ie/raw/main/science_ie.py
- 
- 
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """SemEval2018Task7 is a dataset for the first task on semantic relation extraction and classification in scientific paper abstracts."""
- 
- 
- import datasets
- import xml.etree.ElementTree as ET
- 
- # Find for instance the citation on arxiv or on the dataset repo/website
- _CITATION = """\
- @inproceedings{gabor-etal-2018-semeval,
-     title = "{S}em{E}val-2018 Task 7: Semantic Relation Extraction and Classification in Scientific Papers",
-     author = {G{\'a}bor, Kata and
-       Buscaldi, Davide and
-       Schumann, Anne-Kathrin and
-       QasemiZadeh, Behrang and
-       Zargayouna, Ha{\"\i}fa and
-       Charnois, Thierry},
-     booktitle = "Proceedings of the 12th International Workshop on Semantic Evaluation",
-     month = jun,
-     year = "2018",
-     address = "New Orleans, Louisiana",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/S18-1111",
-     doi = "10.18653/v1/S18-1111",
-     pages = "679--688",
-     abstract = "This paper describes the first task on semantic relation extraction and classification in
-     scientific paper abstracts at SemEval 2018. The challenge focuses on domain-specific semantic relations
-     and includes three different subtasks. The subtasks were designed so as to compare and quantify the
-     effect of different pre-processing steps on the relation classification results. We expect the task to
-     be relevant for a broad range of researchers working on extracting specialized knowledge from domain
-     corpora, for example but not limited to scientific or bio-medical information extraction. The task
-     attracted a total of 32 participants, with 158 submissions across different scenarios.",
- }
- """
- 
- # You can copy an official description
- _DESCRIPTION = """\
- This paper describes the first task on semantic relation extraction and classification in scientific paper
- abstracts at SemEval 2018. The challenge focuses on domain-specific semantic relations and includes three
- different subtasks. The subtasks were designed so as to compare and quantify the effect of different
- pre-processing steps on the relation classification results. We expect the task to be relevant for a broad
- range of researchers working on extracting specialized knowledge from domain corpora, for example but not
- limited to scientific or bio-medical information extraction. The task attracted a total of 32 participants,
- with 158 submissions across different scenarios.
- """
- 
- # Add a link to an official homepage for the dataset here
- _HOMEPAGE = "https://github.com/gkata/SemEval2018Task7/tree/testing"
- 
- # Add the licence for the dataset here if you can find it
- _LICENSE = ""
- 
- # Add link to the official dataset URLs here
- # The HuggingFace Datasets library doesn't host the datasets but only points to the original files.
- # This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
- _URLS = {
-     "Subtask_1_1": {
-         "train": {
-             "relations": "https://raw.githubusercontent.com/gkata/SemEval2018Task7/testing/1.1.relations.txt",
-             "text": "https://raw.githubusercontent.com/gkata/SemEval2018Task7/testing/1.1.text.xml",
-         },
-         "test": {
-             "relations": "https://raw.githubusercontent.com/gkata/SemEval2018Task7/testing/1.1.test.relations.txt",
-             "text": "https://raw.githubusercontent.com/gkata/SemEval2018Task7/testing/1.1.test.text.xml",
-         },
-     },
-     "Subtask_1_2": {
-         "train": {
-             "relations": "https://raw.githubusercontent.com/gkata/SemEval2018Task7/testing/1.2.relations.txt",
-             "text": "https://raw.githubusercontent.com/gkata/SemEval2018Task7/testing/1.2.text.xml",
-         },
-         "test": {
-             "relations": "https://raw.githubusercontent.com/gkata/SemEval2018Task7/testing/1.2.test.relations.txt",
-             "text": "https://raw.githubusercontent.com/gkata/SemEval2018Task7/testing/1.2.test.text.xml",
-         },
-     },
- }
- 
- 
- def all_text_nodes(root):
-     if root.text is not None:
-         yield root.text
-     for child in root:
-         if child.tail is not None:
-             yield child.tail
- 
- 
- def reading_entity_data(ET_data_to_convert):
-     # Serialize the <title>/<abstract> element back to a string, then strip the
-     # enclosing tags so only the text and inline <entity> tags remain.
-     parsed_data = ET.tostring(ET_data_to_convert, "utf-8")
-     parsed_data = parsed_data.decode('utf8').replace("b\'", "")
-     parsed_data = parsed_data.replace("<abstract>", "")
-     parsed_data = parsed_data.replace("</abstract>", "")
-     parsed_data = parsed_data.replace("<title>", "")
-     parsed_data = parsed_data.replace("</title>", "")
-     parsed_data = parsed_data.replace("\n\n\n", "")
- 
-     # Walk the string character by character: text outside tags accumulates in
-     # final_string, and each <entity id="..."> tag yields a character span.
-     parsing_tag = False
-     final_string = ""
-     tag_string = ""
-     current_tag_id = ""
-     current_tag_starting_pos = 0
-     current_tag_ending_pos = 0
-     entity_mapping_list = []
- 
-     for i in parsed_data:
-         if i == '<':
-             parsing_tag = True
-             # A '<' while an entity is open marks the entity's closing tag:
-             # record the span accumulated since the opening tag.
-             if current_tag_id != "":
-                 current_tag_ending_pos = len(final_string) - 1
-                 entity_mapping_list.append({"id": current_tag_id,
-                                             "char_start": current_tag_starting_pos,
-                                             "char_end": current_tag_ending_pos + 1})
-                 current_tag_id = ""
-             tag_string = ""
- 
-         elif i == '>':
-             parsing_tag = False
-             # Opening tags look like entity id="H01-1041.10"; the id is the
-             # second piece after splitting on double quotes.
-             tag_string_split = tag_string.split('"')
-             if len(tag_string_split) > 1:
-                 current_tag_id = tag_string_split[1]
-                 current_tag_starting_pos = len(final_string)
- 
-         else:
-             if not parsing_tag:
-                 final_string = final_string + i
-             else:
-                 tag_string = tag_string + i
- 
-     return {"text_data": final_string, "entities": entity_mapping_list}
- 
- 
- 
- class Semeval2018Task7(datasets.GeneratorBasedBuilder):
-     """
-     Semeval2018Task7 is a dataset for semantic relation extraction and classification in scientific paper abstracts
-     """
- 
-     VERSION = datasets.Version("1.1.0")
- 
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name="Subtask_1_1", version=VERSION,
-                                description="Relation classification on clean data"),
-         datasets.BuilderConfig(name="Subtask_1_2", version=VERSION,
-                                description="Relation classification on noisy data"),
-     ]
-     DEFAULT_CONFIG_NAME = "Subtask_1_1"
- 
-     def _info(self):
-         class_labels = ["", "USAGE", "RESULT", "MODEL-FEATURE", "PART_WHOLE", "TOPIC", "COMPARE"]
-         features = datasets.Features(
-             {
-                 "id": datasets.Value("string"),
-                 "title": datasets.Value("string"),
-                 "abstract": datasets.Value("string"),
-                 "entities": [
-                     {
-                         "id": datasets.Value("string"),
-                         "char_start": datasets.Value("int32"),
-                         "char_end": datasets.Value("int32"),
-                     }
-                 ],
-                 "relation": [
-                     {
-                         "label": datasets.ClassLabel(names=class_labels),
-                         "arg1": datasets.Value("string"),
-                         "arg2": datasets.Value("string"),
-                         "reverse": datasets.Value("bool"),
-                     }
-                 ],
-             }
-         )
- 
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             # This defines the different columns of the dataset and their types
-             features=features,  # Here we define them above because they are different between the two configurations
-             # If there's a common (input, target) tuple from the features, uncomment supervised_keys line below and
-             # specify them. They'll be used if as_supervised=True in builder.as_dataset.
-             # supervised_keys=("sentence", "label"),
-             # Homepage of the dataset for documentation
-             homepage=_HOMEPAGE,
-             # License for the dataset if available
-             license=_LICENSE,
-             # Citation for the dataset
-             citation=_CITATION,
-         )
- 
-     def _split_generators(self, dl_manager):
-         # If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name
- 
-         # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLS
-         # It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.
-         # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
-         urls = _URLS[self.config.name]
-         downloaded_files = dl_manager.download(urls)
- 
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "relation_filepath": downloaded_files["train"]["relations"],
-                     "text_filepath": downloaded_files["train"]["text"],
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "relation_filepath": downloaded_files["test"]["relations"],
-                     "text_filepath": downloaded_files["test"]["text"],
-                 },
-             ),
-         ]
- 
-     # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
-     def _generate_examples(self, relation_filepath, text_filepath):
-         # This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.
-         # The `key` is for legacy reasons (tfds) and is not important in itself, but must be unique for each example.
-         with open(relation_filepath, encoding="utf-8") as f:
-             relations = []
-             text_id_to_relations_map = {}
-             for key, row in enumerate(f):
-                 # Each line looks like LABEL(arg1,arg2) or LABEL(arg1,arg2,REVERSE),
-                 # e.g. MODEL-FEATURE(H01-1041.10,H01-1041.11,REVERSE).
-                 row_split = row.strip("\n").split("(")
-                 use_case = row_split[0]
-                 second_half = row_split[1].strip(")")
-                 second_half_splits = second_half.split(",")
-                 size = len(second_half_splits)
- 
-                 relation = {
-                     "label": use_case,
-                     "arg1": second_half_splits[0],
-                     "arg2": second_half_splits[1],
-                     "reverse": size == 3,
-                 }
-                 relations.append(relation)
- 
-                 # Entity ids have the form <text id>.<entity number>, so the text id
-                 # is everything before the first dot.
-                 arg_id = second_half_splits[0].split(".")[0]
-                 if arg_id not in text_id_to_relations_map:
-                     text_id_to_relations_map[arg_id] = [relation]
-                 else:
-                     text_id_to_relations_map[arg_id].append(relation)
- 
-         doc2 = ET.parse(text_filepath)
-         root = doc2.getroot()
- 
-         for child in root:
-             if child.find("title") is None:
-                 continue
-             text_id = child.attrib
- 
-             if child.find("abstract") is None:
-                 continue
- 
-             abstract_text_and_entities = reading_entity_data(child.find("abstract"))
-             title_text_and_entities = reading_entity_data(child.find("title"))
- 
-             text_relations = []
-             if text_id["id"] in text_id_to_relations_map:
-                 text_relations = text_id_to_relations_map[text_id["id"]]
- 
-             yield text_id["id"], {
-                 "id": text_id["id"],
-                 "title": title_text_and_entities["text_data"],
-                 "abstract": abstract_text_and_entities["text_data"],
-                 "entities": abstract_text_and_entities["entities"] + title_text_and_entities["entities"],
-                 "relation": text_relations,
-             }
Subtask_1_1/sem_eval2018_task7-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d132af27e1ee05a2784d8e9faddc4424b7fc99643b05088fae6bf378ae602406
+ size 115623
Subtask_1_1/sem_eval2018_task7-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f3a5d12513d228abbae4b7a3b7de02ebe27d3e07f652639bf5a43bc89ee33b7f
+ size 235436
Subtask_1_2/sem_eval2018_task7-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8580cfccb9573ad63bee29ec5d2e113b29f201fc69d06c345269cabdb7048f54
+ size 137252
Subtask_1_2/sem_eval2018_task7-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c5a5586a43e725891f28f5b0488343e28f6858044cf069ba03df8ceeea944bf2
+ size 305577
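
Once converted, the splits can also be read straight from these parquet files; a minimal sketch (assuming the repository has been cloned locally so the paths above exist, and that pandas has a parquet engine such as pyarrow available):

```python
import pandas as pd

# Paths mirror this commit's layout.
train = pd.read_parquet("Subtask_1_1/sem_eval2018_task7-train.parquet")
test = pd.read_parquet("Subtask_1_1/sem_eval2018_task7-test.parquet")

print(train.shape, test.shape)
print(train.columns.tolist())  # expected: id, title, abstract, entities, relation
```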
datasets_info.json DELETED
@@ -1,16 +0,0 @@
- {"features": {"id": {"dtype": "string", "id": null, "_type": "Value"},
-   "title": {"dtype": "string", "id": null, "_type": "Value"},
-   "abstract": {"dtype": "string", "id": null, "_type": "Value"},
-   "entities": {"feature": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "char_start": {"dtype": "int32", "id": null, "_type": "Value"}, "char_end": {"dtype": "int32", "id": null, "_type": "Value"}}},
-   "relation": {"feature": {"label": {"id": null, "_type": "ClassLabel"},
-     "arg1": {"dtype": "string", "id": null, "_type": "Value"}, "arg2": {"dtype": "string", "id": null, "_type": "Value"}, "reverse": {"dtype": "bool", "id": null, "_type": "Value"}}},
- 
-  "post_processed": null, "supervised_keys": null, "task_templates": [{"task": "relation_classification"}],
-  "builder_name": "Basvoju/SemEval2018Task7", "config_name": ["Subtask_1_1", "Subtask_1_2"],
-  "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0},
- 
-  "splits": {"train": {"name": "train", "num_bytes": "1240.8 KB", "num_examples": 8609, "dataset_name": "Basvoju/SemEval2018Task7"},
-   "test": {"name": "test", "num_bytes": "506.93 KB", "num_examples": 3079, "dataset_name": "Basvoju/SemEval2018Task7"}},
-  "download_size": "1.93 MB", "post_processing_size": null, "size_in_bytes": "1.93 MB"}