system (HF staff) committed on
Commit
da652a3
0 Parent(s):

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,273 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ languages:
+ - es
+ licenses:
+ - cc-by-nc-sa-4-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - original
+ task_categories:
+ - structure-prediction
+ task_ids:
+ - named-entity-recognition
+ - structure-prediction-other-relation-prediction
+ ---
+
+ # Dataset Card for eHealth-KD
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [eHealth-KD homepage](https://knowledge-learning.github.io/ehealthkd-2020/)
+ - **Repository:** [eHealth-KD repository](https://github.com/knowledge-learning/ehealthkd-2020)
+ - **Paper:** [eHealth-KD overview paper](http://ceur-ws.org/Vol-2664/eHealth-KD_overview.pdf)
+ - **Leaderboard:** [eHealth-KD Challenge 2020 official results](https://knowledge-learning.github.io/ehealthkd-2020/results)
+ - **Point of Contact:** [Yoan Gutiérrez Vázquez](mailto:ygutierrez@dlsi.ua.es) (Organization Committee), [María Grandury](mailto:yacine@huggingface.co) (Dataset Submitter)
+
+ ### Dataset Summary
+
+ Dataset of the eHealth-KD Challenge at IberLEF 2020. It is designed for the identification of semantic
+ entities and relations in Spanish health documents.
+
+ ### Supported Tasks and Leaderboards
+
+ The eHealth-KD challenge proposes two computational subtasks:
+
+ - `named-entity-recognition`: Given a sentence of an eHealth document written in Spanish, the goal of this subtask is to
+ identify all the entities and their types.
+
+ - `relation-prediction`: The purpose of this subtask is to recognise all relevant semantic relationships between the entities recognised.
+
+ For an analysis of the most successful approaches of this challenge, read the [eHealth-KD overview paper](http://ceur-ws.org/Vol-2664/eHealth-KD_overview.pdf).
+
+ ### Languages
+
+ The text in the dataset is in Spanish (BCP-47 code: `es`).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ The first example of the eHealth-KD Corpus train set looks as follows:
+ ```
+ {
+   'sentence': 'En la leucemia linfocítica crónica, hay demasiados linfocitos, un tipo de glóbulos blancos.',
+   'entities': [
+     {
+       'ent_id': 'T1',
+       'ent_text': 'leucemia linfocítica crónica',
+       'ent_label': 0,
+       'start_character': 6,
+       'end_character': 34
+     },
+     {
+       'ent_id': 'T2',
+       'ent_text': 'linfocitos',
+       'ent_label': 0,
+       'start_character': 51,
+       'end_character': 61
+     },
+     {
+       'ent_id': 'T3',
+       'ent_text': 'glóbulos blancos',
+       'ent_label': 0,
+       'start_character': 74,
+       'end_character': 90
+     }
+   ],
+   'relations': [
+     {
+       'rel_id': 'R0',
+       'rel_label': 0,
+       'arg1': 'T2',
+       'arg2': 'T3'
+     },
+     {
+       'rel_id': 'R1',
+       'rel_label': 5,
+       'arg1': 'T1',
+       'arg2': 'T2'
+     }
+   ]
+ }
+ ```
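As a quick sanity check of the offsets above: the spans are 0-based with an exclusive end (the BRAT convention), so plain Python slicing recovers each entity's surface text. A minimal sketch using the example instance (not part of the original card):

```python
# The example sentence and its three entity spans from the instance above.
sentence = ('En la leucemia linfocítica crónica, hay demasiados '
            'linfocitos, un tipo de glóbulos blancos.')
entities = [
    ('T1', 'leucemia linfocítica crónica', 6, 34),
    ('T2', 'linfocitos', 51, 61),
    ('T3', 'glóbulos blancos', 74, 90),
]

# Slicing with (start_character, end_character) yields the entity text exactly.
for ent_id, ent_text, start, end in entities:
    assert sentence[start:end] == ent_text, ent_id
```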
+
+ ### Data Fields
+
+ - `sentence`: sentence of an eHealth document written in Spanish
+ - `entities`: list of entities identified in the sentence
+   - `ent_id`: entity identifier (`T` + a number)
+   - `ent_text`: entity text; it consists of one or more complete words (i.e., never a prefix or a suffix of a word) and never includes any surrounding punctuation symbols, parentheses, etc.
+   - `ent_label`: type of entity (`Concept`, `Action`, `Predicate` or `Reference`)
+   - `start_character`: position of the first character of the entity
+   - `end_character`: position immediately after the last character of the entity (exclusive end offset)
+ - `relations`: list of semantic relationships between the entities recognised
+   - `rel_id`: relation identifier (`R` + a number)
+   - `rel_label`: type of relation; it can be a general relation (`is-a`, `same-as`, `has-property`, `part-of`, `causes`, `entails`), a contextual relation (`in-time`, `in-place`, `in-context`), an action role (`subject`, `target`) or a predicate role (`domain`, `arg`)
+   - `arg1`: ID of the first entity of the relation
+   - `arg2`: ID of the second entity of the relation
+
+ For more information about the types of entities and relations, click [here](https://knowledge-learning.github.io/ehealthkd-2020/tasks).
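The `ent_label` and `rel_label` values in the example instance are `ClassLabel` integer indices. A small sketch of the index-to-name mapping, with the label lists taken from the dataset's feature declarations:

```python
# Label name lists as declared in the dataset's ClassLabel features.
ENT_LABELS = ["Concept", "Action", "Predicate", "Reference"]
REL_LABELS = [
    "is-a", "same-as", "has-property", "part-of", "causes", "entails",
    "in-time", "in-place", "in-context", "subject", "target", "domain", "arg",
]

def decode(labels, index):
    """Map a ClassLabel integer index back to its string name."""
    return labels[index]

# In the example instance above, every entity has ent_label 0,
# and the two relations have rel_label 0 and 5.
assert decode(ENT_LABELS, 0) == "Concept"
assert decode(REL_LABELS, 0) == "is-a"
assert decode(REL_LABELS, 5) == "entails"
```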
+
+ ### Data Splits
+
+ The data is split into training, validation and test sets. The split sizes are as follows:
+
+ |                 | Train | Val | Test |
+ | --------------- | ----- | --- | ---- |
+ | eHealth-KD 2020 | 800   | 199 | 100  |
+
+ In the challenge there are four different testing scenarios. The test data of this dataset corresponds to the third scenario.
+ More information about the testing data is available [here](https://github.com/knowledge-learning/ehealthkd-2020/tree/master/data/testing).
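The split sizes above can be cross-checked against the `num_examples` values recorded in `dataset_infos.json` and the declared size category (a small sketch; the numbers are copied from the table):

```python
# Split sizes from the table / dataset_infos.json
splits = {"train": 800, "validation": 199, "test": 100}
total = sum(splits.values())

assert total == 1099
assert 1_000 < total < 10_000  # consistent with the 1K<n<10K size category
```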
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The vast amount of clinical text available online has motivated the development of automatic
+ knowledge discovery systems that can analyse this data and discover relevant facts.
+
+ The eHealth Knowledge Discovery (eHealth-KD) challenge, in its third edition, leverages
+ a semantic model of human language that encodes the most common expressions of factual
+ knowledge, via a set of four general-purpose entity types and thirteen semantic relations among
+ them. The challenge proposes the design of systems that can automatically annotate entities and
+ relations in clinical text in the Spanish language.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ As in the previous edition, the corpus for eHealth-KD 2020 has been extracted from MedlinePlus sources. This platform
+ freely provides large health textual data, from which a selection was made to constitute the eHealth-KD corpus.
+ The selection was made by sampling specific XML files from the collection available on the [MedlinePlus website](https://medlineplus.gov/xml.html).
+
+ > “MedlinePlus is the National Institutes of Health’s Website for patients and their families and
+ > friends. Produced by the National Library of Medicine, the world’s largest medical library, it
+ > brings you information about diseases, conditions, and wellness issues in language you can
+ > understand. MedlinePlus offers reliable, up-to-date health information, anytime, anywhere, for free.”
+
+ These files contain several entries related to health and medicine topics and were processed to remove all
+ XML markup and extract the textual content. Only Spanish-language items were considered. Once cleaned, each individual
+ item was converted to a plain text document, and some further post-processing was applied to remove unwanted sentences,
+ such as headers, footers and similar elements, and to flatten HTML lists into plain sentences.
+
+ #### Who are the source language producers?
+
+ As in the previous edition, the corpus for eHealth-KD 2020 was extracted from [MedlinePlus](https://medlineplus.gov/xml.html) sources.
+
+ ### Annotations
+
+ #### Annotation process
+
+ Once the MedlinePlus files were cleaned, they were manually tagged using [BRAT](http://brat.nlplab.org/) by a group of
+ annotators. After tagging, post-processing was applied to BRAT’s output files (ANN format) to obtain the output files
+ in the formats needed for the challenge.
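BRAT's standoff ANN format stores one annotation per line: entity lines start with `T` and carry the label plus a character span, while relation lines carry the relation type and two `Arg` references. A minimal sketch of parsing both line shapes (`parse_ann_line` is a hypothetical helper; the example lines follow the format the loading script below expects):

```python
def parse_ann_line(line):
    """Parse one BRAT standoff line into an entity or relation dict (sketch)."""
    if line.startswith("T"):  # entity: "T1<TAB>Concept 6 34<TAB>surface text"
        ent_id, mid, ent_text = line.strip().split("\t")
        ent_label, spans = mid.split(" ", 1)
        start, end = spans.split(" ")[0], spans.split(" ")[-1]
        return {"ent_id": ent_id, "ent_label": ent_label, "ent_text": ent_text,
                "start_character": int(start), "end_character": int(end)}
    # relation: "R1<TAB>is-a Arg1:T1 Arg2:T2"
    rel_id, rel_label, arg1, arg2 = line.strip().split()
    if line.startswith("R"):
        arg1, arg2 = arg1.split(":")[1], arg2.split(":")[1]
    return {"rel_id": rel_id, "rel_label": rel_label, "arg1": arg1, "arg2": arg2}

entity = parse_ann_line("T1\tConcept 6 34\tleucemia linfocítica crónica")
relation = parse_ann_line("R1\tis-a Arg1:T1 Arg2:T2")
assert entity["ent_label"] == "Concept" and entity["end_character"] == 34
assert relation["arg1"] == "T1" and relation["arg2"] == "T2"
```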
+
+ #### Who are the annotators?
+
+ The data was manually tagged.
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ "The eHealth-KD 2020 proposes –as the previous editions– modeling the human language in a scenario in which Spanish
+ electronic health documents could be machine-readable from a semantic point of view.
+
+ With this task, we expect to encourage the development of software technologies to automatically extract a large variety
+ of knowledge from eHealth documents written in the Spanish Language."
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ #### Organization Committee
+
+ | Name | Email | Institution |
+ |:---------------------------------------:|:---------------------:|:-----------------------------:|
+ | Yoan Gutiérrez Vázquez (contact person) | ygutierrez@dlsi.ua.es | University of Alicante, Spain |
+ | Suilan Estévez Velarde | sestevez@matcom.uh.cu | University of Havana, Cuba |
+ | Alejandro Piad Morffis | apiad@matcom.uh.cu | University of Havana, Cuba |
+ | Yudivián Almeida Cruz | yudy@matcom.uh.cu | University of Havana, Cuba |
+ | Andrés Montoyo Guijarro | montoyo@dlsi.ua.es | University of Alicante, Spain |
+ | Rafael Muñoz Guillena | rafael@dlsi.ua.es | University of Alicante, Spain |
+
+ #### Funding
+
+ This research has been supported by a Carolina Foundation grant in agreement with the University of Alicante and the University
+ of Havana. It has also been partially funded by both universities, the IUII, the Generalitat Valenciana and the
+ Spanish Government (Ministerio de Educación, Cultura y Deporte) through the projects SIIA (PROMETEU/2018/089) and
+ LIVINGLANG (RTI2018-094653-B-C22).
+
+ ### Licensing Information
+
+ This dataset is released under the Attribution-NonCommercial-ShareAlike 4.0 International license
+ [(CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/).
+
+ To accept the distribution terms, please fill in the following [form](https://forms.gle/pUJutSDq2FYLwNWQA).
+
+ ### Citation Information
+
+ In the following link you can find the
+ [preliminary BibTeX entries of the systems’ working notes](https://knowledge-learning.github.io/ehealthkd-2020/shared/eHealth-KD_2020_bibtexts.zip).
+ In addition, to cite the eHealth-KD challenge you can use the following preliminary BibTeX entry:
+
+ ```
+ @inproceedings{overview_ehealthkd2020,
+     author    = {Piad{-}Morffis, Alejandro and
+                  Guti{\'{e}}rrez, Yoan and
+                  Ca{\~{n}}izares-Diaz, Hian and
+                  Estevez{-}Velarde, Suilan and
+                  Almeida{-}Cruz, Yudivi{\'{a}}n and
+                  Mu{\~{n}}oz, Rafael and
+                  Montoyo, Andr{\'{e}}s},
+     title     = {Overview of the eHealth Knowledge Discovery Challenge at IberLEF 2020},
+     booktitle = ,
+     year      = {2020},
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"ehealth_kd": {"description": "Dataset of the eHealth Knowledge Discovery Challenge at IberLEF 2020. It is designed for\nthe identification of semantic entities and relations in Spanish health documents.\n", "citation": "@inproceedings{overview_ehealthkd2020,\n author = {Piad{-}Morffis, Alejandro and\n Guti{'{e}}rrez, Yoan and\n Ca\u00f1izares-Diaz, Hian and\n Estevez{-}Velarde, Suilan and\n Almeida{-}Cruz, Yudivi{'{a}}n and\n Mu\u00f1oz, Rafael and\n Montoyo, Andr{'{e}}s},\n title = {Overview of the eHealth Knowledge Discovery Challenge at IberLEF 2020},\n booktitle = ,\n year = {2020},\n}\n", "homepage": "https://knowledge-learning.github.io/ehealthkd-2020/", "license": "https://creativecommons.org/licenses/by-nc-sa/4.0/", "features": {"sentence": {"dtype": "string", "id": null, "_type": "Value"}, "entities": [{"ent_id": {"dtype": "string", "id": null, "_type": "Value"}, "ent_text": {"dtype": "string", "id": null, "_type": "Value"}, "ent_label": {"num_classes": 4, "names": ["Concept", "Action", "Predicate", "Reference"], "names_file": null, "id": null, "_type": "ClassLabel"}, "start_character": {"dtype": "int32", "id": null, "_type": "Value"}, "end_character": {"dtype": "int32", "id": null, "_type": "Value"}}], "relations": [{"rel_id": {"dtype": "string", "id": null, "_type": "Value"}, "rel_label": {"num_classes": 13, "names": ["is-a", "same-as", "has-property", "part-of", "causes", "entails", "in-time", "in-place", "in-context", "subject", "target", "domain", "arg"], "names_file": null, "id": null, "_type": "ClassLabel"}, "arg1": {"dtype": "string", "id": null, "_type": "Value"}, "arg2": {"dtype": "string", "id": null, "_type": "Value"}}]}, "post_processed": null, "supervised_keys": null, "builder_name": "ehealth_kd", "config_name": "ehealth_kd", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 425713, "num_examples": 800, "dataset_name": "ehealth_kd"}, 
"validation": {"name": "validation", "num_bytes": 108154, "num_examples": 199, "dataset_name": "ehealth_kd"}, "test": {"name": "test", "num_bytes": 47314, "num_examples": 100, "dataset_name": "ehealth_kd"}}, "download_checksums": {"https://raw.githubusercontent.com/knowledge-learning/ehealthkd-2020/master/data/training/scenario.txt": {"num_bytes": 72905, "checksum": "247d41d7c5152d5afb3670e55ccf632d7665f772f42fbd95331b8e65efadaa4e"}, "https://raw.githubusercontent.com/knowledge-learning/ehealthkd-2020/master/data/training/scenario.ann": {"num_bytes": 343367, "checksum": "b4e26cd473cf54bc7e4ad2d5b98896dbeb9b7f4bb5adc426ee2014ce4fce0b88"}, "https://raw.githubusercontent.com/knowledge-learning/ehealthkd-2020/master/data/development/main/scenario.txt": {"num_bytes": 19060, "checksum": "184b5e9a9e69512d5332c81f22d8765ae1e26632e0f5dc089af6e101c9b04149"}, "https://raw.githubusercontent.com/knowledge-learning/ehealthkd-2020/master/data/development/main/scenario.ann": {"num_bytes": 85446, "checksum": "9a47927d13260a10e067d82ebca59d2a43982c7338babb01004c02329611dfb3"}, "https://raw.githubusercontent.com/knowledge-learning/ehealthkd-2020/master/data/testing/scenario3-taskB/scenario.txt": {"num_bytes": 8685, "checksum": "63b6e7ff05445b1fde9c8d9b3bb346a1d9e037858550b4d509fb10d702f682e6"}, "https://raw.githubusercontent.com/knowledge-learning/ehealthkd-2020/master/data/testing/scenario3-taskB/scenario.ann": {"num_bytes": 36437, "checksum": "37102084c1bde2b5eaebc55361b4df7fd0f012495b56f664aa0ad52292a38f00"}}, "download_size": 565900, "post_processing_size": null, "dataset_size": 581181, "size_in_bytes": 1147081}}
dummy/ehealth_kd/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e4e23493872c14d002394955eedcf99832b8ae56258a851287da6b1193b94811
+ size 1079
ehealth_kd.py ADDED
@@ -0,0 +1,186 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """The eHealth-KD 2020 Corpus."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{overview_ehealthkd2020,
+     author    = {Piad{-}Morffis, Alejandro and
+                  Guti{\'{e}}rrez, Yoan and
+                  Cañizares-Diaz, Hian and
+                  Estevez{-}Velarde, Suilan and
+                  Almeida{-}Cruz, Yudivi{\'{a}}n and
+                  Muñoz, Rafael and
+                  Montoyo, Andr{\'{e}}s},
+     title     = {Overview of the eHealth Knowledge Discovery Challenge at IberLEF 2020},
+     booktitle = ,
+     year      = {2020},
+ }
+ """
+
+ _DESCRIPTION = """\
+ Dataset of the eHealth Knowledge Discovery Challenge at IberLEF 2020. It is designed for
+ the identification of semantic entities and relations in Spanish health documents.
+ """
+
+ _HOMEPAGE = "https://knowledge-learning.github.io/ehealthkd-2020/"
+
+ _LICENSE = "https://creativecommons.org/licenses/by-nc-sa/4.0/"
+
+ _URL = "https://raw.githubusercontent.com/knowledge-learning/ehealthkd-2020/master/data/"
+ _TRAIN_DIR = "training/"
+ _DEV_DIR = "development/main/"
+ _TEST_DIR = "testing/scenario3-taskB/"
+ _TEXT_FILE = "scenario.txt"
+ _ANNOTATIONS_FILE = "scenario.ann"
+
+
+ class EhealthKD(datasets.GeneratorBasedBuilder):
+     """The eHealth-KD 2020 Corpus."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="ehealth_kd", version=VERSION, description="eHealth-KD Corpus"),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "sentence": datasets.Value("string"),
+                     "entities": [
+                         {
+                             "ent_id": datasets.Value("string"),
+                             "ent_text": datasets.Value("string"),
+                             "ent_label": datasets.ClassLabel(names=["Concept", "Action", "Predicate", "Reference"]),
+                             "start_character": datasets.Value("int32"),
+                             "end_character": datasets.Value("int32"),
+                         }
+                     ],
+                     "relations": [
+                         {
+                             "rel_id": datasets.Value("string"),
+                             "rel_label": datasets.ClassLabel(
+                                 names=[
+                                     "is-a",
+                                     "same-as",
+                                     "has-property",
+                                     "part-of",
+                                     "causes",
+                                     "entails",
+                                     "in-time",
+                                     "in-place",
+                                     "in-context",
+                                     "subject",
+                                     "target",
+                                     "domain",
+                                     "arg",
+                                 ]
+                             ),
+                             "arg1": datasets.Value("string"),
+                             "arg2": datasets.Value("string"),
+                         }
+                     ],
+                 }
+             ),
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         urls_to_download = {
+             k: [f"{_URL}{v}{_TEXT_FILE}", f"{_URL}{v}{_ANNOTATIONS_FILE}"]
+             for k, v in zip(["train", "dev", "test"], [_TRAIN_DIR, _DEV_DIR, _TEST_DIR])
+         }
+
+         downloaded_files = dl_manager.download_and_extract(urls_to_download)
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"txt_path": downloaded_files["train"][0], "ann_path": downloaded_files["train"][1]},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={"txt_path": downloaded_files["dev"][0], "ann_path": downloaded_files["dev"][1]},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={"txt_path": downloaded_files["test"][0], "ann_path": downloaded_files["test"][1]},
+             ),
+         ]
+
+     def _generate_examples(self, txt_path, ann_path):
+         """Yields examples."""
+         with open(txt_path, encoding="utf-8") as txt_file, open(ann_path, encoding="utf-8") as ann_file:
+             _id = 0
+             entities = []
+             relations = []
+
+             annotations = ann_file.readlines()
+             last = annotations[-1]
+
+             # Keep track of the type of the last annotation (entity or relation) to know when a sentence is
+             # fully annotated: in the annotations file, the entities of a sentence come before its relations
+             last_annotation = ""
+
+             for annotation in annotations:
+                 if annotation.startswith("T"):
+                     if last_annotation == "relation":
+                         # A new entity after a relation means the previous sentence is fully annotated
+                         sentence = txt_file.readline().strip()
+                         yield _id, {"sentence": sentence, "entities": entities, "relations": relations}
+                         _id += 1
+                         entities = []
+                         relations = []
+
+                     ent_id, mid, ent_text = annotation.strip().split("\t")
+                     ent_label, spans = mid.split(" ", 1)
+                     start_character = spans.split(" ")[0]
+                     end_character = spans.split(" ")[-1]
+
+                     entities.append(
+                         {
+                             "ent_id": ent_id,
+                             "ent_text": ent_text,
+                             "ent_label": ent_label,
+                             "start_character": start_character,
+                             "end_character": end_character,
+                         }
+                     )
+
+                     last_annotation = "entity"
+
+                 else:
+                     rel_id, rel_label, arg1, arg2 = annotation.strip().split()
+                     if annotation.startswith("R"):
+                         arg1 = arg1.split(":")[1]
+                         arg2 = arg2.split(":")[1]
+
+                     relations.append({"rel_id": rel_id, "rel_label": rel_label, "arg1": arg1, "arg2": arg2})
+
+                     last_annotation = "relation"
+
+                 # The last line of the annotations file closes the final sentence; checking after processing
+                 # the annotation ensures it is included in the yielded example
+                 if annotation == last:
+                     sentence = txt_file.readline().strip()
+                     yield _id, {"sentence": sentence, "entities": entities, "relations": relations}
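The dict comprehension in `_split_generators` pairs each split with the raw `scenario.txt`/`scenario.ann` files on GitHub. A small sketch of the URLs it produces, with the constants copied from the script:

```python
# Constants copied from the loading script above.
_URL = "https://raw.githubusercontent.com/knowledge-learning/ehealthkd-2020/master/data/"
_TRAIN_DIR = "training/"
_DEV_DIR = "development/main/"
_TEST_DIR = "testing/scenario3-taskB/"
_TEXT_FILE = "scenario.txt"
_ANNOTATIONS_FILE = "scenario.ann"

# Same comprehension as in _split_generators: split name -> [txt URL, ann URL]
urls_to_download = {
    k: [f"{_URL}{v}{_TEXT_FILE}", f"{_URL}{v}{_ANNOTATIONS_FILE}"]
    for k, v in zip(["train", "dev", "test"], [_TRAIN_DIR, _DEV_DIR, _TEST_DIR])
}

assert urls_to_download["train"][0].endswith("data/training/scenario.txt")
assert urls_to_download["test"][1].endswith("data/testing/scenario3-taskB/scenario.ann")
```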