gabrielaltay committed · Commit ce8ad18 · Parent: 95db543

upload hubscripts/codiesp_hub.py to hub from bigbio repo

Files changed (1): codiesp.py (+462, -0)
codiesp.py ADDED
# coding=utf-8
# Copyright 2022 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
A dataset loading script for the CODIESP corpus.

The CODIESP dataset is a collection of 1,000 manually selected clinical
case studies in Spanish that was designed for the Clinical Case Coding
in Spanish Shared Task, as part of the CLEF 2020 conference. This community
task was divided into 3 sub-tasks: diagnosis coding (CodiEsp-D), procedure
coding (CodiEsp-P) and Explainable AI (CodiEsp-X). The script can also load
an additional dataset of abstracts with ICD10 codes.
"""

import json
import os
from collections import defaultdict
from pathlib import Path
from typing import Dict, List, Tuple

import datasets
import pandas as pd

from .bigbiohub import kb_features
from .bigbiohub import text_features
from .bigbiohub import BigBioConfig
from .bigbiohub import Tasks

_LANGUAGES = ['Spanish']
_PUBMED = False
_LOCAL = False
_CITATION = """\
@article{miranda2020overview,
  title={Overview of Automatic Clinical Coding: Annotations, Guidelines, and Solutions for non-English Clinical Cases at CodiEsp Track of CLEF eHealth 2020.},
  author={Miranda-Escalada, Antonio and Gonzalez-Agirre, Aitor and Armengol-Estap{\'e}, Jordi and Krallinger, Martin},
  journal={CLEF (Working Notes)},
  volume={2020},
  year={2020}
}
"""

_DATASETNAME = "codiesp"
_DISPLAYNAME = "CodiEsp"

_DESCRIPTION = """\
Synthetic corpus of 1,000 manually selected clinical case studies in Spanish
that was designed for the Clinical Case Coding in Spanish Shared Task, as part
of the CLEF 2020 conference.

The goal of the task was to automatically assign ICD10 codes (CIE-10, in
Spanish) to clinical case documents, evaluated against manually generated
ICD10 codifications. The CodiEsp corpus was selected manually by practicing
physicians and clinical documentalists and annotated by clinical coding
professionals meeting strict quality criteria. They reached an inter-annotator
agreement of 88.6% for diagnosis coding, 88.9% for procedure coding and 80.5%
for the textual reference annotation.

The final collection of 1,000 clinical cases that make up the corpus had a total
of 16,504 sentences and 396,988 words. All documents are in Spanish, and CIE10
(the Spanish version of ICD10-CM and ICD10-PCS) is the coding terminology. The
CodiEsp corpus has been randomly sampled into three subsets: the train set
contains 500 clinical cases, while the development and test sets have 250
clinical cases each. In addition to these, a collection of 176,294 abstracts
from Lilacs and Ibecs with the corresponding ICD10 codes (ICD10-CM and
ICD10-PCS) was provided by the task organizers. Every abstract has at least one
associated code, with an average of 2.5 ICD10 codes per abstract.

The CodiEsp track was divided into three sub-tracks (2 main and 1 exploratory):

- CodiEsp-D: The Diagnosis Coding sub-task, which requires automatic ICD10-CM
  [CIE10-Diagnóstico] code assignment.
- CodiEsp-P: The Procedure Coding sub-task, which requires automatic ICD10-PCS
  [CIE10-Procedimiento] code assignment.
- CodiEsp-X: The Explainable AI exploratory sub-task, which requires submitting
  the reference to the predicted codes (both ICD10-CM and ICD10-PCS). The goal
  of this novel task was not only to predict the correct codes but also to
  present the reference in the text that supports the code predictions.

For further information, please visit https://temu.bsc.es/codiesp or send an
email to encargo-pln-life@bsc.es
"""

_HOMEPAGE = "https://temu.bsc.es/codiesp/"

_LICENSE = 'Creative Commons Attribution 4.0 International'

_URLS = {
    "codiesp": "https://zenodo.org/record/3837305/files/codiesp.zip?download=1",
    "extra": "https://zenodo.org/record/3606662/files/abstractsWithCIE10_v2.zip?download=1",
}
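
# Directory layout this script expects after `download_and_extract` (inferred
# from the path handling below, not an authoritative listing of the archives):
#
#     codiesp/final_dataset_v4_to_publish/{train,dev,test}/
#         text_files/<document_id>.txt
#         <split>D.tsv, <split>P.tsv, <split>X.tsv
#     extra/abstractsWithCIE10_v2.json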

_SUPPORTED_TASKS = [
    Tasks.TEXT_CLASSIFICATION,
    Tasks.NAMED_ENTITY_RECOGNITION,
    Tasks.NAMED_ENTITY_DISAMBIGUATION,
]

_SOURCE_VERSION = "1.4.0"

_BIGBIO_VERSION = "1.0.0"


class CodiespDataset(datasets.GeneratorBasedBuilder):
    """Collection of 1,000 manually selected clinical case studies in Spanish."""

    SOURCE_VERSION = datasets.Version(_SOURCE_VERSION)
    BIGBIO_VERSION = datasets.Version(_BIGBIO_VERSION)

    BUILDER_CONFIGS = [
        BigBioConfig(
            name="codiesp_D_source",
            version=SOURCE_VERSION,
            description="CodiEsp source schema for the Diagnosis Coding sub-task",
            schema="source",
            subset_id="codiesp_d",
        ),
        BigBioConfig(
            name="codiesp_P_source",
            version=SOURCE_VERSION,
            description="CodiEsp source schema for the Procedure Coding sub-task",
            schema="source",
            subset_id="codiesp_p",
        ),
        BigBioConfig(
            name="codiesp_X_source",
            version=SOURCE_VERSION,
            description="CodiEsp source schema for the Explainable AI sub-task",
            schema="source",
            subset_id="codiesp_x",
        ),
        BigBioConfig(
            name="codiesp_extra_mesh_source",
            version=SOURCE_VERSION,
            description="Abstracts from Lilacs and Ibecs with MESH codes",
            schema="source",
            subset_id="codiesp_extra_mesh",
        ),
        BigBioConfig(
            name="codiesp_extra_cie_source",
            version=SOURCE_VERSION,
            description="Abstracts from Lilacs and Ibecs with CIE10 codes",
            schema="source",
            subset_id="codiesp_extra_cie",
        ),
        BigBioConfig(
            name="codiesp_D_bigbio_text",
            version=BIGBIO_VERSION,
            description="CodiEsp BigBio schema for the Diagnosis Coding sub-task",
            schema="bigbio_text",
            subset_id="codiesp_d",
        ),
        BigBioConfig(
            name="codiesp_P_bigbio_text",
            version=BIGBIO_VERSION,
            description="CodiEsp BigBio schema for the Procedure Coding sub-task",
            schema="bigbio_text",
            subset_id="codiesp_p",
        ),
        BigBioConfig(
            name="codiesp_X_bigbio_kb",
            version=BIGBIO_VERSION,
            description="CodiEsp BigBio schema for the Explainable AI sub-task",
            schema="bigbio_kb",
            subset_id="codiesp_x",
        ),
        BigBioConfig(
            name="codiesp_extra_mesh_bigbio_text",
            version=BIGBIO_VERSION,
            description="Abstracts from Lilacs and Ibecs with MESH codes",
            schema="bigbio_text",
            subset_id="codiesp_extra_mesh",
        ),
        BigBioConfig(
            name="codiesp_extra_cie_bigbio_text",
            version=BIGBIO_VERSION,
            description="Abstracts from Lilacs and Ibecs with CIE10 codes",
            schema="bigbio_text",
            subset_id="codiesp_extra_cie",
        ),
    ]

    # Must name one of the configs defined in BUILDER_CONFIGS above.
    DEFAULT_CONFIG_NAME = "codiesp_D_source"

    def _info(self) -> datasets.DatasetInfo:

        if self.config.schema == "source" and self.config.name != "codiesp_X_source":
            features = datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "document_id": datasets.Value("string"),
                    "text": datasets.Value("string"),
                    "labels": datasets.Sequence(datasets.Value("string")),
                },
            )

        elif self.config.schema == "source" and self.config.name == "codiesp_X_source":
            features = datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "document_id": datasets.Value("string"),
                    "text": datasets.Value("string"),
                    "task_x": [
                        {
                            "label": datasets.Value("string"),
                            "code": datasets.Value("string"),
                            "text": datasets.Value("string"),
                            "spans": datasets.Sequence(datasets.Value("int32")),
                        }
                    ],
                },
            )

        elif self.config.schema == "bigbio_kb":
            features = kb_features

        elif self.config.schema == "bigbio_text":
            features = text_features

        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            homepage=_HOMEPAGE,
            license=str(_LICENSE),
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager) -> List[datasets.SplitGenerator]:
        """
        Downloads/extracts the data to generate the train, validation and test splits.

        Each split is created by instantiating a `datasets.SplitGenerator`, which will
        call `self._generate_examples` with the keyword arguments in `gen_kwargs`.
        """

        data_dir = dl_manager.download_and_extract(_URLS)
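
        # `download_and_extract` on a dict returns a dict with the same keys
        # ("codiesp", "extra") mapping to the local extraction directories.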

        if "extra" in self.config.name:
            return [
                datasets.SplitGenerator(
                    name=datasets.Split.TRAIN,
                    gen_kwargs={
                        "filepath": Path(
                            os.path.join(
                                data_dir["extra"], "abstractsWithCIE10_v2.json"
                            )
                        ),
                        "split": "train",
                    },
                )
            ]
        else:
            return [
                datasets.SplitGenerator(
                    name=datasets.Split.TRAIN,
                    gen_kwargs={
                        "filepath": Path(
                            os.path.join(
                                data_dir["codiesp"], "final_dataset_v4_to_publish/train"
                            )
                        ),
                        "split": "train",
                    },
                ),
                datasets.SplitGenerator(
                    name=datasets.Split.TEST,
                    gen_kwargs={
                        "filepath": Path(
                            os.path.join(
                                data_dir["codiesp"], "final_dataset_v4_to_publish/test"
                            )
                        ),
                        "split": "test",
                    },
                ),
                datasets.SplitGenerator(
                    name=datasets.Split.VALIDATION,
                    gen_kwargs={
                        "filepath": Path(
                            os.path.join(
                                data_dir["codiesp"], "final_dataset_v4_to_publish/dev"
                            )
                        ),
                        "split": "dev",
                    },
                ),
            ]

    def _generate_examples(self, filepath, split: str) -> Tuple[int, Dict]:
        """
        This method handles input defined in `_split_generators` to yield (key, example)
        tuples from the dataset. Method parameters are unpacked from `gen_kwargs` as
        given in `_split_generators`.
        """

        if "extra" not in self.config.name:
            paths = {"text_files": Path(os.path.join(filepath, "text_files"))}
            for task in ["codiesp_d", "codiesp_p", "codiesp_x"]:
                paths[task] = Path(
                    os.path.join(filepath, f"{split}{task[-1].upper()}.tsv")
                )
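
            # For split="train" this resolves to trainD.tsv, trainP.tsv and
            # trainX.tsv. As read below, the D/P files carry two tab-separated
            # columns (document id, code) and the X file carries five (document
            # id, label, code, textual evidence, "start end" spans joined by
            # ";"). Illustrative row, not taken from the corpus:
            #   doc-1  DIAGNOSTICO  n20.0  litiasis renal  112 126;400 414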

        if (
            self.config.name == "codiesp_D_bigbio_text"
            or self.config.name == "codiesp_P_bigbio_text"
        ):
            df = pd.read_csv(paths[self.config.subset_id], sep="\t", header=None)

            file_codes_dict = defaultdict(list)
            for idx, row in df.iterrows():
                file, code = row[0], row[1]
                file_codes_dict[file].append(code)

            for guid, (file, codes) in enumerate(file_codes_dict.items()):
                text_file = Path(os.path.join(paths["text_files"], f"{file}.txt"))
                example = {
                    "id": str(guid),
                    "document_id": file,
                    "text": text_file.read_text(),
                    "labels": codes,
                }
                yield guid, example

        elif self.config.name == "codiesp_X_bigbio_kb":
            df = pd.read_csv(paths[self.config.subset_id], sep="\t", header=None)

            task_x_dict = defaultdict(list)
            for idx, row in df.iterrows():
                file, label, code, text, spans = row[0], row[1], row[2], row[3], row[4]

                appearances = spans.split(";")
                spans = []
                for a in appearances:
                    spans.append((int(a.split()[0]), int(a.split()[1])))

                task_x_dict[file].append(
                    {"label": label, "code": code, "text": text, "spans": spans}
                )

            for guid, (file, data) in enumerate(task_x_dict.items()):
                example = {
                    "id": str(guid),
                    "document_id": file,
                    "passages": [],
                    "entities": [],
                    "events": [],
                    "coreferences": [],
                    "relations": [],
                }

                for idx, d in enumerate(data):
                    example["entities"].append(
                        {
                            "id": str(guid) + str(idx),
                            "type": d["label"],
                            "text": [d["text"]],
                            "offsets": d["spans"],
                            "normalized": [
                                {
                                    "db_name": "ICD10-PCS"
                                    if d["label"] == "PROCEDIMIENTO"
                                    else "ICD10-CM",
                                    "db_id": d["code"],
                                }
                            ],
                        }
                    )

                yield guid, example
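
        # An emitted entity then looks like (illustrative values, matching the
        # hypothetical row above):
        #   {"id": "01", "type": "DIAGNOSTICO", "text": ["litiasis renal"],
        #    "offsets": [(112, 126), (400, 414)],
        #    "normalized": [{"db_name": "ICD10-CM", "db_id": "n20.0"}]}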

        elif (
            self.config.name == "codiesp_D_source"
            or self.config.name == "codiesp_P_source"
        ):
            df = pd.read_csv(paths[self.config.subset_id], sep="\t", header=None)

            file_codes_dict = defaultdict(list)
            for idx, row in df.iterrows():
                file, code = row[0], row[1]
                file_codes_dict[file].append(code)

            for guid, (file, codes) in enumerate(file_codes_dict.items()):
                example = {
                    "id": str(guid),
                    "document_id": file,
                    "text": Path(
                        os.path.join(paths["text_files"], f"{file}.txt")
                    ).read_text(),
                    "labels": codes,
                }

                yield guid, example

        elif self.config.name == "codiesp_X_source":
            df = pd.read_csv(paths[self.config.subset_id], sep="\t", header=None)
            file_codes_dict = defaultdict(list)
            for idx, row in df.iterrows():
                file, label, code, text, spans = row[0], row[1], row[2], row[3], row[4]
                appearances = spans.split(";")
                spans = []
                for a in appearances:
                    spans.append([int(a.split()[0]), int(a.split()[1])])
                file_codes_dict[file].append(
                    {"label": label, "code": code, "text": text, "spans": spans[0]}
                )

            for guid, (file, codes) in enumerate(file_codes_dict.items()):
                example = {
                    "id": str(guid),
                    "document_id": file,
                    "text": Path(
                        os.path.join(paths["text_files"], f"{file}.txt")
                    ).read_text(),
                    "task_x": codes,
                }

                yield guid, example

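        # Shape of abstractsWithCIE10_v2.json as consumed below (inferred from
        # the key accesses in this branch rather than from any official spec):
        #
        #     {"articles": [{"pmid": ..., "title": ..., "abstractText": ...,
        #                    "Mesh": [{"Code": ..., "CIE": [...], ...}, ...]}]}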
        elif "extra" in self.config.name:
            with open(filepath) as file:
                json_data = json.load(file)

            if "mesh" in self.config.name:
                for guid, article in enumerate(json_data["articles"]):
                    example = {
                        "id": str(guid),
                        "document_id": article["pmid"],
                        "text": str(article["title"])
                        + " <SEP> "
                        + str(article["abstractText"]),
                        "labels": [mesh["Code"] for mesh in article["Mesh"]],
                    }
                    yield guid, example

            else:  # CIE10 codes
                for guid, article in enumerate(json_data["articles"]):
                    example = {
                        "id": str(guid),
                        "document_id": article["pmid"],
                        "text": str(article["title"])
                        + " <SEP> "
                        + str(article["abstractText"]),
                        "labels": [
                            code
                            for mesh in article["Mesh"]
                            if "CIE" in mesh
                            for code in mesh["CIE"]
                        ],
                    }
                    yield guid, example

        else:
            raise ValueError(f"Invalid config: {self.config.name}")