gabrielaltay committed on
Commit 18349d7
1 Parent(s): 8e16a41

upload hubscripts/n2c2_2009_hub.py to hub from bigbio repo

Files changed (1)
  1. n2c2_2009.py +684 -0
n2c2_2009.py ADDED
@@ -0,0 +1,684 @@
# coding=utf-8
# Copyright 2022 The HuggingFace Datasets Authors and
#
# * Ayush Singh (singhay)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
A dataset loader for the n2c2 2009 medication dataset.

The dataset consists of four archive files,
├── annotations_ground_truth.tar.gz
├── train.test.released.8.17.09.tar.gz
├── TeamSubmissions.zip
└── training.sets.released.tar.gz

The individual data files (inside the zip and tar archives) come in two types,

* entries (*.entries / no extension files): text of a patient record
* medications (*.m files): entities along with offsets used as input to a named entity recognition model

The files comprising this dataset must be on the user's local machine
in a single directory that is passed to `datasets.load_dataset` via
the `data_dir` kwarg. This loader script will read the archive files
directly (i.e. the user should not uncompress, untar or unzip any of
the files).

Data Access: https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/

Steps taken to build the datasets:
1. Read all data files from train.test.released.8.17.09
2. Get IDs of all train files from training.sets.released
3. Intersect 2 with 1 to get the train set
4. Take the difference of 1 with 2 to get the test set
5. Enrich the train set with training.ground.truth.01.06.11.2009
6. Enrich the test set with annotations_ground_truth
"""

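# Example usage (an illustrative sketch, not part of the original challenge
# distribution): the config name below comes from this script, while the
# script path and the local directory are assumptions made for the example.
#
#   from datasets import load_dataset
#
#   ds = load_dataset(
#       "n2c2_2009.py",
#       name="n2c2_2009_source",  # or "n2c2_2009_bigbio_kb"
#       data_dir="/path/to/n2c2_2009_archives",
#   )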
import os
import re
import tarfile
import zipfile
from collections import defaultdict
from typing import Dict, List, Match, Tuple, Union

import datasets

from .bigbiohub import kb_features
from .bigbiohub import BigBioConfig
from .bigbiohub import Tasks

_LANGUAGES = ['English']
_PUBMED = True
_LOCAL = True
_CITATION = """\
@article{DBLP:journals/jamia/UzunerSC10,
  author    = {
    Ozlem Uzuner and
    Imre Solti and
    Eithon Cadag
  },
  title     = {Extracting medication information from clinical text},
  journal   = {J. Am. Medical Informatics Assoc.},
  volume    = {17},
  number    = {5},
  pages     = {514--518},
  year      = {2010},
  url       = {https://doi.org/10.1136/jamia.2010.003947},
  doi       = {10.1136/jamia.2010.003947},
  timestamp = {Mon, 11 May 2020 22:59:55 +0200},
  biburl    = {https://dblp.org/rec/journals/jamia/UzunerSC10.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
"""

_DATASETNAME = "n2c2_2009"
_DISPLAYNAME = "n2c2 2009 Medications"

_DESCRIPTION = """\
The Third i2b2 Workshop on Natural Language Processing Challenges for Clinical Records
focused on the identification of medications, their dosages, modes (routes) of administration,
frequencies, durations, and reasons for administration in discharge summaries.
The third i2b2 challenge—that is, the medication challenge—extends information
extraction to relation extraction; it requires extraction of medications and
medication-related information followed by determination of which medication
belongs to which medication-related details.

The medication challenge was designed as an information extraction task.
The goal, for each discharge summary, was to extract the following information
on medications experienced by the patient:
1. Medications (m): including names, brand names, generics, and collective names of prescription substances,
over the counter medications, and other biological substances for which the patient is the experiencer.
2. Dosages (do): indicating the amount of a medication used in each administration.
3. Modes (mo): indicating the route for administering the medication.
4. Frequencies (f): indicating how often each dose of the medication should be taken.
5. Durations (du): indicating how long the medication is to be administered.
6. Reasons (r): stating the medical reason for which the medication is given.
7. Certainty (c): stating whether the event occurs. Certainty can be expressed by uncertainty words,
e.g., “suggested”, or via modals, e.g., “should” indicates suggestion.
8. Event (e): stating whether the medication is started, stopped, or continued.
9. Temporal (t): stating whether the medication was administered in the past,
is being administered currently, or will be administered in the future, to the extent
that this information is expressed in the tense of the verbs and auxiliary verbs used to express events.
10. List/narrative (ln): indicating whether the medication information appears in a
list structure or in narrative running text in the discharge summary.

The medication challenge asked that systems extract the text corresponding to each of the fields
for each of the mentions of the medications that were experienced by the patients.

The values for the set of fields related to a medication mention, if presented within a
two-line window of the mention, were linked in order to create what we defined as an ‘entry’.
If the value of a field for a mention was not specified within a two-line window,
then the value ‘nm’ for ‘not mentioned’ was entered and the offsets were left unspecified.

Since the dataset annotations were crowd-sourced, they contain various guideline violations
that are handled throughout the data loader via exception catching or conditional statements.
For example, because tokens in the text are whitespace-delimited, a word at the end of a
sentence keeps its trailing `.`, so the annotated `anticoagulation` is not an exact match
for `anticoagulation.` in the text of doc_id 818404.
"""

_HOMEPAGE = "https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/"

_LICENSE = 'Data User Agreement'

_SUPPORTED_TASKS = [Tasks.NAMED_ENTITY_RECOGNITION]

_SOURCE_VERSION = "1.0.0"  # 18-Aug-2009
_BIGBIO_VERSION = "1.0.0"

DELIMITER = "||"
SOURCE = "source"
BIGBIO_KB = "bigbio_kb"

TEXT_DATA_FIELDNAME = "txt"
MEDICATIONS_DATA_FIELDNAME = "med"
OFFSET_PATTERN = (
    r"(.+?)=\"(.+?)\"( .+)?"  # captures -> do="500" 102:6 102:6 and mo="nm"
)
BINARY_PATTERN = r"(.+?)=\"(.+?)\""
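# For reference, OFFSET_PATTERN splits one annotation field like this
# (the sample strings are taken from the comment above):
#
#   >>> re.match(OFFSET_PATTERN, 'do="500" 102:6 102:6').groups()
#   ('do', '500', ' 102:6 102:6')
#   >>> re.match(OFFSET_PATTERN, 'mo="nm"').groups()
#   ('mo', 'nm', None)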
ENTITY_ID = "entity_id"
MEDICATION = "m"
DOSAGE = "do"
MODE_OF_ADMIN = "mo"
FREQUENCY = "f"
DURATION = "du"
REASON = "r"
EVENT = "e"
TEMPORAL = "t"
CERTAINTY = "c"
IS_FOUND_IN_LIST_OR_NARRATIVE = "ln"
NOT_MENTIONED = "nm"


def _read_train_test_data_from_tar_gz(data_dir):
    samples = defaultdict(dict)
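    # The result maps a document id to its raw fields; for example (the id is
    # illustrative): {"818404": {"txt": "<full text of the record>"}}.
    # The "med" field is attached later by the _add_entities_to_* helpers.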

    with tarfile.open(
        os.path.join(data_dir, "train.test.released.8.17.09.tar.gz"), "r:gz"
    ) as tf:
        for member in tf.getmembers():
            if member.name != "train.test.released.8.17.09":
                _, sample_id = os.path.split(member.name)

                with tf.extractfile(member) as fp:
                    content_bytes = fp.read()
                content = content_bytes.decode("utf-8")
                samples[sample_id][TEXT_DATA_FIELDNAME] = content

    return samples


def _get_train_set(data_dir, train_test_set):
    train_sample_ids = set()

    # Read training set IDs
    with tarfile.open(
        os.path.join(data_dir, "training.sets.released.tar.gz"), "r:gz"
    ) as tf:
        for member in tf.getmembers():
            if member.name not in list(map(str, range(1, 11))):
                _, sample_id = os.path.split(member.name)
                train_sample_ids.add(sample_id)

    # Extract training set samples using above IDs from combined dataset
    training_set = {}
    for sample_id in train_sample_ids:
        training_set[sample_id] = train_test_set[sample_id]

    return training_set


def _get_test_set(train_set, train_test_set):
    test_set = {}
    for sample_id, sample in train_test_set.items():
        if sample_id not in train_set:
            test_set[sample_id] = sample

    return test_set


def _add_entities_to_train_set(data_dir, train_set):
    with zipfile.ZipFile(
        os.path.join(data_dir, "training.ground.truth.01.06.11.2009.zip")
    ) as zf:
        for info in zf.infolist():
            base, filename = os.path.split(info.filename)
            _, ext = os.path.splitext(filename)
            ext = ext[1:]  # get rid of dot

            # Extract sample id from filename pattern `379569_gold.entries`
            sample_id = filename.split(".")[0].split("_")[0]
            if ext == "entries":
                train_set[sample_id][MEDICATIONS_DATA_FIELDNAME] = zf.read(info).decode(
                    "utf-8"
                )


def _add_entities_to_test_set(data_dir, test_set):
    with tarfile.open(
        os.path.join(data_dir, "annotations_ground_truth.tar.gz"), "r:gz"
    ) as tf:
        for member in tf.getmembers():
            if "converted.noduplicates.sorted" in member.name:
                base, filename = os.path.split(member.name)
                _, ext = os.path.splitext(filename)
                ext = ext[1:]  # get rid of dot

                sample_id = filename.split(".")[0]
                if ext == "m":
                    with tf.extractfile(member) as fp:
                        content_bytes = fp.read()
                    test_set[sample_id][
                        MEDICATIONS_DATA_FIELDNAME
                    ] = content_bytes.decode("utf-8")


def _make_empty_schema_dict_with_text(text):
    return {
        "text": text,
        "offsets": [{"start_line": 0, "start_token": 0, "end_line": 0, "end_token": 0}],
    }


def _ct_match_to_dict(c_match: Match) -> dict:
    """Return a dictionary with groups from concept and type regex matches."""
    key = c_match.group(1)
    text = c_match.group(2)
    offsets = c_match.group(3)
    if offsets:
        offsets = offsets.strip()
        offsets_formatted = []
        # Pattern: f="monday-wednesday-friday...before hemodialysis...p.r.n." 15:7 15:7,16:0 16:1,16:5 16:5
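        # For the example above, the three comma-separated spans are parsed into:
        #   [{"start_line": 15, "start_token": 7, "end_line": 15, "end_token": 7},
        #    {"start_line": 16, "start_token": 0, "end_line": 16, "end_token": 1},
        #    {"start_line": 16, "start_token": 5, "end_line": 16, "end_token": 5}]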
        if "," in offsets:
            line_offsets = offsets.split(",")
            for offset in line_offsets:
                start, end = offset.split(" ")
                start_line, start_token = start.split(":")
                end_line, end_token = end.split(":")
                offsets_formatted.append(
                    {
                        "start_line": int(start_line),
                        "start_token": int(start_token),
                        "end_line": int(end_line),
                        "end_token": int(end_token),
                    }
                )
        else:
            """Handle another edge case (annotations.ground.truth > 984424), which has the
            discontinuous annotation 23:4 23:4 23:10 23:10. This violates the annotation
            guideline that discontinuous spans should be separated by a comma,
            i.e. 23:4 23:4,23:10 23:10
            """
            offset = offsets.split(" ")
            for i in range(0, len(offset), 2):
                start, end = offset[i : i + 2]
                start_line, start_token = start.split(":")
                end_line, end_token = end.split(":")

                offsets_formatted.append(
                    {
                        "start_line": int(start_line),
                        "start_token": int(start_token),
                        "end_line": int(end_line),
                        "end_token": int(end_token),
                    }
                )

        return {"text": text, "offsets": offsets_formatted}
    elif key in {CERTAINTY, EVENT, TEMPORAL, IS_FOUND_IN_LIST_OR_NARRATIVE}:
        return text
    else:
        return _make_empty_schema_dict_with_text(text)


def _tokoff_from_line(text: str) -> List[Tuple[int, int]]:
    """Produce character offsets for each token (whitespace split)
    For example,
      text = " one two three ."
      tokoff = [(1,4), (6,9), (10,15), (16,17)]
    """
    tokoff = []
    start = None
    end = None
    for ii, char in enumerate(text):
        if (char != " " and char != "\t") and start is None:
            start = ii
        if (char == " " or char == "\t") and start is not None:
            end = ii
            tokoff.append((start, end))
            start = None
    if start is not None:
        end = ii + 1
        tokoff.append((start, end))
    return tokoff


def _parse_line(line: str) -> dict:
    """Parse one line from a *.m file.

    A typical line has the form,
      'm="<string>" <start_line>:<start_token> <end_line>:<end_token>||...||e="<string>"||...'

    This represents one medication.
    It can be interpreted as follows,
      Medication name & offset||dosage & offset||mode & offset||frequency & offset||...
      ...duration & offset||reason & offset||event||temporal marker||certainty||list/narrative

    If there is no information then each field will simply contain "nm" (not mentioned)

    Anomalies:
    1. The annotations for files 683679 and 974209 do not have 'c', 'e', 't' keys in them
    2. Some files have discontinuous annotations that violate the guidelines, i.e. they use a space instead of a comma as the delimiter
    """
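    # Illustration (the annotation line below is invented for this example):
    #   _parse_line('m="lasix" 10:2 10:2||do="nm"||ln="narrative"')
    # fills in, among the keys below,
    #   entity[MEDICATION] -> {"text": "lasix",
    #                          "offsets": [{"start_line": 10, "start_token": 2,
    #                                       "end_line": 10, "end_token": 2}]}
    #   entity[DOSAGE]     -> {"text": "nm", "offsets": [<all-zero placeholder>]}
    #   entity[IS_FOUND_IN_LIST_OR_NARRATIVE] -> "narrative"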
    entity = {
        MEDICATION: _make_empty_schema_dict_with_text(""),
        DOSAGE: _make_empty_schema_dict_with_text(""),
        MODE_OF_ADMIN: _make_empty_schema_dict_with_text(""),
        FREQUENCY: _make_empty_schema_dict_with_text(""),
        DURATION: _make_empty_schema_dict_with_text(""),
        REASON: _make_empty_schema_dict_with_text(""),
        EVENT: "",
        TEMPORAL: "",
        CERTAINTY: "",
        IS_FOUND_IN_LIST_OR_NARRATIVE: "",
    }
    for i, pattern in enumerate(line.split(DELIMITER)):
        # Handle edge case of triple pipe as delimiter in 18563_gold.entries: ...7,16:0 16:1,16:5 16:5||| du="nm"...
        if pattern[0] == "|":
            pattern = pattern[1:]

        pattern = pattern.strip()
        match = re.match(OFFSET_PATTERN, pattern)
        key = match.group(1)
        entity[key] = _ct_match_to_dict(match)

    return entity


def _form_entity_id(sample_id, split, start_line, start_token, end_line, end_token):
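    # Illustrative example (the values are made up):
    #   _form_entity_id("818404", "train", 15, 7, 16, 5)
    #   -> "818404-entity-train-15-7-16-5"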
    return "{}-entity-{}-{}-{}-{}-{}".format(
        sample_id,
        split,
        start_line,
        start_token,
        end_line,
        end_token,
    )


def _get_entities_from_sample(sample_id, sample, split):
    entities = []
    if MEDICATIONS_DATA_FIELDNAME not in sample:
        return entities

    text = sample[TEXT_DATA_FIELDNAME]
    text_lines = text.splitlines()
    text_line_lengths = [len(el) for el in text_lines]
    med_lines = sample[MEDICATIONS_DATA_FIELDNAME].splitlines()
    # parsed concepts (sort is just a convenience)
    med_parsed = sorted(
        [_parse_line(line) for line in med_lines],
        key=lambda x: (
            x[MEDICATION]["offsets"][0]["start_line"],
            x[MEDICATION]["offsets"][0]["start_token"],
        ),
    )

    for ii_cp, cp in enumerate(med_parsed):
        for entity_type in {
            MEDICATION,
            DOSAGE,
            DURATION,
            REASON,
            FREQUENCY,
            MODE_OF_ADMIN,
        }:
            offsets, texts = [], []
            for txt, offset in zip(
                cp[entity_type]["text"].split("..."), cp[entity_type]["offsets"]
            ):
                # annotations can span multiple lines
                # we loop over all lines and build up the character offsets
                for ii_line in range(offset["start_line"], offset["end_line"] + 1):

                    # character offset to the beginning of the line
                    # line length of each line + 1 new line character for each line
                    # need to subtract 1 from offset["start_line"] because line index starts at 1 in dataset
                    start_line_off = sum(text_line_lengths[: ii_line - 1]) + (
                        ii_line - 1
                    )
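                    # Worked example (illustrative numbers): if the first two
                    # lines have lengths 10 and 7, then line 3 (ii_line == 3)
                    # begins at character (10 + 7) + 2 == 19 of the full text,
                    # the "+ 2" accounting for the two preceding newlines.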

                    # offsets for each token relative to the beginning of the line
                    # "one two" -> [(0,3), (4,6)]
                    tokoff = _tokoff_from_line(text_lines[ii_line - 1])
                    try:
                        # if this is a single line annotation
                        if ii_line == offset["start_line"] == offset["end_line"]:
                            start_off = (
                                start_line_off + tokoff[offset["start_token"]][0]
                            )
                            end_off = start_line_off + tokoff[offset["end_token"]][1]

                        # if multi-line and on first line
                        # end_off gets a +1 for new line character
                        elif (ii_line == offset["start_line"]) and (
                            ii_line != offset["end_line"]
                        ):
                            start_off = (
                                start_line_off + tokoff[offset["start_token"]][0]
                            )
                            end_off = (
                                start_line_off + text_line_lengths[ii_line - 1] + 1
                            )

                        # if multi-line and on last line
                        elif (ii_line != offset["start_line"]) and (
                            ii_line == offset["end_line"]
                        ):
                            end_off += tokoff[offset["end_token"]][1]

                        # if multi-line and not on first or last line
                        # (this does not seem to occur in this corpus)
                        else:
                            end_off += text_line_lengths[ii_line - 1] + 1

                    except IndexError:
                        """This is to handle an erroneous annotation in file #974209, line 51.
                        The line is 'the PACU in stable condition. Her pain was well controlled with PCA'
                        whereas the annotation says 'pca analgesia', where 'analgesia' is missing from
                        the end of the line. This results in the token not being found in the `tokoff`
                        array, which raises an IndexError.

                        similar files:
                        * 5091 - amputation beginning two weeks ago associated with throbbing
                        * 944118 - dysuria , joint pain. Reported small rash on penis for which was taking
                        * 918321 - endarterectomy. The patient was started on enteric coated aspirin
                        """
                        continue

                offsets.append((start_off, end_off))

                text_slice = text[start_off:end_off]
                text_slice_norm_1 = text_slice.replace("\n", "").lower()
                text_slice_norm_2 = text_slice.replace("\n", " ").lower()
                text_slice_norm_3 = text_slice.replace(".", "").lower()
                match = (
                    text_slice_norm_1 == txt.lower()
                    or text_slice_norm_2 == txt.lower()
                    or text_slice_norm_3 == txt.lower()
                )
                if not match:
                    continue

                texts.append(text_slice)

            entity_id = _form_entity_id(
                sample_id,
                split,
                cp[entity_type]["offsets"][0]["start_line"],
                cp[entity_type]["offsets"][0]["start_token"],
                cp[entity_type]["offsets"][-1]["end_line"],
                cp[entity_type]["offsets"][-1]["end_token"],
            )
            entity = {
                "id": entity_id,
                "offsets": offsets if texts else [],
                "text": texts,
                "type": entity_type,
                "normalized": [],
            }
            entities.append(entity)

    # IDs are constructed such that duplicate IDs indicate duplicate (i.e. redundant) entities
    dedupe_entities = []
    dedupe_entity_ids = set()
    for entity in entities:
        if entity["id"] in dedupe_entity_ids:
            continue
        else:
            dedupe_entity_ids.add(entity["id"])
            dedupe_entities.append(entity)

    return dedupe_entities


class N2C22009MedicationDataset(datasets.GeneratorBasedBuilder):
    """n2c2 2009 Medications NER task"""

    SOURCE_VERSION = datasets.Version(_SOURCE_VERSION)
    BIGBIO_VERSION = datasets.Version(_BIGBIO_VERSION)
    SOURCE_CONFIG_NAME = _DATASETNAME + "_" + SOURCE
    BIGBIO_CONFIG_NAME = _DATASETNAME + "_" + BIGBIO_KB

    # You will be able to load the "source" or "bigbio" configurations with
    # ds_source = datasets.load_dataset('my_dataset', name='source')
    # ds_bigbio = datasets.load_dataset('my_dataset', name='bigbio')

    # For local datasets you can make use of the `data_dir` and `data_files` kwargs
    # https://huggingface.co/docs/datasets/add_dataset.html#downloading-data-files-and-organizing-splits
    # ds_source = datasets.load_dataset('my_dataset', name='source', data_dir="/path/to/data/files")
    # ds_bigbio = datasets.load_dataset('my_dataset', name='bigbio', data_dir="/path/to/data/files")

    BUILDER_CONFIGS = [
        BigBioConfig(
            name=SOURCE_CONFIG_NAME,
            version=SOURCE_VERSION,
            description=f"{_DATASETNAME} source schema",
            schema=SOURCE,
            subset_id=_DATASETNAME,
        ),
        BigBioConfig(
            name=BIGBIO_CONFIG_NAME,
            version=BIGBIO_VERSION,
            description=f"{_DATASETNAME} BigBio schema",
            schema=BIGBIO_KB,
            subset_id=_DATASETNAME,
        ),
    ]

    DEFAULT_CONFIG_NAME = SOURCE_CONFIG_NAME

    def _info(self) -> datasets.DatasetInfo:

        if self.config.schema == SOURCE:
            offset_text_schema = {
                "text": datasets.Value("string"),
                "offsets": [
                    {
                        "start_line": datasets.Value("int64"),
                        "start_token": datasets.Value("int64"),
                        "end_line": datasets.Value("int64"),
                        "end_token": datasets.Value("int64"),
                    }
                ],
            }
            features = datasets.Features(
                {
                    "doc_id": datasets.Value("string"),
                    "text": datasets.Value("string"),
                    "entities": [
                        {
                            MEDICATION: offset_text_schema,
                            DOSAGE: offset_text_schema,
                            MODE_OF_ADMIN: offset_text_schema,
                            FREQUENCY: offset_text_schema,
                            DURATION: offset_text_schema,
                            REASON: offset_text_schema,
                            EVENT: datasets.Value("string"),
                            TEMPORAL: datasets.Value("string"),
                            CERTAINTY: datasets.Value("string"),
                            IS_FOUND_IN_LIST_OR_NARRATIVE: datasets.Value("string"),
                        }
                    ],
                }
            )

        elif self.config.schema == BIGBIO_KB:
            features = kb_features

        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            homepage=_HOMEPAGE,
            license=str(_LICENSE),
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager) -> List[datasets.SplitGenerator]:
        """Returns SplitGenerators."""

        if self.config.data_dir is None or self.config.name is None:
            raise ValueError(
                "This is a local dataset. Please pass the data_dir and name kwarg to load_dataset."
            )
        else:
            data_dir = self.config.data_dir

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "data_dir": data_dir,
                    "split": str(datasets.Split.TRAIN),
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "data_dir": data_dir,
                    "split": str(datasets.Split.TEST),
                },
            ),
        ]

    @staticmethod
    def _get_source_sample(
        sample_id, sample
    ) -> Dict[str, Union[str, List[Dict[str, str]]]]:
        entities = []
        if MEDICATIONS_DATA_FIELDNAME in sample:
            entities = list(
                map(_parse_line, sample[MEDICATIONS_DATA_FIELDNAME].splitlines())
            )
        return {
            "doc_id": sample_id,
            "text": sample.get(TEXT_DATA_FIELDNAME, ""),
            "entities": entities,
        }

    @staticmethod
    def _get_bigbio_sample(
        sample_id, sample, split
    ) -> Dict[str, Union[str, List[Dict[str, Union[str, List[Tuple]]]]]]:

        passage_text = sample.get(TEXT_DATA_FIELDNAME, "")
        entities = _get_entities_from_sample(sample_id, sample, split)
        return {
            "id": sample_id,
            "document_id": sample_id,
            "passages": [
                {
                    "id": f"{sample_id}-passage-0",
                    "type": "discharge summary",
                    "text": [passage_text],
                    "offsets": [(0, len(passage_text))],
                }
            ],
            "entities": entities,
            "relations": [],
            "events": [],
            "coreferences": [],
        }

    def _generate_examples(self, data_dir, split):
        train_test_set = _read_train_test_data_from_tar_gz(data_dir)
        train_set = _get_train_set(data_dir, train_test_set)
        test_set = _get_test_set(train_set, train_test_set)

        if split == "train":
            _add_entities_to_train_set(data_dir, train_set)
            samples = train_set
        elif split == "test":
            _add_entities_to_test_set(data_dir, test_set)
            samples = test_set

        _id = 0
        for sample_id, sample in samples.items():

            if self.config.name == N2C22009MedicationDataset.SOURCE_CONFIG_NAME:
                yield _id, self._get_source_sample(sample_id, sample)
            elif self.config.name == N2C22009MedicationDataset.BIGBIO_CONFIG_NAME:
                yield _id, self._get_bigbio_sample(sample_id, sample, split)

            _id += 1