gabrielaltay committed on
Commit 416c77a
1 Parent(s): 8e13df2

upload hubscripts/n2c2_2011_hub.py to hub from bigbio repo

Files changed (1)
  1. n2c2_2011.py +562 -0
n2c2_2011.py ADDED
@@ -0,0 +1,562 @@
+ # coding=utf-8
+ # Copyright 2022 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+
+ """
+ A dataset loader for the n2c2 2011 coref dataset.
+
+ https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
+
+ The dataset consists of four archive files,
+
+ * Task_1C.zip
+ * Task_1C_Test_groundtruth.zip
+ * i2b2_Partners_Train_Release.tar.gz
+ * i2b2_Beth_Train_Release.tar.gz
+
+ The individual data files (inside the zip and tar archives) come in 4 types,
+
+ * docs (*.txt files): text of a patient record
+ * concepts (*.txt.con files): entities used as input to a coreference model
+ * chains (*.txt.chains files): chains (i.e. one or more) coreferent entities
+ * pairs (*.txt.pairs files): pairs of coreferent entities (not required)
+
+
+ The files comprising this dataset must be on the user's local machine
+ in a single directory that is passed to `datasets.load_dataset` via
+ the `data_dir` kwarg. This loader script will read the archive files
+ directly (i.e. the user should not uncompress, untar or unzip any of
+ the files). For example, the following directory structure
+ on the user's local machine would work,
+
+
+ n2c2_2011_coref
+ ├── i2b2_Beth_Train_Release.tar.gz
+ ├── i2b2_Partners_Train_Release.tar.gz
+ ├── Task_1C_Test_groundtruth.zip
+ └── Task_1C.zip
+
+
+ Data Access
+
+ from https://www.i2b2.org/NLP/DataSets/Main.php
+
+ "As always, you must register AND submit a DUA for access. If you previously
+ accessed the data sets here on i2b2.org, you will need to set a new password
+ for your account on the Data Portal, but your original DUA will be retained."
+
+
+ """
+
+ import os
+ import re
+ import tarfile
+ import zipfile
+ from collections import defaultdict
+ from typing import Dict, List, Match, Tuple
+
+ import datasets
+ from datasets import Features, Value
+
+ from .bigbiohub import kb_features
+ from .bigbiohub import BigBioConfig
+ from .bigbiohub import Tasks
+
+ _DATASETNAME = "n2c2_2011"
+ _DISPLAYNAME = "n2c2 2011 Coreference"
+
+ # https://academic.oup.com/jamia/article/19/5/786/716138
+ _LANGUAGES = ['English']
+ _PUBMED = False
+ _LOCAL = True
+ _CITATION = """\
+ @article{uzuner2012evaluating,
+     author = {
+         Uzuner, Ozlem and
+         Bodnari, Andreea and
+         Shen, Shuying and
+         Forbush, Tyler and
+         Pestian, John and
+         South, Brett R
+     },
+     title = "{Evaluating the state of the art in coreference resolution for electronic medical records}",
+     journal = {Journal of the American Medical Informatics Association},
+     volume = {19},
+     number = {5},
+     pages = {786-791},
+     year = {2012},
+     month = {02},
+     issn = {1067-5027},
+     doi = {10.1136/amiajnl-2011-000784},
+     url = {https://doi.org/10.1136/amiajnl-2011-000784},
+     eprint = {https://academic.oup.com/jamia/article-pdf/19/5/786/17374287/19-5-786.pdf},
+ }
+ """
+
+ _DESCRIPTION = """\
+ The i2b2/VA corpus contained de-identified discharge summaries from Beth Israel
+ Deaconess Medical Center, Partners Healthcare, and University of Pittsburgh Medical
+ Center (UPMC). In addition, UPMC contributed de-identified progress notes to the
+ i2b2/VA corpus. This dataset contains the records from Beth Israel and Partners.
+
+ The i2b2/VA corpus contained five concept categories: problem, person, pronoun,
+ test, and treatment. Each record in the i2b2/VA corpus was annotated by two
+ independent annotators for coreference pairs. Then the pairs were post-processed
+ in order to create coreference chains. These chains were presented to an adjudicator,
+ who resolved the disagreements between the original annotations, and added or deleted
+ annotations as necessary. The outputs of the adjudicators were then re-adjudicated, with
+ particular attention being paid to duplicates and enforcing consistency in the annotations.
+
+ """
+
+ _HOMEPAGE = "https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/"
+
+ _LICENSE = 'Data User Agreement'
+
+ _SOURCE_VERSION = "1.0.0"
+ _BIGBIO_VERSION = "1.0.0"
+
+ _SUPPORTED_TASKS = [Tasks.COREFERENCE_RESOLUTION]
+
+
+ def _read_tar_gz(file_path, samples=None):
+     if samples is None:
+         samples = defaultdict(dict)
+     with tarfile.open(file_path, "r:gz") as tf:
+         for member in tf.getmembers():
+
+             base, filename = os.path.split(member.name)
+             _, ext = os.path.splitext(filename)
+             ext = ext[1:]  # get rid of dot
+             sample_id = filename.split(".")[0]
+
+             if ext in ["txt", "con", "pairs", "chains"]:
+                 samples[sample_id][f"{ext}_source"] = (
+                     os.path.basename(file_path) + "|" + member.name
+                 )
+                 with tf.extractfile(member) as fp:
+                     content_bytes = fp.read()
+                 content = content_bytes.decode("utf-8")
+                 samples[sample_id][ext] = content
+
+     return samples
+
+
+ def _read_zip(file_path, samples=None):
+     if samples is None:
+         samples = defaultdict(dict)
+     with zipfile.ZipFile(file_path) as zf:
+         for info in zf.infolist():
+
+             base, filename = os.path.split(info.filename)
+             _, ext = os.path.splitext(filename)
+             ext = ext[1:]  # get rid of dot
+             sample_id = filename.split(".")[0]
+
+             if ext in ["txt", "con", "pairs", "chains"] and not filename.startswith(
+                 "."
+             ):
+                 samples[sample_id][f"{ext}_source"] = (
+                     os.path.basename(file_path) + "|" + info.filename
+                 )
+                 content = zf.read(info).decode("utf-8")
+                 samples[sample_id][ext] = content
+
+     return samples
+
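+ # Illustrative sketch of the structure the readers above build; the record id
+ # "clinical-1" and the values below are hypothetical:
+ #
+ #   samples["clinical-1"] == {
+ #       "txt": "...full text of the patient record...",
+ #       "txt_source": "i2b2_Beth_Train_Release.tar.gz|...clinical-1.txt",
+ #       "con": '...lines like c="..." 13:2 13:6||t="person"...',
+ #       "chains": "...one line per coreference chain...",
+ #       "pairs": "...optional pairwise annotations...",
+ #       # plus the matching "con_source", "chains_source", "pairs_source" keys
+ #   }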
+
+ C_PATTERN = r"c=\"(.+?)\" (\d+):(\d+) (\d+):(\d+)"
+ T_PATTERN = r"t=\"(.+?)\""
+
+
+ def _ct_match_to_dict(c_match: Match, t_match: Match) -> dict:
+     """Return a dictionary with groups from concept and type regex matches."""
+     return {
+         "text": c_match.group(1),
+         "start_line": int(c_match.group(2)),
+         "start_token": int(c_match.group(3)),
+         "end_line": int(c_match.group(4)),
+         "end_token": int(c_match.group(5)),
+         "type": t_match.group(1),
+     }
+
+
+ def _parse_con_line(line: str) -> dict:
+     """Parse one line from a *.con file.
+
+     A typical line has the form,
+       'c="angie cm johnson , m.d." 13:2 13:6||t="person"'
+
+     This represents one concept to be placed into a coreference group.
+     It can be interpreted as follows,
+       'c="<string>" <start_line>:<start_token> <end_line>:<end_token>||t="<type>"'
+
+     """
+     c_part, t_part = line.split("||")
+     c_match, t_match = re.match(C_PATTERN, c_part), re.match(T_PATTERN, t_part)
+     return _ct_match_to_dict(c_match, t_match)
+
+
+ def _parse_chains_line(line: str) -> List[Dict]:
+     """Parse one line from a *.chains file.
+
+     A typical line has a chain of concepts and then a type.
+       'c="patient" 12:0 12:0||c="mr. andersen" 19:0 19:1||...||t="coref person"'
+     """
+     pieces = line.split("||")
+     c_parts, t_part = pieces[:-1], pieces[-1]
+     c_matches, t_match = (
+         [re.match(C_PATTERN, c_part) for c_part in c_parts],
+         re.match(T_PATTERN, t_part),
+     )
+     return [_ct_match_to_dict(c_match, t_match) for c_match in c_matches]
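+
+ # Illustrative sketch of what the parsers above return, using simplified
+ # versions of the docstring examples (shown for orientation only):
+ #
+ #   _parse_con_line('c="angie cm johnson , m.d." 13:2 13:6||t="person"')
+ #   -> {"text": "angie cm johnson , m.d.", "start_line": 13, "start_token": 2,
+ #       "end_line": 13, "end_token": 6, "type": "person"}
+ #
+ #   _parse_chains_line('c="patient" 12:0 12:0||c="mr. andersen" 19:0 19:1||t="coref person"')
+ #   -> [{"text": "patient", ..., "type": "coref person"},
+ #       {"text": "mr. andersen", ..., "type": "coref person"}]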
+
+
+ def _tokoff_from_line(text: str) -> List[Tuple[int, int]]:
+     """Produce character offsets for each token (whitespace split)
+
+     For example,
+       text = " one  two three ."
+       tokoff = [(1,4), (6,9), (10,15), (16,17)]
+     """
+     tokoff = []
+     start = None
+     end = None
+     for ii, char in enumerate(text):
+         if char != " " and start is None:
+             start = ii
+         if char == " " and start is not None:
+             end = ii
+             tokoff.append((start, end))
+             start = None
+     if start is not None:
+         end = ii + 1
+         tokoff.append((start, end))
+     return tokoff
+
+
+ def _form_entity_id(sample_id, split, start_line, start_token, end_line, end_token):
+     return "{}-entity-{}-{}-{}-{}-{}".format(
+         sample_id,
+         split,
+         start_line,
+         start_token,
+         end_line,
+         end_token,
+     )
+
+
+ def _get_corefs_from_sample(sample_id, sample, sample_entity_ids, split):
+     """Parse the lines of a *.chains file into coreference objects
+
+     A small number of concepts from the *.con files could not be
+     aligned with the text and were excluded. For this reason we
+     pass in the full set of matched entity IDs and ensure that
+     no coreference refers to an excluded entity.
+     """
+     chains_lines = sample["chains"].splitlines()
+     chains_parsed = [_parse_chains_line(line) for line in chains_lines]
+     corefs = []
+     for ii_cp, cp in enumerate(chains_parsed):
+         coref_id = f"{sample_id}-coref-{ii_cp}"
+         coref_entity_ids = [
+             _form_entity_id(
+                 sample_id,
+                 split,
+                 entity["start_line"],
+                 entity["start_token"],
+                 entity["end_line"],
+                 entity["end_token"],
+             )
+             for entity in cp
+         ]
+         coref_entity_ids = [
+             ent_id for ent_id in coref_entity_ids if ent_id in sample_entity_ids
+         ]
+         coref = {
+             "id": coref_id,
+             "entity_ids": coref_entity_ids,
+         }
+         corefs.append(coref)
+
+     return corefs
+
+
+ def _get_entities_from_sample(sample_id, sample, split):
+     """Parse the lines of a *.con concept file into entity objects
+
+     Here we parse the *.con files and form entities. For a small
+     number of entities the text snippet in the concept file could not
+     be aligned with the slice from the full text produced by using
+     the line and token offsets. These entities are excluded from the
+     entities object and the coreferences object.
+     """
+     con_lines = sample["con"].splitlines()
+     text = sample["txt"]
+     text_lines = text.splitlines()
+     text_line_lengths = [len(el) for el in text_lines]
+
+     # parsed concepts (sort is just a convenience)
+     con_parsed = sorted(
+         [_parse_con_line(line) for line in con_lines],
+         key=lambda x: (x["start_line"], x["start_token"]),
+     )
+
+     entities = []
+     for ii_cp, cp in enumerate(con_parsed):
+
+         # annotations can span multiple lines
+         # we loop over all lines and build up the character offsets
+         for ii_line in range(cp["start_line"], cp["end_line"] + 1):
+
+             # character offset to the beginning of the line
+             # line length of each line + 1 new line character for each line
+             start_line_off = sum(text_line_lengths[: ii_line - 1]) + (ii_line - 1)
+
+             # offsets for each token relative to the beginning of the line
+             # "one two" -> [(0,3), (4,6)]
+             tokoff = _tokoff_from_line(text_lines[ii_line - 1])
+
+             # if this is a single line annotation
+             if ii_line == cp["start_line"] == cp["end_line"]:
+                 start_off = start_line_off + tokoff[cp["start_token"]][0]
+                 end_off = start_line_off + tokoff[cp["end_token"]][1]
+
+             # if multi-line and on first line
+             # end_off gets a +1 for new line character
+             elif (ii_line == cp["start_line"]) and (ii_line != cp["end_line"]):
+                 start_off = start_line_off + tokoff[cp["start_token"]][0]
+                 end_off = start_line_off + text_line_lengths[ii_line - 1] + 1
+
+             # if multi-line and on last line
+             elif (ii_line != cp["start_line"]) and (ii_line == cp["end_line"]):
+                 end_off = end_off + tokoff[cp["end_token"]][1]
+
+             # if multi-line and not on first or last line
+             # (this does not seem to occur in this corpus)
+             else:
+                 end_off += text_line_lengths[ii_line - 1] + 1
+
+         text_slice = text[start_off:end_off]
+         text_slice_norm_1 = text_slice.replace("\n", "").lower()
+         text_slice_norm_2 = text_slice.replace("\n", " ").lower()
+         match = text_slice_norm_1 == cp["text"] or text_slice_norm_2 == cp["text"]
+         if not match:
+             continue
+
+         entity_id = _form_entity_id(
+             sample_id,
+             split,
+             cp["start_line"],
+             cp["start_token"],
+             cp["end_line"],
+             cp["end_token"],
+         )
+         entity = {
+             "id": entity_id,
+             "offsets": [(start_off, end_off)],
+             # this is the difference between taking text from the entity
+             # or taking the text from the offsets. the differences are
+             # almost all casing with some small number of new line characters
+             # making up the rest
+             # "text": [cp["text"]],
+             "text": [text_slice],
+             "type": cp["type"],
+             "normalized": [],
+         }
+         entities.append(entity)
+
+     # IDs are constructed such that duplicate IDs indicate duplicate (i.e. redundant) entities
+     # In practice this removes one duplicate sample from the test set
+     # {
+     #     'id': 'clinical-627-entity-test-122-9-122-9',
+     #     'offsets': [(5600, 5603)],
+     #     'text': ['her'],
+     #     'type': 'person'
+     # }
+     dedupe_entities = []
+     dedupe_entity_ids = set()
+     for entity in entities:
+         if entity["id"] in dedupe_entity_ids:
+             continue
+         else:
+             dedupe_entity_ids.add(entity["id"])
+             dedupe_entities.append(entity)
+
+     return dedupe_entities
+
+
+ class N2C22011CorefDataset(datasets.GeneratorBasedBuilder):
+     """n2c2 2011 coreference task"""
+
+     SOURCE_VERSION = datasets.Version(_SOURCE_VERSION)
+     BIGBIO_VERSION = datasets.Version(_BIGBIO_VERSION)
+
+     BUILDER_CONFIGS = [
+         BigBioConfig(
+             name="n2c2_2011_source",
+             version=SOURCE_VERSION,
+             description="n2c2_2011 source schema",
+             schema="source",
+             subset_id="n2c2_2011",
+         ),
+         BigBioConfig(
+             name="n2c2_2011_bigbio_kb",
+             version=BIGBIO_VERSION,
+             description="n2c2_2011 BigBio schema",
+             schema="bigbio_kb",
+             subset_id="n2c2_2011",
+         ),
+     ]
+
+     DEFAULT_CONFIG_NAME = "n2c2_2011_source"
+
+     def _info(self):
+
+         if self.config.schema == "source":
+             features = Features(
+                 {
+                     "sample_id": Value("string"),
+                     "txt": Value("string"),
+                     "con": Value("string"),
+                     "pairs": Value("string"),
+                     "chains": Value("string"),
+                     "metadata": {
+                         "txt_source": Value("string"),
+                         "con_source": Value("string"),
+                         "pairs_source": Value("string"),
+                         "chains_source": Value("string"),
+                     },
+                 }
+             )
+
+         elif self.config.schema == "bigbio_kb":
+             features = kb_features
+
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=str(_LICENSE),
+             citation=_CITATION,
+         )
+
+     def _split_generators(
+         self, dl_manager: datasets.DownloadManager
+     ) -> List[datasets.SplitGenerator]:
+
+         if self.config.data_dir is None:
+             raise ValueError(
+                 "This is a local dataset. Please pass the data_dir kwarg to load_dataset."
+             )
+         else:
+             data_dir = self.config.data_dir
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "split": "train",
+                     "data_dir": data_dir,
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "split": "test",
+                     "data_dir": data_dir,
+                 },
+             ),
+         ]
+
+     @staticmethod
+     def _get_source_sample(sample_id, sample):
+         return {
+             "sample_id": sample_id,
+             "txt": sample.get("txt", ""),
+             "con": sample.get("con", ""),
+             "pairs": sample.get("pairs", ""),
+             "chains": sample.get("chains", ""),
+             "metadata": {
+                 "txt_source": sample.get("txt_source", ""),
+                 "con_source": sample.get("con_source", ""),
+                 "pairs_source": sample.get("pairs_source", ""),
+                 "chains_source": sample.get("chains_source", ""),
+             },
+         }
+
+     @staticmethod
+     def _get_coref_sample(sample_id, sample, split):
+
+         passage_text = sample.get("txt", "")
+         entities = _get_entities_from_sample(sample_id, sample, split)
+         entity_ids = set([entity["id"] for entity in entities])
+         coreferences = _get_corefs_from_sample(sample_id, sample, entity_ids, split)
+         return {
+             "id": sample_id,
+             "document_id": sample_id,
+             "passages": [
+                 {
+                     "id": f"{sample_id}-passage-0",
+                     "type": "discharge summary",
+                     "text": [passage_text],
+                     "offsets": [(0, len(passage_text))],
+                 }
+             ],
+             "entities": entities,
+             "relations": [],
+             "events": [],
+             "coreferences": coreferences,
+         }
+
+     def _generate_examples(self, split, data_dir):
+         """Generate samples using the info passed in from _split_generators."""
+
+         if split == "train":
+             _id = 0
+             # These files have complete sample info
+             # (so we get a fresh `samples` defaultdict from each)
+             paths = [
+                 os.path.join(data_dir, "i2b2_Beth_Train_Release.tar.gz"),
+                 os.path.join(data_dir, "i2b2_Partners_Train_Release.tar.gz"),
+             ]
+             for path in paths:
+                 samples = _read_tar_gz(path)
+                 for sample_id, sample in samples.items():
+                     if self.config.schema == "source":
+                         yield _id, self._get_source_sample(sample_id, sample)
+                     elif self.config.schema == "bigbio_kb":
+                         yield _id, self._get_coref_sample(sample_id, sample, split)
+                     _id += 1
+
+         elif split == "test":
+             _id = 0
+             # Information from these files has to be combined to create a full sample
+             # (so we pass the `samples` defaultdict back to the `_read_zip` method)
+             paths = [
+                 os.path.join(data_dir, "Task_1C.zip"),
+                 os.path.join(data_dir, "Task_1C_Test_groundtruth.zip"),
+             ]
+             samples = defaultdict(dict)
+             for path in paths:
+                 samples = _read_zip(path, samples=samples)
+
+             for sample_id, sample in samples.items():
+                 if self.config.schema == "source":
+                     yield _id, self._get_source_sample(sample_id, sample)
+                 elif self.config.schema == "bigbio_kb":
+                     yield _id, self._get_coref_sample(sample_id, sample, split)
+                 _id += 1
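
For reference, a minimal usage sketch of the loader described in the module docstring above. Assumptions (not stated in the file itself): the script is saved locally as n2c2_2011.py next to its bigbiohub.py helper, the four archives sit in ./n2c2_2011_coref, and the fields printed at the end come from the bigbio_kb schema.

import datasets

dset = datasets.load_dataset(
    "n2c2_2011.py",                  # local path to this loader script
    name="n2c2_2011_bigbio_kb",      # or "n2c2_2011_source" for the raw file contents
    data_dir="n2c2_2011_coref",      # directory holding the four .zip / .tar.gz archives
)

record = dset["train"][0]            # index 0 is arbitrary
print(record["document_id"])
print(len(record["entities"]), "entities")
print(len(record["coreferences"]), "coreference chains")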