parquet-converter committed on
Commit
4c1d89a
1 Parent(s): 4426468

Update parquet files

.gitattributes DELETED
@@ -1,51 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.lz4 filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- # Image files - uncompressed
- *.bmp filter=lfs diff=lfs merge=lfs -text
- *.gif filter=lfs diff=lfs merge=lfs -text
- *.png filter=lfs diff=lfs merge=lfs -text
- *.tiff filter=lfs diff=lfs merge=lfs -text
- # Image files - compressed
- *.jpg filter=lfs diff=lfs merge=lfs -text
- *.jpeg filter=lfs diff=lfs merge=lfs -text
- *.webp filter=lfs diff=lfs merge=lfs -text
LICENSE DELETED
@@ -1,26 +0,0 @@
- name: National Library of Medicine Terms and Conditions
- short_name: NLM_LICENSE
-
-
- National Library of Medicine Terms and Conditions
-
- INTRODUCTION
-
- Downloading data from the National Library of Medicine FTP servers indicates your acceptance of the following Terms and Conditions: No charges, usage fees or royalties are paid to NLM for this data.
-
- GENERAL TERMS AND CONDITIONS
-
- Users of the data agree to:
- acknowledge NLM as the source of the data by including the phrase "Courtesy of the U.S. National Library of Medicine" in a clear and conspicuous manner,
- properly use registration and/or trademark symbols when referring to NLM products, and
- not indicate or imply that NLM has endorsed its products/services/applications.
-
- Users who republish or redistribute the data (services, products or raw data) agree to:
- maintain the most current version of all distributed data, or
- make known in a clear and conspicuous manner that the products/services/applications do not reflect the most current/accurate data available from NLM.
-
- These data are produced with a reasonable standard of care, but NLM makes no warranties express or implied, including no warranty of merchantability or fitness for particular purpose, regarding the accuracy or completeness of the data. Users agree to hold NLM and the U.S. Government harmless from any liability resulting from errors in the data. NLM disclaims any liability for any consequences due to use, misuse, or interpretation of information contained or not contained in the data.
-
- NLM does not provide legal advice regarding copyright, fair use, or other aspects of intellectual property rights. See the NLM Copyright page.
-
- NLM reserves the right to change the type and format of its machine-readable data. NLM will take reasonable steps to inform users of any changes to the format of the data before the data are distributed via the announcement section or subscription to email and RSS updates.
README.md DELETED
@@ -1,68 +0,0 @@
- ---
- language: en
- license: other
- multilinguality: monolingual
- pretty_name: BLURB
- ---
-
-
- # Dataset Card for BLURB
-
- ## Dataset Description
-
- - **Homepage:** https://microsoft.github.io/BLURB/tasks.html
- - **Pubmed:** True
- - **Public:** True
- - **Tasks:** Named Entity Recognition
-
- BLURB is a collection of resources for biomedical natural language processing.
- In general domains, such as newswire and the Web, comprehensive benchmarks and
- leaderboards such as GLUE have greatly accelerated progress in open-domain NLP.
- In biomedicine, however, such resources are ostensibly scarce. In the past,
- there have been a plethora of shared tasks in biomedical NLP, such as
- BioCreative, BioNLP Shared Tasks, SemEval, and BioASQ, to name just a few. These
- efforts have played a significant role in fueling interest and progress by the
- research community, but they typically focus on individual tasks. The advent of
- neural language models, such as BERT provides a unifying foundation to leverage
- transfer learning from unlabeled text to support a wide range of NLP
- applications. To accelerate progress in biomedical pretraining strategies and
- task-specific methods, it is thus imperative to create a broad-coverage
- benchmark encompassing diverse biomedical tasks.
-
- Inspired by prior efforts toward this direction (e.g., BLUE), we have created
- BLURB (short for Biomedical Language Understanding and Reasoning Benchmark).
- BLURB comprises of a comprehensive benchmark for PubMed-based biomedical NLP
- applications, as well as a leaderboard for tracking progress by the community.
- BLURB includes thirteen publicly available datasets in six diverse tasks. To
- avoid placing undue emphasis on tasks with many available datasets, such as
- named entity recognition (NER), BLURB reports the macro average across all tasks
- as the main score. The BLURB leaderboard is model-agnostic. Any system capable
- of producing the test predictions using the same training and development data
- can participate. The main goal of BLURB is to lower the entry barrier in
- biomedical NLP and help accelerate progress in this vitally important field for
- positive societal and human impact.
-
- This implementation contains a subset of 5 tasks as of 2022.10.06, with their original train, dev, and test splits.
-
-
- ## Citation Information
-
- ```
- @article{gu2021domain,
- title = {
- Domain-specific language model pretraining for biomedical natural
- language processing
- },
- author = {
- Gu, Yu and Tinn, Robert and Cheng, Hao and Lucas, Michael and
- Usuyama, Naoto and Liu, Xiaodong and Naumann, Tristan and Gao,
- Jianfeng and Poon, Hoifung
- },
- year = 2021,
- journal = {ACM Transactions on Computing for Healthcare (HEALTH)},
- publisher = {ACM New York, NY},
- volume = 3,
- number = 1,
- pages = {1--23}
- }
- ```
bc2gm/blurb-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:74bb46de8a2773d08136cb92732997f7009eb47436358316bce50a99ecf806ec
+ size 550522
bc2gm/blurb-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1c5f69c93ba93d6b61f632c1481c1137e15aa17329fa81943b29fba613f15b4b
+ size 1362235
bc2gm/blurb-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:26e87e383c018c8112a4b3f841b035bc91c92f2999012efc4137e90ef727ab7c
+ size 274422
bc5chem/blurb-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a1138b577bd039ad59dbe7ff5a0423948168354c290e49b6063538f81e27b636
+ size 411208
bc5chem/blurb-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:64ea81c7fbbbf65ed95b9b5dcfcbbac0597f1e9caa85bd2436f7f93801c74d98
+ size 394376
bc5chem/blurb-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6d27e7ac94f9bba4b31d18750d267cdb37ebf073f6ccde1177bad9eeb453e9d0
+ size 391453
bc5disease/blurb-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:456ca5009a07b528c3e44ca15aa7345228213b36a2431d2d0fb63e751e057a18
+ size 410495
bc5disease/blurb-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb6355563af03f97f6695d4b5f306006981a53712786482400bcc9b18c54c4ca
+ size 393316
bc5disease/blurb-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3d8de13e11f5d9f9fcdfac5eb41e677d4aac163ac81e0347a1ba05c7e75defad
+ size 390322
bigbiohub.py DELETED
@@ -1,556 +0,0 @@
- from collections import defaultdict
- from dataclasses import dataclass
- from enum import Enum
- import logging
- from pathlib import Path
- from types import SimpleNamespace
- from typing import TYPE_CHECKING, Dict, Iterable, List, Tuple
-
- import datasets
-
- if TYPE_CHECKING:
- import bioc
-
- logger = logging.getLogger(__name__)
-
-
- BigBioValues = SimpleNamespace(NULL="<BB_NULL_STR>")
-
-
- @dataclass
- class BigBioConfig(datasets.BuilderConfig):
- """BuilderConfig for BigBio."""
-
- name: str = None
- version: datasets.Version = None
- description: str = None
- schema: str = None
- subset_id: str = None
-
-
- class Tasks(Enum):
- NAMED_ENTITY_RECOGNITION = "NER"
- NAMED_ENTITY_DISAMBIGUATION = "NED"
- EVENT_EXTRACTION = "EE"
- RELATION_EXTRACTION = "RE"
- COREFERENCE_RESOLUTION = "COREF"
- QUESTION_ANSWERING = "QA"
- TEXTUAL_ENTAILMENT = "TE"
- SEMANTIC_SIMILARITY = "STS"
- TEXT_PAIRS_CLASSIFICATION = "TXT2CLASS"
- PARAPHRASING = "PARA"
- TRANSLATION = "TRANSL"
- SUMMARIZATION = "SUM"
- TEXT_CLASSIFICATION = "TXTCLASS"
-
-
- entailment_features = datasets.Features(
- {
- "id": datasets.Value("string"),
- "premise": datasets.Value("string"),
- "hypothesis": datasets.Value("string"),
- "label": datasets.Value("string"),
- }
- )
-
- pairs_features = datasets.Features(
- {
- "id": datasets.Value("string"),
- "document_id": datasets.Value("string"),
- "text_1": datasets.Value("string"),
- "text_2": datasets.Value("string"),
- "label": datasets.Value("string"),
- }
- )
-
- qa_features = datasets.Features(
- {
- "id": datasets.Value("string"),
- "question_id": datasets.Value("string"),
- "document_id": datasets.Value("string"),
- "question": datasets.Value("string"),
- "type": datasets.Value("string"),
- "choices": [datasets.Value("string")],
- "context": datasets.Value("string"),
- "answer": datasets.Sequence(datasets.Value("string")),
- }
- )
-
- text_features = datasets.Features(
- {
- "id": datasets.Value("string"),
- "document_id": datasets.Value("string"),
- "text": datasets.Value("string"),
- "labels": [datasets.Value("string")],
- }
- )
-
- text2text_features = datasets.Features(
- {
- "id": datasets.Value("string"),
- "document_id": datasets.Value("string"),
- "text_1": datasets.Value("string"),
- "text_2": datasets.Value("string"),
- "text_1_name": datasets.Value("string"),
- "text_2_name": datasets.Value("string"),
- }
- )
-
- kb_features = datasets.Features(
- {
- "id": datasets.Value("string"),
- "document_id": datasets.Value("string"),
- "passages": [
- {
- "id": datasets.Value("string"),
- "type": datasets.Value("string"),
- "text": datasets.Sequence(datasets.Value("string")),
- "offsets": datasets.Sequence([datasets.Value("int32")]),
- }
- ],
- "entities": [
- {
- "id": datasets.Value("string"),
- "type": datasets.Value("string"),
- "text": datasets.Sequence(datasets.Value("string")),
- "offsets": datasets.Sequence([datasets.Value("int32")]),
- "normalized": [
- {
- "db_name": datasets.Value("string"),
- "db_id": datasets.Value("string"),
- }
- ],
- }
- ],
- "events": [
- {
- "id": datasets.Value("string"),
- "type": datasets.Value("string"),
- # refers to the text_bound_annotation of the trigger
- "trigger": {
- "text": datasets.Sequence(datasets.Value("string")),
- "offsets": datasets.Sequence([datasets.Value("int32")]),
- },
- "arguments": [
- {
- "role": datasets.Value("string"),
- "ref_id": datasets.Value("string"),
- }
- ],
- }
- ],
- "coreferences": [
- {
- "id": datasets.Value("string"),
- "entity_ids": datasets.Sequence(datasets.Value("string")),
- }
- ],
- "relations": [
- {
- "id": datasets.Value("string"),
- "type": datasets.Value("string"),
- "arg1_id": datasets.Value("string"),
- "arg2_id": datasets.Value("string"),
- "normalized": [
- {
- "db_name": datasets.Value("string"),
- "db_id": datasets.Value("string"),
- }
- ],
- }
- ],
- }
- )
-
-
- def get_texts_and_offsets_from_bioc_ann(ann: "bioc.BioCAnnotation") -> Tuple:
-
- offsets = [(loc.offset, loc.offset + loc.length) for loc in ann.locations]
-
- text = ann.text
-
- if len(offsets) > 1:
- i = 0
- texts = []
- for start, end in offsets:
- chunk_len = end - start
- texts.append(text[i : chunk_len + i])
- i += chunk_len
- while i < len(text) and text[i] == " ":
- i += 1
- else:
- texts = [text]
-
- return offsets, texts
-
-
- def remove_prefix(a: str, prefix: str) -> str:
- if a.startswith(prefix):
- a = a[len(prefix) :]
- return a
-
-
- def parse_brat_file(
- txt_file: Path,
- annotation_file_suffixes: List[str] = None,
- parse_notes: bool = False,
- ) -> Dict:
- """
- Parse a brat file into the schema defined below.
- `txt_file` should be the path to the brat '.txt' file you want to parse, e.g. 'data/1234.txt'
- Assumes that the annotations are contained in one or more of the corresponding '.a1', '.a2' or '.ann' files,
- e.g. 'data/1234.ann' or 'data/1234.a1' and 'data/1234.a2'.
- Will include annotator notes, when `parse_notes == True`.
- brat_features = datasets.Features(
- {
- "id": datasets.Value("string"),
- "document_id": datasets.Value("string"),
- "text": datasets.Value("string"),
- "text_bound_annotations": [ # T line in brat, e.g. type or event trigger
- {
- "offsets": datasets.Sequence([datasets.Value("int32")]),
- "text": datasets.Sequence(datasets.Value("string")),
- "type": datasets.Value("string"),
- "id": datasets.Value("string"),
- }
- ],
- "events": [ # E line in brat
- {
- "trigger": datasets.Value(
- "string"
- ), # refers to the text_bound_annotation of the trigger,
- "id": datasets.Value("string"),
- "type": datasets.Value("string"),
- "arguments": datasets.Sequence(
- {
- "role": datasets.Value("string"),
- "ref_id": datasets.Value("string"),
- }
- ),
- }
- ],
- "relations": [ # R line in brat
- {
- "id": datasets.Value("string"),
- "head": {
- "ref_id": datasets.Value("string"),
- "role": datasets.Value("string"),
- },
- "tail": {
- "ref_id": datasets.Value("string"),
- "role": datasets.Value("string"),
- },
- "type": datasets.Value("string"),
- }
- ],
- "equivalences": [ # Equiv line in brat
- {
- "id": datasets.Value("string"),
- "ref_ids": datasets.Sequence(datasets.Value("string")),
- }
- ],
- "attributes": [ # M or A lines in brat
- {
- "id": datasets.Value("string"),
- "type": datasets.Value("string"),
- "ref_id": datasets.Value("string"),
- "value": datasets.Value("string"),
- }
- ],
- "normalizations": [ # N lines in brat
- {
- "id": datasets.Value("string"),
- "type": datasets.Value("string"),
- "ref_id": datasets.Value("string"),
- "resource_name": datasets.Value(
- "string"
- ), # Name of the resource, e.g. "Wikipedia"
- "cuid": datasets.Value(
- "string"
- ), # ID in the resource, e.g. 534366
- "text": datasets.Value(
- "string"
- ), # Human readable description/name of the entity, e.g. "Barack Obama"
- }
- ],
- ### OPTIONAL: Only included when `parse_notes == True`
- "notes": [ # # lines in brat
- {
- "id": datasets.Value("string"),
- "type": datasets.Value("string"),
- "ref_id": datasets.Value("string"),
- "text": datasets.Value("string"),
- }
- ],
- },
- )
- """
-
- example = {}
- example["document_id"] = txt_file.with_suffix("").name
- with txt_file.open() as f:
- example["text"] = f.read()
-
- # If no specific suffixes of the to-be-read annotation files are given - take standard suffixes
- # for event extraction
- if annotation_file_suffixes is None:
- annotation_file_suffixes = [".a1", ".a2", ".ann"]
-
- if len(annotation_file_suffixes) == 0:
- raise AssertionError(
- "At least one suffix for the to-be-read annotation files should be given!"
- )
-
- ann_lines = []
- for suffix in annotation_file_suffixes:
- annotation_file = txt_file.with_suffix(suffix)
- if annotation_file.exists():
- with annotation_file.open() as f:
- ann_lines.extend(f.readlines())
-
- example["text_bound_annotations"] = []
- example["events"] = []
- example["relations"] = []
- example["equivalences"] = []
- example["attributes"] = []
- example["normalizations"] = []
-
- if parse_notes:
- example["notes"] = []
-
- for line in ann_lines:
- line = line.strip()
- if not line:
- continue
-
- if line.startswith("T"): # Text bound
- ann = {}
- fields = line.split("\t")
-
- ann["id"] = fields[0]
- ann["type"] = fields[1].split()[0]
- ann["offsets"] = []
- span_str = remove_prefix(fields[1], (ann["type"] + " "))
- text = fields[2]
- for span in span_str.split(";"):
- start, end = span.split()
- ann["offsets"].append([int(start), int(end)])
-
- # Heuristically split text of discontiguous entities into chunks
- ann["text"] = []
- if len(ann["offsets"]) > 1:
- i = 0
- for start, end in ann["offsets"]:
- chunk_len = end - start
- ann["text"].append(text[i : chunk_len + i])
- i += chunk_len
- while i < len(text) and text[i] == " ":
- i += 1
- else:
- ann["text"] = [text]
-
- example["text_bound_annotations"].append(ann)
-
- elif line.startswith("E"):
- ann = {}
- fields = line.split("\t")
-
- ann["id"] = fields[0]
-
- ann["type"], ann["trigger"] = fields[1].split()[0].split(":")
-
- ann["arguments"] = []
- for role_ref_id in fields[1].split()[1:]:
- argument = {
- "role": (role_ref_id.split(":"))[0],
- "ref_id": (role_ref_id.split(":"))[1],
- }
- ann["arguments"].append(argument)
-
- example["events"].append(ann)
-
- elif line.startswith("R"):
- ann = {}
- fields = line.split("\t")
-
- ann["id"] = fields[0]
- ann["type"] = fields[1].split()[0]
-
- ann["head"] = {
- "role": fields[1].split()[1].split(":")[0],
- "ref_id": fields[1].split()[1].split(":")[1],
- }
- ann["tail"] = {
- "role": fields[1].split()[2].split(":")[0],
- "ref_id": fields[1].split()[2].split(":")[1],
- }
-
- example["relations"].append(ann)
-
- # '*' seems to be the legacy way to mark equivalences,
- # but I couldn't find any info on the current way
- # this might have to be adapted dependent on the brat version
- # of the annotation
- elif line.startswith("*"):
- ann = {}
- fields = line.split("\t")
-
- ann["id"] = fields[0]
- ann["ref_ids"] = fields[1].split()[1:]
-
- example["equivalences"].append(ann)
-
- elif line.startswith("A") or line.startswith("M"):
- ann = {}
- fields = line.split("\t")
-
- ann["id"] = fields[0]
-
- info = fields[1].split()
- ann["type"] = info[0]
- ann["ref_id"] = info[1]
-
- if len(info) > 2:
- ann["value"] = info[2]
- else:
- ann["value"] = ""
-
- example["attributes"].append(ann)
-
- elif line.startswith("N"):
- ann = {}
- fields = line.split("\t")
-
- ann["id"] = fields[0]
- ann["text"] = fields[2]
-
- info = fields[1].split()
-
- ann["type"] = info[0]
- ann["ref_id"] = info[1]
- ann["resource_name"] = info[2].split(":")[0]
- ann["cuid"] = info[2].split(":")[1]
- example["normalizations"].append(ann)
-
- elif parse_notes and line.startswith("#"):
- ann = {}
- fields = line.split("\t")
-
- ann["id"] = fields[0]
- ann["text"] = fields[2] if len(fields) == 3 else BigBioValues.NULL
-
- info = fields[1].split()
-
- ann["type"] = info[0]
- ann["ref_id"] = info[1]
- example["notes"].append(ann)
-
- return example
-
-
- def brat_parse_to_bigbio_kb(brat_parse: Dict) -> Dict:
- """
- Transform a brat parse (conforming to the standard brat schema) obtained with
- `parse_brat_file` into a dictionary conforming to the `bigbio-kb` schema (as defined in ../schemas/kb.py)
- :param brat_parse:
- """
-
- unified_example = {}
-
- # Prefix all ids with document id to ensure global uniqueness,
- # because brat ids are only unique within their document
- id_prefix = brat_parse["document_id"] + "_"
-
- # identical
- unified_example["document_id"] = brat_parse["document_id"]
- unified_example["passages"] = [
- {
- "id": id_prefix + "_text",
- "type": "abstract",
- "text": [brat_parse["text"]],
- "offsets": [[0, len(brat_parse["text"])]],
- }
- ]
-
- # get normalizations
- ref_id_to_normalizations = defaultdict(list)
- for normalization in brat_parse["normalizations"]:
- ref_id_to_normalizations[normalization["ref_id"]].append(
- {
- "db_name": normalization["resource_name"],
- "db_id": normalization["cuid"],
- }
- )
-
- # separate entities and event triggers
- unified_example["events"] = []
- non_event_ann = brat_parse["text_bound_annotations"].copy()
- for event in brat_parse["events"]:
- event = event.copy()
- event["id"] = id_prefix + event["id"]
- trigger = next(
- tr
- for tr in brat_parse["text_bound_annotations"]
- if tr["id"] == event["trigger"]
- )
- if trigger in non_event_ann:
- non_event_ann.remove(trigger)
- event["trigger"] = {
- "text": trigger["text"].copy(),
- "offsets": trigger["offsets"].copy(),
- }
- for argument in event["arguments"]:
- argument["ref_id"] = id_prefix + argument["ref_id"]
-
- unified_example["events"].append(event)
-
- unified_example["entities"] = []
- anno_ids = [ref_id["id"] for ref_id in non_event_ann]
- for ann in non_event_ann:
- entity_ann = ann.copy()
- entity_ann["id"] = id_prefix + entity_ann["id"]
- entity_ann["normalized"] = ref_id_to_normalizations[ann["id"]]
- unified_example["entities"].append(entity_ann)
-
- # massage relations
- unified_example["relations"] = []
- skipped_relations = set()
- for ann in brat_parse["relations"]:
- if (
- ann["head"]["ref_id"] not in anno_ids
- or ann["tail"]["ref_id"] not in anno_ids
- ):
- skipped_relations.add(ann["id"])
- continue
- unified_example["relations"].append(
- {
- "arg1_id": id_prefix + ann["head"]["ref_id"],
- "arg2_id": id_prefix + ann["tail"]["ref_id"],
- "id": id_prefix + ann["id"],
- "type": ann["type"],
- "normalized": [],
- }
- )
- if len(skipped_relations) > 0:
- example_id = brat_parse["document_id"]
- logger.info(
- f"Example:{example_id}: The `bigbio_kb` schema allows `relations` only between entities."
- f" Skip (for now): "
- f"{list(skipped_relations)}"
- )
-
- # get coreferences
- unified_example["coreferences"] = []
- for i, ann in enumerate(brat_parse["equivalences"], start=1):
- is_entity_cluster = True
- for ref_id in ann["ref_ids"]:
- if not ref_id.startswith("T"): # not textbound -> no entity
- is_entity_cluster = False
- elif ref_id not in anno_ids: # event trigger -> no entity
- is_entity_cluster = False
- if is_entity_cluster:
- entity_ids = [id_prefix + i for i in ann["ref_ids"]]
- unified_example["coreferences"].append(
- {"id": id_prefix + str(i), "entity_ids": entity_ids}
- )
- return unified_example
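The removed `bigbiohub.py` bundled the shared BigBio schema definitions with brat-format helpers (`parse_brat_file`, `brat_parse_to_bigbio_kb`). A rough usage sketch only: the `data/1234.txt` path is the illustrative example from the docstring above, and the plain `bigbiohub` import assumes the module sits on the import path next to the caller.

```python
# Hedged sketch, not part of this commit: parse a brat document with the
# helpers from the removed bigbiohub.py (path and import layout assumed).
from pathlib import Path

from bigbiohub import brat_parse_to_bigbio_kb, parse_brat_file

# Reads 'data/1234.txt' plus any matching '.a1'/'.a2'/'.ann' annotation files
# into the flat brat schema, then folds it into the nested bigbio-kb layout
# (passages, entities, events, relations, coreferences).
brat_example = parse_brat_file(Path("data/1234.txt"), parse_notes=False)
kb_example = brat_parse_to_bigbio_kb(brat_example)

print(kb_example["document_id"], len(kb_example["entities"]))
```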
blurb.py DELETED
@@ -1,349 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- # http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """
- BLURB is a collection of resources for biomedical natural language processing.
- In general domains, such as newswire and the Web, comprehensive benchmarks and
- leaderboards such as GLUE have greatly accelerated progress in open-domain NLP.
- In biomedicine, however, such resources are ostensibly scarce. In the past,
- there have been a plethora of shared tasks in biomedical NLP, such as
- BioCreative, BioNLP Shared Tasks, SemEval, and BioASQ, to name just a few. These
- efforts have played a significant role in fueling interest and progress by the
- research community, but they typically focus on individual tasks. The advent of
- neural language models, such as BERT provides a unifying foundation to leverage
- transfer learning from unlabeled text to support a wide range of NLP
- applications. To accelerate progress in biomedical pretraining strategies and
- task-specific methods, it is thus imperative to create a broad-coverage
- benchmark encompassing diverse biomedical tasks.
-
- Inspired by prior efforts toward this direction (e.g., BLUE), we have created
- BLURB (short for Biomedical Language Understanding and Reasoning Benchmark).
- BLURB comprises of a comprehensive benchmark for PubMed-based biomedical NLP
- applications, as well as a leaderboard for tracking progress by the community.
- BLURB includes thirteen publicly available datasets in six diverse tasks. To
- avoid placing undue emphasis on tasks with many available datasets, such as
- named entity recognition (NER), BLURB reports the macro average across all tasks
- as the main score. The BLURB leaderboard is model-agnostic. Any system capable
- of producing the test predictions using the same training and development data
- can participate. The main goal of BLURB is to lower the entry barrier in
- biomedical NLP and help accelerate progress in this vitally important field for
- positive societal and human impact."""
-
- import re
- import pandas
- import datasets
-
- from .bigbiohub import BigBioConfig
- from .bigbiohub import Tasks
-
- _DATASETNAME = "blurb"
- _DISPLAYNAME = "BLURB"
-
- _LANGUAGES = ["English"]
- _PUBMED = True
- _LOCAL = False
- _CITATION = """\
- @article{gu2021domain,
- title = {
- Domain-specific language model pretraining for biomedical natural
- language processing
- },
- author = {
- Gu, Yu and Tinn, Robert and Cheng, Hao and Lucas, Michael and
- Usuyama, Naoto and Liu, Xiaodong and Naumann, Tristan and Gao,
- Jianfeng and Poon, Hoifung
- },
- year = 2021,
- journal = {ACM Transactions on Computing for Healthcare (HEALTH)},
- publisher = {ACM New York, NY},
- volume = 3,
- number = 1,
- pages = {1--23}
- }
- """
-
-
- _BC2GM_DESCRIPTION = """\
- The BioCreative II Gene Mention task. The training corpus for the current task \
- consists mainly of the training and testing corpora (text collections) from the \
- BCI task, and the testing corpus for the current task consists of an additional \
- 5,000 sentences that were held 'in reserve' from the previous task. In the \
- current corpus, tokenization is not provided; instead participants are asked to \
- identify a gene mention in a sentence by giving its start and end characters. As \
- before, the training set consists of a set of sentences, and for each sentence a \
- set of gene mentions (GENE annotations).
-
- - Homepage: https://biocreative.bioinformatics.udel.edu/tasks/biocreative-ii/task-1a-gene-mention-tagging/
- - Repository: https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/
- - Paper: Overview of BioCreative II gene mention recognition
- https://link.springer.com/article/10.1186/gb-2008-9-s2-s2
- """
-
- _BC5_CHEM_DESCRIPTION = """\
- The corpus consists of three separate sets of articles with diseases, chemicals \
- and their relations annotated. The training (500 articles) and development (500 \
- articles) sets were released to task participants in advance to support \
- text-mining method development. The test set (500 articles) was used for final \
- system performance evaluation.
-
- - Homepage: https://biocreative.bioinformatics.udel.edu/resources/corpora/biocreative-v-cdr-corpus
- - Repository: https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/
- - Paper: BioCreative V CDR task corpus: a resource for chemical disease relation extraction
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4860626/
- """
-
- _BC5_DISEASE_DESCRIPTION = """\
- The corpus consists of three separate sets of articles with diseases, chemicals \
- and their relations annotated. The training (500 articles) and development (500 \
- articles) sets were released to task participants in advance to support \
- text-mining method development. The test set (500 articles) was used for final \
- system performance evaluation.
-
- - Homepage: https://biocreative.bioinformatics.udel.edu/resources/corpora/biocreative-v-cdr-corpus
- - Repository: https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/
- - Paper: BioCreative V CDR task corpus: a resource for chemical disease relation extraction
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4860626/
- """
-
- _JNLPBA_DESCRIPTION = """\
- The BioNLP / JNLPBA Shared Task 2004 involves the identification and classification \
- of technical terms referring to concepts of interest to biologists in the domain of \
- molecular biology. The task was organized by GENIA Project based on the annotations \
- of the GENIA Term corpus (version 3.02).
-
- - Homepage: http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004
- - Repository: https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/
- - Paper: Introduction to the Bio-entity Recognition Task at JNLPBA
- https://aclanthology.org/W04-1213
- """
-
- _NCBI_DISEASE_DESCRIPTION = """\
- [T]he NCBI disease corpus contains 6,892 disease mentions, which are mapped to \
- 790 unique disease concepts. Of these, 88% link to a MeSH identifier, while the \
- rest contain an OMIM identifier. We were able to link 91% of the mentions to a \
- single disease concept, while the rest are described as a combination of \
- concepts.
-
- - Homepage: https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/
- - Repository: https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/
- - Paper: NCBI disease corpus: a resource for disease name recognition and concept normalization
- https://pubmed.ncbi.nlm.nih.gov/24393765/
- """
-
- _EBM_PICO_DESCRIPTION = """"""
-
- _CHEMPROT_DESCRIPTION = """"""
- _DDI_DESCRIPTION = """"""
- _GAD_DESCRIPTION = """"""
-
- _BIOSSES_DESCRIPTION = """"""
-
- _HOC_DESCRIPTION = """"""
-
- _PUBMEDQA_DESCRIPTION = """"""
- _BIOASQ_DESCRIPTION = """"""
-
- _DESCRIPTION = {
- "bc2gm": _BC2GM_DESCRIPTION,
- "bc5disease": _BC5_DISEASE_DESCRIPTION,
- "bc5chem": _BC5_CHEM_DESCRIPTION,
- "jnlpba": _JNLPBA_DESCRIPTION,
- "ncbi_disease": _NCBI_DISEASE_DESCRIPTION,
- }
-
- _HOMEPAGE = "https://microsoft.github.io/BLURB/tasks.html"
-
- _LICENSE = "MIXED"
-
-
- _URLs = {
- "bc2gm": [
- "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/BC2GM-IOB/train.tsv",
- "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/BC2GM-IOB/devel.tsv",
- "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/BC2GM-IOB/test.tsv",
- ],
- "bc5disease": [
- "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/BC5CDR-disease-IOB/train.tsv",
- "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/BC5CDR-disease-IOB/devel.tsv",
- "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/BC5CDR-disease-IOB/test.tsv",
- ],
- "bc5chem": [
- "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/BC5CDR-chem-IOB/train.tsv",
- "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/BC5CDR-chem-IOB/devel.tsv",
- "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/BC5CDR-chem-IOB/test.tsv",
- ],
- "jnlpba": [
- "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/JNLPBA/train.tsv",
- "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/JNLPBA/devel.tsv",
- "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/JNLPBA/test.tsv",
- ],
- "ncbi_disease": [
- "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/NCBI-disease-IOB/train.tsv",
- "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/NCBI-disease-IOB/devel.tsv",
- "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/NCBI-disease-IOB/test.tsv",
- ],
- }
-
- _SUPPORTED_TASKS = [Tasks.NAMED_ENTITY_RECOGNITION]
- _SOURCE_VERSION = "1.0.0"
- _BIGBIO_VERSION = "1.0.0"
-
-
- class BlurbDataset(datasets.GeneratorBasedBuilder):
- """Source splits for BLURB data (train/val/test) for easy access."""
-
- DEFAULT_CONFIG_NAME = "bc5chem"
- SOURCE_VERSION = datasets.Version(_SOURCE_VERSION)
- BIGBIO_VERSION = datasets.Version(_BIGBIO_VERSION)
-
- BUILDER_CONFIGS = [
- BigBioConfig(
- name="bc5chem",
- version=SOURCE_VERSION,
- description="BC5CDR Chemical IO Tagging",
- schema="ner",
- subset_id="bc5chem",
- ),
- BigBioConfig(
- name="bc5disease",
- version=SOURCE_VERSION,
- description="BC5CDR Chemical IO Tagging",
- schema="ner",
- subset_id="bc5disease",
- ),
- BigBioConfig(
- name="bc2gm",
- version=SOURCE_VERSION,
- description="BC2 Gene IO Tagging",
- schema="ner",
- subset_id="bc2gm",
- ),
- BigBioConfig(
- name="jnlpba",
- version=SOURCE_VERSION,
- description="JNLPBA Protein, DNA, RNA, Cell Type, Cell Line IO Tagging",
- schema="ner",
- subset_id="jnlpba",
- ),
- BigBioConfig(
- name="ncbi_disease",
- version=SOURCE_VERSION,
- description="NCBI Disease IO Tagging",
- schema="ner",
- subset_id="ncbi_disease",
- ),
- ]
-
- def _info(self):
-
- ner_features = datasets.Features(
- {
- "id": datasets.Value("string"),
- "tokens": datasets.Sequence(datasets.Value("string")),
- "type": datasets.Value("string"),
- "ner_tags": datasets.Sequence(
- datasets.features.ClassLabel(
- names=[
- "O",
- "B",
- "I",
- ]
- )
- ),
- }
- )
- if self.config.schema == "ner":
- return datasets.DatasetInfo(
- description=_DESCRIPTION[self.config.name],
- features=ner_features,
- supervised_keys=None,
- homepage=_HOMEPAGE,
- license=str(_LICENSE),
- citation=_CITATION,
- )
-
- def _split_generators(self, dl_manager):
-
- my_urls = _URLs[self.config.name]
- dl_dir = dl_manager.download_and_extract(my_urls)
-
- return [
- datasets.SplitGenerator(
- name=datasets.Split.TRAIN,
- gen_kwargs={
- "filepath": dl_dir[0],
- "split": "train",
- },
- ),
- datasets.SplitGenerator(
- name=datasets.Split.VALIDATION,
- gen_kwargs={
- "filepath": dl_dir[1],
- "split": "validation",
- },
- ),
- datasets.SplitGenerator(
- name=datasets.Split.TEST,
- gen_kwargs={
- "filepath": dl_dir[2],
- "split": "test",
- },
- ),
- ]
-
- def _load_iob(self, fpath):
- """
- Assumes input CoNLL file is a single entity type.
- """
- with open(fpath, "r") as file:
- tagged = []
- for line in file:
- if line.strip() == "":
- toks, tags = zip(*tagged)
- # transform tags
- tags = tags = [t[0] for t in tags]
- yield (toks, tags)
- tagged = []
- continue
- tagged.append(re.split("\s", line.strip()))
-
- if tagged:
- toks, tags = zip(*tagged)
- tags = [t[0] for t in tags]
- yield (toks, tags)
-
- def _generate_examples(self, filepath, split):
-
- if self.config.schema == "ner":
-
- # Types for each NER dataset. Note BLURB's JNLPBA collapses all mentions into a
- # single entity type, which creates some ambiguity for prompting based on type
- ner_types = {
- "bc2gm": "gene",
- "bc5chem": "chemical",
- "bc5disease": "disease",
- "jnlpba": "protein, DNA, RNA, cell line, or cell type",
- "ncbi_disease": "disease",
- }
-
- uid = 0
- for item in self._load_iob(filepath):
- toks, tags = item
- yield uid, {
- "id": uid,
- "tokens": toks,
- "type": ner_types[self.config.name],
- "ner_tags": tags,
- }
- uid += 1
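The removed `blurb.py` loading script downloaded the IOB-tagged TSVs listed in `_URLs` and, in `_load_iob`, grouped them into sentences while collapsing `B-X`/`I-X` labels to bare `B`/`I` (one entity type per config). A standalone sketch of that same step, assuming a hypothetical local two-column `train.tsv` in the same format:

```python
# Hedged sketch of the IOB-collapsing step from the removed blurb.py
# (file name is hypothetical; format is token<TAB>tag, blank line between sentences).
import re


def load_iob(fpath):
    """Yield (tokens, tags) per sentence, reducing 'B-GENE'/'I-GENE' etc. to 'B'/'I'."""
    with open(fpath, "r") as f:
        tagged = []
        for line in f:
            if line.strip() == "":
                if tagged:
                    toks, tags = zip(*tagged)
                    yield toks, [t[0] for t in tags]
                tagged = []
                continue
            tagged.append(re.split(r"\s", line.strip()))
        if tagged:
            toks, tags = zip(*tagged)
            yield toks, [t[0] for t in tags]


for tokens, tags in load_iob("train.tsv"):
    print(tokens[:5], tags[:5])
    break
```

This is also why the parquet splits above carry a three-way `O`/`B`/`I` tag set plus a per-config `type` string rather than typed BIO labels.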
jnlpba/blurb-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:93748ca8e9cc456c3f3926e338487f641ecd33ef3c92131556045e25480b0275
+ size 353123
jnlpba/blurb-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:21418a6a82d7492b79bfd9f6d1b82f86e655b647da17cad4533f09069bf8487f
+ size 1495826
jnlpba/blurb-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4e704f45a1d82c70c5ed57677547e1282189426367545d769ded8a803c6e904b
+ size 159935
ncbi_disease/blurb-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:49bfcc210860803405f03f836f0df17f62204b1c6df7c1da309332eae0e19372
+ size 77337
ncbi_disease/blurb-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fecc93d73ae81860d8eed94162d156b3b5f275b9120428cac66de36b71008a99
+ size 426254
ncbi_disease/blurb-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f3e8ee29a8c0b49ae70d841162888502a03863cffcf0184af205ea5857c8b1bc
+ size 75100
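With the loading script and dataset card removed, the Parquet shards listed in this commit can be read directly. A minimal sketch, assuming the files have been fetched locally into the same `bc2gm/`, `bc5chem/`, ... layout shown above (the repository id is not shown in this diff, so no hub path is assumed):

```python
# Hedged sketch: load one BLURB subset straight from its converted parquet splits.
from datasets import load_dataset

data_files = {
    "train": "bc2gm/blurb-train.parquet",
    "validation": "bc2gm/blurb-validation.parquet",
    "test": "bc2gm/blurb-test.parquet",
}
bc2gm = load_dataset("parquet", data_files=data_files)

print(bc2gm)             # split sizes
print(bc2gm["train"][0])  # {'id': ..., 'tokens': [...], 'type': 'gene', 'ner_tags': [...]}
```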