Datasets

Modalities: Text
Languages: English
Libraries: Datasets
License: MIXED
File size: 14,271 bytes

# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
BLURB is a collection of resources for biomedical natural language processing. 
In general domains, such as newswire and the Web, comprehensive benchmarks and 
leaderboards such as GLUE have greatly accelerated progress in open-domain NLP. 
In biomedicine, however, such resources are ostensibly scarce. In the past, 
there has been a plethora of shared tasks in biomedical NLP, such as 
BioCreative, BioNLP Shared Tasks, SemEval, and BioASQ, to name just a few. These 
efforts have played a significant role in fueling interest and progress by the 
research community, but they typically focus on individual tasks. The advent of 
neural language models, such as BERT, provides a unifying foundation to leverage 
transfer learning from unlabeled text to support a wide range of NLP 
applications. To accelerate progress in biomedical pretraining strategies and 
task-specific methods, it is thus imperative to create a broad-coverage 
benchmark encompassing diverse biomedical tasks. 

Inspired by prior efforts toward this direction (e.g., BLUE), we have created 
BLURB (short for Biomedical Language Understanding and Reasoning Benchmark). 
BLURB comprises a comprehensive benchmark for PubMed-based biomedical NLP 
applications, as well as a leaderboard for tracking progress by the community. 
BLURB includes thirteen publicly available datasets in six diverse tasks. To 
avoid placing undue emphasis on tasks with many available datasets, such as 
named entity recognition (NER), BLURB reports the macro average across all tasks 
as the main score. The BLURB leaderboard is model-agnostic. Any system capable 
of producing the test predictions using the same training and development data 
can participate. The main goal of BLURB is to lower the entry barrier in 
biomedical NLP and help accelerate progress in this vitally important field for 
positive societal and human impact."""
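
# The docstring above notes that BLURB reports the macro average across tasks
# as its main score. Below is a minimal illustrative sketch of that
# aggregation (an assumption about the bookkeeping, not the official
# leaderboard code): each task is first averaged over its datasets, so the
# five NER datasets together count as a single task.
#
#   def macro_average(task_scores):
#       # task_scores: dict mapping task name -> list of per-dataset scores
#       per_task = [sum(s) / len(s) for s in task_scores.values()]
#       return sum(per_task) / len(per_task)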

import re

import datasets

from .bigbiohub import BigBioConfig
from .bigbiohub import Tasks

_DATASETNAME = "blurb"
_DISPLAYNAME = "BLURB"

_LANGUAGES = ["English"]
_PUBMED = True
_LOCAL = False
_CITATION = """\
@article{gu2021domain,
    title = {
        Domain-specific language model pretraining for biomedical natural
        language processing
    },
    author = {
        Gu, Yu and Tinn, Robert and Cheng, Hao and Lucas, Michael and
        Usuyama, Naoto and Liu, Xiaodong and Naumann, Tristan and Gao,
        Jianfeng and Poon, Hoifung
    },
    year = 2021,
    journal = {ACM Transactions on Computing for Healthcare (HEALTH)},
    publisher = {ACM New York, NY},
    volume = 3,
    number = 1,
    pages = {1--23}
}
"""


_BC2GM_DESCRIPTION = """\
The BioCreative II Gene Mention task. The training corpus for the current task \
consists mainly of the training and testing corpora (text collections) from the \
BCI task, and the testing corpus for the current task consists of an additional \
5,000 sentences that were held 'in reserve' from the previous task. In the \
current corpus, tokenization is not provided; instead participants are asked to \
identify a gene mention in a sentence by giving its start and end characters. As \
before, the training set consists of a set of sentences, and for each sentence a \
set of gene mentions (GENE annotations).

- Homepage: https://biocreative.bioinformatics.udel.edu/tasks/biocreative-ii/task-1a-gene-mention-tagging/
- Repository: https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/
- Paper: Overview of BioCreative II gene mention recognition
         https://link.springer.com/article/10.1186/gb-2008-9-s2-s2
"""

_BC5_CHEM_DESCRIPTION = """\
The corpus consists of three separate sets of articles with diseases, chemicals \
and their relations annotated. The training (500 articles) and development (500 \
articles) sets were released to task participants in advance to support \
text-mining method development. The test set (500 articles) was used for final \
system performance evaluation.

- Homepage: https://biocreative.bioinformatics.udel.edu/resources/corpora/biocreative-v-cdr-corpus
- Repository: https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/
- Paper: BioCreative V CDR task corpus: a resource for chemical disease relation extraction
         https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4860626/
"""

_BC5_DISEASE_DESCRIPTION = """\
The corpus consists of three separate sets of articles with diseases, chemicals \
and their relations annotated. The training (500 articles) and development (500 \
articles) sets were released to task participants in advance to support \
text-mining method development. The test set (500 articles) was used for final \
system performance evaluation.

- Homepage: https://biocreative.bioinformatics.udel.edu/resources/corpora/biocreative-v-cdr-corpus
- Repository: https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/
- Paper: BioCreative V CDR task corpus: a resource for chemical disease relation extraction
         https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4860626/
"""

_JNLPBA_DESCRIPTION = """\
The BioNLP / JNLPBA Shared Task 2004 involves the identification and classification \
of technical terms referring to concepts of interest to biologists in the domain of \
molecular biology. The task was organized by GENIA Project based on the annotations \
of the GENIA Term corpus (version 3.02).

- Homepage: http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004
- Repository: https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/
- Paper: Introduction to the Bio-entity Recognition Task at JNLPBA
         https://aclanthology.org/W04-1213
"""

_NCBI_DISEASE_DESCRIPTION = """\
[T]he NCBI disease corpus contains 6,892 disease mentions, which are mapped to \
790 unique disease concepts. Of these, 88% link to a MeSH identifier, while the \
rest contain an OMIM identifier. We were able to link 91% of the mentions to a \
single disease concept, while the rest are described as a combination of \
concepts.

- Homepage: https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/
- Repository: https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/
- Paper: NCBI disease corpus: a resource for disease name recognition and concept normalization
         https://pubmed.ncbi.nlm.nih.gov/24393765/
"""

# Placeholder descriptions for BLURB tasks whose configs are not implemented
# in this loader yet (see _DESCRIPTION below, which covers only the NER sets).
_EBM_PICO_DESCRIPTION = """"""
_CHEMPROT_DESCRIPTION = """"""
_DDI_DESCRIPTION = """"""
_GAD_DESCRIPTION = """"""
_BIOSSES_DESCRIPTION = """"""
_HOC_DESCRIPTION = """"""
_PUBMEDQA_DESCRIPTION = """"""
_BIOASQ_DESCRIPTION = """"""

_DESCRIPTION = {
    "bc2gm": _BC2GM_DESCRIPTION,
    "bc5disease": _BC5_DISEASE_DESCRIPTION,
    "bc5chem": _BC5_CHEM_DESCRIPTION,
    "jnlpba": _JNLPBA_DESCRIPTION,
    "ncbi_disease": _NCBI_DISEASE_DESCRIPTION,
}

_HOMEPAGE = "https://microsoft.github.io/BLURB/tasks.html"

_LICENSE = "MIXED"


_URLs = {
    "bc2gm": [
        "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/BC2GM-IOB/train.tsv",
        "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/BC2GM-IOB/devel.tsv",
        "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/BC2GM-IOB/test.tsv",
    ],
    "bc5disease": [
        "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/BC5CDR-disease-IOB/train.tsv",
        "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/BC5CDR-disease-IOB/devel.tsv",
        "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/BC5CDR-disease-IOB/test.tsv",
    ],
    "bc5chem": [
        "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/BC5CDR-chem-IOB/train.tsv",
        "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/BC5CDR-chem-IOB/devel.tsv",
        "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/BC5CDR-chem-IOB/test.tsv",
    ],
    "jnlpba": [
        "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/JNLPBA/train.tsv",
        "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/JNLPBA/devel.tsv",
        "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/JNLPBA/test.tsv",
    ],
    "ncbi_disease": [
        "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/NCBI-disease-IOB/train.tsv",
        "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/NCBI-disease-IOB/devel.tsv",
        "https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/NCBI-disease-IOB/test.tsv",
    ],
}
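
# Note: each list in _URLs is ordered train, dev, test; _split_generators
# below relies on this ordering when indexing the downloaded files.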

_SUPPORTED_TASKS = [Tasks.NAMED_ENTITY_RECOGNITION]
_SOURCE_VERSION = "1.0.0"
_BIGBIO_VERSION = "1.0.0"


class BlurbDataset(datasets.GeneratorBasedBuilder):
    """Source splits for BLURB data (train/val/test) for easy access."""

    DEFAULT_CONFIG_NAME = "bc5chem"
    SOURCE_VERSION = datasets.Version(_SOURCE_VERSION)
    BIGBIO_VERSION = datasets.Version(_BIGBIO_VERSION)

    BUILDER_CONFIGS = [
        BigBioConfig(
            name="bc5chem",
            version=SOURCE_VERSION,
            description="BC5CDR Chemical IO Tagging",
            schema="ner",
            subset_id="bc5chem",
        ),
        BigBioConfig(
            name="bc5disease",
            version=SOURCE_VERSION,
            description="BC5CDR Chemical IO Tagging",
            schema="ner",
            subset_id="bc5disease",
        ),
        BigBioConfig(
            name="bc2gm",
            version=SOURCE_VERSION,
            description="BC2 Gene IO Tagging",
            schema="ner",
            subset_id="bc2gm",
        ),
        BigBioConfig(
            name="jnlpba",
            version=SOURCE_VERSION,
            description="JNLPBA Protein, DNA, RNA, Cell Type, Cell Line IO Tagging",
            schema="ner",
            subset_id="jnlpba",
        ),
        BigBioConfig(
            name="ncbi_disease",
            version=SOURCE_VERSION,
            description="NCBI Disease IO Tagging",
            schema="ner",
            subset_id="ncbi_disease",
        ),
    ]
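
    # Each config above selects one BLURB NER subset. A hedged usage sketch
    # (loading this file as a local script; the path "blurb.py" is
    # illustrative):
    #
    #   from datasets import load_dataset
    #   ds = load_dataset("blurb.py", name="bc5chem")
    #   example = ds["train"][0]
    #   print(example["tokens"], example["ner_tags"])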

    def _info(self):

        ner_features = datasets.Features(
            {
                "id": datasets.Value("string"),
                "tokens": datasets.Sequence(datasets.Value("string")),
                "type": datasets.Value("string"),
                "ner_tags": datasets.Sequence(
                    datasets.features.ClassLabel(
                        names=[
                            "O",
                            "B",
                            "I",
                        ]
                    )
                ),
            }
        )
        if self.config.schema == "ner":
            return datasets.DatasetInfo(
                description=_DESCRIPTION[self.config.name],
                features=ner_features,
                supervised_keys=None,
                homepage=_HOMEPAGE,
                license=str(_LICENSE),
                citation=_CITATION,
            )
        raise NotImplementedError(f"Schema '{self.config.schema}' is not supported")

    def _split_generators(self, dl_manager):

        my_urls = _URLs[self.config.name]
        dl_dir = dl_manager.download_and_extract(my_urls)

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "filepath": dl_dir[0],
                    "split": "train",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "filepath": dl_dir[1],
                    "split": "validation",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "filepath": dl_dir[2],
                    "split": "test",
                },
            ),
        ]

    def _load_iob(self, fpath):
        """
        Assumes the input CoNLL file contains a single entity type.
        Yields one (tokens, tags) tuple per sentence.
        """
        with open(fpath, "r", encoding="utf-8") as file:
            tagged = []
            for line in file:
                if line.strip() == "":
                    # A blank line marks a sentence boundary; skip repeats.
                    if not tagged:
                        continue
                    toks, tags = zip(*tagged)
                    # Collapse typed IOB tags (e.g. "B-Chemical") to bare
                    # "B"/"I"/"O" labels by keeping the first character.
                    tags = [t[0] for t in tags]
                    yield (toks, tags)
                    tagged = []
                    continue
                tagged.append(re.split(r"\s", line.strip()))

            # Flush the last sentence if the file has no trailing blank line.
            if tagged:
                toks, tags = zip(*tagged)
                tags = [t[0] for t in tags]
                yield (toks, tags)
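
    # The input files are two-column CoNLL-style text: one token and its IOB
    # tag per line, separated by whitespace, with a blank line between
    # sentences. An illustrative fragment (tokens and tags invented):
    #
    #   Serotonin   B-Chemical
    #   receptor    O
    #
    # For that sentence, _load_iob yields
    # (("Serotonin", "receptor"), ("B", "O")).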

    def _generate_examples(self, filepath, split):

        if self.config.schema == "ner":

            # Types for each NER dataset. Note BLURB's JNLPBA collapses all mentions into a
            # single entity type, which creates some ambiguity for prompting based on type
            ner_types = {
                "bc2gm": "gene",
                "bc5chem": "chemical",
                "bc5disease": "disease",
                "jnlpba": "protein, DNA, RNA, cell line, or cell type",
                "ncbi_disease": "disease",
            }

            uid = 0
            for item in self._load_iob(filepath):
                toks, tags = item
                yield uid, {
                    "id": str(uid),
                    "tokens": toks,
                    "type": ner_types[self.config.name],
                    "ner_tags": tags,
                }
                uid += 1
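
# "ner_tags" is a Sequence over a ClassLabel, so loaded examples hold integer
# class ids. A short sketch for mapping them back to "O"/"B"/"I" strings with
# ClassLabel.int2str (the variable `ds` is assumed from the usage sketch near
# BUILDER_CONFIGS):
#
#   labels = ds["train"].features["ner_tags"].feature
#   tags = [labels.int2str(i) for i in ds["train"][0]["ner_tags"]]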