Commit 632e487 (parent d8d5a96) by Dr. Jorge Abreu Vicente

First upload of dataset card.

Files changed (1): README.md (+183 −0)
---
license: apache-2.0
annotations_creators:
- expert-generated
language_creators:
- expert-generated
languages:
- en
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: BLURB (Biomedical Language Understanding and Reasoning Benchmark)
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- structure-prediction
- question-answering
- text-scoring
- text-classification
task_ids:
- named-entity-recognition
- parsing
- closed-domain-qa
- semantic-similarity-scoring
- text-scoring-other-sentence-similarity
- topic-classification
---

# Dataset Card for BLURB

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://microsoft.github.io/BLURB/index.html
- **Repository:**
- **Paper:** https://arxiv.org/pdf/2007.15779.pdf
- **Leaderboard:** https://microsoft.github.io/BLURB/leaderboard.html
- **Point of Contact:**

### Dataset Summary

BLURB is a collection of resources for biomedical natural language processing. In general domains, such as newswire and the Web, comprehensive benchmarks and leaderboards such as GLUE have greatly accelerated progress in open-domain NLP. In biomedicine, however, such resources are ostensibly scarce. In the past, there have been a plethora of shared tasks in biomedical NLP, such as BioCreative, BioNLP Shared Tasks, SemEval, and BioASQ, to name just a few. These efforts have played a significant role in fueling interest and progress by the research community, but they typically focus on individual tasks. The advent of neural language models, such as BERT, provides a unifying foundation to leverage transfer learning from unlabeled text to support a wide range of NLP applications. To accelerate progress in biomedical pretraining strategies and task-specific methods, it is thus imperative to create a broad-coverage benchmark encompassing diverse biomedical tasks.

Inspired by prior efforts toward this direction (e.g., BLUE), we have created BLURB (short for Biomedical Language Understanding and Reasoning Benchmark). BLURB comprises a comprehensive benchmark for PubMed-based biomedical NLP applications, as well as a leaderboard for tracking progress by the community. BLURB includes thirteen publicly available datasets in six diverse tasks. To avoid placing undue emphasis on tasks with many available datasets, such as named entity recognition (NER), BLURB reports the macro average across all tasks as the main score. The BLURB leaderboard is model-agnostic. Any system capable of producing the test predictions using the same training and development data can participate. The main goal of BLURB is to lower the entry barrier in biomedical NLP and help accelerate progress in this vitally important field for positive societal and human impact.

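The main score described above averages within each task first, so tasks with many datasets (such as NER) do not dominate. A minimal sketch of that aggregation, assuming per-dataset scores are already computed (the grouping helper and sample numbers below are illustrative, not the official BLURB scoring code):

```python
from collections import defaultdict

def blurb_main_score(dataset_scores, dataset_to_task):
    """Macro average across tasks: average the datasets within each task,
    then average the resulting per-task means."""
    by_task = defaultdict(list)
    for name, score in dataset_scores.items():
        by_task[dataset_to_task[name]].append(score)
    task_means = [sum(scores) / len(scores) for scores in by_task.values()]
    return sum(task_means) / len(task_means)

# Illustrative numbers only: two NER datasets average to 85.0,
# which is then averaged with the single Sentence Similarity score.
scores = {"BC5-chem": 90.0, "BC5-disease": 80.0, "BIOSSES": 92.0}
tasks = {"BC5-chem": "NER", "BC5-disease": "NER",
         "BIOSSES": "Sentence Similarity"}
print(blurb_main_score(scores, tasks))  # 88.5, not (90 + 80 + 92) / 3
```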
### Supported Tasks and Leaderboards

| **Dataset** | **Task** | **Train** | **Dev** | **Test** | **Evaluation Metrics** | **Added** |
|:------------:|:-----------------------:|:---------:|:-------:|:--------:|:----------------------:|-----------|
| BC5-chem | NER | 5203 | 5347 | 5385 | F1 entity-level | Yes |
| BC5-disease | NER | 4182 | 4244 | 4424 | F1 entity-level | Yes |
| NCBI-disease | NER | 5134 | 787 | 960 | F1 entity-level | Yes |
| BC2GM | NER | 15197 | 3061 | 6325 | F1 entity-level | Yes |
| JNLPBA | NER | 46750 | 4551 | 8662 | F1 entity-level | Yes |
| EBM PICO | PICO | 339167 | 85321 | 16364 | Macro F1 word-level | No |
| ChemProt | Relation Extraction | 18035 | 11268 | 15745 | Micro F1 | No |
| DDI | Relation Extraction | 25296 | 2496 | 5716 | Micro F1 | No |
| GAD | Relation Extraction | 4261 | 535 | 534 | Micro F1 | No |
| BIOSSES | Sentence Similarity | 64 | 16 | 20 | Pearson | No |
| HoC | Document Classification | 1295 | 186 | 371 | Average Micro F1 | No |
| PubMedQA | Question Answering | 450 | 50 | 500 | Accuracy | No |
| BioASQ | Question Answering | 670 | 75 | 140 | Accuracy | No |

Datasets used in the BLURB biomedical NLP benchmark. Note that the train, dev, and test splits listed here may not be exactly identical to those proposed in BLURB; this still needs to be verified.

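The NER datasets in the table are scored with entity-level F1: a predicted entity counts as a true positive only if both its span and its type exactly match a gold entity. A minimal sketch of that metric over sets of `(start, end, type)` tuples (illustrative, not the official BLURB evaluation script):

```python
def entity_f1(gold, pred):
    """Entity-level F1 over sets of (start, end, type) tuples.
    A prediction is counted only on an exact span-and-type match."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)  # exact matches
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# One of two predictions matches one of two gold entities:
# precision = recall = 0.5, so F1 = 0.5.
gold = {(0, 2, "Chemical"), (5, 6, "Disease")}
pred = {(0, 2, "Chemical"), (7, 8, "Disease")}
print(entity_f1(gold, pred))  # 0.5
```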
### Languages

English, from biomedical texts (primarily PubMed).

## Dataset Structure

### Data Instances

* **NER**
* **PICO**
* **Relation Extraction**
* **Sentence Similarity**
* **Document Classification**
* **Question Answering**

### Data Fields

* **NER**
  * `id`, `ner_tags`, `tokens`
* **PICO**
* **Relation Extraction**
* **Sentence Similarity**
* **Document Classification**
* **Question Answering**

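For the NER configurations, each example carries `id`, `tokens`, and `ner_tags`. Assuming the tags follow the usual BIO scheme (an assumption about this upload, not stated in the card), entity spans can be recovered from a tag sequence like this:

```python
def bio_to_spans(tags):
    """Convert a BIO tag sequence (e.g. ["B-Chemical", "I-Chemical", "O"])
    into (start, end_exclusive, type) spans. Assumes well-formed BIO input."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-") or (tag.startswith("I-") and etype != tag[2:]):
            if etype is not None:          # close the previous entity
                spans.append((start, i, etype))
            start, etype = i, tag[2:]      # open a new entity
        elif tag == "O":
            if etype is not None:          # close the current entity
                spans.append((start, i, etype))
            start, etype = None, None
    if etype is not None:                  # entity running to the end
        spans.append((start, len(tags), etype))
    return spans

tags = ["B-Chemical", "I-Chemical", "O", "B-Disease"]
print(bio_to_spans(tags))  # [(0, 2, 'Chemical'), (3, 4, 'Disease')]
```

The resulting spans are exactly the `(start, end, type)` tuples that entity-level F1 compares.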
### Data Splits

Split sizes are shown in the table under [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards).

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.