Datasets:
Dr. Jorge Abreu Vicente committed
Commit 2c1eb68 • Parent(s): a189d0c
Update README.md: Adding BIOSSES

README.md
CHANGED
@@ -27,7 +27,7 @@ task_ids:
 - closed-domain-qa
 - semantic-similarity-scoring
 - text-scoring-other-sentence-similrity
-- topic-
 ---
 
 # Dataset Card for BLURB
@@ -60,8 +60,7 @@ task_ids:
 ## Dataset Description
 
 - **Homepage: https://microsoft.github.io/BLURB/index.html**
-- **
-- **Paper: https://arxiv.org/pdf/2007.15779.pdf**
 - **Leaderboard: https://microsoft.github.io/BLURB/leaderboard.html**
 - **Point of Contact:**
 
@@ -71,6 +70,142 @@ BLURB is a collection of resources for biomedical natural language processing. I
 
 Inspired by prior efforts in this direction (e.g., BLUE), we have created BLURB (short for Biomedical Language Understanding and Reasoning Benchmark). BLURB comprises a comprehensive benchmark for PubMed-based biomedical NLP applications, as well as a leaderboard for tracking progress by the community. BLURB includes thirteen publicly available datasets in six diverse tasks. To avoid placing undue emphasis on tasks with many available datasets, such as named entity recognition (NER), BLURB reports the macro average across all tasks as the main score. The BLURB leaderboard is model-agnostic. Any system capable of producing the test predictions using the same training and development data can participate. The main goal of BLURB is to lower the entry barrier in biomedical NLP and help accelerate progress in this vitally important field for positive societal and human impact.
 
 ### Supported Tasks and Leaderboards
 
 | **Dataset** | **Task** | **Train** | **Dev** | **Test** | **Evaluation Metrics** | **Added** |
@@ -122,11 +257,13 @@ English from biomedical texts
 ```
 
 * **Sentence Similarity**
 ```json
-{
-
-
 ```
 * **Document Classification**
 ```json
 {
@@ -144,13 +281,17 @@ English from biomedical texts
 ### Data Fields
 
 * **NER**
-  * id
 * **PICO**
   * To be added
 * **Relation Extraction**
   * To be added
 * **Sentence Similarity**
-  *
 * **Document Classification**
   * To be added
 * **Question Answering**
@@ -164,19 +305,41 @@ Shown in the table of supported tasks.
 
 ### Curation Rationale
 
-
 
 ### Source Data
 
-All the datasets have been obtained and annotated by experts in the biomedical domain. Check the different citations for further details.
-
 [More Information Needed]
 
 ### Annotations
 
 All the datasets have been obtained and annotated by experts in the biomedical domain. Check the different citations for further details.
 
-
 
 ### Dataset Curators
 
@@ -184,7 +347,18 @@ All the datasets have been obtained and annotated by experts in the biomedical do
 
 ### Licensing Information
 
-
 
 ### Citation Information
 
@@ -263,10 +437,22 @@ To be checked in the different datasets.
 
 url = "https://aclanthology.org/W04-1213",
 pages = "73--78",
 }""",
-
 }
 ```
 ### Contributions
-This dataset has been uploaded and generated by Dr. Jorge Abreu Vicente.
-Thanks to [@GamalC](https://github.com/GamalC) for uploading the NER datasets to GitHub, from where I got them.
-I am not part of the team that generated BLURB. This dataset is intended to help researchers use the BLURB benchmark for biomedical NLP.
 - closed-domain-qa
 - semantic-similarity-scoring
 - text-scoring-other-sentence-similrity
+- topic-classification
 ---
 
 # Dataset Card for BLURB
 ## Dataset Description
 
 - **Homepage: https://microsoft.github.io/BLURB/index.html**
+- **Paper: [Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing](https://arxiv.org/pdf/2007.15779.pdf)**
 - **Leaderboard: https://microsoft.github.io/BLURB/leaderboard.html**
 - **Point of Contact:**
 
 
 Inspired by prior efforts in this direction (e.g., BLUE), we have created BLURB (short for Biomedical Language Understanding and Reasoning Benchmark). BLURB comprises a comprehensive benchmark for PubMed-based biomedical NLP applications, as well as a leaderboard for tracking progress by the community. BLURB includes thirteen publicly available datasets in six diverse tasks. To avoid placing undue emphasis on tasks with many available datasets, such as named entity recognition (NER), BLURB reports the macro average across all tasks as the main score. The BLURB leaderboard is model-agnostic. Any system capable of producing the test predictions using the same training and development data can participate. The main goal of BLURB is to lower the entry barrier in biomedical NLP and help accelerate progress in this vitally important field for positive societal and human impact.
 
+
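The macro averaging described above (average within each task first, then across tasks) can be sketched as follows; the dataset scores and task grouping below are invented for illustration:

```python
from collections import defaultdict

def blurb_macro_average(dataset_scores, dataset_to_task):
    """Average dataset scores within each task, then across tasks,
    so tasks with many datasets (e.g. NER) do not dominate."""
    per_task = defaultdict(list)
    for name, score in dataset_scores.items():
        per_task[dataset_to_task[name]].append(score)
    task_means = [sum(v) / len(v) for v in per_task.values()]
    return sum(task_means) / len(task_means)

# Hypothetical scores and task grouping, for illustration only.
scores = {"BC5-chem": 0.90, "BC5-disease": 0.80, "BIOSSES": 0.70}
tasks = {"BC5-chem": "NER", "BC5-disease": "NER", "BIOSSES": "Sentence Similarity"}
overall = blurb_macro_average(scores, tasks)
# overall ≈ 0.775: the mean of the NER mean (0.85) and BIOSSES (0.70)
```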
+#### BC5-chem
+The corpus consists of three separate sets of articles with diseases, chemicals and their relations annotated. The training (500 articles) and development (500 articles) sets were released to task participants in advance to support text-mining method development. The test set (500 articles) was used for final system performance evaluation.
+
+- **Homepage:** https://biocreative.bioinformatics.udel.edu/resources/corpora/biocreative-v-cdr-corpus
+- **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/)
+- **Paper:** [BioCreative V CDR task corpus: a resource for chemical disease relation extraction](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4860626/)
+
+#### BC5-disease
+The corpus consists of three separate sets of articles with diseases, chemicals and their relations annotated. The training (500 articles) and development (500 articles) sets were released to task participants in advance to support text-mining method development. The test set (500 articles) was used for final system performance evaluation.
+
+- **Homepage:** https://biocreative.bioinformatics.udel.edu/resources/corpora/biocreative-v-cdr-corpus
+- **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/)
+- **Paper:** [BioCreative V CDR task corpus: a resource for chemical disease relation extraction](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4860626/)
+
+#### BC2GM
+The BioCreative II Gene Mention task. The training corpus for the current task consists mainly of the training and testing corpora (text collections) from the BCI task, and the testing corpus for the current task consists of an additional 5,000 sentences that were held 'in reserve' from the previous task. In the current corpus, tokenization is not provided; instead participants are asked to identify a gene mention in a sentence by giving its start and end characters. As before, the training set consists of a set of sentences, and for each sentence a set of gene mentions (GENE annotations).
+
+- **Homepage:** https://biocreative.bioinformatics.udel.edu/tasks/biocreative-ii/task-1a-gene-mention-tagging/
+- **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/)
+- **Paper:** [Overview of BioCreative II gene mention recognition](https://link.springer.com/article/10.1186/gb-2008-9-s2-s2)
+
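The start/end-character annotation scheme described for BC2GM can be illustrated with an invented sentence and offsets:

```python
# BC2GM-style mention identification: a mention is reported by its start
# and end character positions in the sentence, not by token indices.
# The sentence and offsets below are invented for illustration.
sentence = "Expression of the BRCA1 gene was reduced."
start, end = 18, 23          # character offsets of the gene mention
mention = sentence[start:end]
# mention == "BRCA1"
```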
+#### NCBI Disease
+The NCBI disease corpus is fully annotated at the mention and concept level to serve as a research resource for the biomedical natural language processing community.
+
+Corpus characteristics:
+* 793 PubMed abstracts
+* 6,892 disease mentions
+* 790 unique disease concepts
+* Medical Subject Headings (MeSH®)
+* Online Mendelian Inheritance in Man (OMIM®)
+* 91% of the mentions map to a single disease concept
+* Divided into training, developing and testing sets
+
+Corpus annotation:
+* Fourteen annotators
+* Two annotators per document (randomly paired)
+* Three annotation phases
+* Checked for corpus-wide consistency of annotations
+
+- **Homepage:** https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/
+- **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/)
+- **Paper:** [NCBI disease corpus: a resource for disease name recognition and concept normalization](https://pubmed.ncbi.nlm.nih.gov/24393765/)
+
+#### JNLPBA
+The BioNLP / JNLPBA Shared Task 2004 involves the identification and classification of technical terms referring to concepts of interest to biologists in the domain of molecular biology. The task was organized by the GENIA Project based on the annotations of the GENIA Term corpus (version 3.02).
+
+Corpus format: the JNLPBA corpus is distributed in IOB format, with each line containing a single token and its tag, separated by a tab character. Sentences are separated by blank lines.
+
+- **Homepage:** http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004
+- **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/)
+- **Paper:** [Introduction to the Bio-entity Recognition Task at JNLPBA](https://aclanthology.org/W04-1213)
+
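A reader for the IOB layout described above might look like this (the sample tokens and tags are invented for illustration):

```python
# Minimal reader for the tab-separated IOB layout described above:
# one token and tag per line, blank lines delimit sentences.
def read_iob(lines):
    """Yield (tokens, tags) pairs, one per sentence."""
    tokens, tags = [], []
    for line in lines:
        line = line.rstrip("\n")
        if not line:                 # a blank line ends the sentence
            if tokens:
                yield tokens, tags
                tokens, tags = [], []
            continue
        token, tag = line.split("\t")
        tokens.append(token)
        tags.append(tag)
    if tokens:                       # flush a trailing sentence
        yield tokens, tags

sample = ["IL-2\tB-protein", "gene\tI-protein", "expression\tO", ""]
sentences = list(read_iob(sample))
# sentences == [(["IL-2", "gene", "expression"], ["B-protein", "I-protein", "O"])]
```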
+#### EBM PICO
+- **Homepage:**
+- **Repository:**
+- **Paper:**
+- **Leaderboard:**
+
+#### ChemProt
+- **Homepage:**
+- **Repository:**
+- **Paper:**
+
+#### DDI
+- **Homepage:**
+- **Repository:**
+- **Paper:**
+
+#### GAD
+- **Homepage:**
+- **Repository:**
+- **Paper:**
+
+#### BIOSSES
+BIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the [TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset](https://tac.nist.gov/2014/BiomedSumm/) containing articles from the biomedical domain. The sentence pairs in BIOSSES were selected from citing sentences, i.e. sentences that have a citation to a reference article.
+
+The sentence pairs were evaluated by five different human experts who judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). In the original paper, the mean of the scores assigned by the five human annotators was taken as the gold standard. The Pearson correlation between the gold standard scores and the scores estimated by the models was used as the evaluation metric. The strength of correlation can be assessed by the general guideline proposed by Evans (1996) as follows:
+- very strong: 0.80–1.00
+- strong: 0.60–0.79
+- moderate: 0.40–0.59
+- weak: 0.20–0.39
+- very weak: 0.00–0.19
+
+- **Homepage:** https://tabilab.cmpe.boun.edu.tr/BIOSSES/DataSet.html
+- **Repository:** https://github.com/gizemsogancioglu/biosses
+- **Paper:** [BIOSSES: a semantic sentence similarity estimation system for the biomedical domain](https://academic.oup.com/bioinformatics/article/33/14/i49/3953954)
+- **Point of Contact:** [Gizem Soğancıoğlu](gizemsogancioglu@gmail.com) and [Arzucan Özgür](gizemsogancioglu@gmail.com)
+
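The BIOSSES evaluation protocol above (mean of five annotator scores as the gold standard, Pearson correlation as the metric) can be sketched as follows; all scores below are invented:

```python
# Sketch of the protocol described above: average the five annotator
# scores per sentence pair, then report the Pearson correlation between
# those gold scores and model predictions. All numbers are invented.
import statistics

def pearson(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

annotator_scores = [[4, 4, 3, 4, 4], [1, 0, 1, 2, 1], [2, 3, 2, 2, 3]]
gold = [statistics.fmean(s) for s in annotator_scores]   # [3.8, 1.0, 2.4]
predictions = [3.5, 0.8, 2.6]
r = pearson(gold, predictions)
# r is close to 1.0 here, i.e. "very strong" on the Evans (1996) scale
```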
+#### HoC
+- **Homepage:**
+- **Repository:**
+- **Paper:**
+- **Leaderboard:**
+- **Point of Contact:**
+
+#### PubMedQA
+- **Homepage:**
+- **Repository:**
+- **Paper:**
+- **Leaderboard:**
+- **Point of Contact:**
+
+#### BioASQ
+- **Homepage:**
+- **Repository:**
+- **Paper:**
+- **Leaderboard:**
+- **Point of Contact:**
+
 ### Supported Tasks and Leaderboards
 
 | **Dataset** | **Task** | **Train** | **Dev** | **Test** | **Evaluation Metrics** | **Added** |
 ```
 
 * **Sentence Similarity**
+
+```json
+{"sentence 1": "Here, looking for agents that could specifically kill KRAS mutant cells, they found that knockdown of GATA2 was synthetically lethal with KRAS mutation",
+ "sentence 2": "Not surprisingly, GATA2 knockdown in KRAS mutant cells resulted in a striking reduction of active GTP-bound RHO proteins, including the downstream ROCK kinase",
+ "score": 2.2}
+```
+
 * **Document Classification**
 ```json
 {
 ### Data Fields
 
 * **NER**
+  * `id`: string
+  * `ner_tags`: Sequence[ClassLabel]
+  * `tokens`: Sequence[String]
 * **PICO**
   * To be added
 * **Relation Extraction**
   * To be added
 * **Sentence Similarity**
+  * `sentence 1`: string
+  * `sentence 2`: string
+  * `score`: float ranging from 0 (no relation) to 4 (equivalent)
 * **Document Classification**
   * To be added
 * **Question Answering**
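A minimal sketch of what one NER record with the fields above might look like; the sentence, tag indices, and label vocabulary are all invented, and the actual label set depends on the sub-dataset:

```python
# Hypothetical NER record matching the fields listed above.
example = {
    "id": "0",
    "tokens": ["Naloxone", "reverses", "the", "antihypertensive", "effect"],
    "ner_tags": [1, 0, 0, 0, 0],  # integer indices into a ClassLabel list
}

# An invented BIO label list for a chemical-entity config:
labels = ["O", "B-Chemical", "I-Chemical"]
decoded = [labels[i] for i in example["ner_tags"]]
# decoded == ["B-Chemical", "O", "O", "O", "O"]
```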
 
 ### Curation Rationale
 
+* BC5-chem
+* BC5-disease
+* BC2GM
+* JNLPBA
+* EBM PICO
+* ChemProt
+* DDI
+* GAD
+* BIOSSES
+* HoC
+* PubMedQA
+* BioASQ
 
 ### Source Data
 
 [More Information Needed]
 
 ### Annotations
 
 All the datasets have been obtained and annotated by experts in the biomedical domain. Check the different citations for further details.
 
+#### Annotation process
+
+* BC5-chem
+* BC5-disease
+* BC2GM
+* JNLPBA
+* EBM PICO
+* ChemProt
+* DDI
+* GAD
+* BIOSSES - The sentence pairs were evaluated by five different human experts who judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). The score range was described based on the guidelines of SemEval 2012 Task 6 on STS (Agirre et al., 2012). Besides the annotation instructions, example sentences from the biomedical literature were provided to the annotators for each of the similarity degrees.
+* HoC
+* PubMedQA
+* BioASQ
 
 ### Dataset Curators
 
 ### Licensing Information
 
+* BC5-chem
+* BC5-disease
+* BC2GM
+* JNLPBA
+* EBM PICO
+* ChemProt
+* DDI
+* GAD
+* BIOSSES - BIOSSES is made available under the terms of [the GNU General Public License v3.0](https://www.gnu.org/licenses/gpl-3.0.en.html).
+* HoC
+* PubMedQA
+* BioASQ
 
 ### Citation Information
 
 url = "https://aclanthology.org/W04-1213",
 pages = "73--78",
 }""",
+
+"BIOSSES": """@article{souganciouglu2017biosses,
+    title={BIOSSES: a semantic sentence similarity estimation system for the biomedical domain},
+    author={So{\u{g}}anc{\i}o{\u{g}}lu, Gizem and {\"O}zt{\"u}rk, Hakime and {\"O}zg{\"u}r, Arzucan},
+    journal={Bioinformatics},
+    volume={33},
+    number={14},
+    pages={i49--i58},
+    year={2017},
+    publisher={Oxford University Press}
+}"""
+
 }
 ```
### Contributions
 
+* This dataset has been uploaded and generated by Dr. Jorge Abreu Vicente.
+* Thanks to [@GamalC](https://github.com/GamalC) for uploading the NER datasets to GitHub, from where I got them.
+* I am not part of the team that generated BLURB. This dataset is intended to help researchers use the BLURB benchmark for biomedical NLP.
+* Thanks to [@bwang482](https://github.com/bwang482) for uploading the [BIOSSES dataset](https://github.com/bwang482/datasets/tree/master/datasets/biosses). We forked the [BIOSSES 🤗 dataset](https://huggingface.co/datasets/biosses) to add it to this BLURB benchmark.