Dr. Jorge Abreu Vicente committed
Commit
78e2cf4
1 Parent(s): 04f4ad8

pasted extra information

Files changed (1)
  1. README.md +118 -37
README.md CHANGED
@@ -109,20 +109,52 @@ English from biomedical texts
  }
  ```
  * **PICO**
+ ```json
+ {
+ 'TBD'
+ }
+ ```
  * **Relation Extraction**
+ ```json
+ {
+ 'TBD'
+ }
+ ```
+
  * **Sentence Similarity**
+ ```json
+ {
+ 'TBD'
+ }
+ ```
  * **Document Classification**
+ ```json
+ {
+ 'TBD'
+ }
+ ```
  * **Question Answering**
+ ```json
+ {
+ 'TBD'
+ }
+ ```
+

  ### Data Fields

  * **NER**
  * id, ner_tags, tokens
  * **PICO**
+ * To be added
  * **Relation Extraction**
+ * To be added
  * **Sentence Similarity**
+ * To be added
  * **Document Classification**
+ * To be added
  * **Question Answering**
+ * To be added

  ### Data Splits

@@ -132,60 +164,109 @@ Shown in the table of supported tasks.

  ### Curation Rationale

- [More Information Needed]
+ All the datasets have been obtained and annotated by experts in the biomedical domain. Check the different citations for further details.

  ### Source Data

- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
+ All the datasets have been obtained and annotated by experts in the biomedical domain. Check the different citations for further details.

  [More Information Needed]

  ### Annotations

- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
+ All the datasets have been obtained and annotated by experts in the biomedical domain. Check the different citations for further details.

  ## Additional Information

  ### Dataset Curators

- [More Information Needed]
+ All the datasets have been obtained and annotated by experts in the biomedical domain. Check the different citations for further details.

  ### Licensing Information

- [More Information Needed]
+ To be checked in the different datasets.

  ### Citation Information

- [More Information Needed]
-
+ ```python
+ {
+ "blurb": """\
+ @article{2022,
+ title={Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing},
+ volume={3},
+ ISSN={2637-8051},
+ url={http://dx.doi.org/10.1145/3458754},
+ DOI={10.1145/3458754},
+ number={1},
+ journal={ACM Transactions on Computing for Healthcare},
+ publisher={Association for Computing Machinery (ACM)},
+ author={Gu, Yu and Tinn, Robert and Cheng, Hao and Lucas, Michael and Usuyama, Naoto and Liu, Xiaodong and Naumann, Tristan and Gao, Jianfeng and Poon, Hoifung},
+ year={2022},
+ month={Jan},
+ pages={1–23}
+ }
+ """,
+ "BC5CDR-chem-IOB": """@article{article,
+ author = {Li, Jiao and Sun, Yueping and Johnson, Robin and Sciaky, Daniela and Wei, Chih-Hsuan and Leaman, Robert and Davis, Allan Peter and Mattingly, Carolyn and Wiegers, Thomas and Lu, Zhiyong},
+ year = {2016},
+ month = {05},
+ pages = {baw068},
+ title = {BioCreative V CDR task corpus: a resource for chemical disease relation extraction},
+ volume = {2016},
+ journal = {Database},
+ doi = {10.1093/database/baw068}
+ }""",
+ "BC5CDR-disease-IOB":"""@article{article,
+ author = {Li, Jiao and Sun, Yueping and Johnson, Robin and Sciaky, Daniela and Wei, Chih-Hsuan and Leaman, Robert and Davis, Allan Peter and Mattingly, Carolyn and Wiegers, Thomas and Lu, Zhiyong},
+ year = {2016},
+ month = {05},
+ pages = {baw068},
+ title = {BioCreative V CDR task corpus: a resource for chemical disease relation extraction},
+ volume = {2016},
+ journal = {Database},
+ doi = {10.1093/database/baw068}
+ }""",
+ "BC2GM-IOB":"""@article{article,
+ author = {Smith, Larry and Tanabe, Lorraine and Ando, Rie and Kuo, Cheng-Ju and Chung, I-Fang and Hsu, Chun-Nan and Lin, Yu-Shi and Klinger, Roman and Friedrich, Christoph and Ganchev, Kuzman and Torii, Manabu and Liu, Hongfang and Haddow, Barry and Struble, Craig and Povinelli, Richard and Vlachos, Andreas and Baumgartner Jr, William and Hunter, Lawrence and Carpenter, Bob and Wilbur, W.},
+ year = {2008},
+ month = {09},
+ pages = {S2},
+ title = {Overview of BioCreative II gene mention recognition},
+ volume = {9 Suppl 2},
+ journal = {Genome biology},
+ doi = {10.1186/gb-2008-9-s2-s2}
+ }""",
+ "NCBI-disease-IOB":"""@article{10.5555/2772763.2772800,
+ author = {Dogan, Rezarta Islamaj and Leaman, Robert and Lu, Zhiyong},
+ title = {NCBI Disease Corpus},
+ year = {2014},
+ issue_date = {February 2014},
+ publisher = {Elsevier Science},
+ address = {San Diego, CA, USA},
+ volume = {47},
+ number = {C},
+ issn = {1532-0464},
+ abstract = {Graphical abstractDisplay Omitted NCBI disease corpus is built as a gold-standard resource for disease recognition.793 PubMed abstracts are annotated with disease mentions and concepts (MeSH/OMIM).14 Annotators produced high consistency level and inter-annotator agreement.Normalization benchmark results demonstrate the utility of the corpus.The corpus is publicly available to the community. Information encoded in natural language in biomedical literature publications is only useful if efficient and reliable ways of accessing and analyzing that information are available. Natural language processing and text mining tools are therefore essential for extracting valuable information, however, the development of powerful, highly effective tools to automatically detect central biomedical concepts such as diseases is conditional on the availability of annotated corpora.This paper presents the disease name and concept annotations of the NCBI disease corpus, a collection of 793 PubMed abstracts fully annotated at the mention and concept level to serve as a research resource for the biomedical natural language processing community. Each PubMed abstract was manually annotated by two annotators with disease mentions and their corresponding concepts in Medical Subject Headings (MeSH ) or Online Mendelian Inheritance in Man (OMIM ). Manual curation was performed using PubTator, which allowed the use of pre-annotations as a pre-step to manual annotations. Fourteen annotators were randomly paired and differing annotations were discussed for reaching a consensus in two annotation phases. In this setting, a high inter-annotator agreement was observed. Finally, all results were checked against annotations of the rest of the corpus to assure corpus-wide consistency.The public release of the NCBI disease corpus contains 6892 disease mentions, which are mapped to 790 unique disease concepts. Of these, 88% link to a MeSH identifier, while the rest contain an OMIM identifier. We were able to link 91% of the mentions to a single disease concept, while the rest are described as a combination of concepts. In order to help researchers use the corpus to design and test disease identification methods, we have prepared the corpus as training, testing and development sets. To demonstrate its utility, we conducted a benchmarking experiment where we compared three different knowledge-based disease normalization methods with a best performance in F-measure of 63.7%. These results show that the NCBI disease corpus has the potential to significantly improve the state-of-the-art in disease name recognition and normalization research, by providing a high-quality gold standard thus enabling the development of machine-learning based approaches for such tasks.The NCBI disease corpus, guidelines and other associated resources are available at: http://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/.},
+ journal = {J. of Biomedical Informatics},
+ month = {feb},
+ pages = {1–10},
+ numpages = {10}}""",
+ "JNLPBA":"""@inproceedings{collier-kim-2004-introduction,
+ title = "Introduction to the Bio-entity Recognition Task at {JNLPBA}",
+ author = "Collier, Nigel and
+ Kim, Jin-Dong",
+ booktitle = "Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications ({NLPBA}/{B}io{NLP})",
+ month = aug # " 28th and 29th",
+ year = "2004",
+ address = "Geneva, Switzerland",
+ publisher = "COLING",
+ url = "https://aclanthology.org/W04-1213",
+ pages = "73--78",
+ }""",
+
+ }
+ ```
  ### Contributions
-
- Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
+ This dataset has been generated and uploaded by Dr. Jorge Abreu Vicente.
+ Thanks to [@GamalC](https://github.com/GamalC) for uploading the NER datasets to GitHub, from where I got them.
+ I am not part of the team that generated BLURB. This dataset is intended to help researchers use the BLURB benchmark for biomedical NLP.
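The Data Fields section in the diff above lists `id`, `ner_tags`, and `tokens` for the NER configurations. Below is a minimal sketch of how those fields might be inspected with the `datasets` library; the repo ID `EMBO/BLURB` and the config name `NCBI-disease-IOB` are assumptions inferred from this commit's author and the citation keys, not something the commit itself confirms.

```python
# Minimal sketch: inspect the NER fields (id, tokens, ner_tags) described in the card.
# Assumptions: the dataset is on the Hugging Face Hub as "EMBO/BLURB" with a config
# named "NCBI-disease-IOB"; adjust both identifiers if the card uses different ones.
from datasets import load_dataset

ds = load_dataset("EMBO/BLURB", "NCBI-disease-IOB", split="train")

example = ds[0]
print(example["id"])        # example identifier
print(example["tokens"])    # list of word-level tokens
print(example["ner_tags"])  # per-token tag ids (IOB-style, per the config names)

# If ner_tags is a Sequence(ClassLabel) feature, the readable label names can be
# recovered from the feature schema:
tag_feature = ds.features["ner_tags"].feature
print([tag_feature.int2str(t) for t in example["ner_tags"]])
```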