mvarma committed on
Commit
7d61ed8
1 Parent(s): d935a45

Update README.md

Files changed (1)
  1. README.md +16 -5
README.md CHANGED
@@ -166,11 +166,22 @@ Dataset licensed under CC BY 4.0.
 ### Citation Information
 
 ```
-@inproceedings{medwiki,
-  title={Cross-Domain Data Integration for Named Entity Disambiguation in Biomedical Text},
-  author={Maya Varma and Laurel Orr and Sen Wu and Megan Leszczynski and Xiao Ling and Christopher Ré},
-  year={2021},
-  booktitle={Findings of the Association for Computational Linguistics: EMNLP 2021}
+@inproceedings{varma-etal-2021-cross-domain,
+    title = "Cross-Domain Data Integration for Named Entity Disambiguation in Biomedical Text",
+    author = "Varma, Maya and
+      Orr, Laurel and
+      Wu, Sen and
+      Leszczynski, Megan and
+      Ling, Xiao and
+      R{\'e}, Christopher",
+    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
+    month = nov,
+    year = "2021",
+    address = "Punta Cana, Dominican Republic",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2021.findings-emnlp.388",
+    pages = "4566--4575",
+    abstract = "Named entity disambiguation (NED), which involves mapping textual mentions to structured entities, is particularly challenging in the medical domain due to the presence of rare entities. Existing approaches are limited by the presence of coarse-grained structural resources in biomedical knowledge bases as well as the use of training datasets that provide low coverage over uncommon resources. In this work, we address these issues by proposing a cross-domain data integration method that transfers structural knowledge from a general text knowledge base to the medical domain. We utilize our integration scheme to augment structural resources and generate a large biomedical NED dataset for pretraining. Our pretrained model with injected structural knowledge achieves state-of-the-art performance on two benchmark medical NED datasets: MedMentions and BC5CDR. Furthermore, we improve disambiguation of rare entities by up to 57 accuracy points.",
 }
 ```