---
license: other
---

Below, we provide access to the datasets used in and created for the EMNLP 2022 paper [Large Language Models are Few-Shot Clinical Information Extractors](https://arxiv.org/abs/2205.12689).

# Task #1: Clinical Sense Disambiguation

For Task #1, we use the original annotations from the [Clinical Acronym Sense Inventory (CASI) dataset](https://conservancy.umn.edu/handle/11299/137703), described in [their paper](https://academic.oup.com/jamia/article/21/2/299/723657).

As is common, due to noise in the label set, we do not evaluate on the entire dataset but only on a cleaner subset. For consistency, we use the subset defined by the filtering used in ["Zero-Shot Clinical Acronym Expansion via Latent Meaning Cells"](https://arxiv.org/pdf/2010.02010.pdf). This results in an evaluation subset of 18,164 examples covering 41 acronyms.
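
As a rough illustration of how this subset can be scored, here is a minimal sketch of per-acronym accuracy. It assumes a hypothetical `casi_subset.jsonl` file with `acronym`, `gold_expansion`, and `predicted_expansion` fields; the actual CASI release and the LMC filtering scripts use their own file layouts.

```python
# Per-acronym accuracy over the filtered evaluation subset.
# The file name and field names below are illustrative assumptions.
import json
from collections import defaultdict

correct = defaultdict(int)
total = defaultdict(int)

with open("casi_subset.jsonl") as f:
    for line in f:
        example = json.loads(line)
        total[example["acronym"]] += 1
        if example["predicted_expansion"] == example["gold_expansion"]:
            correct[example["acronym"]] += 1

for acronym in sorted(total):
    print(f"{acronym}: {correct[acronym] / total[acronym]:.3f}")
```
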
We additionally use the MIMIC Reverse Substitution dataset, as created in that same paper, with further instructions available in [their repository](https://github.com/griff4692/LMC).

# Task #2: Biomedical Evidence Extraction

For Task #2, we use the out-of-the-box high-level labels from the [PICO dataset](https://arxiv.org/abs/1806.04185), available publicly in the repository [here](https://github.com/bepnye/EBM-NLP).
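
Because the high-level labels mark each token as belonging to Participants, Interventions, Outcomes, or none, evaluation is naturally token-level. The following is a minimal sketch of token-level F1 under that assumption; the label names and sequences are hypothetical, and the EBM-NLP repository ships its own annotation files and evaluation code.

```python
# Token-level F1 for the high-level PICO labels.
# "PAR"/"INT"/"OUT"/"NONE" are illustrative names, not the repository's encoding.
def token_f1(gold, pred, label):
    tp = sum(g == label and p == label for g, p in zip(gold, pred))
    fp = sum(g != label and p == label for g, p in zip(gold, pred))
    fn = sum(g == label and p != label for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

gold = ["NONE", "PAR", "PAR", "NONE", "INT", "INT", "OUT"]
pred = ["NONE", "PAR", "NONE", "NONE", "INT", "INT", "INT"]
for label in ("PAR", "INT", "OUT"):
    print(label, round(token_f1(gold, pred, label), 3))
```
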
# Task #3: Coreference Resolution

For Task #3, we annotated 105 snippets from the [CASI dataset](https://conservancy.umn.edu/handle/11299/137703), 5 for development and 100 for test. Each example is labeled with a single pronoun and that pronoun's corresponding noun phrase antecedent (or antecedents).

The antecedent was annotated as the entire noun phrase (barring any dependent clauses); in cases where multiple equally valid antecedents were available, all were labeled (empirically, up to 2).

For the purposes of evaluation, we chose the antecedent with the highest overlap with each model's output (see the sketch at the end of this section).

To ensure nontrivial examples, the annotators excluded all examples of personal pronouns (e.g. “he”, “she”) if another person (and possible antecedent) had not yet been mentioned in the snippet.

Examples were skipped in annotation if the pronoun did not have an antecedent within the provided text snippet.
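
The overlap-based selection described above can be made concrete as follows; this is a minimal sketch assuming simple token overlap, and the function names are our own.

```python
# Among the labeled antecedents (up to 2 per example), pick the one
# with the highest token overlap with the model's output.
def token_overlap(a: str, b: str) -> int:
    return len(set(a.lower().split()) & set(b.lower().split()))

def best_antecedent(model_output: str, gold_antecedents: list[str]) -> str:
    return max(gold_antecedents, key=lambda ant: token_overlap(ant, model_output))

# Toy usage: the first antecedent shares more tokens with the output.
print(best_antecedent("the left knee", ["the patient's left knee", "the knee"]))
```
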
# Task #4: Medication Status Extraction

For Task #4, we annotated 105 snippets from the [CASI dataset](https://conservancy.umn.edu/handle/11299/137703), 5 for development and 100 for test. We wanted to create a dataset of challenging examples containing a changeover in treatment. From a sample, only ∼5% of CASI snippets contained such examples. To increase the density of these examples and speed up annotation, clinical notes were filtered with the following search terms: discont, adverse, side effect, switch, and dosage, leading to 1,445 snippets. We excluded snippets that were purely medication lists, requiring at least some narrative part to be present.
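
A minimal sketch of that search-term filter, assuming the snippets are available as plain strings (the example snippets below are invented):

```python
# Keep only snippets containing at least one of the search terms.
SEARCH_TERMS = ["discont", "adverse", "side effect", "switch", "dosage"]

def is_candidate(snippet: str) -> bool:
    text = snippet.lower()
    return any(term in text for term in SEARCH_TERMS)

snippets = [
    "Lisinopril was discontinued due to cough.",  # matches "discont"
    "Patient presents for routine follow-up.",    # no match
]
print([s for s in snippets if is_candidate(s)])
```
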
For each example, the annotators first extracted all medications. Guidelines excluded medication categories (e.g. “ACE-inhibitor”) if they referred to more specific drug names mentioned elsewhere (even if partially cut off in the snippet). For instance, only the antibiotic Levaquin was labeled in: “It is probably reasonable to treat with antibiotics [...]. I would agree with Levaquin alone [...]”. Guidelines also excluded electrolytes and intravenous fluids, as well as route and dosage information. In a second step, medications were assigned to one of three categories: active, discontinued, and neither.

The discontinued category also includes medications that are temporarily on hold. The category neither was assigned to all remaining medications (e.g. allergies, potential medications).

The medication lists for each example were serialized as JSON.
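
A hypothetical example of one serialized record is shown below; the exact field names in the released files may differ.

```python
# Illustrative medication-status record with the three categories.
import json

record = {
    "active": ["metformin"],
    "discontinued": ["lisinopril"],  # also covers drugs temporarily on hold
    "neither": ["penicillin"],       # e.g. a medication listed as an allergy
}
print(json.dumps(record))
```
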
# Task #5: Medication Attribute Extraction

For Task #5, we again annotated 105 snippets from the [CASI dataset](https://conservancy.umn.edu/handle/11299/137703), 5 for development and 100 for test.

Annotation guidelines were adapted from the 2009 i2b2 medication extraction challenge (Uzuner et al., 2010) with slight modifications.

We allowed medication attributes to have multiple spans and grouped together different mentions of the same drug (e.g. “Tylenol” and “Tylenol PM”) for the purpose of relation extraction.

The annotations for each example were serialized as JSON.
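
A hypothetical example of one serialized record, with multi-span attributes and grouped drug mentions (the schema shown is an assumption, not necessarily the released format):

```python
# Illustrative medication-attribute record for relation extraction.
import json

record = {
    "medication": ["Tylenol", "Tylenol PM"],    # grouped mentions of the same drug
    "dosage": ["500 mg"],
    "frequency": ["twice daily", "as needed"],  # attributes may have multiple spans
}
print(json.dumps(record))
```
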
# Citations

When using our annotations for Tasks #3-5, please cite our paper, as well as the papers from which the underlying text originated.

```
@inproceedings{agrawal2022large,
  title={Large Language Models are Few-Shot Clinical Information Extractors},
  author={Monica Agrawal and Stefan Hegselmann and Hunter Lang and Yoon Kim and David Sontag},
  booktitle={Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing},
  year={2022},
  url={https://arxiv.org/pdf/2205.12689.pdf}
}
```

```
@article{moon2014sense,
  title={A sense inventory for clinical abbreviations and acronyms created using clinical notes and medical dictionary resources},
  author={Moon, Sungrim and Pakhomov, Serguei and Liu, Nathan and Ryan, James O and Melton, Genevieve B},
  journal={Journal of the American Medical Informatics Association},
  volume={21},
  number={2},
  pages={299--307},
  year={2014},
  publisher={BMJ Publishing Group}
}
```

# Licensing

The annotations added by our team fall under the MIT license, but the CASI dataset itself is subject to its own licensing.
|