---
license: mit
language:
  - en
library_name: transformers
tags:
  - medical
  - healthcare
  - clinical
  - perioperative care
base_model: emilyalsentzer/Bio_ClinicalBERT
inference: false
---

# BJH-perioperative-notes-bioClinicalBERT

This clinical foundation model is intended to predict post-operative surgical outcomes from clinical notes taken during perioperative care. It was fine-tuned from the emilyalsentzer/Bio_ClinicalBERT model through a multi-task learning approach spanning the following six outcomes (an illustrative sketch of such a multi-task setup follows the list):

- Death within 30 days
- Deep vein thrombosis (DVT)
- Pulmonary embolism (PE)
- Pneumonia
- Acute kidney injury
- Delirium
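
The uploaded checkpoint is loaded with `AutoModel`, i.e., as an encoder without task-specific heads. For illustration only, the sketch below shows one way a multi-task setup over this encoder could be wired up; the head architecture and pooling choice are assumptions rather than the released training configuration (see the linked GitHub repository for the actual implementation).

```python
import torch
import torch.nn as nn
from transformers import AutoModel


class MultiTaskOutcomeModel(nn.Module):
    """Illustrative multi-task wrapper: one binary head per surgical outcome."""

    def __init__(self, encoder_name="cja5553/BJH-perioperative-notes-bioClinicalBERT", n_tasks=6):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # One binary logit per outcome (death, DVT, PE, pneumonia, ...)
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_tasks)])

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] token representation
        # Concatenate the per-outcome logits into a (batch, n_tasks) tensor
        return torch.cat([head(cls) for head in self.heads], dim=-1)
```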

## Dataset

We used 84,875 perioperative clinical notes from patients across the Barnes-Jewish Hospital (BJH) system in St. Louis, MO. The data have the following characteristics:

- Vocabulary size: 3,203
- Average words per clinical note: 8.9
- All notes are single-sentence clinical notes

## How to use the model

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cja5553/BJH-perioperative-notes-bioClinicalBERT")
model = AutoModel.from_pretrained("cja5553/BJH-perioperative-notes-bioClinicalBERT")
```
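
Since `AutoModel` returns the bare encoder, a common downstream use is extracting note-level embeddings. The snippet below is a minimal example; the sample note and the mean-pooling strategy are illustrative choices, not part of the original pipeline.

```python
# Illustrative example: embed a (fictional) perioperative note with the encoder.
import torch

note = "Patient scheduled for laparoscopic cholecystectomy under general anesthesia."
inputs = tokenizer(note, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool token embeddings over the attention mask to get one vector per note.
mask = inputs["attention_mask"].unsqueeze(-1).float()
embedding = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(embedding.shape)  # torch.Size([1, 768]) for a BERT-base encoder
```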

## Code

The code used to train the model is publicly available at: https://github.com/cja5553/LLMs_in_perioperative_care

## Citation

If you find this model useful, please cite the following paper:

```bibtex
@article{xue2024prescribing,
  author={Xue, Bing and Alba, Charles and Abraham, Joanna and Kannampallil, Thomas and King, Christopher and Avidan, Michael and Lu, Chenyang},
  title={Prescribing Large Language Models for Perioperative Care: What's The Right Dose for Pretrained Models?},
  year={2024}
}
```

## Questions?

Contact me at alba@wustl.edu