---
license: mit
language:
- en
library_name: transformers
tags:
- medical
- healthcare
- clinical
- perioperative care
base_model: emilyalsentzer/Bio_ClinicalBERT
inference: false
---
# BJH-perioperative-notes-bioClinicalBERT
This clinical foundation model is intended to predict post-operative surgical outcomes from clinical notes taken during perioperative care.
It was fine-tuned from the `emilyalsentzer/Bio_ClinicalBERT` model through a multi-task learning approach spanning the following six outcomes (a rough illustration of this setup is sketched after the list):

- Death within 30 days
- Deep vein thrombosis (DVT)
- Pulmonary embolism (PE)
- Pneumonia
- Acute kidney injury (AKI)
- Delirium
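
The released checkpoint contains the fine-tuned encoder; the task heads live in the training code linked below. As a rough illustration of a multi-task setup of this kind, the sketch below attaches one binary head per outcome to a shared encoder. The head names and the use of the `[CLS]` token for pooling are assumptions for illustration, not necessarily the authors' exact architecture.

```python
import torch.nn as nn
from transformers import AutoModel

# Illustrative outcome names; the actual training code may use different labels.
OUTCOMES = ["death_30d", "dvt", "pe", "pneumonia", "aki", "delirium"]

class MultiTaskPerioperativeModel(nn.Module):
    """Shared clinical-BERT encoder with one binary logit head per outcome."""

    def __init__(self, encoder_name="emilyalsentzer/Bio_ClinicalBERT"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # One linear head per outcome; the encoder weights are shared across tasks.
        self.heads = nn.ModuleDict({name: nn.Linear(hidden, 1) for name in OUTCOMES})

    def forward(self, input_ids, attention_mask):
        # Pool each note with its [CLS] token representation.
        cls = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state[:, 0]
        return {name: head(cls).squeeze(-1) for name, head in self.heads.items()}
```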

Also check out [`cja5553/BJH-perioperative-notes-bioGPT`](https://huggingface.co/cja5553/BJH-perioperative-notes-bioGPT), which is the bioGPT variant of our model!

## Dataset

We used 84,875 perioperative clinical notes from patients across the Barnes Jewish Healthcare (BJH) system in St. Louis, MO.
The dataset has the following characteristics:

- vocabulary size: 3,203
- average length per clinical note: 8.9 words
- every clinical note is a single sentence

## How to use the model

```python
from transformers import AutoTokenizer, AutoModel

# Load the fine-tuned tokenizer and encoder from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("cja5553/BJH-perioperative-notes-bioClinicalBERT")
model = AutoModel.from_pretrained("cja5553/BJH-perioperative-notes-bioClinicalBERT")
```
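
As a quick usage sketch, the loaded model can embed a note as follows (the example note below is made up):

```python
import torch

# A made-up example note; real inputs are short, single-sentence perioperative notes.
note = "Patient ambulating on post-operative day 1, no signs of DVT."
inputs = tokenizer(note, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings into one note-level vector.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # torch.Size([1, 768])
```

For outcome prediction, a classification head would need to be attached and trained on top of these embeddings (see the training code below).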

## Code
The code used to train the model is publicly available at: https://github.com/cja5553/LLMs_in_perioperative_care

## Citation
If you find this model useful, please cite the following paper:

```
@article{xue2024prescribing,
  author={Xue, Bing and Alba, Charles and Abraham, Joanna and Kannampallil, Thomas and King, Christopher and Avidan, Michael and Lu, Chenyang},
  title={Prescribing Large Language Models for Perioperative Care: What’s The Right Dose for Pretrained Models?},
  year={2024}
}
```

## Questions?
Contact me at alba@wustl.edu.