---
license: apache-2.0
tags:
- tabular-regression
- ehr
- transformer
- medical
model_name: audit-icu-llama-219_8M
---

# audit-icu-llama-219_8M

This repo contains the model weights for audit-icu-llama-219_8M, a tabular language model built on the Llama architecture for estimating the cross-entropy of Epic EHR audit log event sequences. The model was designed primarily for cross-entropy estimation, but it can also be used for generation.

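To make the cross-entropy evaluation concrete, here is a minimal sketch of how the mean cross-entropy of a tokenized event sequence can be computed from a causal LM's logits. It uses PyTorch with random logits standing in for real model outputs; the function name and tensor shapes are illustrative assumptions, not the repo's actual API:

```python
import torch
import torch.nn.functional as F

def sequence_cross_entropy(logits: torch.Tensor, input_ids: torch.Tensor) -> float:
    """Mean cross-entropy (in nats) of a token sequence under a causal LM.

    logits:    (seq_len, vocab_size) raw model outputs
    input_ids: (seq_len,) token ids for the same sequence
    """
    # Position t's logits predict token t+1, so shift labels by one.
    shift_logits = logits[:-1, :]
    shift_labels = input_ids[1:]
    return F.cross_entropy(shift_logits, shift_labels).item()

# Random stand-ins for a real model's outputs over an 8-event sequence.
torch.manual_seed(0)
vocab_size, seq_len = 100, 8
logits = torch.randn(seq_len, vocab_size)
ids = torch.randint(0, vocab_size, (seq_len,))
print(sequence_cross_entropy(logits, ids))
```

With real weights, `logits` would come from a forward pass of the model, and lower values indicate event sequences the model finds more predictable.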
The code to train and run inference with this model is available [here](https://github.com/bcwarner/audit-log-lm). More details about how to use this model can be found there.

# Model Details

More details can be found in the model card in Appendix B of our paper, available [here](https://arxiv.org/abs/2311.06401).

Please cite our paper if you use this model in your work:

```bibtex
@misc{warner2023autoregressive,
  title={Autoregressive Language Models For Estimating the Entropy of Epic EHR Audit Logs},
  author={Benjamin C. Warner and Thomas Kannampallil and Seunghwan Kim},
  year={2023},
  eprint={2311.06401},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
|