---
license: cc-by-nc-4.0
datasets:
- starmpcc/Asclepius-Synthetic-Clinical-Notes
language:
- en
pipeline_tag: text-generation
tags:
- medical
---

# Model Card for Asclepius-Llama2-7B-Pretraining-Only

<!-- Provide a quick summary of what the model is/does. -->

This is a pre-trained Llama2-7B model, trained with causal language modeling on [Asclepius-Synthetic-Clinical-Notes](https://huggingface.co/datasets/starmpcc/Asclepius-Synthetic-Clinical-Notes).

The [Asclepius-Llama2-7B](https://huggingface.co/starmpcc/Asclepius-Llama2-7B) model was developed from this checkpoint by applying instruction fine-tuning.

## UPDATE
### 2024.01.10
- Asclepius-R, the variant of Asclepius trained on MIMIC-III discharge summaries, is now available on [PhysioNet](https://physionet.org/content/asclepius-r/1.0.0/)!

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->
- **Model type:** Clinical LLM (Large Language Model)
- **Language(s) (NLP):** English
- **License:** CC-BY-NC-SA 4.0
- **Finetuned from model:** Llama2-7B

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/starmpcc/Asclepius
- **Paper:** https://arxiv.org/abs/2309.00237
- **Data:** https://huggingface.co/datasets/starmpcc/Asclepius-Synthetic-Clinical-Notes

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
This model was trained with causal language modeling on [Asclepius-Synthetic-Clinical-Notes](https://huggingface.co/datasets/starmpcc/Asclepius-Synthetic-Clinical-Notes).

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

ONLY USE THIS MODEL FOR RESEARCH PURPOSES!

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and the pre-trained checkpoint
tokenizer = AutoTokenizer.from_pretrained("starmpcc/Asclepius-Llama2-7B-Pretraining-Only", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("starmpcc/Asclepius-Llama2-7B-Pretraining-Only")

# Tokenize a prompt and generate a continuation with default settings
model_input = "YOUR INPUT"
input_ids = tokenizer(model_input, return_tensors="pt").input_ids
output = model.generate(input_ids)
print(tokenizer.decode(output[0]))
```

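Called without generation arguments, `generate` uses the model's default settings and produces only a short continuation. The sketch below shows one way to run the same checkpoint on a GPU with explicit decoding settings; the half-precision dtype and the sampling parameters are illustrative choices, not values recommended by the authors.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("starmpcc/Asclepius-Llama2-7B-Pretraining-Only", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    "starmpcc/Asclepius-Llama2-7B-Pretraining-Only",
    torch_dtype=torch.float16,  # half precision so the 7B model fits on a single GPU (illustrative choice)
).to(device)

model_input = "YOUR INPUT"
input_ids = tokenizer(model_input, return_tensors="pt").input_ids.to(device)

# Illustrative sampling settings, not the authors' recommended values.
output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
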
## Training Details

### Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

https://huggingface.co/datasets/starmpcc/Asclepius-Synthetic-Clinical-Notes

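To inspect the pre-training corpus, it can be loaded with the `datasets` library; the `train` split name is an assumption here, and the column names are whatever the dataset card defines.

```python
from datasets import load_dataset

# Load the synthetic clinical notes used for pre-training and look at their structure.
notes = load_dataset("starmpcc/Asclepius-Synthetic-Clinical-Notes", split="train")
print(notes)     # number of rows and column names
print(notes[0])  # first example
```
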
### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- Causal language modeling on synthetic clinical notes.

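For orientation, the sketch below outlines a generic causal language modeling (next-token prediction) setup with the Hugging Face `Trainer`. It is not the authors' training script: the text column name, sequence length, and hyperparameters are placeholders, and the actual configuration follows the Stanford Alpaca setup noted below; see the repository linked above for the official code.

```python
# Generic causal LM (next-token prediction) sketch; NOT the authors' exact script.
# The text column name ("note"), max length, and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base, use_fast=False)
tokenizer.pad_token = tokenizer.eos_token  # Llama2 defines no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

raw = load_dataset("starmpcc/Asclepius-Synthetic-Clinical-Notes", split="train")

def tokenize(batch):
    return tokenizer(batch["note"], truncation=True, max_length=2048)

tokenized = raw.map(tokenize, batched=True, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama2-7b-clinical-clm",
        num_train_epochs=1,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        bf16=True,
    ),
    train_dataset=tokenized,
    # mlm=False makes the collator build labels for standard next-token prediction.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```
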
#### Training Hyperparameters

- We followed the configuration used in [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca).

#### Speeds, Sizes, Times

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
- Pre-Training (1 epoch): 1h 58m with 8x A100 80G

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**
```
@misc{kweon2023publicly,
      title={Publicly Shareable Clinical Large Language Model Built on Synthetic Clinical Notes},
      author={Sunjun Kweon and Junu Kim and Jiyoun Kim and Sujeong Im and Eunbyeol Cho and Seongsu Bae and Jungwoo Oh and Gyubok Lee and Jong Hak Moon and Seng Chan You and Seungjin Baek and Chang Hoon Han and Yoon Bin Jung and Yohan Jo and Edward Choi},
      year={2023},
      eprint={2309.00237},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```