This is a pre-trained Llama2-7B model, trained with causal language modeling on Asclepius-Synthetic-Clinical-Notes. The Asclepius-Llama2-7B model was developed from this checkpoint by applying instruction fine-tuning.
ONLY USE THIS MODEL FOR RESEARCH PURPOSES!

The checkpoint can be loaded and run with the Hugging Face transformers library:
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("starmpcc/Asclepius-Llama2-7B-Pretraining-Only", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("starmpcc/Asclepius-Llama2-7B-Pretraining-Only")
model_input = "YOUR INPUT"
input_ids = tokenizer(model_input, return_tensors="pt").input_ids
output = model.generate(input_ids)
print(tokenizer.decode(output[0]))
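Since this is a full 7B-parameter checkpoint, float32 inference on CPU can be slow, and generate() defaults to a short output length. The following is a minimal sketch of half-precision GPU inference with a longer generation budget; the torch_dtype/device_map settings and the generation parameters (max_new_tokens, do_sample) are illustrative assumptions rather than settings prescribed by the authors, and device_map="auto" requires the accelerate package.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("starmpcc/Asclepius-Llama2-7B-Pretraining-Only", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    "starmpcc/Asclepius-Llama2-7B-Pretraining-Only",
    torch_dtype=torch.float16,  # half precision so the 7B model fits on a single GPU (assumption)
    device_map="auto",          # requires the accelerate package
)

model_input = "YOUR INPUT"
input_ids = tokenizer(model_input, return_tensors="pt").input_ids.to(model.device)

# Raise max_new_tokens for longer continuations; greedy decoding shown here.
output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))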
Training data: https://huggingface.co/datasets/starmpcc/Asclepius-Synthetic-Clinical-Notes
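For reference, the pre-training corpus can be pulled directly with the Hugging Face datasets library. This is a sketch only; the split name and record fields are assumptions, so check the dataset card for the actual schema.

from datasets import load_dataset

# "train" split assumed; see the dataset card for the actual splits and fields.
notes = load_dataset("starmpcc/Asclepius-Synthetic-Clinical-Notes", split="train")
print(notes[0])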
BibTeX:
@misc{kweon2023publicly,
title={Publicly Shareable Clinical Large Language Model Built on Synthetic Clinical Notes},
author={Sunjun Kweon and Junu Kim and Jiyoun Kim and Sujeong Im and Eunbyeol Cho and Seongsu Bae and Jungwoo Oh and Gyubok Lee and Jong Hak Moon and Seng Chan You and Seungjin Baek and Chang Hoon Han and Yoon Bin Jung and Yohan Jo and Edward Choi},
year={2023},
eprint={2309.00237},
archivePrefix={arXiv},
primaryClass={cs.CL}
}