Llama3-8B-SFT-SyntheticMedical-bnb-4bit

Model Details

Model Description

Llama3-8B-SFT-SyntheticMedical-bnb-4bit was trained using SFT via QLoRA on 4,336 rows of medical data to enhance its abilities in the domain of anatomy.

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
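The card states that training used SFT via QLoRA, i.e. LoRA adapters trained on top of a 4-bit quantized base model. A minimal sketch of such a setup is shown below; the LoRA rank, alpha, dropout, and target modules are illustrative assumptions, not the actual recipe used to train this model:

```python
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization config, as typically used for QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="float16",
)

# LoRA adapter config -- hyperparameters here are placeholders,
# not the values used for this model
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```

These two configs would then be passed to `AutoModelForCausalLM.from_pretrained` and a PEFT-aware trainer, respectively.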

Using the model with transformers

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name_or_path = "thesven/Llama3-8B-SFT-SyntheticMedical-bnb-4bit"

# BitsAndBytesConfig for loading the model in 4-bit precision
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="float16",
)

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    device_map="auto",
    trust_remote_code=False,
    revision="main",
    quantization_config=bnb_config
)
model.config.pad_token_id = model.config.eos_token_id  # reuse EOS as the padding token

prompt_template = '''
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are an expert in the field of anatomy, help explain its topics to me.<|eot_id|><|start_header_id|>user<|end_header_id|>

What is the function of the hamstring?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
'''

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.1, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)

# Decode the generated tokens back into text
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
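The Llama 3 chat format used in prompt_template above can also be assembled programmatically. Below is a minimal pure-Python sketch; the helper name is illustrative, and in practice the tokenizer's `apply_chat_template` method is the canonical way to build this string:

```python
# Hand-rolled helper that reproduces the Llama 3 chat layout shown above:
# <|begin_of_text|>, then system/user turns, ending with an open
# assistant header for the model to complete.
def build_llama3_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are an expert in the field of anatomy, help explain its topics to me.",
    "What is the function of the hamstring?",
)
print(prompt)
```

The resulting string can be passed to the tokenizer exactly as prompt_template is above.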
Format: Safetensors
Model size: 8.03B params
Tensor type: FP16
