# jackie-1.0

## LLAMA-3 Inference

Load the model and tokenizer in 4-bit with unsloth's `FastLanguageModel`:

```python
from unsloth import FastLanguageModel

max_seq_length = 2048   # context length used at inference
dtype = None            # None = auto-detect (float16 / bfloat16 depending on the GPU)
load_in_4bit = True     # 4-bit quantization to reduce memory usage

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "satpalsr/jackie-1.0",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
FastLanguageModel.for_inference(model)  # switch the model into inference mode
```
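
If you prefer not to depend on unsloth at inference time, the checkpoint can typically also be loaded with plain `transformers` and `bitsandbytes`. This is a minimal sketch, assuming the repository contains full merged weights rather than only LoRA adapters (not verified here):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Hypothetical alternative load path; assumes merged weights are published in the repo.
bnb_config = BitsAndBytesConfig(
    load_in_4bit = True,
    bnb_4bit_quant_type = "nf4",
    bnb_4bit_compute_dtype = torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "satpalsr/jackie-1.0",
    quantization_config = bnb_config,
    device_map = "auto",
)
tokenizer = AutoTokenizer.from_pretrained("satpalsr/jackie-1.0")
```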

Build the prompt with the Llama-3 chat template:

```python
messages = [
    {"role": "user", "content": "What is your name?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True,   # append the assistant header so the model answers
    return_tensors = "pt",
).to("cuda")
```
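
To inspect the exact prompt string the template produces (it should match the format shown in the example below), render it without tokenizing. A small sanity-check sketch:

```python
# Render the chat template as text instead of token ids.
prompt_text = tokenizer.apply_chat_template(
    messages,
    tokenize = False,
    add_generation_prompt = True,
)
print(prompt_text)
```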

Stream the response token by token:

```python
from transformers import TextStreamer

text_streamer = TextStreamer(tokenizer)  # prints tokens to stdout as they are generated
_ = model.generate(input_ids = inputs, streamer = text_streamer, max_new_tokens = 128, use_cache = True)
```
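
If you want the reply as a string instead of streamed output, generate without a streamer and decode only the newly produced tokens. A minimal sketch using the same `model`, `tokenizer`, and `inputs` as above:

```python
# Generate, then strip the prompt tokens before decoding.
outputs = model.generate(input_ids = inputs, max_new_tokens = 128, use_cache = True)
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens = True)
print(response)
```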

Example (prompt and response in the Llama-3 chat format):

```
<|begin_of_text|><|start_header_id|>user<|end_header_id|>

What is your name?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

My name is Jackie, I'm an AI, and I'm here to talk with you. How are you?<|eot_id|>
```
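
To continue the conversation, append the assistant's reply and the next user turn to `messages`, then re-apply the chat template. A sketch (the follow-up question is illustrative, not from the model card):

```python
messages += [
    {"role": "assistant", "content": "My name is Jackie, I'm an AI, and I'm here to talk with you. How are you?"},
    {"role": "user", "content": "Nice to meet you, Jackie. What can you help me with?"},  # hypothetical follow-up
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True,
    return_tensors = "pt",
).to("cuda")
_ = model.generate(input_ids = inputs, streamer = text_streamer, max_new_tokens = 128, use_cache = True)
```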