---
library_name: peft
base_model: Qwen/Qwen1.5-1.8B-Chat
---

LoRA SFT fine-tuned version of Qwen/Qwen1.5-1.8B-Chat.

### Usage

```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

# Load the base model, then attach the LoRA adapter on top of it
config = PeftConfig.from_pretrained("eren23/finetune_test_qwen15-1-8b-sft")
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, "eren23/finetune_test_qwen15-1-8b-sft")
model = model.to(device)

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-1.8B-Chat")

# Build the chat prompt with the model's chat template
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

# Generate, then strip the prompt tokens from the output
generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512,
)
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

### Framework versions

- PEFT 0.8.2
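
### Half-precision loading (optional)

On smaller GPUs, the base model can be loaded in half precision before the adapter is attached. A minimal variant of the loading step above, assuming a CUDA device and that the `accelerate` package is installed for `device_map="auto"`:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base weights in fp16 and let transformers place them on the GPU;
# the LoRA adapter is then attached exactly as in the example above.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-1.8B-Chat",
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)
model = PeftModel.from_pretrained(model, "eren23/finetune_test_qwen15-1-8b-sft")
```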
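
### Merging the adapter (optional)

To serve the model without PEFT at inference time, the LoRA weights can be folded into the base model. A minimal sketch using PEFT's `merge_and_unload()`; the output directory below is a hypothetical example, not part of this repository:

```python
# merge_and_unload() returns a plain transformers model with the LoRA
# deltas merged into the base weights and the PEFT wrapper removed.
merged_model = model.merge_and_unload()

# "./qwen15-1-8b-sft-merged" is an arbitrary local path.
merged_model.save_pretrained("./qwen15-1-8b-sft-merged")
tokenizer.save_pretrained("./qwen15-1-8b-sft-merged")
```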