Load the LoRA model

```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "lucas0/empath-llama-7b"
config = PeftConfig.from_pretrained(peft_model_id)

# Load the base model in 8-bit across available devices
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    return_dict=True,
    load_in_8bit=True,
    device_map="auto",
)
# Load the tokenizer that ships with the base model
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Attach the LoRA adapter weights to the base model
model = PeftModel.from_pretrained(model, peft_model_id)
```
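For context on what the adapter above contributes: LoRA leaves the base weights frozen and adds a low-rank update, W' = W + (alpha / r) * B @ A. A minimal standalone sketch of that update (dimensions and values here are illustrative, not taken from this model):

```python
import torch

# Toy LoRA update: A is (r x in), B is (out x r), both much smaller than W.
torch.manual_seed(0)
in_dim, out_dim, r, alpha = 8, 4, 2, 16
W = torch.randn(out_dim, in_dim)   # frozen base weight
A = torch.randn(r, in_dim)         # trainable low-rank factor
B = torch.zeros(out_dim, r)        # B is initialized to zero, so the update starts at 0
W_adapted = W + (alpha / r) * (B @ A)
print(bool(torch.allclose(W_adapted, W)))  # True: with B at zero, W' equals W
```

Because only A and B are trained, the adapter checkpoint loaded by `PeftModel.from_pretrained` is a small fraction of the size of the 7B base model.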

Training procedure

Framework versions

  • PEFT 0.5.0