---
library_name: peft
---

# llama-2-13b-GEC

## Load the LoRA model

```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the adapter configuration, then the base model in 8-bit precision
peft_model_id = "MRNH/llama-2-13b-GEC"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    return_dict=True,
    load_in_8bit=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Attach the LoRA adapter weights to the base model
model = PeftModel.from_pretrained(model, peft_model_id)
```

## Training procedure

### Framework versions

- PEFT 0.5.0