LoRA Loading Error (Peft Config)

#1
by deleted - opened
deleted

😏
gottem

deleted changed discussion title from wats llygma to LoRA Loading Error (Peft Config)
deleted

Alright, I changed the name now that I finally got around to testing this. I get a LoRA loading error:

```
peft\peft_model.py", line 163, in from_pretrained
    config = PEFT_TYPE_TO_CONFIG_MAPPING[
KeyError: None
```

Doesn't happen with VicUnLocked LoRA or SuperCOT 13b LoRA.
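
If I'm reading it right, that `KeyError: None` means peft is looking up the adapter's `peft_type` in `PEFT_TYPE_TO_CONFIG_MAPPING`, and the `adapter_config.json` has no `peft_type` field at all, so the lookup runs with `None`. A quick local workaround is to patch the field in by hand; a minimal sketch, assuming the LoRA is already downloaded somewhere on disk (the path here is made up):

```python
import json
from pathlib import Path

# Hypothetical local path; point this at wherever the adapter actually
# lives on disk (e.g. text-generation-webui's loras/ directory).
adapter_dir = Path("loras/this-lora")

config_path = adapter_dir / "adapter_config.json"
config = json.loads(config_path.read_text())

# peft indexes PEFT_TYPE_TO_CONFIG_MAPPING with the config's peft_type;
# if the JSON has no "peft_type" key, the lookup happens with None and
# raises KeyError: None. Writing the field back in works around it.
if config.get("peft_type") is None:
    config["peft_type"] = "LORA"
    config_path.write_text(json.dumps(config, indent=2))
```

After that, `PeftModel.from_pretrained` should at least get past the config lookup.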

I'm marking as private until I get this worked out.

I think I got it worked out

deleted

Same error still. I'd say have someone else test it just in case it's my ooba or PEFT version or whatever. Entirely possible since I've been messing with some stuff lately.

Honestly, same. Everything has been kind of wonky since I started trying to load the MPT models. I can't even merge properly right now, but I'm still fiddling with it.

deleted

4bit LoRA loading has always been a little inconsistent for me. I'll try to remember to test it with koboldcpp later; I think they added LoRA support. Testing LoRAs is always such a pain. It'd help if base llama weren't so competent, since it makes it hard to tell whether the adapter is actually doing anything.
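
One thing that makes testing a bit less painful: run the same greedy prompt through the bare base model and again with the adapter attached, then diff the outputs. A rough sketch, with placeholder model ids (and it assumes accelerate is installed for `device_map`):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholder ids; substitute the real base model and this LoRA.
base_id = "huggyllama/llama-13b"
lora_id = "someuser/some-lora"

tok = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
prompt = tok(
    "### Instruction:\nWho are you?\n\n### Response:\n", return_tensors="pt"
).to(model.device)

# Generate once with the bare base model (greedy, so runs are comparable)...
base_out = model.generate(**prompt, max_new_tokens=64, do_sample=False)

# ...then attach the LoRA and generate again with identical settings.
model = PeftModel.from_pretrained(model, lora_id)
lora_out = model.generate(**prompt, max_new_tokens=64, do_sample=False)

print(tok.decode(base_out[0], skip_special_tokens=True))
print(tok.decode(lora_out[0], skip_special_tokens=True))
```

If the two outputs are identical, the adapter probably isn't being applied at all.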

I'm fairly certain there was an issue with the config; I just noticed and fixed it. I'm trying to merge it with another model right now, so I was seeing the same error, but it's fixed now.
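
For what it's worth, the merge flow on my end is roughly this (model ids are placeholders, not the actual repos):

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Placeholder ids; substitute the real base model and this LoRA.
base_id = "huggyllama/llama-13b"
lora_id = "someuser/some-lora"

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

# Attach the adapter, then fold its weights into the base so the result
# saves and loads as a plain transformers checkpoint (no peft needed after).
model = PeftModel.from_pretrained(base, lora_id)
merged = model.merge_and_unload()

merged.save_pretrained("merged-model")
```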

deleted

Awesome, I'll wait for the merge then, since 4bit LoRA loading hurts me inside.
