ExLlamav2_HF load model -> ValueError: ## Could not find lm_head.* in model

#1
by HendrikW80 - opened

Hey, I'm trying to load your model into my up-to-date Oobabooga install with flash-attn and CUDA 12.1 using ExLlamav2_HF. It fails with the following error message:

ValueError: ## Could not find lm_head.* in model

Am I doing anything wrong? Or is there something awry in your model?

ExLlamav2 without _HF works.

Just download any Yi 34B tokenizer.model and put it in the model folder; then it will work.

Oh, I forgot! Take the period out of the model folder name. It's a known bug in ooba.

I know it sounds weird, but just try it. I use the ooba HF loader as well.
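The two steps above can be sketched as a small shell snippet. This is a minimal sketch, with assumptions: the models live under a `models/` directory, the folder name `Some-Model-v1.0-GPTQ` is a hypothetical example containing a period, and the `01-ai/Yi-34B` repo is just one example source for a tokenizer.model (the download step needs network access, so it is left commented out).

```shell
# Assumed layout: text-generation-webui keeps models under models/<folder>.
MODELS_DIR="models"
MODEL_DIR="Some-Model-v1.0-GPTQ"   # hypothetical folder name with a period

mkdir -p "$MODELS_DIR/$MODEL_DIR"

# Step 1: drop any Yi 34B tokenizer.model into the model folder.
# (Example repo; any Yi 34B repo that ships a tokenizer.model should do.)
# huggingface-cli download 01-ai/Yi-34B tokenizer.model --local-dir "$MODELS_DIR/$MODEL_DIR"

# Step 2: strip the period from the folder name (works around the ooba bug).
FIXED_DIR="${MODEL_DIR//./}"
mv "$MODELS_DIR/$MODEL_DIR" "$MODELS_DIR/$FIXED_DIR"
echo "$FIXED_DIR"
```

After this, pointing the ExLlamav2_HF loader at the renamed folder should pick up the tokenizer.model alongside the weights.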
