Error while loading quantized version of imp v1 using AutoModelForCausalLM

#11
by aryachakraborty - opened

I am trying to fine-tune imp-v1 on my custom dataset using QLoRA. When I pass quantization_config to AutoModelForCausalLM.from_pretrained, it throws the error below:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "MILVLG/imp-v1-3b",
    torch_dtype=torch.float16,
    device_map="auto",
    quantization_config=bnb_config,
    trust_remote_code=True,
)
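
For reference, a fuller QLoRA-style quantization config usually looks like the sketch below. This is a minimal sketch of the standard BitsAndBytesConfig options (the usual NF4/double-quant defaults), not anything imp-specific, and it is not the cause of the error above:

import torch
from transformers import BitsAndBytesConfig

# Typical QLoRA 4-bit settings: NF4 quantization, double quantization,
# and fp16 compute. These are generic BitsAndBytesConfig options.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)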

Error:

ValueError: The model class you are passing has a config_class attribute that is not consistent with the config class you passed (model has <class 'transformers_modules.MILVLG.imp-v1-3b.989a1a37f3d03c479767aedcb8eae88853d85b77.configuration_imp.ImpConfig'> and you passed <class 'transformers_modules.MILVLG.imp-v1-3b.989a1a37f3d03c479767aedcb8eae88853d85b77.configuration_im.ImpConfig'>. Fix one of those so they match!

Any idea how to fix this? Some YouTubers use a model-specific "ConditionalGeneration" class to load the quantized model. Is there any such class for imp-v1?
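
One thing worth noting: the two class paths in the error differ only in the module name (configuration_imp vs configuration_im), which looks like a stale or truncated copy of the remote code in the local dynamic-module cache rather than a problem with the quantization config itself. A possible workaround (an assumption, not a confirmed fix, and assuming the default Hugging Face cache location) is to clear the cached module and let transformers re-download it:

import shutil
from pathlib import Path

# Default cache location for trust_remote_code modules; adjust this path
# if HF_HOME or HF_MODULES_CACHE is set. The "MILVLG" subdirectory matches
# the module path shown in the error message.
cache_dir = Path.home() / ".cache" / "huggingface" / "modules" / "transformers_modules" / "MILVLG"
if cache_dir.exists():
    shutil.rmtree(cache_dir)
# Then re-run from_pretrained(...) so the custom code is fetched fresh.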