4090 test with OobaBooga (In Windows) fails to load the model
Any ideas on how to fix this?
I get this error:
Traceback (most recent call last):
  File "C:\Users\cleverest\oobabooga_windows\text-generation-webui\server.py", line 68, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "C:\Users\cleverest\oobabooga_windows\text-generation-webui\modules\models.py", line 95, in load_model
    output = load_func(model_name)
  File "C:\Users\cleverest\oobabooga_windows\text-generation-webui\modules\models.py", line 275, in GPTQ_loader
    model = modules.GPTQ_loader.load_quantized(model_name)
  File "C:\Users\cleverest\oobabooga_windows\text-generation-webui\modules\GPTQ_loader.py", line 177, in load_quantized
    model = load_quant(str(path_to_model), str(pt_path), shared.args.wbits, shared.args.groupsize, kernel_switch_threshold=threshold)
  File "C:\Users\cleverest\oobabooga_windows\text-generation-webui\modules\GPTQ_loader.py", line 84, in _load_quant
    model.load_state_dict(safe_load(checkpoint), strict=False)
  File "C:\Users\cleverest\oobabooga_windows\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LlamaForCausalLM:
    size mismatch for model.layers.0.self_attn.k_proj.qzeros: copying a param with shape torch.Size([1, 832]) from checkpoint, the shape in current model is torch.Size([52, 832]).
    size mismatch for model.layers.0.self_attn.k_proj.scales: copying a param with shape torch.Size([1, 6656]) from checkpoint, the shape in current model is torch.Size([52, 6656]).
    size mismatch for model.layers.0.self_attn.o_proj.qzeros: copying a param with shape torch.Size([1, 832]) from checkpoint, the shape in current model is torch.Size([52, 832]).
    size mismatch for model.layers.0.self_attn.o_proj.scales: copying a param with shape torch.Size([1, 6656]) from checkpoint, the shape in current model is torch.Size([52, 6656]).
    size mismatch for model.layers.0.self_attn.q_proj.qzeros: copying a param with shape torch.Size([1, 832]) from checkpoint, the shape in current model is torch.Size([52, 832]).
    size mismatch for
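For what it's worth, the mismatched first dimension of `qzeros`/`scales` most likely encodes the number of quantization groups in the GPTQ layer: with a 6656-wide projection and group size 128 you get 6656 // 128 = 52 rows, while per-channel quantization (group size -1) gives a single row. A tiny sanity check (the helper name is my own, not from the webui code):

```python
def gptq_group_rows(in_features: int, groupsize: int) -> int:
    """Rows expected in a GPTQ layer's qzeros/scales tensors."""
    if groupsize == -1:
        # One group spanning the whole input dimension (per-channel).
        return 1
    return in_features // groupsize

# 52 rows matches torch.Size([52, 6656]) from the error above.
print(gptq_group_rows(6656, 128))
# 1 row matches torch.Size([1, 6656]) on the other side of the mismatch.
print(gptq_group_rows(6656, -1))
```

So the two shapes in the error correspond to two different group-size settings, which is why the loader's `--groupsize` argument has to match the checkpoint.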
I've seen similar errors when the group size wasn't set correctly; make sure it's set to 128. (The [52, 6656] shape in the error is consistent with 6656 / 128 = 52 quantization groups.)
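As a sketch, assuming the GPTQ loader flags text-generation-webui used at the time (`--wbits`, `--groupsize`), launching with the group size forced to 128 would look like this (`MODEL_NAME` is a placeholder for the model folder name):

```shell
# --groupsize must match the value the checkpoint was quantized with;
# for a *-4bit-128g checkpoint that is 128.
python server.py --model MODEL_NAME --wbits 4 --groupsize 128
```

The same values can also be set per-model in the webui's Model tab instead of on the command line.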
Ah the name didn't have 128 in it so I didn't even bother... I left home, I'll try it later, thanks
The filename does though! :) Hope it works.
Yup, that fixed it. Thanks! Is there any chance of getting a non-128g quantization of this model at some point?