Unable to create exl2 quant for this model

#1
by Samvanity - opened

Hi,

I'm trying to create an exl2 quant for this model but I ran into this error:


| Measured: model.layers.0 (Attention) |
| Duration: 8.47 seconds |
| Completed step: 1/67 |
| Avg time / step (rolling): 8.47 seconds |
| Estimated remaining time: 9min 18sec |
| Last checkpoint layer: None |

-- Layer: model.layers.0 (MoE MLP)
!! Warning: w2.2 has less than 10% calibration for 19/19 rows
!! Warning: w2.3 has less than 10% calibration for 19/19 rows
Traceback (most recent call last):
  File "E:\ai\Exl2\exllamav2\convert.py", line 219, in
    status = measure_quant(job, save_job, model) # capturing the graceful exits
  File "E:\ai\pinokio\bin\miniconda\envs\exl2\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\ai\Exl2\exllamav2\conversion\measure.py", line 538, in measure_quant
    m = measure_moe_mlp(module, hidden_states, target_states, quantizers, cache, attn_params)
  File "E:\ai\Exl2\exllamav2\conversion\measure.py", line 273, in measure_moe_mlp
    quantizers[f"w2.{i}"].prepare()
  File "E:\ai\Exl2\exllamav2\conversion\adaptivegptq.py", line 225, in prepare
    self.hessian /= self.num_batches
TypeError: unsupported operand type(s) for /=: 'NoneType' and 'int'

I was able to quantize Buttercup-4x7B-V2-laser and others, but not this one, and I'm not sure what I have to do to quantize it. I'm using the latest exllamav2 (v0.0.18).

Thanks!

So, I asked the creator of exllamav2, and he says it's because some of the model's experts aren't activating at all during quantization; it might be better to use a different calibration dataset for the quantization.
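That explanation fits the traceback: the GPTQ-style quantizer accumulates a Hessian per expert from calibration batches, so an expert the router never selects still has `hessian = None` when `prepare()` divides it by the batch count. Here is a minimal sketch of that failure mode (a hypothetical `ExpertQuantizer` class, not exllamav2's actual code, with the Hessian reduced to a scalar for brevity):

```python
# Sketch of why an expert that never activates during calibration produces
# "TypeError: unsupported operand type(s) for /=: 'NoneType' and 'int'".
# ExpertQuantizer here is a hypothetical stand-in, not exllamav2's class.

class ExpertQuantizer:
    def __init__(self):
        self.hessian = None       # allocated lazily on the first batch
        self.num_batches = 0

    def add_batch(self, x):
        # x stands in for one calibration activation routed to this expert
        if self.hessian is None:
            self.hessian = 0.0
        self.hessian += x * x     # accumulate the (scalar) Hessian term
        self.num_batches += 1

    def prepare(self):
        # If the router never sent a batch here, self.hessian is still None
        # and this reproduces the reported TypeError.
        self.hessian /= self.num_batches


never_activated = ExpertQuantizer()
try:
    never_activated.prepare()
except TypeError as e:
    print("unactivated expert:", e)

activated = ExpertQuantizer()
activated.add_batch(2.0)
activated.add_batch(4.0)
activated.prepare()
print("prepared hessian:", activated.hessian)   # (4 + 16) / 2 = 10.0
```

So the fix isn't in the conversion code: a calibration dataset whose text actually routes tokens through every expert avoids the `None` Hessian in the first place.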
