Quantized version?

#1 by drakosfire - opened

Hello! I downloaded your model and have been playing with it. I quite like it and wanted to share it with someone who would run it locally on a 3080 (10 GB of VRAM). I was thinking about quantizing it myself, and learned that calibration-based quantization methods like GPTQ need a calibration dataset, ideally one resembling the data the model was fine-tuned on. Would you be open to quantizing and sharing the model, or else sharing the dataset you fine-tuned it on?
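For reference, here's a minimal sketch of what that calibration step looks like with AutoGPTQ. The model id, output directory, and calibration texts are placeholders I made up, not the author's actual model or data:

```python
# Hedged sketch: 4-bit GPTQ quantization with AutoGPTQ.
# "author/model-name" and the calibration texts are placeholders.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "author/model-name"  # hypothetical id for the fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(model_id)

# GPTQ needs calibration samples, not the full training set; a few hundred
# texts resembling the fine-tuning data are usually enough.
texts = ["A prompt resembling the fine-tuning data."]  # placeholder
examples = [
    {"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]}
    for enc in (tokenizer(t, return_tensors="pt") for t in texts)
]

quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)
model = AutoGPTQForCausalLM.from_pretrained(model_id, quantize_config)
model.quantize(examples)  # run calibration and quantize the weights
model.save_quantized("model-name-GPTQ")
```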

TheBloke has quantized this model in GGUF, AWQ, and GPTQ formats.
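If you go the GGUF route, a 4-bit quant should fit comfortably in 10 GB. A minimal sketch with llama-cpp-python, assuming a hypothetical quant file name:

```python
# Hedged sketch: running a GGUF quant with llama-cpp-python.
# The file name is a placeholder; use whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="model-name.Q4_K_M.gguf",  # placeholder file name
    n_gpu_layers=-1,  # offload all layers; lower this if VRAM runs out
    n_ctx=4096,
)
out = llm("Hello! Can you introduce yourself?", max_tokens=64)
print(out["choices"][0]["text"])
```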

I've also quantized this model in EXL2, if you prefer that format, at 3.0bpw and 4.0bpw. I've been running the 4.0bpw EXL2 quant on my 12 GB 3080 via oobabooga with the ExLlamav2_HF loader.
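If you'd rather script it than go through oobabooga, this is roughly what loading an EXL2 quant looks like with the exllamav2 Python API, based on its example code. The model directory is a placeholder, and the exact API may differ between versions:

```python
# Hedged sketch: loading a 4.0bpw EXL2 quant with exllamav2.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "model-name-4.0bpw-exl2"  # placeholder local directory
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate cache as layers load
model.load_autosplit(cache)               # split across available VRAM
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

print(generator.generate_simple("Hello! Can you introduce yourself?", settings, 64))
```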
