
Psydestroyer 20B GGUFs

GGUF Quants of my Psydestroyer-20B merge.

Currently I only have a Q4_K_M, as that is the highest quant (that I've tested) of a 20B frankenmerge which can mostly fit on my 3060 12GB and run at a decent speed (~8-9 t/s). If there's demand I can create more; otherwise you'll probably have to download and quantize the model yourself.
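If you just want to run the Q4_K_M on a 12GB card, here's a minimal sketch using llama-cpp-python with GPU offload. The file name and generation settings below are assumptions for illustration, not part of this repo; lower n_gpu_layers if you run out of VRAM.

```python
# Minimal sketch: load the Q4_K_M GGUF with llama-cpp-python and offload to GPU.
# The model_path below is a hypothetical local file name.
from llama_cpp import Llama

llm = Llama(
    model_path="psydestroyer-20b.Q4_K_M.gguf",  # hypothetical file name
    n_gpu_layers=-1,  # offload as many layers as fit; reduce if VRAM runs out
    n_ctx=4096,       # context length; adjust to your memory budget
)

out = llm("Write a short story about a haunted lighthouse.", max_tokens=256)
print(out["choices"][0]["text"])
```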
