
iMatrix GGUFs for Coomand-R 35b v1 - https://huggingface.co/TheDrummer/Coomand-R-35B-v1

These quants were made on 2024-05-05, after the BPE tokenizer fix for command-r(+) was merged into llama.cpp (LCPP). They are the correct, fixed quants; any quants made before that date for any command-r-based model should be discarded.

The iMatrix was generated with Kalomaze's groups_merged.txt.

The FP16 file is split with PeaZip. Recombine it with PeaZip, 7-Zip, or a simple concatenate command.
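For the concatenate route, a minimal sketch of joining split parts back into one file (the part names below are illustrative placeholders, not the actual filenames in this repo; substitute the parts you downloaded, in order):

```shell
# Simulate two split parts (stand-ins for the real .001/.002 files).
printf 'part1-' > model.f16.gguf.001
printf 'part2'  > model.f16.gguf.002

# Concatenate the parts in numeric order into a single file.
cat model.f16.gguf.001 model.f16.gguf.002 > model.f16.gguf

cat model.f16.gguf   # prints: part1-part2
```

On Windows, the equivalent is `copy /b model.f16.gguf.001 + model.f16.gguf.002 model.f16.gguf`.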
