GGUFs for Fireplace 34b - https://huggingface.co/ValiantLabs/Fireplace-34b
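A single quant can also be fetched programmatically. Below is a minimal sketch using huggingface_hub; the GGUF filename is a placeholder to replace with one of the files actually listed in this repo:

```python
# Minimal sketch: download one quant from this repo with huggingface_hub.
# The filename is a placeholder; substitute a GGUF file listed in the repo.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="MarsupialAI/Fireplace-34b_iMatrix_GGUF",
    filename="fireplace-34b.Q4_K_M.gguf",  # placeholder filename
)
print(local_path)
```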

iMatrix GGUFs were generated using Kalomaze's semi-random groups_merged.txt as the calibration data.

Files larger than 50 GB have been split with PeaZip. Recombine them with PeaZip, 7-Zip, or a simple concatenation command (see the sketch below).
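
If you would rather not use an archiver, here is a minimal Python sketch that reassembles a split GGUF by concatenating its parts in order, assuming the parts carry numeric suffixes (.001, .002, ...) as a raw PeaZip split typically produces; the filenames are placeholders:

```python
# Minimal sketch: rebuild a split GGUF by concatenating its parts in order.
# Filenames are placeholders; adjust the glob pattern to the files you downloaded.
import glob

parts = sorted(glob.glob("fireplace-34b.Q8_0.gguf.*"))  # .001, .002, ... sort correctly
with open("fireplace-34b.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            # Copy in chunks so multi-GB parts are never held in memory at once.
            while chunk := f.read(1 << 20):
                out.write(chunk)
```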

Model size: 34.4B params (llama architecture)