Orpo-Llama-3.2-1B-15k-Q4_K_M-GGUF

A Q4_K_M GGUF quantized version of AdamLucek/Orpo-Llama-3.2-1B-15k. See the original model card for further details.

Format: GGUF
Model size: 1.24B params
Architecture: llama
Quantization: 4-bit (Q4_K_M)
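As a usage sketch, a GGUF file like this one can typically be run with llama.cpp, which can fetch the file directly from the Hub. The `--hf-file` value below is an assumption; check the repository listing for the actual GGUF filename.

```shell
# Run the quantized model with llama.cpp's CLI, pulling the GGUF from the Hub.
# NOTE: the filename passed to --hf-file is a guess, not confirmed by the card.
llama-cli \
  --hf-repo AdamLucek/Orpo-Llama-3.2-1B-15k-Q4_K_M-GGUF \
  --hf-file orpo-llama-3.2-1b-15k-q4_k_m.gguf \
  -p "Why is the sky blue?" -n 128
```

The same file works with any GGUF-compatible runtime (e.g. llama-cpp-python or Ollama) once downloaded locally.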


