These are GGUF quantized versions of Karko/Proctora.

The importance matrix was trained for 1M tokens (2,000 batches of 512 tokens) using wiki.train.raw.
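For reference, below is a minimal sketch of how an importance matrix of this kind can be produced with llama.cpp's `imatrix` tool. The FP16 source filename and output path are placeholders (they are not files shipped in this repo), and the exact flags may differ between llama.cpp versions.

```python
import subprocess

# Sketch only: assumes a local llama.cpp build with the `imatrix` example
# and an FP16 GGUF conversion of Karko/Proctora (filename is hypothetical).
subprocess.run(
    [
        "./imatrix",
        "-m", "proctora-f16.gguf",  # hypothetical FP16 conversion of the source model
        "-f", "wiki.train.raw",     # calibration text, as described above
        "-o", "imatrix.dat",        # resulting importance matrix
        "-c", "512",                # 512-token chunks
        "--chunks", "2000",         # 2,000 chunks of 512 tokens ≈ 1M tokens
    ],
    check=True,
)
```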

The IQ2_XXS and IQ2_XS quantizations require llama.cpp version 147b17a or later.
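As a rough usage sketch, the quantized files can be loaded with llama-cpp-python (which bundles llama.cpp). The repository id and filename below are placeholders, not the actual names in this repo; check the Files and versions tab for the real ones, and make sure your llama-cpp-python build bundles a llama.cpp recent enough for the IQ2 quants.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder repo id and filename -- replace with this repo's actual values.
model_path = hf_hub_download(
    repo_id="your-namespace/Proctora-GGUF",
    filename="Proctora-IQ2_XS.gguf",
)

# Load the quantized model and run a short completion.
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Write a short haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```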

Model details: 12.9B parameters, llama architecture, GGUF format. Quantizations are provided at 2-bit, 3-bit, 4-bit, 5-bit, and 6-bit precision.
