
GGUF importance matrix (imatrix) quants for https://huggingface.co/abideen/AlphaMonarch-laser
The importance matrix was trained for ~50K tokens (105 batches of 512 tokens) using a general-purpose imatrix calibration dataset.
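
For reference, below is a minimal sketch of the standard llama.cpp imatrix workflow (assuming the current `llama-imatrix` / `llama-quantize` tool names). The file names, calibration text, and quant type in the sketch are illustrative assumptions, not the exact settings used for this repository.

```python
# Hypothetical sketch of the llama.cpp imatrix quantization workflow.
# All file names below are assumptions, not taken from this repository.
import subprocess

F16_MODEL = "alphamonarch-laser-f16.gguf"   # full-precision GGUF export (assumed name)
CALIB_FILE = "calibration.txt"              # general-purpose calibration text (assumed)
IMATRIX_FILE = "imatrix.dat"                # importance statistics output

# 1. Collect importance statistics over the calibration text.
subprocess.run(
    ["llama-imatrix", "-m", F16_MODEL, "-f", CALIB_FILE, "-o", IMATRIX_FILE],
    check=True,
)

# 2. Quantize, letting the importance matrix guide the low-bit rounding.
subprocess.run(
    ["llama-quantize", "--imatrix", IMATRIX_FILE,
     F16_MODEL, "alphamonarch-laser-iq4_xs.gguf", "IQ4_XS"],
    check=True,
)
```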

AlphaMonarch-laser is a DPO fine-tune of mlabonne/NeuralMonarch-7B on the argilla/OpenHermes2.5-dpo-binarized-alpha preference dataset, and it achieves better performance than mlabonne/AlphaMonarch-7B by using LaserQLoRA. We fine-tuned only half of the projections, yet obtained better results than the version released by Maxime Labonne. The model was trained for 1080 steps.

| Layers | Context | Template |
| --- | --- | --- |
| 32 | 32768 | `[INST] {prompt} [/INST]`<br>`{response}` |
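
A minimal inference sketch with `llama-cpp-python`, applying the prompt template and context length from the table above. The quant file name and sampling settings are assumptions; any of the GGUF files in this repository can be used.

```python
# Minimal sketch, assuming the llama-cpp-python bindings are installed.
from llama_cpp import Llama

llm = Llama(
    model_path="alphamonarch-laser-iq4_xs.gguf",  # assumed file name; substitute any quant
    n_ctx=32768,                                  # full context length from the table
)

# Build the prompt with the [INST] template shown above.
prompt = "[INST] Explain what an importance matrix does in GGUF quantization. [/INST]\n"

out = llm(prompt, max_tokens=256)
print(out["choices"][0]["text"])
```
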
GGUF details:
- Model size: 7.24B params
- Architecture: llama
- Quantization levels: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
