
These are GGUF quantized versions of notstoic/Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES.

The importance matrix was trained for 1M tokens (2,000 batches of 512 tokens) using wiki.train.raw.
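
For reference, the sketch below shows roughly how such an importance matrix is produced and then applied during quantization with llama.cpp's imatrix and quantize tools. The card only states the calibration settings, so the binary locations, the F16 source filename, and the output names are assumptions and may differ between llama.cpp builds.

```python
import subprocess

# Calibration pass: ~1M tokens = 2,000 chunks of 512 tokens from wiki.train.raw.
# Assumes the llama.cpp binaries are built in the current directory; the GGUF
# filenames are placeholders.
subprocess.run([
    "./imatrix",
    "-m", "Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES.F16.gguf",  # hypothetical F16 source
    "-f", "wiki.train.raw",   # calibration text
    "-o", "imatrix.dat",      # resulting importance matrix
    "--chunks", "2000",       # 2,000 batches of 512 tokens each
], check=True)

# The matrix is then passed to the quantizer so low-bit quants (e.g. IQ2_XS)
# can weight rounding error by activation importance.
subprocess.run([
    "./quantize",
    "--imatrix", "imatrix.dat",
    "Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES.F16.gguf",
    "Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES.IQ2_XS.gguf",
    "IQ2_XS",
], check=True)
```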

The IQ2_XXS and IQ2_XS versions require llama.cpp commit 147b17a or later.
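
To check that one of the low-bit quants loads and generates, it can be run through llama-cpp-python (or the llama.cpp CLI directly). A minimal sketch, assuming llama-cpp-python is installed and the IQ2_XS file has been downloaded locally; the filename is illustrative:

```python
from llama_cpp import Llama

# Load the IQ2_XS quant; the path is a placeholder for wherever the GGUF file
# was downloaded. n_gpu_layers=-1 offloads all layers on a GPU-enabled build
# of llama-cpp-python (use 0 for CPU-only).
llm = Llama(
    model_path="Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES.IQ2_XS.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,
)

output = llm(
    "Explain what an importance matrix is in one sentence.",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```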

Model size: 46.7B parameters. Architecture: llama.

Quantized variants are provided at 2-bit, 3-bit, 4-bit, 5-bit, and 6-bit precision.