
These are GGUF quantized versions of FoxEngineAi/Mega-Destroyer-8x7B.

The importance matrix was trained on roughly 100K tokens (200 batches of 512 tokens) from wiki.train.raw.

Model files larger than 50GB are split into smaller parts. To reassemble one, concatenate the parts with the cat command (on Windows, PowerShell provides a cat alias): cat foo-Q6_K.gguf.* > foo-Q6_K.gguf
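The reassembly step can be sketched end to end on dummy parts; the demo-Q6_K names below are placeholders, not files in this repository:

```shell
# Create two dummy split parts to stand in for real
# <model>-<quant>.gguf.* files downloaded from the repo.
printf 'part-a' > demo-Q6_K.gguf.aa
printf 'part-b' > demo-Q6_K.gguf.ab

# The shell expands the glob in lexical order, so the pieces
# are concatenated in the same order they were split.
cat demo-Q6_K.gguf.* > demo-Q6_K.gguf

cat demo-Q6_K.gguf   # prints: part-apart-b
```

Once the merged .gguf loads correctly, the part files can be deleted to reclaim disk space.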

Format: GGUF
Model size: 46.7B params
Architecture: llama
Quantizations: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit