Some of my own quants:

  • MythoMax-L2-13b_Q5_1.gguf
  • MythoMax-L2-13b_Q5_1_4K.gguf
  • MythoMax-L2-13b_Q5_1_8K.gguf
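
A minimal loading sketch with llama-cpp-python (not part of the original card). It assumes the `_4K`/`_8K` suffixes indicate the context length each file targets and uses the Alpaca-style prompt formatting generally used with MythoMax; file name and sampling parameters are placeholders taken from the list above.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Load the 8K variant; n_ctx=8192 is an assumption based on the
# file's "_8K" suffix. Check the GGUF metadata if unsure.
llm = Llama(
    model_path="MythoMax-L2-13b_Q5_1_8K.gguf",
    n_ctx=8192,
    n_gpu_layers=-1,  # offload all layers to GPU when available
)

out = llm(
    "### Instruction:\nWrite a short greeting.\n\n### Response:\n",
    max_tokens=128,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```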

Source: Gryphe
Source Model: MythoMax-L2-13b

GGUF details:

  • Model size: 13B params
  • Architecture: llama
  • Quantization: 5-bit (Q5_1)