Some of my own quants:

  • MythoMax-L2-33b-V2_Q2_K.gguf
  • MythoMax-L2-33b-V2_Q3_K_M.gguf
  • MythoMax-L2-33b-V2_Q4_1.gguf
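
These are standard GGUF files, so any llama.cpp-based runtime can load them. Below is a minimal sketch using llama-cpp-python, assuming the Q4_1 file sits in the working directory; the Alpaca-style prompt template is an assumption, so check the source model card for the exact format.

```python
# Minimal sketch: running one of the GGUF quants with llama-cpp-python.
# Assumes the Q4_1 file from the list above is in the current directory;
# the prompt template is an Alpaca-style guess, not confirmed by this card.
from llama_cpp import Llama

llm = Llama(
    model_path="MythoMax-L2-33b-V2_Q4_1.gguf",  # any of the listed quants loads the same way
    n_ctx=4096,        # context length; lower it if you run out of memory
    n_gpu_layers=-1,   # offload all layers to the GPU; set 0 for CPU-only
)

out = llm(
    "### Instruction:\nWrite a two-sentence description of a lighthouse.\n\n### Response:\n",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```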

Source: grimpep

Source model: grimpep/MythoMax-L2-33b-V2 (itself a merge)

GGUF details:

  • Model size: 32.5B params
  • Architecture: llama
  • Quantization levels: 2-bit (Q2_K), 3-bit (Q3_K_M), 4-bit (Q4_1)
