
Upgraded version: the fourth-generation model of the ZYH-LLM-Qwen2.5 series has been released!

Model name: ZYH-LLM-Qwen2.5-14B🎉

This model's performance is absolutely phenomenal, surpassing all my previously released merged models! 🚀

To highlight its uniqueness, I've created a brand new series separate from all previous releases! 💫

📅 Release date: February 5, 2025

🧩 Merging methods: della and sce

🛠️ Models used:

  • Qwen2.5-Coder-14B
  • Qwen2.5-Coder-14B-Instruct
  • Qwen2.5-14B-Instruct
  • Qwen2.5-14B-Instruct-1M
  • Qwen2.5-14B
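
For context, della and sce are merge algorithms implemented in the mergekit toolkit. The exact recipe for this model has not been published, so the following is only a sketch of what a della merge of two of the listed models could look like; the model pairing, weights, densities, and parameter values are illustrative assumptions, not the author's actual configuration:

```yaml
# Hypothetical mergekit config (della step) — values are illustrative only.
merge_method: della
base_model: Qwen/Qwen2.5-14B
models:
  - model: Qwen/Qwen2.5-14B-Instruct
    parameters:
      weight: 1.0      # assumed contribution of this model's task vector
      density: 0.5     # assumed fraction of delta parameters kept
  - model: Qwen/Qwen2.5-Coder-14B-Instruct
    parameters:
      weight: 1.0
      density: 0.5
dtype: bfloat16
```

A config like this would typically be run with `mergekit-yaml config.yaml ./output-dir`; an sce merge stage follows the same overall structure with `merge_method: sce`.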

✨ Coming soon: GGUF format version!

📥 Don't miss out on trying it - stay tuned for download! 🚨💻

Downloads last month: 300

Format: GGUF
Model size: 14.8B params
Architecture: qwen2
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit


Model tree for YOYO-AI/ZYH-LLM-Qwen2.5-14B-GGUF
Base model: Qwen/Qwen2.5-14B (quantized: this model)