Overview

The Qwen team developed and released Qwen2.5-Coder, a state-of-the-art language model family tailored for code generation, understanding, and completion tasks. Built on a dense Transformer architecture and offered in multiple parameter sizes (the 14B and 32B variants are listed below), Qwen2.5-Coder is designed to assist developers and researchers by generating high-quality code snippets, explaining algorithms, and completing coding prompts accurately. The model was trained on a carefully filtered blend of code datasets spanning many programming languages and frameworks to ensure precision and relevance. It leverages advanced fine-tuning techniques and rigorous safety measures to optimize instruction adherence and deliver reliable, contextually aware outputs. Released in November 2024, Qwen2.5-Coder offers an effective tool for software development, academic research, and programming education.

Variants

No  Variant             Cortex CLI command
1   Qwen2.5-Coder-14B   cortex run qwen2.5-coder:14b
2   Qwen2.5-Coder-32B   cortex run qwen2.5-coder:32b

Use it with Jan (UI)

  1. Install Jan using Quickstart
  2. In the Jan model hub, search for and download:
    cortexso/qwen2.5-coder
    

Use it with Cortex (CLI)

  1. Install Cortex using Quickstart
  2. Run the model with the command:
    cortex run qwen2.5-coder
    
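Once the model is running, it can also be queried programmatically. The sketch below is a minimal example, assuming Cortex exposes its OpenAI-compatible chat completions endpoint at http://127.0.0.1:39281 (the default in recent Cortex releases); adjust the host, port, and model name to match your setup.

    # Minimal sketch: send a coding prompt to a qwen2.5-coder model served by Cortex.
    # Assumptions to verify locally: the API server listens on http://127.0.0.1:39281
    # and the model was started with `cortex run qwen2.5-coder:14b`.
    import json
    import urllib.request

    payload = {
        "model": "qwen2.5-coder:14b",  # match the variant you pulled
        "messages": [
            {"role": "user", "content": "Write a Python function that reverses a linked list."}
        ],
        "temperature": 0.2,  # low temperature tends to give more deterministic code
    }

    req = urllib.request.Request(
        "http://127.0.0.1:39281/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

    # The response is expected to follow the OpenAI chat completions schema.
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
        print(body["choices"][0]["message"]["content"])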

Credits
