cyente committed
Commit 254c229
1 Parent(s): 9da97d2

Update README.md

Files changed (1)
  1. README.md +3 -4
README.md CHANGED
@@ -20,11 +20,10 @@ tags:
 
 ## Introduction
 
-Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). For Qwen2.5-Coder, we release three sizes of base and instruction-tuned language models: 1.5, 7, and 32 (coming soon) billion parameters. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
+Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
 
-- Significant improvements in **code generation**, **code reasoning** and **code fixing**. Based on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc.
+- Significant improvements in **code generation**, **code reasoning** and **code fixing**. Based on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with coding abilities matching those of GPT-4o.
 - A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
-- **Long-context Support** up to 128K tokens.
 
 **This repo contains the GPTQ-quantized 8-bit instruction-tuned 1.5B Qwen2.5-Coder model**, which has the following features:
 - Type: Causal Language Models
@@ -34,7 +33,7 @@ Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (
 - Number of Parameters (Non-Embedding): 1.31B
 - Number of Layers: 28
 - Number of Attention Heads (GQA): 12 for Q and 2 for KV
-- Context Length: Full 131,072 tokens
+- Context Length: Full 32,768 tokens
   - Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
 - Quantization: GPTQ 8-bit
 
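For readers landing on this commit, the updated card describes a GPTQ 8-bit build of Qwen2.5-Coder-1.5B-Instruct. Below is a minimal usage sketch with Hugging Face `transformers`; the repo id, prompt, and generation budget are illustrative assumptions rather than values taken from this card, and a GPTQ-capable backend (e.g. `auto-gptq`) is assumed to be installed.

```python
# Minimal sketch (assumptions noted above): load the GPTQ 8-bit
# Qwen2.5-Coder-1.5B-Instruct checkpoint and generate a completion.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-1.5B-Instruct-GPTQ-Int8"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization config shipped in the repo tells transformers to load
# the 8-bit GPTQ weights; device_map places layers on available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

# Build a chat-style prompt via the tokenizer's chat template,
# the standard interface for Qwen2.5 instruct models.
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that checks whether a number is prime."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# 512 new tokens is an arbitrary illustrative budget, not a value from the card.
output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The feature list in the card should map onto the standard Qwen2 config fields (`num_hidden_layers` = 28, `num_attention_heads` = 12, `num_key_value_heads` = 2, `max_position_embeddings` = 32768), so inspecting `model.config` is a quick way to verify the context-length correction this commit makes.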