feihu.hf committed on
Commit 52d3e5f
1 Parent(s): f251035

update readme

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -14,7 +14,7 @@ tags:
 
 ## Introduction
 
-Qwen1.5 is a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include:
+Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include:
 
 * 6 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, and 72B;
 * Significant performance improvement in human preference for chat models;
@@ -22,7 +22,7 @@ Qwen1.5 is a transformer-based decoder-only language model pretrained on a large
 * Stable support of 32K context length for models of all sizes
 * No need of `trust_remote_code`.
 
-For more details, please refer to our blog post and GitHub repo. In this repo, we provide the `q2_k` and `q5_k_m` quantized model in the GGUF format.
+For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5). In this repo, we provide the `q2_k` and `q5_k_m` quantized model in the GGUF format.
 <br>
 
 ## Model Details
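
The README this commit updates points to `q2_k` and `q5_k_m` GGUF files. For context, a minimal sketch of loading such a file with `llama-cpp-python`; the filename and context length below are assumptions for illustration, not taken from the commit:

```python
from llama_cpp import Llama

# Hypothetical filename; substitute the actual GGUF file shipped in the repo.
llm = Llama(model_path="qwen1_5-7b-chat-q5_k_m.gguf", n_ctx=4096)

# Run a simple chat completion against the quantized model.
output = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Give me a short introduction to large language models."},
    ],
    max_tokens=256,
)

print(output["choices"][0]["message"]["content"])
```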