yangapku committed
Commit 1fd29b3
Parent: fe4fd76

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -17,7 +17,7 @@ Compared with the state-of-the-art opensource language models, including the pre
 
 For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/) and [GitHub](https://github.com/QwenLM/Qwen2).
 
-In this repo, we provide `fp16` model and quantized models in the GGUF formats, including `q2_k`, `q3_k_m`, `q4_0`, `q4_k_m`, `q5_0`, `q5_k_m`, `q6_k` and `q8_0`.
+In this repo, we provide the `fp16` model and quantized models in the GGUF format, including `q5_0`, `q5_k_m`, `q6_k` and `q8_0`.
 
 ## Model Details
 Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes.
@@ -50,4 +50,4 @@ If you find our work helpful, feel free to give us a cite.
   title={Qwen2 Technical Report},
   year={2024}
 }
-```
+```
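
After this change, the repo ships the `fp16` model plus four GGUF quantizations (`q5_0`, `q5_k_m`, `q6_k`, `q8_0`); `q8_0` stays closest to the `fp16` weights, while the lower-bit files trade some fidelity for a smaller memory footprint. A minimal sketch of loading one of these files, assuming the `llama-cpp-python` bindings and a hypothetical local filename (the actual file names in the repo may differ):

```python
# Minimal sketch: load a quantized Qwen2 GGUF file with llama-cpp-python.
# The model_path below is a placeholder; substitute whichever quantization
# level (q5_0, q5_k_m, q6_k, q8_0) you actually downloaded from the repo.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2-7b-instruct-q5_k_m.gguf",  # hypothetical filename
    n_ctx=4096,  # context window; adjust to your memory budget
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly introduce the Qwen2 model family."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```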