JustinLin610 committed on
Commit f86c458
1 Parent(s): 404aa75

Update README.md

Files changed (1)
  1. README.md +7 -7
README.md CHANGED
@@ -1,7 +1,7 @@
  ---
  license: other
  license_name: tongyi-qianwen-research
- license_link: https://huggingface.co/Qwen/Qwen2-beta-4B-Chat-GGUF/blob/main/LICENSE
+ license_link: https://huggingface.co/Qwen/Qwen1.5-4B-Chat-GGUF/blob/main/LICENSE
  language:
  - en
  pipeline_tag: text-generation
@@ -9,12 +9,12 @@ tags:
  - chat
  ---
 
- # Qwen2-beta-4B-Chat-GGUF
+ # Qwen1.5-4B-Chat-GGUF
 
 
  ## Introduction
 
- Qwen2-beta is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include:
+ Qwen1.5 is a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include:
 
  * 6 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, and 72B;
  * Significant performance improvement in human preference for chat models;
@@ -26,7 +26,7 @@ For more details, please refer to our blog post and GitHub repo. In this repo, w
  <br>
 
  ## Model Details
- Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, temporarily we did not include GQA and the mixture of SWA and full attention.
+ Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, temporarily we did not include GQA and the mixture of SWA and full attention.
 
 
  ## Training details
@@ -40,12 +40,12 @@ We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and
  ## How to use
  Cloning the repo may be inefficient, and thus you can manually download the GGUF file that you need or use `huggingface-cli` (`pip install huggingface_hub`) as shown below:
  ```shell
- huggingface-cli download Qwen/Qwen2-beta-4B-Chat-GGUF qwen2-beta-4b-chat-q8_0.gguf --local-dir . --local-dir-use-symlinks False
+ huggingface-cli download Qwen/Qwen1.5-4B-Chat-GGUF qwen1_5-4b-chat-q8_0.gguf --local-dir . --local-dir-use-symlinks False
  ```
 
- We demonstrate how to use `llama.cpp` to run Qwen2-beta:
+ We demonstrate how to use `llama.cpp` to run Qwen1.5:
  ```shell
- ./main -m qwen2-beta-4b-chat-q8_0.gguf -n 512 --color -i -cml -f prompts/chat-with-qwen.txt
+ ./main -m qwen1_5-4b-chat-q8_0.gguf -n 512 --color -i -cml -f prompts/chat-with-qwen.txt
  ```
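The `huggingface-cli` call in the updated instructions has a Python equivalent in the same `huggingface_hub` package; a minimal sketch, with the repo and file names taken from the command above and everything else illustrative:

```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file from the Hub into the current directory;
# repo_id and filename mirror the huggingface-cli command in the README.
hf_hub_download(
    repo_id="Qwen/Qwen1.5-4B-Chat-GGUF",
    filename="qwen1_5-4b-chat-q8_0.gguf",
    local_dir=".",
)
```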
 
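For running the downloaded GGUF from Python rather than the `./main` binary, the `llama-cpp-python` bindings can load the same file; this is an assumption (the README itself only covers `llama.cpp`), sketched roughly as:

```python
from llama_cpp import Llama

# llama-cpp-python is assumed here as an alternative to the ./main CLI shown above.
llm = Llama(model_path="qwen1_5-4b-chat-q8_0.gguf", n_ctx=2048)

# Chat-style request; prompt formatting is handled by the bindings.
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}]
)
print(result["choices"][0]["message"]["content"])
```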