Commit d3a1c93
1 Parent(s): 0da2686

models: add

.gitattributes CHANGED
@@ -4,6 +4,7 @@
  *.bz2 filter=lfs diff=lfs merge=lfs -text
  *.ckpt filter=lfs diff=lfs merge=lfs -text
  *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gguf filter=lfs diff=lfs merge=lfs -text
  *.gz filter=lfs diff=lfs merge=lfs -text
  *.h5 filter=lfs diff=lfs merge=lfs -text
  *.joblib filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,69 @@
+ ---
+ language:
+ - zh
+ - en
+ tags:
+ - qwen
+ - chat
+ - 中文
+ model_name: Qwen Chat 14B
+ model_type: qwen
+ pipeline_tag: text-generation
+ quantized_by: about0
+ ---
+
+ # Qwen Chat 14B - GGUF
+
+ This repository contains llama.cpp-compatible GGUF conversions and quantizations of [Qwen 14B Chat](https://huggingface.co/Qwen/Qwen-14B-Chat).
+
+ ## Explanation of quantization methods
+ <details>
+ <summary>Click to see details</summary>
+
+ Methods:
+ * type-0 (Q4_0, Q5_0, Q8_0) - weights w are obtained from quants q using w = d * q, where d is the block scale.
+ * type-1 (Q4_1, Q5_1) - weights are given by w = d * q + m, where m is the block minimum.
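+
+ As a minimal sketch of the two formulas above (hypothetical NumPy code with illustrative names, not the actual llama.cpp kernels):
+
+ ```python
+ import numpy as np
+
+ def dequantize_type0(q, d):
+     # type-0 (Q4_0, Q5_0, Q8_0): w = d * q, one scale d per block
+     return d * q.astype(np.float32)
+
+ def dequantize_type1(q, d, m):
+     # type-1 (Q4_1, Q5_1): w = d * q + m, with block minimum m
+     return d * q.astype(np.float32) + m
+
+ # e.g. a block of 4-bit quants with scale 0.1:
+ print(dequantize_type0(np.array([0, 7, 15]), 0.1))  # [0.  0.7 1.5]
+ ```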
+
+ The new methods available are:
+ * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw); a worked calculation follows this list.
+ * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
+ * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
+ * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
+ * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
+ * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference from the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
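+
+ As a worked example of the bpw arithmetic for GGML_TYPE_Q2_K (a sketch assuming one fp16 scale per super-block, which reproduces the figure quoted above):
+
+ ```python
+ # GGML_TYPE_Q2_K: super-blocks of 16 blocks x 16 weights
+ weights = 16 * 16            # 256 weights per super-block
+ quant_bits = 2 * weights     # 2-bit quants for every weight
+ scale_bits = 16 * (4 + 4)    # 4-bit scale + 4-bit min per block
+ super_bits = 16              # one fp16 super-block scale (assumption)
+ print((quant_bits + scale_bits + super_bits) / weights)  # 2.5625 bpw
+ ```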
+
+ This is exposed via llama.cpp quantization types that define various "quantization mixes" as follows (see the sketch after this list):
+
+ * LLAMA_FTYPE_MOSTLY_Q2_K - uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors.
+ * LLAMA_FTYPE_MOSTLY_Q3_K_S - uses GGML_TYPE_Q3_K for all tensors
+ * LLAMA_FTYPE_MOSTLY_Q3_K_M - uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K
+ * LLAMA_FTYPE_MOSTLY_Q3_K_L - uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K
+ * LLAMA_FTYPE_MOSTLY_Q4_K_S - uses GGML_TYPE_Q4_K for all tensors
+ * LLAMA_FTYPE_MOSTLY_Q4_K_M - uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K
+ * LLAMA_FTYPE_MOSTLY_Q5_K_S - uses GGML_TYPE_Q5_K for all tensors
+ * LLAMA_FTYPE_MOSTLY_Q5_K_M - uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K
+ * LLAMA_FTYPE_MOSTLY_Q6_K - uses 6-bit quantization (GGML_TYPE_Q6_K) for all tensors
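+
+ Read as a lookup rule, each mix is "override a few tensors, fall back to a base type". A hypothetical restatement in Python of two of the mixes above (illustrative names, not the llama.cpp API):
+
+ ```python
+ # Per-mix overrides; all other tensors use the base type.
+ OVERRIDES = {
+     "Q3_K_M": ({"attention.wv", "attention.wo", "feed_forward.w2"}, "Q4_K"),
+     "Q3_K_L": ({"attention.wv", "attention.wo", "feed_forward.w2"}, "Q5_K"),
+ }
+
+ def ggml_type_for(mix, tensor, base):
+     # Return the override type if this tensor is special-cased, else the base type.
+     tensors, override = OVERRIDES.get(mix, (set(), base))
+     return override if tensor in tensors else base
+
+ print(ggml_type_for("Q3_K_M", "attention.wv", "Q3_K"))  # Q4_K
+ ```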
46
+ </details>
+
+ ## Provided files
+
+ | Name | Quant method | Bits | Size | Max RAM required | Use case |
+ | ---- | ---- | ---- | ---- | ---- | ----- |
+ | [qwen-chat-14B-Q2_K.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q2_K.gguf) | Q2_K | 2 | 6.2 GB | 9.1 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [qwen-chat-14B-Q3_K_S.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q3_K_S.gguf) | Q3_K_S | 3 | 6.5 GB | 9.4 GB | very small, high quality loss |
+ | [qwen-chat-14B-Q3_K_M.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q3_K_M.gguf) | Q3_K_M | 3 | 7.2 GB | 10.1 GB | very small, high quality loss |
+ | [qwen-chat-14B-Q3_K_L.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q3_K_L.gguf) | Q3_K_L | 3 | 7.5 GB | 10.4 GB | small, substantial quality loss |
+ | [qwen-chat-14B-Q4_0.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q4_0.gguf) | Q4_0 | 4 | 7.7 GB | 10.6 GB | legacy; small, very high quality loss - prefer using Q3_K_L |
+ | [qwen-chat-14B-Q4_1.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q4_1.gguf) | Q4_1 | 4 | 8.4 GB | 11.3 GB | legacy; small, very high quality loss - prefer using Q4_K_S |
+ | [qwen-chat-14B-Q4_K_S.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q4_K_S.gguf) | Q4_K_S | 4 | 8.0 GB | 10.9 GB | small, greater quality loss |
+ | [qwen-chat-14B-Q4_K_M.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q4_K_M.gguf) | Q4_K_M | 4 | 8.9 GB | 11.8 GB | medium, balanced quality - recommended |
+ | [qwen-chat-14B-Q5_0.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q5_0.gguf) | Q5_0 | 5 | 9.2 GB | 12.1 GB | legacy; medium, balanced quality - prefer using Q5_K_M |
+ | [qwen-chat-14B-Q5_1.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q5_1.gguf) | Q5_1 | 5 | 10 GB | 12.9 GB | legacy; medium, balanced quality - prefer using Q5_K_M |
+ | [qwen-chat-14B-Q5_K_S.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q5_K_S.gguf) | Q5_K_S | 5 | 9.4 GB | 12.3 GB | large, low quality loss - recommended |
+ | [qwen-chat-14B-Q5_K_M.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q5_K_M.gguf) | Q5_K_M | 5 | 11 GB | 13.9 GB | large, very low quality loss - recommended |
+ | [qwen-chat-14B-Q6_K.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q6_K.gguf) | Q6_K | 6 | 12 GB | 14.9 GB | very large, extremely low quality loss |
+ | [qwen-chat-14B-Q8_0.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q8_0.gguf) | Q8_0 | 8 | 15 GB | 17.9 GB | very large, extremely low quality loss - not recommended |
+ | [qwen-chat-14B-f16.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-f16.gguf) | f16 | 16 | 27 GB | 29.9 GB | very large, no quality loss - not recommended |
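+
+ To load one of these files from Python, one option is the llama-cpp-python bindings (a minimal sketch; the package must be installed separately, and the Qwen chat prompt format is omitted here):
+
+ ```python
+ from llama_cpp import Llama
+
+ # Path assumes the file was downloaded from this repo.
+ llm = Llama(model_path="qwen-chat-14B-Q4_K_M.gguf")
+ out = llm("Hello, how are you?", max_tokens=64)
+ print(out["choices"][0]["text"])
+ ```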
+
+ ### Model Sources
+ - **Repository:** [https://huggingface.co/Qwen/Qwen-14B-Chat](https://huggingface.co/Qwen/Qwen-14B-Chat)
qwen-chat-14B-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4f4308858a22a4b9b9cf07a5a1c090209da62517110a252da0a1f034ba4cac2b
+ size 6550539008
qwen-chat-14B-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7407828318b9c6730485124bb982875f530a1fba3a0752b9a954200c429ccdd
+ size 7987845888
qwen-chat-14B-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:54b1a581c392030c31d7df12219bda62c647aad88397e92a4e3513406c7c1873
+ size 7690230528
qwen-chat-14B-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:41392e1e965a61fb547ea9f4f57aeee70ae9a448ad30e034c28e66902880c003
+ size 6949100288
qwen-chat-14B-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:16721e4f4f0073c0ccbc3740623dcc542f3ded218a512b672c9e722fa1a71eff
+ size 8179313408
qwen-chat-14B-Q4_1.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2b9239fbb57002b24d21f63335174e5f22ee7726f6da6d732412ff33bf9033b3
+ size 9016044288
qwen-chat-14B-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fa0b8114669a7e9904d5a33f8f7529f4104b37b57b6ae7c44c5b198f11ac52c1
+ size 9449073408
qwen-chat-14B-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:00a332130215c485bc123acee462b9c6aa5789b963fd2e36f7afb0694bd41c22
+ size 8547461888
qwen-chat-14B-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d95f41978b9b734a6125926ff68ab2bb13663e8460517caebb77a97e8fdd2925
+ size 9852775168
qwen-chat-14B-Q5_1.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef870bea4ef3648f223801d8d166f2dc6153357c655a3e7691b606103e64bad1
+ size 10689506048
qwen-chat-14B-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:342ef4900ef424fc39e4b0705b1a8a4d74f2589604ad207e9e158a0fce70ff32
+ size 10884147968
qwen-chat-14B-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f15f2ca97d79c169eba2629a8d6997412db4ad422fac010b4515c2d08c6aaf5e
+ size 10028083968
qwen-chat-14B-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5197eaa8097f43870a3d2c6865498de5a1fbdf7fd3ba8ed0757aed51135fb3ab
+ size 12310149888
qwen-chat-14B-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c57771eecec4106455d6c447bc7d2a558c8809ed86709de50588f4ddac547e98
+ size 15061719808