---
license: other
license_name: databricks-open-model-license
license_link: https://www.databricks.com/legal/open-model-license
base_model: databricks/dbrx-base
---
# dbrx-base-FP8-KV
## Introduction
This model was created by applying [Quark](https://quark.docs.amd.com/latest/index.html) with calibration samples from the Pile dataset.
## Quantization Strategy
- ***Quantized Layers***: All linear layers excluding "lm_head" and "router.layer"
- ***Weight***: FP8 symmetric per-tensor (see the sketch after this list)
- ***Activation***: FP8 symmetric per-tensor
- ***KV Cache***: FP8 symmetric per-tensor
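To make the per-tensor symmetric scheme concrete, here is a minimal, self-contained sketch. It is an illustration of the idea only, not Quark's implementation; the 448.0 maximum corresponds to PyTorch's `float8_e4m3fn` type.

```python
# Toy illustration of FP8 symmetric per-tensor quantization (not Quark's code).
import torch

FP8_E4M3_MAX = 448.0  # largest finite value representable in float8_e4m3fn

def quantize_per_tensor_fp8(x: torch.Tensor):
    # One scale for the whole tensor, chosen so the abs-max maps to the FP8 max.
    scale = x.abs().max().clamp(min=1e-12) / FP8_E4M3_MAX
    x_fp8 = (x / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
    return x_fp8, scale

def dequantize_fp8(x_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return x_fp8.to(torch.float32) * scale

w = torch.randn(1024, 1024)
w_fp8, w_scale = quantize_per_tensor_fp8(w)
print("max abs error:", (dequantize_fp8(w_fp8, w_scale) - w).abs().max().item())
```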
## Quick Start
1. [Download and install Quark](https://quark.docs.amd.com/latest/install.html)
2. Run the quantization script in the example folder using the following command line:
```sh
export MODEL_DIR=[local model checkpoint folder]  # or databricks/dbrx-base
# single GPU
python3 quantize_quark.py \
    --model_dir $MODEL_DIR \
    --output_dir dbrx-base-FP8-KV \
    --quant_scheme w_fp8_a_fp8 \
    --kv_cache_dtype fp8 \
    --num_calib_data 128 \
    --model_export quark_safetensors \
    --no_weight_matrix_merge
# If the model is too large for a single GPU, use multiple GPUs instead.
python3 quantize_quark.py \
    --model_dir $MODEL_DIR \
    --output_dir dbrx-base-FP8-KV \
    --quant_scheme w_fp8_a_fp8 \
    --kv_cache_dtype fp8 \
    --num_calib_data 128 \
    --multi_gpu \
    --model_export quark_safetensors \
    --no_weight_matrix_merge
```
## Deployment
Quark has its own export format and allows FP8-quantized models to be efficiently deployed using the vLLM backend (vLLM-compatible).
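As a rough illustration of what deployment might look like, here is a hedged vLLM sketch; the checkpoint path, the `tensor_parallel_size`, and whether additional quantization flags are needed depend on your vLLM version and hardware, and are assumptions rather than a verified recipe.

```python
# Hypothetical vLLM usage sketch; path and flags are assumptions, not
# verified against a specific vLLM release.
from vllm import LLM, SamplingParams

llm = LLM(
    model="dbrx-base-FP8-KV",  # path to the exported checkpoint (assumed)
    kv_cache_dtype="fp8",      # match the FP8 KV-cache quantization
    tensor_parallel_size=8,    # dbrx is large; multiple GPUs assumed
)
outputs = llm.generate(["Quantization is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```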
In the dbrx-base model, each "transformer.blocks.\*.ffn.experts" module can be divided into experts-num MLPs. If the weight of w1 in one of these MLPs has shape [dim1, dim2], then "transformer.blocks.\*.ffn.experts.mlp.w1.weight" in the exported safetensors file has shape [dim1\*experts-num, dim2], and the shapes of "transformer.blocks.\*.ffn.experts.mlp.w1.weight_scale" and "transformer.blocks.\*.ffn.experts.mlp.w1.input_scale" are [dim1]. The same applies to the w2 and v1 of "transformer.blocks.\*.ffn.experts.mlp".
## Evaluation
Quark currently uses perplexity (PPL) as the evaluation metric for accuracy loss before and after quantization. The specific PPL algorithm can be found in quantize_quark.py.
The quantization evaluation results are obtained in pseudo-quantization mode, which may differ slightly from the accuracy of actual quantized inference. These results are provided for reference only.
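For reference, a generic fixed-window wikitext-2 perplexity loop looks roughly like the following; this is a simplified sketch, not the exact algorithm in quantize_quark.py, and the window size and dataset split are assumptions.

```python
# Generic fixed-window perplexity sketch (simplified; see quantize_quark.py
# for the algorithm actually used to produce the scores below).
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "databricks/dbrx-base"  # or the quantized checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
ids = tokenizer("\n\n".join(test["text"]), return_tensors="pt").input_ids

seq_len, nlls = 2048, []
for i in range(0, ids.size(1) - seq_len, seq_len):
    chunk = ids[:, i : i + seq_len].to(model.device)
    with torch.no_grad():
        # The model shifts labels internally; loss is mean NLL per token.
        nlls.append(model(chunk, labels=chunk).loss)

print("PPL:", torch.exp(torch.stack(nlls).mean()).item())
```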
#### Evaluation scores
| Benchmark | dbrx-base | dbrx-base-FP8-KV (this model) |
|-----------|-----------|-------------------------------|
| Perplexity-wikitext2 | 3.9106 | 3.9410 |

#### License
Modifications Copyright (c) 2024 Advanced Micro Devices, Inc. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.