This is a 2-bit quantization of [Qwen/Qwen-72B-Chat](https://huggingface.co/Qwen/Qwen-72B-Chat) using [QuIP#](https://cornell-relaxml.github.io/quip-sharp/).

Random samples from C4 and [SkyPile](https://huggingface.co/datasets/Skywork/SkyPile-150B) are used as calibration data.
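For a rough sense of what 2-bit weights buy, here is a back-of-the-envelope sketch. It is a simplification: it assumes a nominal 72B weight parameters and ignores QuIP#'s codebook/scale overhead and activation memory, so real checkpoint sizes will differ somewhat.

```python
def weight_storage_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

N_PARAMS = 72e9  # nominal parameter count of Qwen-72B (approximation)

print(weight_storage_gb(N_PARAMS, 16))  # fp16 baseline -> 144.0 GB
print(weight_storage_gb(N_PARAMS, 2))   # 2-bit weights -> 18.0 GB
```

Roughly an 8x reduction in weight storage compared to the fp16 checkpoint.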
## Model loading

Please follow the instructions of [QuIP-for-all](https://github.com/chu-tianxiang/QuIP-for-all) for usage. As an alternative, you can use the vLLM branch (https://github.com/chu-tianxiang/vl…).
## Perplexity

Measured on Wikitext with a 4096-token context length (lower is better):

| fp16   | 2-bit  |
| ------ | ------ |
| 5.8438 | 7.3047 |
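The table reports standard perplexity, i.e. the exponential of the mean per-token negative log-likelihood over the evaluation text. A minimal sketch of the final step, assuming the per-token NLLs have already been collected (the helper name is illustrative, not from QuIP# or QuIP-for-all):

```python
import math

def perplexity(nlls: list[float]) -> float:
    """exp of the mean per-token negative log-likelihood."""
    return math.exp(sum(nlls) / len(nlls))

# Toy check: a model that spreads probability uniformly over a
# 4-symbol vocabulary has NLL = ln(4) per token, so perplexity ~= 4.
toy_nlls = [math.log(4.0)] * 10
print(round(perplexity(toy_nlls), 6))  # -> 4.0
```

By this metric the 2-bit model gives up some modeling quality relative to fp16 (5.8438 vs. 7.3047), which is the usual trade for quantization this aggressive.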
## Speed