squeeze-ai-lab committed
Commit: 33ec55d
1 Parent(s): e890c0d
Update README.md
README.md CHANGED
@@ -12,7 +12,7 @@ For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.p
 
 3-bit quantized Vicuna-13B-v1.1 model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
 
-* **Base Model:** [Vicuna-13B-v1.1](https://huggingface.co/lmsys/vicuna-
+* **Base Model:** [Vicuna-13B-v1.1](https://huggingface.co/lmsys/vicuna-13b-delta-v1.1) (by [LMSYS](https://lmsys.org/))
 * **Bitwidth:** 4-bit
 * **Sparsity Level:** 0.45%
 
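For context, a non-authoritative sketch of how a checkpoint like the one described in this model card is typically fetched is shown below. The repository id `squeeze-ai-lab/sq-vicuna-13b-w4-s45` is only an assumption inferred from the 4-bit width and 0.45% sparsity listed above, not a name confirmed by this commit, and running the quantized weights likely requires SqueezeLLM's own inference code from the paper's codebase rather than a plain `transformers` load.

```python
# Minimal sketch: download the quantized checkpoint files locally.
# NOTE: the repo_id below is an assumed/hypothetical name, not taken from this commit.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="squeeze-ai-lab/sq-vicuna-13b-w4-s45")
print(f"Checkpoint files downloaded to: {local_dir}")
```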