Add 8.0 link
README.md
CHANGED
@@ -30,6 +30,8 @@ Conversion was done using VMWareOpenInstruct.parquet as calibration dataset.
 Default arguments used except when the bits per weight is above 6.0, at that point the lm_head layer is quantized at 8 bits per weight instead of the default 6.
 
 Original model: https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B
+
+<a href="https://huggingface.co/bartowski/NeuralHermes-2.5-Mistral-7B-exl2/tree/8_0">8.0 bits per weight</a>
 
 ## Download instructions
 
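The context line above states the head-layer rule used for these quants: lm_head is kept at the default 6 bits per weight unless the target bits per weight exceeds 6.0, in which case it is bumped to 8 bits. A minimal sketch of that stated policy, using a hypothetical helper name that is not part of the repo:

```python
# Sketch of the head-bits rule described in the README (helper name is invented):
# lm_head is quantized at 8 bpw when the target bpw is above 6.0, else at the default 6.
def pick_head_bits(target_bpw: float) -> int:
    return 8 if target_bpw > 6.0 else 6

for bpw in (3.75, 5.0, 6.5, 8.0):
    print(f"{bpw} bpw -> lm_head at {pick_head_bits(bpw)} bits")
```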
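The added link points at the `8_0` branch of the quant repo. A hedged sketch of fetching that branch with `huggingface_hub`; the repo id and revision come from the linked URL, while the `local_dir` value is an arbitrary example rather than something taken from the README's own download instructions:

```python
# Download the 8.0 bits-per-weight branch of the exl2 repo.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/NeuralHermes-2.5-Mistral-7B-exl2",
    revision="8_0",  # branch holding the 8.0 bpw weights
    local_dir="NeuralHermes-2.5-Mistral-7B-exl2-8_0",  # example path, not from the source
)
```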