Rename model
README.md CHANGED

@@ -12,7 +12,7 @@ tags:
quantized_by: bartowski
---

-## Exllama v2 Quantizations of ChatQA-1.5-8B
+## Exllama v2 Quantizations of Llama-3-ChatQA-1.5-8B

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.20">turboderp's ExLlamaV2 v0.0.20</a> for quantization.

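The quantization pass writes a measurement.json recording per-layer quantization error; since the main branch of this repo ships only that file (as the next hunk notes), further bit rates can be produced without re-running the measurement step. A minimal sketch using convert.py from the exllamav2 repo; the paths and the 5.5 bpw target are placeholders, not a published recipe:

```shell
# Quantize the original fp16 model to a new bits-per-weight target,
# reusing the shipped measurement.json to skip the measurement pass.
python convert.py \
  -i /path/to/Llama-3-ChatQA-1.5-8B \
  -o /path/to/workdir \
  -cf /path/to/Llama-3-ChatQA-1.5-8B-exl2-5.5bpw \
  -m /path/to/measurement.json \
  -b 5.5
```
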
@@ -20,7 +20,7 @@ Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.20">turb

Each branch contains an individual bits-per-weight quantization; the main branch contains only the measurement.json needed for further conversions.

-Original model: https://huggingface.co/nvidia/ChatQA-1.5-8B
+Original model: https://huggingface.co/nvidia/Llama-3-ChatQA-1.5-8B

## Prompt format

@@ -44,18 +44,18 @@ Assistant:

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (8k) | VRAM (16k) | VRAM (32k) | Description |
| ------ | ---- | ------------ | --------- | --------- | ---------- | ---------- | ----------- |
-| [8_0](https://huggingface.co/bartowski/ChatQA-1.5-8B-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
-| [6_5](https://huggingface.co/bartowski/ChatQA-1.5-8B-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
-| [5_0](https://huggingface.co/bartowski/ChatQA-1.5-8B-exl2/tree/5_0) | 5.0 | 6.0 | 7.7 GB | 8.1 GB | 9.1 GB | 11.2 GB | Slightly lower quality vs 6.5, but usable on 8 GB cards. |
-| [4_25](https://huggingface.co/bartowski/ChatQA-1.5-8B-exl2/tree/4_25) | 4.25 | 6.0 | 7.0 GB | 7.4 GB | 8.4 GB | 10.5 GB | GPTQ-equivalent bits per weight, slightly higher quality. |
-| [3_5](https://huggingface.co/bartowski/ChatQA-1.5-8B-exl2/tree/3_5) | 3.5 | 6.0 | 6.4 GB | 6.8 GB | 7.8 GB | 9.9 GB | Lower quality, only use if you have to. |
+| [8_0](https://huggingface.co/bartowski/Llama-3-ChatQA-1.5-8B-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
+| [6_5](https://huggingface.co/bartowski/Llama-3-ChatQA-1.5-8B-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
+| [5_0](https://huggingface.co/bartowski/Llama-3-ChatQA-1.5-8B-exl2/tree/5_0) | 5.0 | 6.0 | 7.7 GB | 8.1 GB | 9.1 GB | 11.2 GB | Slightly lower quality vs 6.5, but usable on 8 GB cards. |
+| [4_25](https://huggingface.co/bartowski/Llama-3-ChatQA-1.5-8B-exl2/tree/4_25) | 4.25 | 6.0 | 7.0 GB | 7.4 GB | 8.4 GB | 10.5 GB | GPTQ-equivalent bits per weight, slightly higher quality. |
+| [3_5](https://huggingface.co/bartowski/Llama-3-ChatQA-1.5-8B-exl2/tree/3_5) | 3.5 | 6.0 | 6.4 GB | 6.8 GB | 7.8 GB | 9.9 GB | Lower quality, only use if you have to. |

## Download instructions

With git:

```shell
-git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/ChatQA-1.5-8B-exl2 ChatQA-1.5-8B-exl2-6_5
+git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Llama-3-ChatQA-1.5-8B-exl2 Llama-3-ChatQA-1.5-8B-exl2-6_5
```

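One caveat on the git route: Hugging Face stores the weight shards in Git LFS, so a clone that finishes instantly and contains only small pointer files means LFS isn't set up. Assuming a standard git-lfs installation:

```shell
# One-time LFS setup; without it, clones contain pointer stubs instead of weights
git lfs install
# Fetch the LFS objects into an already-cloned directory
git lfs pull
```
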
With huggingface hub (credit to TheBloke for instructions):

@@ -69,13 +69,13 @@ To download a specific branch, use the `--revision` parameter. For example, to d
Linux:

```shell
-huggingface-cli download bartowski/ChatQA-1.5-8B-exl2 --revision 6_5 --local-dir ChatQA-1.5-8B-exl2-6_5 --local-dir-use-symlinks False
+huggingface-cli download bartowski/Llama-3-ChatQA-1.5-8B-exl2 --revision 6_5 --local-dir Llama-3-ChatQA-1.5-8B-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which apparently doesn't like _ in folders sometimes?):

```shell
-huggingface-cli download bartowski/ChatQA-1.5-8B-exl2 --revision 6_5 --local-dir ChatQA-1.5-8B-exl2-6.5 --local-dir-use-symlinks False
+huggingface-cli download bartowski/Llama-3-ChatQA-1.5-8B-exl2 --revision 6_5 --local-dir Llama-3-ChatQA-1.5-8B-exl2-6.5 --local-dir-use-symlinks False
```

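Once a branch is downloaded, a quick smoke test is to generate a few tokens with the inference script bundled in the exllamav2 repo. A sketch, assuming the repo's test_inference.py, the 6_5 directory from the Linux command above, and an illustrative prompt:

```shell
# Load the quant and print a short completion to confirm it runs
python test_inference.py -m Llama-3-ChatQA-1.5-8B-exl2-6_5 \
  -p "System: You are a helpful assistant. User: Say hello. Assistant:"
```
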
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski