Update README.md
README.md CHANGED
@@ -15,7 +15,7 @@ Original model: https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral
 Model Size: 7b
 
 | Branch | Bits | lm_head bits | Dataset | Size | Description |
-| ----- | ---- | ------- | ------- |
+| ----- | ---- | ------- | ------- | ------- | ------------ |
 | [8_0](https://huggingface.co/Bartowski/dolphin-2.6-mistral-7b-dpo-exl2/tree/8_0) | 8.0 | 8.0 | Default | 9.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
 | [6_5](https://huggingface.co/Bartowski/dolphin-2.6-mistral-7b-dpo-exl2/tree/6_5) | 6.5 | 8.0 | Default | 8.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
 | [5_0](https://huggingface.co/Bartowski/dolphin-2.6-mistral-7b-dpo-exl2/tree/5_0) | 5.0 | 6.0 | Default | 7.4 GB | Slightly lower perplexity vs 6.5. |
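
Each row in the table above corresponds to a git branch of the repository, so a single quant can be fetched on its own. As a minimal sketch (not from the diff itself), this is one way to pull a branch with `huggingface_hub`; the `6_5` revision is taken from the table, and the local directory name is just an example choice:

```python
# Sketch: download one quant branch listed in the table above.
# Assumes the huggingface_hub package is installed (pip install huggingface_hub).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Bartowski/dolphin-2.6-mistral-7b-dpo-exl2",
    revision="6_5",  # branch name from the table; 8_0 and 5_0 work the same way
    local_dir="dolphin-2.6-mistral-7b-dpo-exl2-6_5",  # example target directory
)
```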