---
quantized_by: bartowski
---
# Exllama v2 Quantizations of dolphin-2.6-mistral-7b-dpo at 6.5 bits per weight
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.
Conversion was done using the default calibration dataset.
Original model: https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo
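
For context, ExLlamaV2 quantization is driven by its `convert.py` script. The exact command used for this quant is not recorded in this card, so the following is only a rough sketch assuming placeholder local paths and the default calibration dataset; the `-b 6.5` target matches the bits per weight of this branch:
```shell
# Sketch of an ExLlamaV2 quantization run (paths are placeholders, not the
# ones actually used for this quant). -b sets the target bits per weight.
python convert.py \
    -i /path/to/dolphin-2.6-mistral-7b-dpo \
    -o /path/to/working_dir \
    -cf /path/to/dolphin-2.6-mistral-7b-dpo-exl2-6_5 \
    -b 6.5
```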
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/dolphin-2.6-mistral-7b-dpo-exl2
```
With huggingface hub (credit to TheBloke for instructions), first install the `huggingface-hub` client:
```shell
pip3 install huggingface-hub
```
Then download the `6_5` branch to a local folder, passing the branch name via the `--revision` parameter:
```shell
mkdir dolphin-2.6-mistral-7b-dpo-exl2
huggingface-cli download bartowski/dolphin-2.6-mistral-7b-dpo-exl2 --revision 6_5 --local-dir dolphin-2.6-mistral-7b-dpo-exl2 --local-dir-use-symlinks False
```
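
Once downloaded, the folder can be pointed at by any ExLlamaV2-compatible frontend. As an illustrative sketch only (the script name and flags come from the exllamav2 repo's bundled examples and may differ between versions), a quick chat test might look like:
```shell
# Illustrative only: run exllamav2's example chat script against the
# downloaded quant directory; dolphin-2.6 uses the ChatML prompt format.
python examples/chat.py -m ./dolphin-2.6-mistral-7b-dpo-exl2 -mode chatml
```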