---
datasets:
  - ehartford/dolphin
  - jondurbin/airoboros-2.2.1
  - ehartford/dolphin-coder
  - teknium/openhermes
  - ise-uiuc/Magicoder-OSS-Instruct-75K
  - ise-uiuc/Magicoder-Evol-Instruct-110K
  - LDJnr/Capybara
language:
  - en
license: apache-2.0
quantized_by: bartowski
pipeline_tag: text-generation
---

Eric has pulled this model due to decreased performance. The quants will be left up, but downloader beware: performance isn't what was expected.

Exllama v2 Quantizations of dolphin-2.6.1-mixtral-8x7b

Using turboderp's ExLlamaV2 v0.0.11 for quantization.

Each branch contains an individual bits-per-weight quantization; the main branch contains only the measurement.json for further conversions.

Conversion was done using the default calibration dataset.

Default arguments were used, except when the bits per weight is above 6.0; in that case the lm_head layer is quantized at 8 bits per weight instead of the default 6.
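For reference, the conversion command looks roughly like the following when run from a checkout of ExLlamaV2 (https://github.com/turboderp/exllamav2); the paths and the 6.25 bpw target are placeholders, not the exact invocation used for these quants:

# Quantize to 6.25 bpw; since that is above 6.0, bump lm_head to 8 bits with -hb 8
python convert.py \
  -i /path/to/dolphin-2.6.1-mixtral-8x7b \
  -o /path/to/working_dir \
  -cf /path/to/dolphin-2.6.1-mixtral-8x7b-exl2-6_25 \
  -b 6.25 \
  -hb 8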

Original model: https://huggingface.co/cognitivecomputations/dolphin-2.6.1-mixtral-8x7b

Available sizes:

- 2.4 bits per weight
- 3.0 bits per weight
- 3.5 bits per weight
- 3.75 bits per weight
- 4.5 bits per weight
- 6.25 bits per weight
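Each size lives in its own branch of this repo. To see the available branches without cloning anything:

# List the quant branches of the repo (requires git)
git ls-remote --heads https://huggingface.co/bartowski/dolphin-2.6.1-mixtral-8x7b-exl2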

Download instructions

With git (substitute the branch you want, e.g. 4_5):

git clone --single-branch --branch 4_5 https://huggingface.co/bartowski/dolphin-2.6.1-mixtral-8x7b-exl2

With the huggingface-hub CLI (credit to TheBloke for the instructions):

pip3 install huggingface-hub

To download the main branch (only useful if you only care about measurement.json) to a folder called dolphin-2.6.1-mixtral-8x7b-exl2:

mkdir dolphin-2.6.1-mixtral-8x7b-exl2
huggingface-cli download bartowski/dolphin-2.6.1-mixtral-8x7b-exl2 --local-dir dolphin-2.6.1-mixtral-8x7b-exl2 --local-dir-use-symlinks False
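That measurement.json can be reused to run your own conversion at a bits-per-weight target that isn't published here, skipping the (slow) measurement pass. A rough sketch, with placeholder paths and an illustrative 5.0 bpw target:

# Reuse the published measurement.json instead of re-measuring the model
python convert.py \
  -i /path/to/dolphin-2.6.1-mixtral-8x7b \
  -o /path/to/working_dir \
  -cf /path/to/my-5_0-quant \
  -m dolphin-2.6.1-mixtral-8x7b-exl2/measurement.json \
  -b 5.0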

To download from a different branch, add the --revision parameter:

mkdir dolphin-2.6.1-mixtral-8x7b-exl2
huggingface-cli download bartowski/dolphin-2.6.1-mixtral-8x7b-exl2 --revision 4_5 --local-dir dolphin-2.6.1-mixtral-8x7b-exl2 --local-dir-use-symlinks False