---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- jondurbin/airoboros-2.2.1
- Open-Orca/OpenOrca
- garage-bAInd/Open-Platypus
- ehartford/samantha-data
tags:
- llama-2
- code
license: llama2
model-index:
- name: SpeechlessCoder
  results:
  - task:
      type: text-generation
    dataset:
      type: openai_humaneval
      name: HumanEval
    metrics:
    - name: pass@1
      type: pass@1
      value: 34.146
      verified: false
quantized_by: bartowski
---

## ExLlamaV2 Quantizations of speechless-mistral-dolphin-orca-platypus-samantha-7b

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.

# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)

Each branch contains a quantization at a different bits per weight; the `main` branch holds only the measurement.json, which can be reused for further conversions.

Conversion was done using the default calibration dataset.

Default arguments were used, except when the bits per weight is above 6.0; in that case, the lm_head layer is quantized at 8 bits per weight instead of the default 6.
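
For illustration, a conversion along these lines could be run with exllamav2's `convert.py`. This is a sketch, not the exact command used here; the input and output paths are placeholders:

```shell
# measure + quantize in one pass; a saved measurement.json can be reused
# on later conversions with -m
python convert.py \
  -i /path/to/speechless-mistral-dolphin-orca-platypus-samantha-7b \
  -o /tmp/exl2-workdir \
  -cf ./speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2-8_0 \
  -b 8.0 \
  -hb 8   # lm_head at 8 bits instead of the default 6, since bpw > 6.0
```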

Original model: https://huggingface.co/uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b



<a href="https://huggingface.co/bartowski/speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2/tree/8_0">8.0 bits per weight</a>

<a href="https://huggingface.co/bartowski/speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2/tree/6_5">6.5 bits per weight</a>

<a href="https://huggingface.co/bartowski/speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2/tree/5_0">5.0 bits per weight</a>

<a href="https://huggingface.co/bartowski/speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2/tree/4_0">4.0 bits per weight</a>

<a href="https://huggingface.co/bartowski/speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2/tree/3_5">3.5 bits per weight</a>

## Download instructions

With git:

```shell
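# replace 4_0 with the bits-per-weight branch you want (see the list above)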
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2
```

With the huggingface hub CLI (credit to TheBloke for the instructions):

```shell
pip3 install huggingface-hub
```
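
Optionally (not part of the original instructions), downloads on fast connections can be sped up with the `hf_transfer` backend:

```shell
pip3 install hf_transfer
export HF_HUB_ENABLE_HF_TRANSFER=1
```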

To download the `main` branch (only useful if you just need the measurement.json) to a folder called `speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2`:

```shell
mkdir speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2
huggingface-cli download bartowski/speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2 --local-dir speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2
huggingface-cli download bartowski/speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2 --revision 4_0 --local-dir speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2 --local-dir-use-symlinks False
```
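
Once a branch is downloaded, you can sanity-check it with the example scripts in the exllamav2 repo. A minimal sketch, assuming a checkout of the repo and its `test_inference.py` script (the prompt is a placeholder):

```shell
# run from the exllamav2 repo root; -m points at the downloaded quant folder
python test_inference.py \
  -m ./speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2 \
  -p "def fizzbuzz(n):"
```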