---
pipeline_tag: text-generation
license: other
quantized_by: bartowski
---

## Exllama v2 Quantizations of internlm2-7b-llama

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.

# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)

Each branch contains a quantization at a different bits per weight, while the `main` branch contains only the measurement.json used for further conversions.

Conversion was done using the default calibration dataset.

Default arguments were used, except when the bits per weight is above 6.0; in that case the lm_head layer is quantized at 8 bits per weight instead of the default 6.
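
For reference, the conversion command looks roughly like this (a sketch against `convert.py` from the ExLlamaV2 repo; flag names are as of v0.0.11, and all paths are placeholders, so adjust for your setup):

```shell
# Quantize from the original fp16 weights (run from an exllamav2 checkout):
#   -i   input directory containing the original model
#   -o   scratch directory for intermediate files
#   -cf  output directory for the compiled quantized model
#   -b   target bits per weight
#   -hb  lm_head bits (8 here, since 6.5 is above the 6.0 cutoff)
python convert.py -i ./internlm2-7b -o ./work -cf ./internlm2-7b-llama-exl2-6_5 -b 6.5 -hb 8

# Reusing the measurement.json from the main branch skips the measurement pass:
python convert.py -i ./internlm2-7b -o ./work -cf ./internlm2-7b-llama-exl2-4_25 \
  -b 4.25 -hb 6 -m ./measurement.json
```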

Original model: https://huggingface.co/internlm/internlm2-7b

Model Size: 7b

| Branch | Bits | lm_head Bits | Dataset | VRAM (16k) | Description |
| ----- | ---- | ------- | ------- | ------ | ------------ |
| [8_0](https://huggingface.co/Bartowski/internlm2-7b-llama-exl2/tree/8_0) | 8.0  | 8.0 | Default | 9.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Bartowski/internlm2-7b-llama-exl2/tree/6_5) | 6.5  | 8.0 | Default | 8.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/internlm2-7b-llama-exl2/tree/5_0) | 5.0  | 6.0 | Default | 7.4 GB | Slightly lower quality vs 6.5 (slightly higher perplexity). |
| [4_25](https://huggingface.co/Bartowski/internlm2-7b-llama-exl2/tree/4_25) | 4.25 | 6.0 | Default | 6.7 GB | GPTQ equivalent bits per weight. |
| [3_5](https://huggingface.co/Bartowski/internlm2-7b-llama-exl2/tree/3_5) | 3.5  | 6.0 | Default | 6.1 GB | Lower quality, only use if you have to. |

All VRAM requirements are estimated at 16k context; for 32k context, add ~2 GB.

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/internlm2-7b-llama-exl2 internlm2-7b-llama-exl2-6_5
```

With the `huggingface-hub` Python library (credit to TheBloke for the instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you just want the measurement.json) to a folder called `internlm2-7b-llama-exl2`:

```shell
mkdir internlm2-7b-llama-exl2
huggingface-cli download bartowski/internlm2-7b-llama-exl2 --local-dir internlm2-7b-llama-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir internlm2-7b-llama-exl2-6_5
huggingface-cli download bartowski/internlm2-7b-llama-exl2 --revision 6_5 --local-dir internlm2-7b-llama-exl2-6_5 --local-dir-use-symlinks False
```
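
Once a branch is downloaded, a quick smoke test can be run with the inference script that ships with the ExLlamaV2 repo (a sketch; `test_inference.py` and its flags are as of v0.0.11, and the model path assumes the download folder from the command above):

```shell
# Load the quantized model and generate from a short prompt (run from an exllamav2 checkout)
python test_inference.py -m ./internlm2-7b-llama-exl2-6_5 -p "Once upon a time"
```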