---
pipeline_tag: text-generation
license: other
quantized_by: bartowski
---

Update Jan 27: This has been redone with the proper token mappings and rope scaling. Performance seems improved; please comment if it isn't.

## Exllama v2 Quantizations of internlm2-chat-20b-llama-test

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.12">turboderp's ExLlamaV2 v0.0.12</a> for quantization.

# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)

Each branch contains a quantization at a different bits per weight; the `main` branch contains only the measurement.json used for further conversions.
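
If you want to see which quantization branches exist without opening the repo page, here is a minimal sketch, assuming a recent `huggingface_hub` that exposes `list_repo_refs`:

```python
# Sketch: list the available branches (one per bits-per-weight option, plus main).
# Assumes a huggingface_hub version that provides list_repo_refs.
from huggingface_hub import list_repo_refs

refs = list_repo_refs("bartowski/internlm2-chat-20b-llama-exl2")
for branch in refs.branches:
    print(branch.name)  # e.g. main, 6_5, 4_25, 3_5, 3_0
```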

Original model: https://huggingface.co/internlm/internlm2-chat-20b

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ------ | ---- | ------------ | ---- | ---- | ---- | ----------- |
| [6_5](https://huggingface.co/Bartowski/internlm2-chat-20b-llama-exl2/tree/6_5)   | 6.5  | 8.0 | 19.6 GB | 21.0 GB | 23.0 GB | Near unquantized performance at vastly reduced size, **recommended**.       |
| [4_25](https://huggingface.co/Bartowski/internlm2-chat-20b-llama-exl2/tree/4_25) | 4.25 | 6.0 | 13.8 GB | 15.2 GB | 17.2 GB | GPTQ equivalent bits per weight, slightly higher quality.                   |
| [3_5](https://huggingface.co/Bartowski/internlm2-chat-20b-llama-exl2/tree/3_5)   | 3.5  | 6.0 | 12.4 GB | 13.8 GB | 15.8 GB | Lower quality, only use if you have to.                                     |
| [3_0](https://huggingface.co/Bartowski/internlm2-chat-20b-llama-exl2/tree/3_0)   | 3.0  | 6.0 | 11.1 GB | 12.5 GB | 15.5 GB | Very low quality. Usable on 12 GB of VRAM. |
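
If you want a feel for where those VRAM figures come from, here is a rough back-of-the-envelope sketch. The 20B parameter count and the decimal-GB arithmetic are approximations, and the table's numbers also include KV cache and runtime overhead, so they run a few GB higher than the raw weights:

```python
# Rough estimate of weight size from bits per weight; cache and overhead not included.
PARAMS = 20e9  # internlm2-chat-20b, approximate parameter count

def weight_gb(bits_per_weight: float) -> float:
    return PARAMS * bits_per_weight / 8 / 1e9  # bits -> bytes -> decimal GB

for bpw in (6.5, 4.25, 3.5, 3.0):
    print(f"{bpw} bpw ~ {weight_gb(bpw):.1f} GB of weights")
# 6.5 bpw ~ 16.2 GB of weights, which plus the 4k-context cache lands
# near the 19.6 GB in the table above.
```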

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/internlm2-chat-20b-llama-exl2 internlm2-chat-20b-llama-exl2-6_5
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```
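
If you prefer to script the download rather than use the CLI, the same library exposes `snapshot_download`. A minimal sketch (the folder name is just an example, matching the CLI steps below):

```python
# Sketch: download one quantization branch via huggingface_hub's Python API.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/internlm2-chat-20b-llama-exl2",
    revision="6_5",  # branch name = bits per weight, see the table above
    local_dir="internlm2-chat-20b-llama-exl2-6_5",
    local_dir_use_symlinks=False,
)
```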

To download the `main` branch (only useful if you just want the measurement.json) to a folder called `internlm2-chat-20b-llama-exl2`:

```shell
mkdir internlm2-chat-20b-llama-exl2
huggingface-cli download bartowski/internlm2-chat-20b-llama-exl2 --local-dir internlm2-chat-20b-llama-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

Linux:

```shell
mkdir internlm2-chat-20b-llama-exl2-6_5
huggingface-cli download bartowski/internlm2-chat-20b-llama-exl2 --revision 6_5 --local-dir internlm2-chat-20b-llama-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which sometimes has trouble with `_` in folder names):

```shell
mkdir internlm2-chat-20b-llama-exl2-6.5
huggingface-cli download bartowski/internlm2-chat-20b-llama-exl2 --revision 6_5 --local-dir internlm2-chat-20b-llama-exl2-6.5 --local-dir-use-symlinks False
```
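
Once downloaded, the model loads with ExLlamaV2's Python API. This is a minimal sketch based on the example code shipped around v0.0.12; the folder path, prompt, and sampling settings are placeholders:

```python
# Sketch: load an exl2 quantization and generate a short completion.
# Assumes exllamav2 ~v0.0.12 and at least one CUDA GPU.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "internlm2-chat-20b-llama-exl2-6_5"  # folder from the steps above
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate the cache as weights are placed
model.load_autosplit(cache)               # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

print(generator.generate_simple("Hello, my name is", settings, 64))
```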

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski