---
pipeline_tag: text-generation
quantized_by: bartowski
---

## Exllama v2 Quantizations of internlm2-chat-20b-sft-llama

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.

# The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)

Each branch contains a quantization at a different bits per weight; the main branch contains only the measurement.json, which can be reused for further conversions.

Original model: https://huggingface.co/internlm/internlm2-chat-20b-sft

Model Size: 20B

| Branch | Bits | lm_head bits | Dataset | Size | Description |
| ------ | ---- | ------------ | ------- | ------- | ----------- |
| [6_5](https://huggingface.co/Bartowski/internlm2-chat-20b-sft-llama-exl2/tree/6_5) | 6.5 | 8.0 | Default | 21.0 GB | Near-unquantized performance at vastly reduced size, **recommended**. |
| [4_25](https://huggingface.co/Bartowski/internlm2-chat-20b-sft-llama-exl2/tree/4_25) | 4.25 | 6.0 | Default | 15.2 GB | GPTQ-equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/Bartowski/internlm2-chat-20b-sft-llama-exl2/tree/3_5) | 3.5 | 6.0 | Default | 13.8 GB | Lower quality, only use if you have to. |
| [3_0](https://huggingface.co/Bartowski/internlm2-chat-20b-sft-llama-exl2/tree/3_0) | 3.0 | 6.0 | Default | 12.5 GB | Very low quality. Usable on 12 GB if you reduce context or use an 8-bit cache. |

All VRAM requirements are estimated at 16k context; for 32k context, add ~2 GB.
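
To see what "reduce context or use an 8-bit cache" looks like in practice, here is a minimal loading-and-generation sketch. It is not part of the original card: it assumes the `exllamav2` Python package (the same project used for the quantization) and that the 6_5 branch was downloaded to a local folder named as in the download instructions below. The exllamav2 API has changed across versions, so treat this as illustrative rather than definitive.

```python
# Minimal ExLlamaV2 sketch (assumptions: exllamav2 installed, model downloaded
# to ./internlm2-chat-20b-sft-llama-exl2-6_5 -- both paths are illustrative).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache_8bit, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "internlm2-chat-20b-sft-llama-exl2-6_5"
config.prepare()
config.max_seq_len = 16384  # lower context to cut K/V-cache VRAM

model = ExLlamaV2(config)
cache = ExLlamaV2Cache_8bit(model, lazy=True)  # 8-bit cache roughly halves cache VRAM
model.load_autosplit(cache)  # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

print(generator.generate_simple("Hello, my name is", settings, num_tokens=64))
```

Lowering `max_seq_len` and switching to the 8-bit cache are the two knobs the table above refers to for fitting the 3_0 branch on a 12 GB card.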

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/internlm2-chat-20b-sft-llama-exl2 internlm2-chat-20b-sft-llama-exl2-6_5
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you just want the measurement.json) to a folder called `internlm2-chat-20b-sft-llama-exl2`:

```shell
mkdir internlm2-chat-20b-sft-llama-exl2
huggingface-cli download bartowski/internlm2-chat-20b-sft-llama-exl2 --local-dir internlm2-chat-20b-sft-llama-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir internlm2-chat-20b-sft-llama-exl2-6_5
huggingface-cli download bartowski/internlm2-chat-20b-sft-llama-exl2 --revision 6_5 --local-dir internlm2-chat-20b-sft-llama-exl2-6_5 --local-dir-use-symlinks False
```
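
The same download can also be scripted with the `huggingface_hub` Python API. This is a sketch rather than part of the original instructions; it reuses the repo, branch, and folder names from the CLI examples above:

```python
# Download a specific quantization branch via the Python API
# (assumption: huggingface-hub installed as shown above).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/internlm2-chat-20b-sft-llama-exl2",
    revision="6_5",  # omit to get the main branch (measurement.json only)
    local_dir="internlm2-chat-20b-sft-llama-exl2-6_5",
    local_dir_use_symlinks=False,
)
```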