---
pipeline_tag: text-generation
quantized_by: bartowski
---

## Exllama v2 Quantizations of internlm2-chat-7b-sft-llama

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.

# The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)

Each branch contains a different bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.

Conversion was done using the default calibration dataset.

Default arguments were used, except when the bits per weight is above 6.0; in that case the lm_head layer is quantized at 8 bits per weight instead of the default 6.
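
For reference, a conversion along these lines can be reproduced with ExLlamaV2's `convert.py`. This is a minimal sketch, not the exact command used for these quants; the flag names (`-i`, `-o`, `-cf`, `-b`, `-hb`, `-m`) are assumptions based on the v0.0.11-era script, and all paths are placeholders, so check `python convert.py -h` against your checkout:

```shell
# Quantize the fp16 model to 6.5 bpw, bumping lm_head to 8 bits
# (as described above for targets over 6.0 bpw). Passing an existing
# measurement.json skips the measurement pass and goes straight to quantization.
python convert.py \
  -i /path/to/internlm2-chat-7b-sft-llama \
  -o /path/to/working-dir \
  -cf /path/to/output-6_5bpw \
  -m measurement.json \
  -b 6.5 \
  -hb 8
```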

Original model: https://huggingface.co/internlm/internlm2-chat-7b-sft

<a href="https://huggingface.co/bartowski/internlm2-chat-7b-sft-llama-exl2/tree/8_0">8.0 bits per weight</a>

<a href="https://huggingface.co/bartowski/internlm2-chat-7b-sft-llama-exl2/tree/6_5">6.5 bits per weight</a>

<a href="https://huggingface.co/bartowski/internlm2-chat-7b-sft-llama-exl2/tree/5_0">5.0 bits per weight</a>

<a href="https://huggingface.co/bartowski/internlm2-chat-7b-sft-llama-exl2/tree/4_0">4.0 bits per weight</a>

<a href="https://huggingface.co/bartowski/internlm2-chat-7b-sft-llama-exl2/tree/3_5">3.5 bits per weight</a>

## Download instructions

With git:

```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/internlm2-chat-7b-sft-llama-exl2
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you only care about the measurement.json) to a folder called `internlm2-chat-7b-sft-llama-exl2`:

```shell
mkdir internlm2-chat-7b-sft-llama-exl2
huggingface-cli download bartowski/internlm2-chat-7b-sft-llama-exl2 --local-dir internlm2-chat-7b-sft-llama-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir internlm2-chat-7b-sft-llama-exl2
huggingface-cli download bartowski/internlm2-chat-7b-sft-llama-exl2 --revision 4_0 --local-dir internlm2-chat-7b-sft-llama-exl2 --local-dir-use-symlinks False
```
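
If you only need the measurement.json from `main` (for example, to run your own conversions), recent versions of `huggingface-cli` let you name individual files to download; a sketch, assuming your installed version supports positional filename arguments:

```shell
# Fetch just measurement.json from the main branch into a local folder
huggingface-cli download bartowski/internlm2-chat-7b-sft-llama-exl2 measurement.json --local-dir internlm2-chat-7b-sft-llama-exl2
```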