bartowski committed on
Commit cf8711e · verified · 1 Parent(s): 5d6dbc8

measurement.json

Files changed (2)
  1. README.md +61 -0
  2. measurement.json +0 -0
README.md ADDED
@@ -0,0 +1,61 @@
+ ---
+ license: mit
+ language:
+ - en
+ quantized_by: bartowski
+ pipeline_tag: text-generation
+ ---
+
+ ## Exllama v2 Quantizations of FusionNet_7Bx2_MoE_14B
+
+ Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.
+
+ # The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)
+
+ Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
+
+ Conversion was done using the default calibration dataset.
+
+ Default arguments were used, except that when the bits per weight is above 6.0, the lm_head layer is quantized at 8 bits per weight instead of the default 6.
+
+ Original model: https://huggingface.co/TomGrc/FusionNet_7Bx2_MoE_14B
+
+ <a href="https://huggingface.co/bartowski/FusionNet_7Bx2_MoE_14B-exl2/tree/8_0">8.0 bits per weight</a>
+
+ <a href="https://huggingface.co/bartowski/FusionNet_7Bx2_MoE_14B-exl2/tree/6_5">6.5 bits per weight</a>
+
+ <a href="https://huggingface.co/bartowski/FusionNet_7Bx2_MoE_14B-exl2/tree/5_0">5.0 bits per weight</a>
+
+ <a href="https://huggingface.co/bartowski/FusionNet_7Bx2_MoE_14B-exl2/tree/4_0">4.0 bits per weight</a>
+
+ <a href="https://huggingface.co/bartowski/FusionNet_7Bx2_MoE_14B-exl2/tree/3_5">3.5 bits per weight</a>
+
+ ## Download instructions
+
+ With git:
+
+ ```shell
+ git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/FusionNet_7Bx2_MoE_14B-exl2
+ ```
+
+ With huggingface hub (credit to TheBloke for instructions):
+
+ ```shell
+ pip3 install huggingface-hub
+ ```
+
+ To download the `main` branch (only useful if you only care about measurement.json) to a folder called `FusionNet_7Bx2_MoE_14B-exl2`:
+
+ ```shell
+ mkdir FusionNet_7Bx2_MoE_14B-exl2
+ huggingface-cli download bartowski/FusionNet_7Bx2_MoE_14B-exl2 --local-dir FusionNet_7Bx2_MoE_14B-exl2 --local-dir-use-symlinks False
+ ```
+
+ To download from a different branch, add the `--revision` parameter:
+
+ ```shell
+ mkdir FusionNet_7Bx2_MoE_14B-exl2
+ huggingface-cli download bartowski/FusionNet_7Bx2_MoE_14B-exl2 --revision 4_0 --local-dir FusionNet_7Bx2_MoE_14B-exl2 --local-dir-use-symlinks False
+ ```
measurement.json ADDED
The diff for this file is too large to render. See raw diff
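
The `huggingface-cli` steps in the README above can also be driven from Python via `huggingface_hub.snapshot_download`, which accepts the same repo id and branch name as `--revision`. This is a minimal sketch, assuming a reasonably recent `huggingface_hub`; the `default_dir` naming helper is my own addition, not part of the repo's instructions:

```python
def default_dir(revision: str) -> str:
    # Branch names double as folder suffixes, e.g. "4_0" -> "...-exl2-4_0"
    return f"FusionNet_7Bx2_MoE_14B-exl2-{revision}"

def download_quant(revision: str = "4_0") -> str:
    # Lazy import so the naming helper works even without huggingface_hub installed
    from huggingface_hub import snapshot_download

    # Downloads the chosen branch (e.g. "4_0", "6_5") into its own folder
    # and returns the local path of the snapshot.
    return snapshot_download(
        repo_id="bartowski/FusionNet_7Bx2_MoE_14B-exl2",
        revision=revision,
        local_dir=default_dir(revision),
    )
```

Calling `download_quant("6_5")` would fetch the 6.5 bits-per-weight branch into `FusionNet_7Bx2_MoE_14B-exl2-6_5`; using one folder per branch avoids mixing files from different quantizations.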