bartowski committed on
Commit
9433533
1 Parent(s): 466037a

measurement.json

Files changed (2)
  1. README.md +77 -0
  2. measurement.json +0 -0
README.md ADDED
@@ -0,0 +1,77 @@
---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
inference: false
quantized_by: bartowski
pipeline_tag: text-generation
---

## Exllama v2 Quantizations of mixtral-instruct-0.1-laser

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.13">turboderp's ExLlamaV2 v0.0.13</a> for quantization.

## The "main" branch contains only the measurement.json; download one of the other branches for the model (see below)

Each branch contains a single bits-per-weight quantization, and the main branch contains only the measurement.json, which can be reused for further conversions.

Conversion was done using the default calibration dataset.

Default arguments were used, except that when the bits per weight is above 6.0, the lm_head layer is quantized at 8 bits per weight instead of the default 6.

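For reference, a conversion like this can be reproduced with ExLlamaV2's convert.py. The sketch below is illustrative, not the exact command used for these quants: the paths are placeholders, `-m` reuses the measurement.json from the main branch to skip the measurement pass, and `-hb 8` reflects the 8-bit lm_head used above 6.0 bits per weight:

```shell
# Sketch only: convert to 6.5 bpw with an 8-bit lm_head.
# Input/output paths are placeholders for your own setup.
python convert.py \
  -i /path/to/mixtral-instruct-0.1-laser \
  -o /path/to/working-dir \
  -cf /path/to/mixtral-instruct-0.1-laser-exl2-6_5 \
  -m measurement.json \
  -b 6.5 \
  -hb 8
```
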
Original model: https://huggingface.co/cognitivecomputations/mixtral-instruct-0.1-laser

29
+ <a href="https://huggingface.co/bartowski/mixtral-instruct-0.1-laser-exl2/tree/6_5">6.5 bits per weight</a>
30
+
31
+ <a href="https://huggingface.co/bartowski/mixtral-instruct-0.1-laser-exl2/tree/5_0">5.0 bits per weight</a>
32
+
33
+ <a href="https://huggingface.co/bartowski/mixtral-instruct-0.1-laser-exl2/tree/4_25">4.25 bits per weight</a>
34
+
35
+ <a href="https://huggingface.co/bartowski/mixtral-instruct-0.1-laser-exl2/tree/3_75">3.75 bits per weight</a>
36
+
37
+ <a href="https://huggingface.co/bartowski/mixtral-instruct-0.1-laser-exl2/tree/3_5">3.5 bits per weight</a>
38
+
39
+ <a href="https://huggingface.co/bartowski/mixtral-instruct-0.1-laser-exl2/tree/3_0">3.0 bits per weight</a>
40
+
41
+
## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/mixtral-instruct-0.1-laser-exl2
```

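Note that Hugging Face stores the model weights with Git LFS, so git-lfs should be set up before cloning or the large tensor files won't be fetched (this is a general note about Hugging Face repos, not part of the original instructions):

```shell
# One-time Git LFS setup, assuming git-lfs is already installed
# via your package manager.
git lfs install
```
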
With huggingface-hub (credit to TheBloke for the instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you just want the measurement.json) to a folder called `mixtral-instruct-0.1-laser-exl2`:

```shell
mkdir mixtral-instruct-0.1-laser-exl2
huggingface-cli download bartowski/mixtral-instruct-0.1-laser-exl2 --local-dir mixtral-instruct-0.1-laser-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

Linux:

```shell
mkdir mixtral-instruct-0.1-laser-exl2-6_5
huggingface-cli download bartowski/mixtral-instruct-0.1-laser-exl2 --revision 6_5 --local-dir mixtral-instruct-0.1-laser-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which apparently sometimes doesn't like `_` in folder names):

```shell
mkdir mixtral-instruct-0.1-laser-exl2-6.5
huggingface-cli download bartowski/mixtral-instruct-0.1-laser-exl2 --revision 6_5 --local-dir mixtral-instruct-0.1-laser-exl2-6.5 --local-dir-use-symlinks False
```
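
If downloads are slow, the optional `hf_transfer` backend can speed them up; this isn't part of the original instructions, but it is a standard huggingface-hub feature:

```shell
# Optional: enable the hf_transfer download backend (Linux/macOS syntax).
pip3 install hf_transfer
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download bartowski/mixtral-instruct-0.1-laser-exl2 --revision 6_5 --local-dir mixtral-instruct-0.1-laser-exl2-6_5 --local-dir-use-symlinks False
```
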
measurement.json ADDED
The diff for this file is too large to render. See raw diff