Commit a99863a (1 parent: 3fbac40), committed by bartowski

measurement.json

Files changed (2)
  1. README.md +94 -0
  2. measurement.json +0 -0
README.md ADDED

---
license: gpl-3.0
language:
- en
- zh
- ja
- de
datasets:
- JosephusCheung/GuanacoDataset
- meta-math/MetaMathQA
- jondurbin/airoboros-3.1
- WizardLM/WizardLM_evol_instruct_V2_196k
- RyokoAI/ShareGPT52K
- RyokoAI/Fandom23K
- milashkaarshif/MoeGirlPedia_wikitext_raw_archive
- wikipedia
- wiki_lingua
- garage-bAInd/Open-Platypus
- LDJnr/Puffin
- BAAI/COIG
- TigerResearch/tigerbot-zhihu-zh-10k
- liwu/MNBVC
- teknium/openhermes
- CausalLM/Refined-Anime-Text
- microsoft/orca-math-word-problems-200k
- m-a-p/CodeFeedback-Filtered-Instruction
quantized_by: bartowski
pipeline_tag: text-generation
---

## Exllama v2 Quantizations of 35b-beta-long

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.20">turboderp's ExLlamaV2 v0.0.20</a> for quantization.

<b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).</b>

Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.

Conversion was done using the default calibration dataset.

Default arguments were used, except when the bits per weight is above 6.0; in that case the lm_head layer is quantized at 8 bits per weight instead of the default 6.
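
For anyone wanting to reproduce or extend these quants, a conversion along these lines can be run with ExLlamaV2's `convert.py` — a minimal sketch with placeholder paths, assuming the flags match the v0.0.20 script (`-b` sets bits per weight, `-hb` the lm_head bits, `-m` reuses this repo's measurement.json); the exact invocation used here isn't published:

```shell
# Hypothetical example paths; adjust to your local copies.
# Reusing measurement.json skips the (slow) measurement pass.
python convert.py \
  -i /path/to/CausalLM_35b-beta-long \
  -o /path/to/working_dir \
  -cf /path/to/35b-beta-long-exl2-6_5 \
  -m measurement.json \
  -b 6.5 \
  -hb 8
```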

Original model: https://huggingface.co/CausalLM/35b-beta-long

<a href="https://huggingface.co/bartowski/35b-beta-long-exl2/tree/8_0">8.0 bits per weight</a>

<a href="https://huggingface.co/bartowski/35b-beta-long-exl2/tree/6_5">6.5 bits per weight</a>

<a href="https://huggingface.co/bartowski/35b-beta-long-exl2/tree/5_0">5.0 bits per weight</a>

<a href="https://huggingface.co/bartowski/35b-beta-long-exl2/tree/4_25">4.25 bits per weight</a>

<a href="https://huggingface.co/bartowski/35b-beta-long-exl2/tree/3_5">3.5 bits per weight</a>

<a href="https://huggingface.co/bartowski/35b-beta-long-exl2/tree/3_0">3.0 bits per weight</a>

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/35b-beta-long-exl2
```
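
Note that Hugging Face repos store the weight files via Git LFS, so make sure LFS is enabled before cloning (a one-time setup step, assuming git-lfs is already installed on your system):

```shell
# Without LFS enabled, the clone contains small pointer files instead of the actual weights.
git lfs install
```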

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you only care about the measurement.json) to a folder called `35b-beta-long-exl2`:

```shell
mkdir 35b-beta-long-exl2
huggingface-cli download bartowski/35b-beta-long-exl2 --local-dir 35b-beta-long-exl2 --local-dir-use-symlinks False
```
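
If you only need the measurement.json itself (for example, to run your own conversions), `huggingface-cli` can also fetch a single file by name — a sketch, assuming a reasonably recent huggingface_hub release that supports per-file downloads:

```shell
# Download just measurement.json from the main branch into the current directory.
huggingface-cli download bartowski/35b-beta-long-exl2 measurement.json --local-dir .
```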

To download from a different branch, add the `--revision` parameter:

Linux:

```shell
mkdir 35b-beta-long-exl2-6_5
huggingface-cli download bartowski/35b-beta-long-exl2 --revision 6_5 --local-dir 35b-beta-long-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which sometimes has trouble with `_` in folder names):

```shell
mkdir 35b-beta-long-exl2-6.5
huggingface-cli download bartowski/35b-beta-long-exl2 --revision 6_5 --local-dir 35b-beta-long-exl2-6.5 --local-dir-use-symlinks False
```
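
After downloading a branch, a quick smoke test can be run with the `test_inference.py` script bundled in the ExLlamaV2 repo — a sketch, assuming the 6.5 bpw files were downloaded to the folder created above:

```shell
# Load the quantized model and generate a short completion as a sanity check.
python test_inference.py -m 35b-beta-long-exl2-6_5 -p "Hello, my name is"
```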
measurement.json ADDED
The diff for this file is too large to render.