---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
datasets:
- ai2_arc
- unalignment/spicy-3.1
- codeparrot/apps
- facebook/belebele
- boolq
- jondurbin/cinematika-v0.1
- drop
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- cais/mmlu
- Muennighoff/natural-instructions
- openbookqa
- piqa
- Vezora/Tested-22k-Python-Alpaca
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- spider
- squad_v2
- migtissera/Synthia-v1.3
- winogrande
- nvidia/HelpSteer
- Intel/orca_dpo_pairs
- unalignment/toxic-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- allenai/ultrafeedback_binarized_cleaned
- Squish42/bluemoon-fandom-1-1-rp-cleaned
- LDJnr/Capybara
- JULIELab/EmoBank
- kingbri/PIPPA-shareGPT
quantized_by: bartowski
pipeline_tag: text-generation
---

## Exllama v2 Quantizations of bagel-dpo-34b-v0.2

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.

Each branch contains a quantization at an individual bits per weight (bpw); the `main` branch contains only the measurement.json used for further conversions.

Conversion was done using the default calibration dataset.

Default arguments were used, except when the bits per weight is above 6.0: at that point the lm_head layer is quantized at 8 bits per weight instead of the default 6.

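As an illustration of how such a branch is produced (not the exact invocation used for this repo; paths and the 6.5 bpw target are placeholders), ExLlamaV2's `convert.py` takes the target bits per weight via `-b`, the head-layer bits via `-hb`, and can reuse an existing measurement via `-m`:

```shell
# Illustrative ExLlamaV2 conversion: -i input model dir, -o working dir,
# -cf compiled output dir, -m reuse an existing measurement.json,
# -b target bits per weight, -hb bits for the lm_head layer.
# Paths and the 6.5 bpw value are placeholders, not this repo's settings.
python convert.py \
  -i ./bagel-dpo-34b-v0.2 \
  -o ./working \
  -cf ./bagel-dpo-34b-v0.2-6.5bpw-exl2 \
  -m ./measurement.json \
  -b 6.5 \
  -hb 8
```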
Original model: https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2

## Download instructions

With git:

```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/bagel-dpo-34b-v0.2-exl2
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you only want the measurement.json) to a folder called `bagel-dpo-34b-v0.2-exl2`:

```shell
mkdir bagel-dpo-34b-v0.2-exl2
huggingface-cli download bartowski/bagel-dpo-34b-v0.2-exl2 --local-dir bagel-dpo-34b-v0.2-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir bagel-dpo-34b-v0.2-exl2
huggingface-cli download bartowski/bagel-dpo-34b-v0.2-exl2 --revision 4_0 --local-dir bagel-dpo-34b-v0.2-exl2 --local-dir-use-symlinks False
```
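
If you prefer the Python API over the CLI, the same download can be done with `snapshot_download` from huggingface_hub (a sketch; the helper name and folder are just examples):

```python
# Sketch: download one branch of the repo via the huggingface_hub Python
# API instead of huggingface-cli. Requires `pip3 install huggingface-hub`.
from huggingface_hub import snapshot_download


def download_branch(revision: str, local_dir: str) -> str:
    """Download the given branch (e.g. "4_0" or "main") into local_dir."""
    return snapshot_download(
        repo_id="bartowski/bagel-dpo-34b-v0.2-exl2",
        revision=revision,   # branch name encodes the bits per weight
        local_dir=local_dir,
    )


if __name__ == "__main__":
    # The main branch holds only measurement.json, so this stays small.
    download_branch("main", "bagel-dpo-34b-v0.2-exl2")
```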