Text Generation

bartowski committed 01b64b0 (1 parent: 3c0cb1e), main branch

Files changed (2):
  1. README.md +71 -0
  2. measurement.json +0 -0
README.md ADDED
---
license: apache-2.0
datasets:
- ai2_arc
- unalignment/spicy-3.1
- codeparrot/apps
- facebook/belebele
- boolq
- jondurbin/cinematika-v0.1
- drop
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- cais/mmlu
- Muennighoff/natural-instructions
- openbookqa
- piqa
- Vezora/Tested-22k-Python-Alpaca
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- spider
- squad_v2
- migtissera/Synthia-v1.3
- datasets/winogrande
- nvidia/HelpSteer
- Intel/orca_dpo_pairs
- unalignment/toxic-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- allenai/ultrafeedback_binarized_cleaned
quantized_by: bartowski
pipeline_tag: text-generation
---

## Exllama v2 Quantizations of bagel-dpo-7b-v0.1

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.10">turboderp's ExLlamaV2 v0.0.10</a> for quantization.

Each branch contains a quantization at a single bits per weight; the `main` branch contains only the measurement.json needed for further conversions.

Conversion was done using VMWareOpenInstruct.parquet as the calibration dataset.

Default arguments were used, except when the bits per weight is above 6.0: in that case the lm_head layer is quantized at 8 bits per weight instead of the default 6.

Original model: https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1

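The settings described above correspond roughly to an ExLlamaV2 `convert.py` invocation like the following. This is only a sketch under assumptions: the input/output paths, the scratch-directory name, and the 4.0 bpw target are placeholders, not taken from this card.

```shell
# Sketch of an ExLlamaV2 v0.0.10 conversion run (all paths are placeholders):
#   -i  original fp16 model directory    -o  scratch directory for conversion
#   -cf compiled output directory        -c  calibration dataset named above
#   -b  target bits per weight
# For targets above 6.0 bpw, -hb 8 would quantize lm_head at 8 bits.
python convert.py \
    -i ./bagel-dpo-7b-v0.1 \
    -o ./working \
    -cf ./bagel-dpo-7b-v0.1-4_0 \
    -c VMWareOpenInstruct.parquet \
    -b 4.0
```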
## Download instructions

With git:

```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/bagel-dpo-7b-v0.1-exl2
```

With the huggingface hub (credit to TheBloke for the instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you only need the measurement.json) to a folder called `bagel-dpo-7b-v0.1-exl2`:

```shell
mkdir bagel-dpo-7b-v0.1-exl2
huggingface-cli download bartowski/bagel-dpo-7b-v0.1-exl2 --local-dir bagel-dpo-7b-v0.1-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir bagel-dpo-7b-v0.1-exl2
huggingface-cli download bartowski/bagel-dpo-7b-v0.1-exl2 --revision 4_0 --local-dir bagel-dpo-7b-v0.1-exl2 --local-dir-use-symlinks False
```
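
If you want to script downloads of several bits-per-weight variants, the command above can be generated programmatically. A minimal sketch: the helper name is illustrative, and only the `main` and `4_0` branch names are taken from this card; other branch names would follow the same pattern.

```python
# Hypothetical helper that builds the huggingface-cli command shown above
# for a given branch of the quantized repo.
REPO = "bartowski/bagel-dpo-7b-v0.1-exl2"

def download_command(branch: str = "main") -> str:
    """Return the huggingface-cli invocation for one branch of REPO."""
    local_dir = REPO.split("/")[1]  # e.g. bagel-dpo-7b-v0.1-exl2
    cmd = (f"huggingface-cli download {REPO} "
           f"--local-dir {local_dir} --local-dir-use-symlinks False")
    if branch != "main":
        # Non-main branches need --revision to select the bpw variant.
        cmd += f" --revision {branch}"
    return cmd

print(download_command("4_0"))
```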
measurement.json ADDED (diff too large to render)