bartowski committed on
Commit
45a7a37
1 Parent(s): 3eaafe3

measurement.json

Files changed (2)
  1. README.md +83 -0
  2. measurement.json +0 -0
README.md ADDED
---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- jondurbin/airoboros-2.2.1
- Open-Orca/OpenOrca
- garage-bAInd/Open-Platypus
- ehartford/samantha-data
tags:
- llama-2
- code
license: llama2
model-index:
- name: SpeechlessCoder
  results:
  - task:
      type: text-generation
    dataset:
      type: openai_humaneval
      name: HumanEval
    metrics:
    - name: pass@1
      type: pass@1
      value: 34.146
      verified: false
quantized_by: bartowski
---

## Exllama v2 Quantizations of speechless-mistral-dolphin-orca-platypus-samantha-7b

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.

# The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)

Each branch contains a quantization at a different bits per weight, with the main branch containing only the measurement.json needed for further conversions.

Conversion was done using the default calibration dataset.

Default arguments were used, except when the bits per weight is above 6.0; in that case the lm_head layer is quantized at 8 bits per weight instead of the default 6.
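
For reference, a quantization like the ones in these branches could be reproduced from the shared measurement.json with exllamav2's convert.py, roughly as sketched below. The paths, the 6.5 bits-per-weight target, and the 8-bit head value are illustrative assumptions, not the exact invocation used for this repo:

```shell
# Sketch only (run from an exllamav2 checkout): reuse the provided measurement.json
# to compile a quantized model. Paths and the 6.5 bpw / 8-bit head values are examples.
python convert.py \
    -i /path/to/speechless-mistral-dolphin-orca-platypus-samantha-7b \
    -o /path/to/exl2-working-dir \
    -cf /path/to/exl2-output-6_5bpw \
    -m measurement.json \
    -b 6.5 \
    -hb 8
```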

Original model: https://huggingface.co/uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b

<a href="https://huggingface.co/bartowski/speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2/tree/8_0">8.0 bits per weight</a>

<a href="https://huggingface.co/bartowski/speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2/tree/6_5">6.5 bits per weight</a>

<a href="https://huggingface.co/bartowski/speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2/tree/5_0">5.0 bits per weight</a>

<a href="https://huggingface.co/bartowski/speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2/tree/4_0">4.0 bits per weight</a>

<a href="https://huggingface.co/bartowski/speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2/tree/3_5">3.5 bits per weight</a>
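
To check which bits-per-weight branches are available without downloading anything, the branches can also be listed directly with git, for example:

```shell
# List the quantization branches (one branch per bits-per-weight option)
git ls-remote --heads https://huggingface.co/bartowski/speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2
```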

## Download instructions

With git:

```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```
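
Optionally, if your version of `huggingface_hub` supports it, downloads can be sped up with the `hf_transfer` backend:

```shell
# Optional: enable the hf_transfer backend for faster downloads
pip3 install hf_transfer
export HF_HUB_ENABLE_HF_TRANSFER=1
```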

To download the `main` branch (only useful if you only care about measurement.json) to a folder called `speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2`:

```shell
mkdir speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2
huggingface-cli download bartowski/speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2 --local-dir speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2 --local-dir-use-symlinks False
```
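
If you only want the measurement.json itself (for example, to run your own conversions), recent versions of `huggingface-cli` can fetch a single file by naming it explicitly, e.g.:

```shell
# Download only the measurement.json from the main branch
huggingface-cli download bartowski/speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2 measurement.json --local-dir speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2 --local-dir-use-symlinks False
```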

To download from a different branch, add the `--revision` parameter:

```shell
mkdir speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2
huggingface-cli download bartowski/speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2 --revision 4_0 --local-dir speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2 --local-dir-use-symlinks False
```
measurement.json ADDED
The diff for this file is too large to render.