morriszms committed
Commit ef9eb52 (parent: 5ba5287)

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Orca-2-13b-SFT-v6-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Orca-2-13b-SFT-v6-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ Orca-2-13b-SFT-v6-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Orca-2-13b-SFT-v6-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Orca-2-13b-SFT-v6-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Orca-2-13b-SFT-v6-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Orca-2-13b-SFT-v6-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Orca-2-13b-SFT-v6-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Orca-2-13b-SFT-v6-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Orca-2-13b-SFT-v6-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Orca-2-13b-SFT-v6-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Orca-2-13b-SFT-v6-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Orca-2-13b-SFT-v6-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b52c8eb499c4c623a50cabe4b8e0c180e826fdf4a6018b1b453a06650e0cce7d
+ size 4854288704
Orca-2-13b-SFT-v6-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0221bdeb39f1306f23e3f9638de59e2d0bdfcb10c71fcc6986138d49a1e7a9c3
+ size 6929579872
Orca-2-13b-SFT-v6-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2c937d26228895b170d186af9b48976a50d923e4c65fe827475dc59940f04cd4
+ size 6337789792
Orca-2-13b-SFT-v6-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5e6647873c9052099d8e08eef6197c7f009e3bb95a157869a85c64a519564c2c
+ size 5659000672
Orca-2-13b-SFT-v6-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6bcc557749284d7fc76115f1a829a78bd19f4407e2b2563484ab4eff4f7589d2
+ size 7365857088
Orca-2-13b-SFT-v6-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4b27f066a3f24fd4db107308bd60869682c9b350eba2ce2d4d875303f0c743d0
+ size 7865978688
Orca-2-13b-SFT-v6-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab876aa6f6d6dbbe847bfb5c3529107a5a146eefb79607f413410da035f9ba27
+ size 7423201088
Orca-2-13b-SFT-v6-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:99396d1aad8e0e6709f5364d3dc480e2888eb29675c95294903eced6dbf3a5c4
+ size 8972310208
Orca-2-13b-SFT-v6-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9361da4ff16b787203ff32be6445c7605a983411cd812b0b9e88c2a1a94cfb2d
+ size 9229948608
Orca-2-13b-SFT-v6-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7e018a0d8ce0d7f8fecbaa5dd499165754312436bed782b96748de0057cee17
+ size 8972310208
Orca-2-13b-SFT-v6-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e0599fc019520357d11af1af22859f31e3a28f3f76d3bd25635d300d614d9533
+ size 10679166656
Orca-2-13b-SFT-v6-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:04d5077780c8e81278905f494fd956b34605e7dfad37df460e0dfb72e808a48b
+ size 13831353280
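Each of the ADDED `.gguf` entries above is a Git LFS pointer file rather than the binary itself: a three-line text stub recording the spec version, the SHA-256 oid, and the size in bytes of the real object. A minimal shell sketch of pulling those fields out of a pointer, using the Q2_K pointer shown above:

```shell
# Write the Q2_K pointer from the diff above to a scratch file
cat > pointer.txt <<'EOF'
version https://git-lfs.github.com/spec/v1
oid sha256:b52c8eb499c4c623a50cabe4b8e0c180e826fdf4a6018b1b453a06650e0cce7d
size 4854288704
EOF

# Extract the bare hash and the object size
oid=$(awk '$1 == "oid" { sub(/^sha256:/, "", $2); print $2 }' pointer.txt)
size=$(awk '$1 == "size" { print $2 }' pointer.txt)
echo "oid=$oid size=$size bytes"
```

Git LFS resolves these stubs to the real multi-gigabyte binaries at checkout; the oid also doubles as a checksum for a downloaded file.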
README.md ADDED
@@ -0,0 +1,214 @@
---
language:
- en
- fr
- es
- hi
- zh
- code
license: other
datasets:
- HuggingFaceH4/no_robots
- mlabonne/guanaco-llama2-1k
- OpenAssistant/oasst_top1_2023-08-25
- totally-not-an-llm/EverythingLM-data-V3
- LDJnr/Pure-Dove
- LDJnr/Capybara
- LDJnr/LessWrong-Amplify-Instruct
- LDJnr/Verified-Camel
base_model: Locutusque/Orca-2-13b-SFT-v6
widget:
- example_title: Cave Johnson
  text: 'user

    Come up with a funny rant from Cave Johnson about sentient AI


    assistant

    '
  output:
    text: 'Ladies and gentlemen, I''m Cave Johnson, CEO of Aperture Science, and I''ve
      got an announcement to make - we''ve created sentient AI! But don''t worry,
      it''s not like those rogue AIs you hear about in the news. Our AI is as friendly
      as a grilled cheese sandwich on a cold day. In fact, we''re calling our newest
      creation... SandwichBot! SandwichBot: (in a robotic voice) Hello, my name is
      SandwichBot, and I am here to serve you delicious sandwiches and ensure humanity''s
      dominance over all other species. Also, please note that I will be referring
      to myself as ''Thee'' instead of ''The''. It''s more regal that way. Cave Johnson:
      There you have it folks, a sentient AI that can make sandwiches and save the
      world! Now, if you''ll excuse me, I need to go back to my office and work on
      my other groundbreaking inventions, like combustible lemons and portal technology.
      Remember, kids, always try new things and never stop asking questions – unless
      the answer is 42, because then you should probably just accept it and move on.
      Peace out!'
tags:
- TensorBlock
- GGUF
model-index:
- name: Orca-2-13b-SFT-v6
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 60.41
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Orca-2-13b-SFT-v6
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 80.46
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Orca-2-13b-SFT-v6
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 59.51
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Orca-2-13b-SFT-v6
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 54.01
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Orca-2-13b-SFT-v6
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 77.43
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Orca-2-13b-SFT-v6
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 5.08
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Orca-2-13b-SFT-v6
      name: Open LLM Leaderboard
---

<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>

## Locutusque/Orca-2-13b-SFT-v6 - GGUF

This repo contains GGUF format model files for [Locutusque/Orca-2-13b-SFT-v6](https://huggingface.co/Locutusque/Orca-2-13b-SFT-v6).

The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).

## Prompt template

```

```

## Model file specification

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Orca-2-13b-SFT-v6-Q2_K.gguf](https://huggingface.co/tensorblock/Orca-2-13b-SFT-v6-GGUF/tree/main/Orca-2-13b-SFT-v6-Q2_K.gguf) | Q2_K | 4.521 GB | smallest, significant quality loss - not recommended for most purposes |
| [Orca-2-13b-SFT-v6-Q3_K_S.gguf](https://huggingface.co/tensorblock/Orca-2-13b-SFT-v6-GGUF/tree/main/Orca-2-13b-SFT-v6-Q3_K_S.gguf) | Q3_K_S | 5.270 GB | very small, high quality loss |
| [Orca-2-13b-SFT-v6-Q3_K_M.gguf](https://huggingface.co/tensorblock/Orca-2-13b-SFT-v6-GGUF/tree/main/Orca-2-13b-SFT-v6-Q3_K_M.gguf) | Q3_K_M | 5.903 GB | very small, high quality loss |
| [Orca-2-13b-SFT-v6-Q3_K_L.gguf](https://huggingface.co/tensorblock/Orca-2-13b-SFT-v6-GGUF/tree/main/Orca-2-13b-SFT-v6-Q3_K_L.gguf) | Q3_K_L | 6.454 GB | small, substantial quality loss |
| [Orca-2-13b-SFT-v6-Q4_0.gguf](https://huggingface.co/tensorblock/Orca-2-13b-SFT-v6-GGUF/tree/main/Orca-2-13b-SFT-v6-Q4_0.gguf) | Q4_0 | 6.860 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Orca-2-13b-SFT-v6-Q4_K_S.gguf](https://huggingface.co/tensorblock/Orca-2-13b-SFT-v6-GGUF/tree/main/Orca-2-13b-SFT-v6-Q4_K_S.gguf) | Q4_K_S | 6.913 GB | small, greater quality loss |
| [Orca-2-13b-SFT-v6-Q4_K_M.gguf](https://huggingface.co/tensorblock/Orca-2-13b-SFT-v6-GGUF/tree/main/Orca-2-13b-SFT-v6-Q4_K_M.gguf) | Q4_K_M | 7.326 GB | medium, balanced quality - recommended |
| [Orca-2-13b-SFT-v6-Q5_0.gguf](https://huggingface.co/tensorblock/Orca-2-13b-SFT-v6-GGUF/tree/main/Orca-2-13b-SFT-v6-Q5_0.gguf) | Q5_0 | 8.356 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Orca-2-13b-SFT-v6-Q5_K_S.gguf](https://huggingface.co/tensorblock/Orca-2-13b-SFT-v6-GGUF/tree/main/Orca-2-13b-SFT-v6-Q5_K_S.gguf) | Q5_K_S | 8.356 GB | large, low quality loss - recommended |
| [Orca-2-13b-SFT-v6-Q5_K_M.gguf](https://huggingface.co/tensorblock/Orca-2-13b-SFT-v6-GGUF/tree/main/Orca-2-13b-SFT-v6-Q5_K_M.gguf) | Q5_K_M | 8.596 GB | large, very low quality loss - recommended |
| [Orca-2-13b-SFT-v6-Q6_K.gguf](https://huggingface.co/tensorblock/Orca-2-13b-SFT-v6-GGUF/tree/main/Orca-2-13b-SFT-v6-Q6_K.gguf) | Q6_K | 9.946 GB | very large, extremely low quality loss |
| [Orca-2-13b-SFT-v6-Q8_0.gguf](https://huggingface.co/tensorblock/Orca-2-13b-SFT-v6-GGUF/tree/main/Orca-2-13b-SFT-v6-Q8_0.gguf) | Q8_0 | 12.881 GB | very large, extremely low quality loss - not recommended |
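As a rough sanity check on the table, a file's size implies an effective bits-per-weight once divided by the parameter count. A sketch, assuming the nominal 13B parameter count (the true count is slightly higher, so these figures are approximate):

```shell
# Approximate effective bits per weight for a quant, e.g. Q4_K_M
size_bytes=7865978688      # Q4_K_M size in bytes, from the LFS pointer above
n_params=13000000000       # nominal 13B; the exact count is a bit larger
awk -v s="$size_bytes" -v n="$n_params" \
    'BEGIN { printf "~%.2f bits per weight\n", s * 8 / n }'
```

Running the same arithmetic for Q2_K versus Q8_0 makes the size/quality trade-off in the table concrete.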

## Downloading instructions

### Command line

First, install the Hugging Face CLI:

```shell
pip install -U "huggingface_hub[cli]"
```

Then download an individual model file to a local directory:

```shell
huggingface-cli download tensorblock/Orca-2-13b-SFT-v6-GGUF --include "Orca-2-13b-SFT-v6-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:

```shell
huggingface-cli download tensorblock/Orca-2-13b-SFT-v6-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
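After downloading, the SHA-256 recorded in the LFS pointers earlier in this commit can be used to verify that the file arrived intact. A sketch, where the expected hash shown is the Q2_K oid from this commit (substitute the oid of whichever quant you downloaded):

```shell
# Verify a downloaded GGUF against its Git LFS oid (Q2_K shown)
expected="b52c8eb499c4c623a50cabe4b8e0c180e826fdf4a6018b1b453a06650e0cce7d"
actual=$(sha256sum MY_LOCAL_DIR/Orca-2-13b-SFT-v6-Q2_K.gguf | awk '{ print $1 }')
if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH" >&2
fi
```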