Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


synapsellm-7b-mistral-v0.4-preview2 - GGUF
- Model creator: https://huggingface.co/WebraftAI/
- Original model: https://huggingface.co/WebraftAI/synapsellm-7b-mistral-v0.4-preview2/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [synapsellm-7b-mistral-v0.4-preview2.Q2_K.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q2_K.gguf) | Q2_K | 2.53GB |
| [synapsellm-7b-mistral-v0.4-preview2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [synapsellm-7b-mistral-v0.4-preview2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [synapsellm-7b-mistral-v0.4-preview2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q3_K.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q3_K.gguf) | Q3_K | 3.28GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [synapsellm-7b-mistral-v0.4-preview2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q4_0.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q4_0.gguf) | Q4_0 | 3.83GB |
| [synapsellm-7b-mistral-v0.4-preview2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q4_K.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q4_K.gguf) | Q4_K | 4.07GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q4_1.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q4_1.gguf) | Q4_1 | 4.24GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q5_0.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q5_0.gguf) | Q5_0 | 4.65GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q5_K.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q5_K.gguf) | Q5_K | 4.78GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q5_1.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q5_1.gguf) | Q5_1 | 5.07GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q6_K.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q6_K.gguf) | Q6_K | 5.53GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q8_0.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q8_0.gguf) | Q8_0 | 7.17GB |
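
Each entry is a standalone GGUF file; lower-bit quants trade some output quality for a smaller memory footprint. A single quant can be fetched programmatically with `huggingface_hub` (a minimal sketch; any download method works):

```python
# Sketch: download one quant file from this repo with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf",
    filename="synapsellm-7b-mistral-v0.4-preview2.Q4_K_M.gguf",
)
print(path)  # local path to the downloaded GGUF file
```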

Original model description:
---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- code
model-index:
- name: synapsellm-7b-mistral-v0.4-preview2
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 52.99
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 74.54
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 54.6
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 53.79
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 73.95
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 25.7
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
      name: Open LLM Leaderboard
---

# SynapseLLM:

SynapseLLM, developed by WebraftAI, is a series of large language models designed to build robust, generalized, and decentralized information systems. This repository houses the SynapseLLM finetune of Mistral. The finetuning was performed on a custom dataset, limited in scope, focused on code and general question-answering scenarios, showcasing the model's versatility and applicability within those domains.

## Model Details
**SynapseLLM:**
- Parameters: 7B
- Learning rate: 2e-4
- Adapter used: QLoRA
- Precision: float16
- Batch size: 32
- Maximum gradient norm: 0.3
- Optimizer: paged_adamw_32bit
- Warmup ratio: 0.03
- Steps (trained): 150
- Epochs (trained): 1
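
For orientation, here is a minimal sketch of how these hyperparameters would map onto a QLoRA run with `transformers` and `peft`. The original training script, dataset pipeline, and LoRA rank/target modules are not published, so those parts are assumptions for illustration, not the authors' actual code.

```python
# Hypothetical QLoRA setup mirroring the hyperparameters listed above.
# LoRA rank/alpha and the batch-size interpretation are assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit (the usual QLoRA recipe).
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", quantization_config=bnb, device_map="auto"
)
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM"))  # rank/alpha: assumed defaults

args = TrainingArguments(
    output_dir="synapsellm-qlora",
    learning_rate=2e-4,              # "Learning rate: 2e-4"
    per_device_train_batch_size=32,  # "Batch size: 32" (could be an effective batch size)
    max_grad_norm=0.3,               # "Maximum gradient norm: 0.3"
    optim="paged_adamw_32bit",       # "Optimizer: paged_adamw_32bit"
    warmup_ratio=0.03,               # "Warmup ratio: 0.03"
    max_steps=150,                   # "Steps (trained): 150"
    num_train_epochs=1,              # "Epochs (trained): 1" (max_steps takes precedence)
    fp16=True,                       # "Precision: float16"
)
# `args` would then be passed to a Trainer (e.g. trl's SFTTrainer) with the dataset.
```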
### Model Description

This is a 7B-parameter, decoder-only, transformer-based model finetuned on chat Q/A and code instructions. It is a preview finetune of Mistral 7B v0.1 on a sample dataset of 770k rows, comprising 361k Maths Instruct Q/A, 143k GPT-3.5 Q/A, 140k general code, 63k Python code, and 54k general Q/A (generated through GPT-4); each row contains one instruction and one response. The trained adapters have been merged into the full model, so it can be loaded directly through the `transformers` library.

- **Developed by:** WebraftAI
- **Funded by:** Webraft Cloud
- **Shared by:** WebraftAI
- **Model type:** Decoder-only Transformer
- **Language(s):** English only
- **License:** Apache 2.0
- **Finetuned from model:** Mistral-7B-v0.1

### Prompt format:
This model follows the same prompt format as Mistral Instruct 7B v0.1. A sample prompt is given below:
```text
<s>[INST] Hello, how are you? [/INST]
```
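
If the repository ships a chat template, the same string can be produced with the tokenizer rather than by hand. A small sketch, assuming a Mistral-style chat template is present in the tokenizer config:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("WebraftAI/synapsellm-7b-mistral-v0.4-preview2")

# Build the [INST] ... [/INST] prompt via the chat template, if one is defined.
messages = [{"role": "user", "content": "Hello, how are you?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # expected: "<s>[INST] Hello, how are you? [/INST]"
```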

### Example Code:
Here's example code using the `transformers` library provided by HF.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("WebraftAI/synapsellm-7b-mistral-v0.4-preview2")
model = AutoModelForCausalLM.from_pretrained("WebraftAI/synapsellm-7b-mistral-v0.4-preview2")

# Mistral-instruct style prompt (see "Prompt format" above).
prompt = "<s>[INST] Hello! [/INST] "

device = "cuda"

# Tokenize the prompt and move both the inputs and the model to the GPU.
model_inputs = tokenizer([prompt], return_tensors="pt").to(device)
model.to(device)

# Sample up to 100 new tokens and decode the full sequence.
generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True)
print(tokenizer.batch_decode(generated_ids)[0])
```
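
The snippet above loads the original full-precision model. To run one of the GGUF quants from this repository instead, a sketch with `llama-cpp-python` (an assumed, separately installed dependency) could look like this:

```python
# Sketch: load a GGUF quant from this repo with llama-cpp-python
# (`pip install llama-cpp-python huggingface_hub` is assumed).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf",
    filename="synapsellm-7b-mistral-v0.4-preview2.Q4_K_M.gguf",
    n_ctx=4096,  # context window; adjust to your hardware
)

out = llm("<s>[INST] Hello! [/INST] ", max_tokens=100)
print(out["choices"][0]["text"])
```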

### Model Bias:
This model has some known bias areas, discussed below:
- The model might output factually incorrect information.
- The model does not follow system prompts.
- The model has no built-in memory; researchers can experiment with feeding it conversation history.
- The model is trained on mixed datasets, so it can produce biased information or claim to be a GPT model.


# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_WebraftAI__synapsellm-7b-mistral-v0.4-preview2).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 55.93 |
| AI2 Reasoning Challenge (25-Shot) | 52.99 |
| HellaSwag (10-Shot)               | 74.54 |
| MMLU (5-Shot)                     | 54.60 |
| TruthfulQA (0-shot)               | 53.79 |
| Winogrande (5-shot)               | 73.95 |
| GSM8k (5-shot)                    | 25.70 |