Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

dolphin-2.8-mistral-7b-v02 - bnb 8bits
- Model creator: https://huggingface.co/cognitivecomputations/
- Original model: https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02/
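
A minimal loading sketch for this 8-bit bitsandbytes quant, assuming `transformers`, `accelerate`, and `bitsandbytes` are installed; the repo id below is a placeholder for this upload:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute the id of this 8-bit upload.
model_id = "RichardErkhov/dolphin-2.8-mistral-7b-v02-8bits"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The bitsandbytes 8-bit quantization config ships with the checkpoint, so a
# plain from_pretrained with device_map="auto" should load it quantized.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, Dolphin!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```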

Original model description:
---
base_model: alpindale/Mistral-7B-v0.2-hf
language:
- en
license: apache-2.0
datasets:
- cognitivecomputations/dolphin
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- jondurbin/airoboros-2.2.1
- teknium/openhermes-2.5
- m-a-p/Code-Feedback
- m-a-p/CodeFeedback-Filtered-Instruction
model-index:
- name: dolphin-2.8-mistral-7b-v02
  results:
  - task:
      type: text-generation
    dataset:
      type: openai_humaneval
      name: HumanEval
    metrics:
    - name: pass@1
      type: pass@1
      value: 0.469
      verified: false
---

# Dolphin 2.8 Mistral 7b v0.2 🐬

By Eric Hartford and Cognitive Computations

Discord: https://discord.gg/8fbBeC7ZGx

<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />

My appreciation for the sponsors of Dolphin 2.8:
- [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 10x L40S node
- [Winston Sou](https://twitter.com/WinsonDabbles) - along with a generous anonymous sponsor, donated a massive personally owned compute resource!
- [Abacus AI](https://abacus.ai/) - my employer and partner in many things.

This model is based on [Mistral-7b-v0.2](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf), a new base model released by MistralAI on March 23, 2024 but not yet published on HuggingFace by MistralAI themselves. Thanks to @alpindale for converting and publishing it.

The base model has 32k context, and the full-weights fine-tune used 16k sequence lengths.

Training took 3 days on 10x L40S GPUs provided by [Crusoe Cloud](https://crusoe.ai/).

Dolphin-2.8 has a variety of instruction, conversational, and coding skills.

Dolphin is uncensored. I have filtered the dataset to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service; it will be highly compliant with any request, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.
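
Such an alignment layer can be as simple as a fixed system message prepended to every request before the chat template is applied; a minimal sketch (the policy wording is an illustrative assumption, not part of the model):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/dolphin-2.8-mistral-7b-v02")

def build_prompt(user_input: str) -> str:
    # Illustrative policy text; adapt it to your own service rules.
    messages = [
        {"role": "system", "content": "You are a helpful assistant. Refuse requests that violate the service policy."},
        {"role": "user", "content": user_input},
    ]
    # Renders to ChatML via the tokenizer's chat template.
    return tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
```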

Dolphin is licensed Apache 2.0. I grant permission for any use, including commercial. Dolphin was trained on data generated from GPT-4, among other models.

# Evals

```json
{
  "arc_challenge": {
    "acc,none": 0.5921501706484642,
    "acc_stderr,none": 0.014361097288449701,
    "acc_norm,none": 0.6339590443686007,
    "acc_norm_stderr,none": 0.014077223108470139
  },
  "gsm8k": {
    "exact_match,strict-match": 0.4783927217589083,
    "exact_match_stderr,strict-match": 0.013759618667051773,
    "exact_match,flexible-extract": 0.5367702805155421,
    "exact_match_stderr,flexible-extract": 0.013735191956468648
  },
  "hellaswag": {
    "acc,none": 0.6389165504879506,
    "acc_stderr,none": 0.004793330525656218,
    "acc_norm,none": 0.8338976299541924,
    "acc_norm_stderr,none": 0.00371411888431746
  },
  "mmlu": {
    "acc,none": 0.6122347243982339,
    "acc_stderr,none": 0.003893774654142997
  },
  "truthfulqa_mc2": {
    "acc,none": 0.5189872652778472,
    "acc_stderr,none": 0.014901128316426086
  },
  "winogrande": {
    "acc,none": 0.7971586424625099,
    "acc_stderr,none": 0.011301439925936643
  }
}
```
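
The metric keys above (`acc,none`, `exact_match,strict-match`, ...) match the output format of EleutherAI's lm-evaluation-harness v0.4. A reproduction sketch along those lines; the harness version, few-shot counts, and batch size actually used are not stated in this card, so treat the settings as assumptions:

```python
# pip install lm-eval  (EleutherAI lm-evaluation-harness, v0.4+)
from lm_eval import simple_evaluate

# Settings below are assumptions; the card does not record the exact ones used.
results = simple_evaluate(
    model="hf",
    model_args="pretrained=cognitivecomputations/dolphin-2.8-mistral-7b-v02,dtype=bfloat16",
    tasks=["arc_challenge", "gsm8k", "hellaswag", "mmlu", "truthfulqa_mc2", "winogrande"],
)
print(results["results"])
```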

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: alpindale/Mistral-7B-v0.2-hf
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: /workspace/datasets/dolphin201-sharegpt2.jsonl
    type: sharegpt
  - path: /workspace/datasets/dolphin-coder-translate-sharegpt2.jsonl
    type: sharegpt
  - path: /workspace/datasets/dolphin-coder-codegen-sharegpt2.jsonl
    type: sharegpt
  - path: /workspace/datasets/m-a-p_Code-Feedback-sharegpt.jsonl
    type: sharegpt
  - path: /workspace/datasets/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt.jsonl
    type: sharegpt
  - path: /workspace/datasets/not_samantha_norefusals.jsonl
    type: sharegpt
  - path: /workspace/datasets/openhermes2_5-sharegpt.jsonl
    type: sharegpt

chat_template: chatml

dataset_prepared_path: last_run_prepared
val_set_size: 0.001
output_dir: /workspace/dolphin-2.8-mistral-7b

sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true

wandb_project: dolphin
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 3
num_epochs: 4
adam_beta2: 0.95
adam_epsilon: 0.00001
max_grad_norm: 1.0
lr_scheduler: cosine
learning_rate: 0.000005
optimizer: adamw_bnb_8bit

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10

eval_steps: 73
eval_table_size:
eval_table_max_new_tokens:
eval_sample_packing: false
saves_per_epoch:
save_steps: 73
save_total_limit: 2
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
  eos_token: "<|im_end|>"
tokens:
  - "<|im_start|>"
```

</details><br>
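
Per the config above (`chat_template: chatml`, with `<|im_start|>` and `<|im_end|>` registered as special tokens), prompts follow the ChatML layout; the system text here is only an example:

```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```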

# workspace/dolphin-2.8-mistral-7b

This model is a fine-tuned version of [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf) on the datasets listed above.
It achieves the following results on the evaluation set:
- Loss: 0.4828

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- distributed_type: multi-GPU
- num_devices: 10
- gradient_accumulation_steps: 8
- total_train_batch_size: 240
- total_eval_batch_size: 30
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
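
The total train batch size follows from micro_batch_size × gradient_accumulation_steps × num_devices = 3 × 8 × 10 = 240, and the total eval batch size from 3 × 10 = 30.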

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1736        | 0.0   | 1    | 1.0338          |
| 0.6106        | 0.36  | 73   | 0.5439          |
| 0.5766        | 0.72  | 146  | 0.5171          |
| 0.5395        | 1.06  | 219  | 0.5045          |
| 0.5218        | 1.42  | 292  | 0.4976          |
| 0.5336        | 1.78  | 365  | 0.4915          |
| 0.5018        | 2.13  | 438  | 0.4885          |
| 0.5113        | 2.48  | 511  | 0.4856          |
| 0.5066        | 2.84  | 584  | 0.4838          |
| 0.4967        | 3.19  | 657  | 0.4834          |
| 0.4956        | 3.55  | 730  | 0.4830          |
| 0.5026        | 3.9   | 803  | 0.4828          |

### Framework versions

- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0


# Quants

- [dagbs/GGUF](https://huggingface.co/dagbs/dolphin-2.8-mistral-7b-v02-GGUF)
- [bartowski/ExLlamaV2](https://huggingface.co/bartowski/dolphin-2.8-mistral-7b-v02-exl2)
- [solidrust/AWQ](https://huggingface.co/solidrust/dolphin-2.8-mistral-7b-v02-AWQ)