---
license: other
base_model: meta-llama/Meta-Llama-3-8B
tags:
- generated_from_trainer
model-index:
- name: out
  results: []
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- HuggingFaceH4/ultrachat_200k
- microsoft/orca-math-word-problems-200k
- abacusai/SystemChat-1.1
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
---

This is the [llamafile](https://github.com/Mozilla-Ocho/llamafile) for [Dolphin 2.9 Llama 3 8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b).
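
If you'd rather script against the model than use the bundled web UI, llamafile exposes an OpenAI-compatible chat completions endpoint when running as a local server. A minimal sketch; the filename, flags, and prompt below are illustrative, not part of this release:

```python
import json
import urllib.request

# Assumes the llamafile is already running as a local server on the default
# port, started with something like:
#   ./dolphin-2.9-llama3-8b.llamafile --server --nobrowser
# (filename and flags are illustrative).
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps({
        "model": "local",  # llamafile serves a single model; the name isn't used for routing
        "messages": [
            {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
            {"role": "user", "content": "Summarize what a llamafile is in one sentence."},
        ],
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())
    print(reply["choices"][0]["message"]["content"])
```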

Quick tests suggest it's good but not as sharp as the base model, judging from a few few-shot prompts that probe for precision on history and science questions. More testing is needed to compare this against WizardLM-7B and see how much the fine-tuning changed Llama-3-8B.

Size notes:
Windows users, go for q3-k-m: Windows won't run executables larger than 4 GB, so the llamafile has to fit under that limit. Everyone else, use the biggest one that works on your machine. FreeBSD users, you're the real heroes.

I just copied the original model card this time.

## Original Model Card Below

# Dolphin 2.9 Llama 3 8b 🐬

Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations

Discord: https://discord.gg/8fbBeC7ZGx

<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />

My appreciation for the sponsors of Dolphin 2.9:
- [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 10xL40S node

This model is based on Llama-3-8b and is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE).

The base model has 8k context, and the full-weight fine-tuning used a 4k sequence length.

Training took 2.5 days on 8x L40S provided by Crusoe Cloud.

This model was trained with full-weight fine-tuning (FFT) on all parameters, using the ChatML prompt template format.

Example:

```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
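
If you're calling the model through a raw completion endpoint rather than a chat API, you have to assemble this template yourself. A small sketch; the helper name is ours, not part of the release:

```python
def chatml_prompt(system: str, user: str) -> str:
    """Format a single-turn prompt in the ChatML layout shown above."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt(
    "You are Dolphin, a helpful AI assistant.",
    "What's the capital of France?",
))
```

Generation should then be stopped at `<|im_end|>`, which the config below registers as the eos token.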

Dolphin-2.9 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.

Dolphin is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.

Dolphin is licensed according to Meta's Llama license. I grant permission for any use, including commercial, that complies with Meta's Llama-3 license. Dolphin was trained on data generated from GPT4, among other models.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: meta-llama/Meta-Llama-3-8B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
tokenizer_use_fast: false

load_in_8bit: false
load_in_4bit: false
strict: false
model_config:

datasets:
  - path: /workspace/datasets/dolphin-2.9/dolphin201-sharegpt2.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/Ultrachat200kunfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/dolphin-coder-translate-sharegpt2.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/dolphin-coder-codegen-sharegpt2.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/not_samantha_norefusals.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/Orca-Math-resort-unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/agent_instruct_react_unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/toolbench_instruct_j1s1_3k_unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/toolbench_negative_unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/toolbench_react_10p_unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/toolbench_tflan_cot_30p_unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/openhermes200k_unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/SystemConversations.jsonl
    type: sharegpt
    conversation: chatml

chat_template: chatml

dataset_prepared_path: /workspace/datasets/dolphin-2.9/thingy
val_set_size: 0.0002
output_dir: ./out

sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true

gradient_accumulation_steps: 4
micro_batch_size: 3
num_epochs: 3
logging_steps: 1
optimizer: adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5

wandb_project: dolphin-2.9-mixtral-8x22b
wandb_watch:
wandb_run_id:
wandb_log_model:

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
saves_per_epoch: 4
save_total_limit: 2
save_steps:
evals_per_epoch: 4
eval_sample_packing: false
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.05
fsdp:
fsdp_config:
special_tokens:
  eos_token: "<|im_end|>"
  pad_token: "<|end_of_text|>"
tokens:
  - "<|im_start|>"
  - "<|im_end|>"
```

</details><br>

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- total_eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 7
- num_epochs: 3
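
The effective batch size in the list above is just the per-device batch multiplied out across accumulation steps and GPUs; a quick sanity check, using only values from the list:

```python
# total_train_batch_size = per-device batch * grad accumulation steps * number of GPUs
train_batch_size = 3
gradient_accumulation_steps = 4
num_devices = 8
assert train_batch_size * gradient_accumulation_steps * num_devices == 96
```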

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.146         | 0.0005 | 1    | 1.1064          |
| 0.6962        | 0.2501 | 555  | 0.6636          |
| 0.6857        | 0.5001 | 1110 | 0.6503          |
| 0.6592        | 0.7502 | 1665 | 0.6419          |
| 0.6465        | 1.0002 | 2220 | 0.6317          |
| 0.5295        | 1.2395 | 2775 | 0.6408          |
| 0.5302        | 1.4895 | 3330 | 0.6351          |
| 0.5188        | 1.7396 | 3885 | 0.6227          |
| 0.521         | 1.9896 | 4440 | 0.6168          |
| 0.3968        | 2.2289 | 4995 | 0.6646          |
| 0.3776        | 2.4789 | 5550 | 0.6619          |
| 0.3983        | 2.7290 | 6105 | 0.6602          |

### Framework versions

- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1