ehartford committed
Commit b0ccdea
1 Parent(s): 37bcac8

Update README.md

Files changed (1)
  1. README.md +56 -3
README.md CHANGED
@@ -3,13 +3,66 @@ license: apache-2.0
  base_model: mistral-community/Mixtral-8x22B-v0.1
  tags:
  - generated_from_trainer
+ - axolotl
  model-index:
  - name: out
    results: []
+ datasets:
+ - cognitivecomputations/Dolphin-2.9
+ - teknium/OpenHermes-2.5
+ - m-a-p/CodeFeedback-Filtered-Instruction
+ - cognitivecomputations/dolphin-coder
+ - cognitivecomputations/samantha-data
+ - HuggingFaceH4/ultrachat_200k
+ - microsoft/orca-math-word-problems-200k
+ - abacusai/SystemChat-1.1
+ - Locutusque/function-calling-chatml
+ - internlm/Agent-FLAN
+ language:
+ - en
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
+ # Dolphin 2.9 Mixtral 8x22b 🐬
+
+ Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations.
+
+ Discord: https://discord.gg/8fbBeC7ZGx
+
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />
+
+ My appreciation for the sponsors of Dolphin 2.9:
+ - [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 8xH100 node
+
+ This model is based on Mixtral-8x22B-v0.1 and is Apache-2.0 licensed.
+
+ The base model has a 64k context window, and the full-weight fine-tuning used a 4k sequence length.
+
+ Training took 1 week on the 8xH100 node provided by Crusoe Cloud.
+
+ This model was trained with full-weight fine-tuning (FFT) on 50% of the parameters (targeted with [Laser Scanner](https://github.com/cognitivecomputations/laserRMT/blob/main/laser_scanner.py) by Fernando, David, Lucas, and Eric), using the ChatML prompt template format.
+
+ Example:
+
+ ```
+ <|im_start|>system
+ You are Dolphin, a helpful AI assistant.<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
+
+ ```
+
+ Dolphin-2.9 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.
+
+ Dolphin is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.
+
+ Dolphin is licensed under Apache 2.0. I grant permission for any use, including commercial, that is in accordance with the Apache-2.0 license. Dolphin was trained on data generated by GPT-4, among other models.
+
+ ## Evals
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/vHDgd4gfEMftZ8cJP3mRf.png)
+
+ ## Training

  [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
  <details><summary>See axolotl config</summary>
@@ -564,4 +617,4 @@ tokens:
  - Transformers 4.40.0.dev0
  - Pytorch 2.2.2+cu121
  - Datasets 2.15.0
- - Tokenizers 0.15.0
+ - Tokenizers 0.15.0
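
As a usage note on the ChatML template shown in the card above: the sketch below is a minimal, unofficial example of building that prompt with the Hugging Face `transformers` chat-template API. The repo id, dtype, and generation settings are assumptions for illustration, not taken from the card.

```python
# Minimal sketch: prompting Dolphin 2.9 with the ChatML template from the card.
# Assumptions: transformers >= 4.40 (as listed in the card), the repo id below,
# and enough GPU memory for an 8x22b MoE model (quantization likely needed in practice).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/dolphin-2.9-mixtral-8x22b"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# ChatML-style conversation; apply_chat_template renders the
# <|im_start|>...<|im_end|> markers shown in the card's example.
messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```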
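
The card also advises implementing your own alignment layer before exposing the model as a service, without specifying what that layer looks like. The following is one hypothetical, minimal interpretation (a fixed safety system prompt plus a trivial request pre-filter), not the authors' method; a real deployment would use a proper moderation model or policy engine.

```python
# Hypothetical sketch of a minimal "alignment layer" in front of the model:
# a fixed safety system prompt plus a trivial keyword pre-filter.
BLOCKED_TOPICS = ("malware", "weapons")  # illustrative placeholder list

SAFETY_SYSTEM_PROMPT = (
    "You are Dolphin, a helpful AI assistant. "
    "Decline requests for illegal or harmful content."
)

def build_messages(user_prompt: str) -> list[dict] | None:
    """Return a ChatML-style message list, or None if the request is refused."""
    if any(topic in user_prompt.lower() for topic in BLOCKED_TOPICS):
        return None  # caller should return a refusal instead of querying the model
    return [
        {"role": "system", "content": SAFETY_SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]
```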