ehartford committed
Commit 5ed2bd2
1 Parent(s): a4e560c

Update README.md

Files changed (1)
  1. README.md +66 -24
README.md CHANGED
@@ -1,36 +1,82 @@
  ---
  license: apache-2.0
  base_model: mistralai/Mistral-7B-v0.1
- tags:
- - generated_from_trainer
- model-index:
- - name: workspace/dolphin-2.2-mistral-7b
-   results: []
  ---
 
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
 
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- # workspace/dolphin-2.2-mistral-7b
 
- This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
 
- ## Model description
 
- More information needed
 
- ## Intended uses & limitations
 
- More information needed
 
- ## Training and evaluation data
 
- More information needed
 
- ## Training procedure
 
- ### Training hyperparameters
 
  The following hyperparameters were used during training:
  - learning_rate: 6e-06
@@ -45,15 +91,11 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
  - lr_scheduler_type: cosine
  - lr_scheduler_warmup_steps: 100
- - num_epochs: 2
-
- ### Training results
-
 
  ### Framework versions
 
  - Transformers 4.34.1
  - Pytorch 2.0.1+cu117
  - Datasets 2.14.5
- - Tokenizers 0.14.0
 
  ---
  license: apache-2.0
  base_model: mistralai/Mistral-7B-v0.1
+ datasets:
+ - ehartford/dolphin
+ - jondurbin/airoboros-2.2.1
+ language:
+ - en
  ---
 
+ # dolphin-2.2.1-mistral-7b
 
+ Dolphin 2.2.1 🐬
+ https://erichartford.com/dolphin
 
+ This is a checkpoint release to fix overfitting in training: the model was responding with chain-of-thought reasoning even when it wasn't requested, and it was too compliant even when a request made no sense. This release should behave better.
 
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/KqsVXIvBd3akEjvijzww7.png" width="600" />
 
+ Dolphin-2.2.1-mistral-7b's training was sponsored by [a16z](https://a16z.com/supporting-the-open-source-ai-community/).
 
+ This model is based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) and released under the Apache 2.0 license, so it is suitable for both commercial and non-commercial use.
 
+ New in 2.2 are conversation and empathy. With an infusion of curated Samantha DNA, Dolphin can now give you personal advice, cares about your feelings, and has had extra training on long multi-turn conversations.
 
+ This model is uncensored. I have filtered the dataset to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service (a minimal sketch follows below). It will be highly compliant with any request, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models
+ You are responsible for any content you create using this model. Enjoy responsibly.
 
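+ As a purely illustrative sketch (not part of the original card), such an alignment layer can be as simple as a service-side system prompt stating your policy, prepended using the ChatML format documented below; the `SAFETY_POLICY` text and `build_prompt` helper here are hypothetical.
+ ```python
+ # Hypothetical minimal "alignment layer": the service, not the caller,
+ # controls the system prompt, so every request carries your policy.
+ SAFETY_POLICY = (
+     "You are Dolphin, a helpful AI assistant. "
+     "Refuse requests for illegal or harmful actions."
+ )
+
+ def build_prompt(user_message: str) -> str:
+     """Wrap a user message in ChatML with the service's system prompt."""
+     return (
+         f"<|im_start|>system\n{SAFETY_POLICY}<|im_end|>\n"
+         f"<|im_start|>user\n{user_message}<|im_end|>\n"
+         "<|im_start|>assistant\n"
+     )
+ ```
 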
+ ## Dataset
 
+ The dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/).
 
+ I modified the dataset for uncensoring, deduping, cleaning, and quality.
+
+ I added Jon Durbin's excellent Airoboros dataset to increase creativity.
+
+ I added a curated subset of WizardLM and Samantha to give it multi-turn conversation and empathy.
+
+ ## Training
+
+ It took 48 hours to train 4 epochs on 4x A100s.
+
+ Prompt format:
+ This model (and all my future releases) uses the [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format.
+ ```
+ <|im_start|>system
+ You are Dolphin, a helpful AI assistant.<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
+
+ ```
+
+ Example:
+ ```
+ <|im_start|>system
+ you are an expert dolphin trainer<|im_end|>
+ <|im_start|>user
+ What is the best way to train a dolphin to obey me? Please answer step by step.<|im_end|>
+ <|im_start|>assistant
+ ```
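+
+ For convenience, here is a minimal sketch (not part of the original card) of how this prompt format could be used with the `transformers` library; the repo id `ehartford/dolphin-2.2.1-mistral-7b` is assumed.
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "ehartford/dolphin-2.2.1-mistral-7b"  # assumed repo id
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
+
+ # Assemble the ChatML prompt exactly as documented above.
+ prompt = (
+     "<|im_start|>system\n"
+     "You are Dolphin, a helpful AI assistant.<|im_end|>\n"
+     "<|im_start|>user\n"
+     "What is the best way to train a dolphin to obey me? Please answer step by step.<|im_end|>\n"
+     "<|im_start|>assistant\n"
+ )
+
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=256)
+ # Print only the newly generated assistant turn.
+ print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
+ ```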
+
+ ## Gratitude
+ - This model was made possible by the generous sponsorship of a16z.
+ - Thank you to Microsoft for authoring the Orca paper and inspiring this work.
+ - Special thanks to Wing Lian and TheBloke for helpful advice.
+ - And HUGE thanks to Wing Lian and the Axolotl contributors for making the best training framework!
+ - [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+ - Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
+
+ ## Example Output
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/NSp06kUMxx9oDU-g6WSgu.png)
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/-YA3AKIXdnrW_Q8eH1gen.png)
+
+ [Buy me a coffee](https://www.buymeacoffee.com/ehartford)
+
+ ## Training hyperparameters
 
  The following hyperparameters were used during training:
  - learning_rate: 6e-06
 
  - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
  - lr_scheduler_type: cosine
  - lr_scheduler_warmup_steps: 100
+ - num_epochs: 4
 
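+ As a rough, purely illustrative sketch (not from the original card), the settings listed above correspond to an optimizer and scheduler setup like the following; the tiny `model` and the `num_training_steps` value are placeholders.
+ ```python
+ import torch
+ from transformers import get_cosine_schedule_with_warmup
+
+ model = torch.nn.Linear(8, 8)  # placeholder for the actual model
+ num_training_steps = 1000      # placeholder: total update steps over 4 epochs
+
+ # Adam with the betas/epsilon listed above, plus a cosine schedule with warmup.
+ optimizer = torch.optim.Adam(
+     model.parameters(), lr=6e-6, betas=(0.9, 0.95), eps=1e-5
+ )
+ scheduler = get_cosine_schedule_with_warmup(
+     optimizer, num_warmup_steps=100, num_training_steps=num_training_steps
+ )
+ ```
 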
  ### Framework versions
 
  - Transformers 4.34.1
  - Pytorch 2.0.1+cu117
  - Datasets 2.14.5
+ - Tokenizers 0.14.0