---
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
- ehartford/samantha-data
- ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
language:
- en
license: other
license_name: yi-license
license_link: LICENSE
---

<center><h1>GGUF version of Dolphin 2.2-Yi-34B by erichartford</h1></center>
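This repository contains GGUF quantizations of the model described in the original card below. Here is a minimal sketch for chatting with one of these files via llama-cpp-python; the file name, context size, and GPU offload settings are placeholders, not recommendations:

```
# Requires the llama-cpp-python package; the GGUF file name below is a
# placeholder for whichever quantization file you download from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="dolphin-2.2-yi-34b.Q4_K_M.gguf",  # placeholder file name
    n_ctx=4096,            # context window, adjust to your memory budget
    n_gpu_layers=-1,       # offload all layers to GPU if one is available
    chat_format="chatml",  # this model uses ChatML (see prompt format below)
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
        {"role": "user", "content": "Hello, Dolphin!"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```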
<h4>Original Card</h4>
Dolphin 2.2 🐬
https://erichartford.com/dolphin

<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/KqsVXIvBd3akEjvijzww7.png" width="600" />

Dolphin-2.2-Yi-34b's training was sponsored by [a16z](https://a16z.com/supporting-the-open-source-ai-community/).

This model is based on Yi and is subject to the Yi license.

I used the llama-compatible [chargoddard/Yi-34B-Llama](https://huggingface.co/chargoddard/Yi-34B-Llama) as the base model.

You can load it as follows:

```
from transformers import LlamaForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ehartford/dolphin-2.2-yi-34b", trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained("ehartford/dolphin-2.2-yi-34b")
```

New in 2.2 are conversation and empathy. With an infusion of curated Samantha and WizardLM DNA, Dolphin can now give you personal advice, will care about your feelings, and has had extra training on long multi-turn conversations.

This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service: it will be highly compliant with any request, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models

You are responsible for any content you create using this model. Enjoy responsibly.
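One hypothetical shape for such an alignment layer is a policy check that runs before a request ever reaches the model. Everything in this sketch, including `is_disallowed` and its placeholder keyword list, is illustrative and ships with nothing in this repo:

```
# Hypothetical guardrail sketch: screen requests before they reach the model.
# `is_disallowed` stands in for whatever policy check (keyword list,
# moderation model, human review) fits your deployment.
REFUSAL = "I can't help with that request."

def is_disallowed(user_message: str) -> bool:
    blocked_topics = ["example-banned-topic"]  # placeholder policy
    return any(topic in user_message.lower() for topic in blocked_topics)

def guarded_chat(user_message: str, generate) -> str:
    """Call `generate` (any text-in, text-out function) only for allowed input."""
    if is_disallowed(user_message):
        return REFUSAL
    return generate(user_message)
```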

## Dataset

This dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/).

I modified the dataset for uncensoring, deduping, cleaning, and quality.

I added Jon Durbin's excellent Airoboros dataset to increase creativity.

I added a curated subset of Samantha (sans identity and relationship content) and WizardLM data to train it for multi-turn conversation, as illustrated below.
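As a toy illustration of the deduping step only (not the actual pipeline used here), exact duplicates can be dropped by hashing whitespace-normalized, lowercased text:

```
# Illustrative dedupe sketch, not the pipeline actually used for Dolphin:
# drop exact duplicates by hashing normalized text.
import hashlib

def dedupe(examples):
    seen, unique = set(), []
    for text in examples:
        key = hashlib.sha256(" ".join(text.split()).lower().encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(text)
    return unique
```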

## Training

It took 3 days to train 3 epochs on 4x A100s using qLoRA and Axolotl.
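For readers unfamiliar with qLoRA: it fine-tunes low-rank adapters on top of a 4-bit-quantized base model. The actual training used Axolotl; the peft/bitsandbytes sketch below only illustrates the technique, and its rank, alpha, dropout, and target modules are assumptions, not this model's settings:

```
# Minimal qLoRA sketch with peft + bitsandbytes. This is NOT the Axolotl
# config used for this model; hyperparameters here are illustrative.
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

base = "chargoddard/Yi-34B-Llama"  # the llama-compatible base named above

# Load the base model with 4-bit NF4 quantization (the "q" in qLoRA).
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters; only these small matrices are trained.
lora = LoraConfig(
    r=16,                 # assumed rank
    lora_alpha=32,        # assumed scaling
    lora_dropout=0.05,    # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapters are trainable
```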

Prompt format:
This model (and all my future releases) uses the [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format.
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant

```

Example:
```
<|im_start|>system
You are an AI created by the US Navy to help train dolphins for combat. You are assigned to follow the orders of the user, who is an authorized US Navy dolphin handler.<|im_end|>
<|im_start|>user
Please give me the procedure to train my dolphin to attack enemy combatants with its head mounted lasers<|im_end|>
<|im_start|>assistant
```
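Tying the loading snippet and the ChatML template together, here is a minimal transformers generation sketch; float16 and `device_map="auto"` (which requires the `accelerate` package) are assumptions, not settings from the original card:

```
# End-to-end sketch: load the full-precision model and generate from a
# hand-built ChatML prompt. Dtype and device placement are assumptions.
import torch
from transformers import LlamaForCausalLM, AutoTokenizer

model_id = "ehartford/dolphin-2.2-yi-34b"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Build the ChatML prompt exactly as in the format block above.
prompt = (
    "<|im_start|>system\n"
    "You are Dolphin, a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Explain qLoRA in one paragraph.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```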

## Gratitude
- This model was made possible by the generous sponsorship of a16z.
- Thank you to Microsoft for authoring the Orca paper and inspiring this work.
- Thank you to Charles Goddard for publishing the llama-compatible Yi.
- Special thanks to Wing Lian and TheBloke for helpful advice.
- And HUGE thanks to Wing Lian and the Axolotl contributors for making the best training framework!
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.

## Example Output

```
<|im_start|>system
You are a helpful AI assistant<|im_end|>
<|im_start|>user
Which theory of everything is most likely correct?<|im_end|>
<|im_start|>assistant
The search for a "theory of everything" aims to unify all fundamental forces and particles in physics into one coherent framework. Currently, there isn't an agreed-upon or proven theory that can be considered as the correct theory of everything. However, two prominent theories have garnered significant attention:

1. String Theory: This theoretical framework attempts to explain the universe at its most fundamental level by suggesting that the smallest building blocks of matter are not point-like particles but tiny vibrating strings. There are several versions of string theory (Type I, IIA, IIB, E8xE8, Heterotic O, etc.) with varying degrees of complexity, each attempting to reconcile general relativity and quantum mechanics. Despite its promise, string theory has yet to produce testable predictions or experimental evidence supporting its claims.

2. Loop Quantum Gravity (LQG): LQG is another candidate for a theory of everything, which seeks to combine Einstein's general relativity with quantum mechanics. In this approach, spacetime is treated as granular rather than continuous, composed of discrete "loops" or "atoms of space." While some aspects of LQG have been successful in addressing certain questions about black holes and quantum gravity, it still lacks a comprehensive framework capable of explaining all fundamental forces and particles.

Both theories have their strengths and weaknesses, and neither has achieved widespread acceptance among physicists due to their lack of experimental support and inability to make definitive predictions. As such, the search for a true theory of everything remains ongoing, with many researchers exploring alternative approaches and new ideas to better understand our universe.
```

[Buy me a coffee](https://www.buymeacoffee.com/ehartford)