Triangle104 committed
Commit 8dadf9c · verified · 1 parent: 55537f3

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md (+2 −95)
README.md CHANGED
```diff
@@ -16,6 +16,8 @@ datasets:
 language:
 - en
 base_model: cognitivecomputations/Dolphin3.0-Mistral-24B
+pipeline_tag: text-generation
+library_name: transformers
 tags:
 - llama-cpp
 - gguf-my-repo
@@ -25,101 +27,6 @@ tags:
 This model was converted to GGUF format from [`cognitivecomputations/Dolphin3.0-Mistral-24B`](https://huggingface.co/cognitivecomputations/Dolphin3.0-Mistral-24B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/cognitivecomputations/Dolphin3.0-Mistral-24B) for more details on the model.
 
----
-Our appreciation for the generous sponsors of Dolphin 3.0:
-
-Dria https://x.com/driaforall - Inference Sponsor
-Chutes https://x.com/rayon_labs - Compute Sponsor
-Crusoe Cloud - Compute Sponsor
-Andreessen Horowitz - provided the grant that originally launched Dolphin
-
-What is Dolphin?
-
-Dolphin 3.0 is the next generation of the Dolphin series of
-instruct-tuned models. Designed to be the ultimate general purpose
-local model, enabling coding, math, agentic, function calling, and
-general use cases.
-
-Dolphin aims to be a general purpose instruct model, similar to the
-models behind ChatGPT, Claude, Gemini. But these models present
-problems for businesses seeking to include AI in their products.
-
-They maintain control of the system prompt, deprecating and changing things as they wish, often causing software to break.
-They maintain control of the model versions, sometimes changing
-things silently, or deprecating older models that your business relies
-on.
-They maintain control of the alignment, and in particular the alignment is one-size-fits all, not tailored to the application.
-They can see all your queries and they can potentially use that data
-in ways you wouldn't want.
-Dolphin, in contrast, is steerable and gives control to the system
-owner. You set the system prompt. You decide the alignment. You have
-control of your data. Dolphin does not impose its ethics or guidelines
-on you. You are the one who decides the guidelines.
-
-Dolphin belongs to YOU, it is your tool, an extension of your will.
-Just as you are personally responsible for what you do with a knife,
-gun, fire, car, or the internet, you are the creator and originator of
-any content you generate with Dolphin.
-
-https://erichartford.com/uncensored-models
-
-Chat Template
-
-We use ChatML for the chat template.
-
-<|im_start|>system
-You are Dolphin, a helpful AI assistant.<|im_end|>
-<|im_start|>user
-{prompt}<|im_end|>
-<|im_start|>assistant
-
-System Prompt
-
-In Dolphin, the system prompt is what you use to set the tone and
-alignment of the responses. You can set a character, a mood, rules for
-its behavior, and it will try its best to follow them.
-
-Make sure to set the system prompt in order to set the tone and
-guidelines for the responses - Otherwise, it will act in a default way
-that might not be what you want.
-
-Example use of system prompt:
-
-<|im_start|>system
-You are Dolphin, a golang coding assistant. you only code in golang. If the user requests any other programming language, return the solution in golang instead.<|im_end|>
-<|im_start|>user
-Please implement A* using python<|im_end|>
-<|im_start|>assistant
-
----
 ## Use with llama.cpp
 Install llama.cpp through brew (works on Mac and Linux)
```
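The llama.cpp quick start referenced above can be sketched as follows. The exact GGUF repository and quant filename produced by this conversion are not shown on this page, so the names below are placeholders, not the real artifact names.

```shell
# Install llama.cpp via Homebrew (macOS and Linux).
brew install llama.cpp

# Run the converted model directly from the Hugging Face Hub.
# <user>/Dolphin3.0-Mistral-24B-GGUF and the .gguf filename are
# placeholders; substitute the actual repo and quant file names.
llama-cli --hf-repo <user>/Dolphin3.0-Mistral-24B-GGUF \
  --hf-file dolphin3.0-mistral-24b-q4_k_m.gguf \
  -p "Hello"
```

`llama-server` accepts the same `--hf-repo`/`--hf-file` pair if you want an OpenAI-compatible HTTP endpoint instead of a one-shot CLI run.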
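The removed model card named ChatML as the chat template. As a hedged illustration, the prompt layout it showed can be assembled like this; the `PROMPT` variable is our own scaffolding, and the system/user strings mirror the card's example.

```shell
# Assemble the ChatML prompt layout described in the removed
# "Chat Template" section: system turn, user turn, then an open
# assistant turn for the model to complete.
PROMPT=$(printf '<|im_start|>system\nYou are Dolphin, a helpful AI assistant.<|im_end|>\n<|im_start|>user\nPlease implement A* using python<|im_end|>\n<|im_start|>assistant\n')
echo "$PROMPT"
```

A string built this way can be passed verbatim to `llama-cli -p "$PROMPT"` when the chat template is not applied automatically.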