Update README.md
With full fine-tuning, this model has the potential to deliver excellent performance.
A QLoRA adapter was trained with SFT on a modified version of the airoboros-m-7b-3.1.2 dataset, in the Alpaca format.
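For reference, the Alpaca format mentioned above wraps each training example in an instruction/response template. A minimal sketch, assuming the standard Alpaca prompt convention (the exact template used here is not stated in this README):

```python
# Sketch of the standard Alpaca prompt template; the exact template used
# for this adapter is an assumption, not confirmed by the model card.
def to_alpaca_prompt(instruction: str, output: str, input_text: str = "") -> str:
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            f"### Response:\n{output}"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Response:\n{output}"
    )

print(to_alpaca_prompt("List three primes.", "2, 3, 5"))
```

Examples without an `input` field use the shorter template, which is why the function branches on `input_text`.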
## Model Creation
The model was created by merging four models: Dolphin, Zephyr, MetaMath-Mistral-7B, and Speechless Code Mistral. The layers of these models were stacked on top of each other to form a single model.
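Layer stacking of this kind (a "passthrough" or frankenmerge) concatenates slices of each model's decoder layers. A toy illustration of the idea, with hypothetical slice boundaries (the README does not give the actual recipe):

```python
# Toy illustration of passthrough layer stacking. The slice boundaries and
# layer labels below are hypothetical; the actual merge recipe is not stated.
def stack_layers(model_a_layers, model_b_layers, a_slice, b_slice):
    """Concatenate a slice of model A's decoder layers with a slice of model B's."""
    return model_a_layers[a_slice] + model_b_layers[b_slice]

a = [f"A{i}" for i in range(32)]  # stand-in for one donor's 32 decoder layers
b = [f"B{i}" for i in range(32)]  # stand-in for another donor's 32 decoder layers

merged = stack_layers(a, b, slice(0, 24), slice(8, 32))
print(len(merged))  # 24 + 24 = 48 layers in the stacked model
```

Because the stacked layers were never trained together, the raw merge produces degraded output at first, which is why the QLoRA adapter described above was needed.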
- Dolphin2.1-mistral-7b by Eric Hartford (https://huggingface.co/ehartford/dolphin-2.1-mistral-7b)
- Zephyr-7b-beta by HuggingFace (https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
- MetaMath-Mistral-7B by meta-math (https://huggingface.co/meta-math/MetaMath-Mistral-7B)
- Speechless-code-mistral-7b-v1.0 (https://huggingface.co/uukuguy/speechless-code-mistral-7b-v1.0)
- Airoboros-m-7b-3.1.2 (https://huggingface.co/jondurbin/airoboros-m-7b-3.1.2)
## Upcoming Mistral 30B
- We currently have a Mistral model with 29.2 billion parameters in development. At present, the model's output is not yet refined and may be incoherent. If there is interest from the community in fine-tuning this model, we are open to uploading it in its current state; otherwise, we plan to complete our training process before making it available. Let us know with a post in this repo's discussions!