Vezora committed
Commit abb02ba
1 Parent(s): e78c51a

Update README.md

Files changed (1):
  1. README.md (+26 -2)

README.md CHANGED
@@ -1,6 +1,30 @@
  ---
  license: apache-2.0
  ---
- The goal of this model is to be a new base model for Mistral 14b. It was fixed with a LoRA adapter attached to all 62 layers of the merged model. The model does produce output and does respond to what you ask, but it over-responds with unasked questions.

- If this model could be fully fine-tuned, it would perform great.
+ # Mistral 14b: A New Base Model
+
+ The objective of this model is to serve as a new base model for Mistral 14b. It has been enhanced with a LoRA adapter attached to all 62 layers of the merged model. The model is capable of generating output and responding accurately to inputs. However, it tends to over-respond with unasked questions when asked to process more than 512 tokens, which was its sequence-length limit during QLoRA training.
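+
+ As a quick usage sketch (the repository id below is a placeholder assumption, not a confirmed name), the model can be loaded with the standard transformers API, keeping prompts under the 512-token limit:
+
+ ```python
+ # Minimal inference sketch; "Vezora/Mistral-14b" is a placeholder repo id.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ repo_id = "Vezora/Mistral-14b"  # placeholder - substitute the actual repository name
+
+ tokenizer = AutoTokenizer.from_pretrained(repo_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     repo_id, torch_dtype=torch.float16, device_map="auto"
+ )
+
+ # Keep prompts well under the 512-token limit the adapter was trained with.
+ prompt = "Explain what a base model is in one short paragraph."
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```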
+
+ With full fine-tuning, this model has the potential to deliver excellent performance.
+
+ ## Model Creation
+
+ The model was created by merging Dolphin and Zephyr, together with MetaMath-Mistral-7B and Speechless Code Mistral, into a single model: the layers of these source models were stacked on top of one another to create this model.
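+
+ As a rough, hypothetical illustration of this kind of layer-stacking ("passthrough") merge with MergeKit, a script along the following lines could be used. The source models and layer ranges shown are assumptions for illustration, not the exact recipe behind this model:
+
+ ```python
+ # Illustrative sketch of a MergeKit passthrough (layer-stacking) merge.
+ # The source models and layer ranges below are assumptions, not the actual recipe.
+ import subprocess
+
+ config = """slices:
+   - sources:
+       - model: ehartford/dolphin-2.1-mistral-7b
+         layer_range: [0, 32]
+   - sources:
+       - model: HuggingFaceH4/zephyr-7b-beta
+         layer_range: [0, 32]
+ merge_method: passthrough
+ dtype: float16
+ """
+
+ with open("stack-config.yml", "w") as f:
+     f.write(config)
+
+ # Run the MergeKit CLI on the config to produce the stacked model.
+ subprocess.run(["mergekit-yaml", "stack-config.yml", "./stacked-mistral-14b"], check=True)
+ ```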
+
+ Initially, the output from the merged model was incoherent. To rectify this, a LoRA adapter was trained and merged across all of its layers.
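+
+ For the adapter-merging step, a minimal sketch using PEFT's merge_and_unload() looks roughly like the following (paths are placeholders; the adapter-training step itself is not shown):
+
+ ```python
+ # Minimal sketch of folding a trained LoRA adapter back into the base weights.
+ # The paths below are placeholders, not the actual repositories.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftModel
+
+ base_id = "path/to/stacked-mistral-14b"  # the layer-stacked base model (placeholder)
+ adapter_id = "path/to/lora-adapter"      # LoRA adapter trained on top of it (placeholder)
+
+ base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
+ model = PeftModel.from_pretrained(base, adapter_id)
+
+ # Merge the adapter weights into every layer so the model runs without PEFT at inference.
+ model = model.merge_and_unload()
+ model.save_pretrained("mistral-14b-lora-merged")
+
+ tokenizer = AutoTokenizer.from_pretrained(base_id)
+ tokenizer.save_pretrained("mistral-14b-lora-merged")
+ ```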
+
+ ## Useful Resources
+
+ - LoRA adapter merging: https://gist.github.com/ChrisHayduk/1a53463331f52dca205e55982baf9930
+ - Model merging (MergeKit): https://github.com/cg123/mergekit
+
+ ## Source Models
+
+ - dolphin-2.1-mistral-7b by Eric Hartford (https://huggingface.co/ehartford/dolphin-2.1-mistral-7b)
+ - zephyr-7b-beta by Hugging Face H4 (https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
+ - MetaMath-Mistral-7B by meta-math (https://huggingface.co/meta-math/MetaMath-Mistral-7B)
+ - speechless-code-mistral-7b-v1.0 by uukuguy (https://huggingface.co/uukuguy/speechless-code-mistral-7b-v1.0)
+
+ ## Upcoming: Mistral 28b
+
+ We currently have a Mistral model with 28 billion parameters in development. At present, its output is not yet refined and may read as incoherent text. If there is community interest in fine-tuning this model, we are open to uploading it in its current state; otherwise, we plan to complete our own training process before making it available. Let us know with a post in this repo's discussions!