Update README.md
README.md
CHANGED
@@ -13,6 +13,10 @@ base_model: LeroyDyer/Mixtral_AI_CyberBrain_3.0
Interchangeable!

# Uploaded model

Actually, this is my go-to LoRA. It went through quite a large training cycle, so when applied to merged Mistrals it does realign them. I usually apply this LoRA to each model pre-merge, so the outcome will also be merged with this LoRA and receive some training with the Dolphin databases, hopefully still aligned to the dataset. In this way multi-layered Dolphin data is injected into the host models to give a warm start.

This is applied to smaller layered models too! (It generally has some effect, especially on the raw base models... something in there is working!)

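A minimal sketch of what folding the LoRA into each host's weights pre-merge amounts to numerically. The hidden size, rank, and scaling below are illustrative placeholders, not values taken from the actual adapter:

```python
import numpy as np

# LoRA folds a low-rank update into each targeted weight matrix:
#   W' = W + (alpha / r) * B @ A
# Shapes here are toy-sized for illustration only.
rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 4            # hidden size, LoRA rank, scaling (placeholders)

W = rng.normal(size=(d, d))      # frozen base weight
A = rng.normal(size=(r, d))      # LoRA down-projection
B = np.zeros((d, r))             # LoRA up-projection (zero-initialised)

W_merged = W + (alpha / r) * (B @ A)

# With B still zero-initialised, the merged weight equals the base weight;
# after training, B @ A carries the learned realignment.
print(np.allclose(W_merged, W))  # True before any training
```

In practice, this fold is what adapter-merging utilities perform on every targeted weight matrix before the models themselves are merged, which is why the realignment survives the later merge.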
- **Developed by:** LeroyDyer
- **License:** apache-2.0
- **Finetuned from model:** LeroyDyer/Mixtral_AI_CyberBrain_3.0