Safetensors
mistral
mergekit
Merge
Mistral_Star
Mistral_Quiet
Mistral
Mixtral
Question-Answer
Token-Classification
Sequence-Classification
SpydazWeb-AI
chemistry
biology
legal
code
climate
medical
LCARS_AI_StarTrek_Computer
text-generation-inference
chain-of-thought
tree-of-knowledge
forest-of-thoughts
visual-spacial-sketchpad
alpha-mind
knowledge-graph
entity-detection
encyclopedia
wikipedia
stack-exchange
Reddit
Cyber-series
MegaMind
Cybertron
SpydazWeb
Spydaz
LCARS
star-trek
mega-transformers
Mulit-Mega-Merge
Multi-Lingual
Afro-Centric
African-Model
Ancient-One
Eval Results
Update README.md
README.md CHANGED
@@ -152,12 +152,7 @@ tags:
 
 ## “Epochs are the key to effective training, rather than merely mass dumping examples—unless those examples are interconnected within a single or multiple conversations that teach through dialogue.”
 
-
-
-
-This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
-## Merge Details
+## MSpydazWebAI Human AGI Model-Base
 ### Merge Method
 
 This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
@@ -170,33 +165,6 @@ The following models were included in the merge:
 * LEroyDyer/SpydazWeb_HumanAI_M3
 * LeroyDyer/SpydazWeb_HumanAI_M1
 
-### Configuration
-
-The following YAML configuration was used to produce this model:
-
-```yaml
-
-models:
-  - model: LeroyDyer/SpydazWeb_HumanAI_M1
-    parameters:
-      weight: 0.256
-  - model: LeroyDyer/SpydazWeb_HumanAI_M2
-    parameters:
-      weight: 0.256
-  - model: LeroyDyer/SpydazWeb_HumanAI_M3
-    parameters:
-      weight: 0.512
-  - model: LeroyDyer/SpydazWeb_AI_LCARS_Humanization_003
-    parameters:
-      weight: 0.768
-merge_method: linear
-dtype: float16
-
-```
-
-
-
-
 
 # SpydazWeb AI (7b Mistral) (512k)
 
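For reference, the [linear](https://arxiv.org/abs/2203.05482) method named in the diff is a weighted average of the parameter tensors of the listed checkpoints. The sketch below is an illustrative reconstruction only, not mergekit's own code path: the model names, weights (0.256, 0.256, 0.512, 0.768) and `float16` dtype come from the YAML removed in this commit, while the weight-normalisation step is an assumption about mergekit's default behaviour for linear merges.

```python
# Illustrative sketch of a linear (weighted-average) merge over the checkpoints
# named in the removed YAML config. Assumes all four models share the same
# architecture and tensor shapes; this is NOT mergekit's actual implementation.
import torch
from transformers import AutoModelForCausalLM

ENTRIES = [
    ("LeroyDyer/SpydazWeb_HumanAI_M1", 0.256),
    ("LeroyDyer/SpydazWeb_HumanAI_M2", 0.256),
    ("LeroyDyer/SpydazWeb_HumanAI_M3", 0.512),
    ("LeroyDyer/SpydazWeb_AI_LCARS_Humanization_003", 0.768),
]

total = sum(w for _, w in ENTRIES)  # normalise weights to sum to 1 (assumed default)
merged_state = None

for name, weight in ENTRIES:
    model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16)
    state = model.state_dict()
    if merged_state is None:
        # accumulate in float32 for numerical stability, cast back to float16 at the end
        merged_state = {k: torch.zeros_like(v, dtype=torch.float32) for k, v in state.items()}
    for k, v in state.items():
        merged_state[k] += (weight / total) * v.float()
    del model  # free memory before loading the next checkpoint

# dtype: float16, as in the removed config
merged_state = {k: v.to(torch.float16) for k, v in merged_state.items()}
```

Under that normalisation assumption, the effective mixing proportions are roughly 0.14 / 0.14 / 0.29 / 0.43, so `SpydazWeb_AI_LCARS_Humanization_003` contributes the largest share of the merged weights.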