
LeroyDyer/Mixtral_AI_oo7

Its milestone was LeroyDyer/Mixtral_AI_oo7:

  • Not randomized, but the original model has become lost inside the merge: in alignment testing it was far from the known embedded data, yet it remains a strong base model, as it had been merged multiple times with its past lives as well as the new targets. It will be remerged into the same past model, which is the target model. This model cannot be discounted, but it is not the target model; hence it must be kept as a base model.

FINAL MERGE - The Model's Journey!

This merge chain included various models which have themselves been merged along strong pathways, i.e. selecting good parents for the next generation of training and creating a new base marker. The new desirable models were TIES-merged, with weights chosen specifically because of AlphaMonarch's rich merge history, and because both it and OmniBeagle have already been merged with multiple targets. This removes the need to search for models and go down a long rabbit hole of merging, since AlphaMonarch has already been merged with 50 top targets. A random rogue gene was also introduced to this merge chain: a Hebrew model from yam-peleg. Although its tokenizer was not merged, I will consider extending a model to include that tokenizer, since Hebrew uses a non-standard character set. The problem is whether extending the vocabulary, which extends the embedding space, will interfere with the existing tokenizer. Perhaps it is better to find a way to train the Llama tokenizer so that a new tokenizer can be trained from this model's tokenizer and extended with a Hebrew corpus, as well as other non-standard tokenizers (another paper in itself: the problem of tokenizers).

Our past merged models and key-stage models were also featured at strategic points, used as an attention layer in the merges to keep the models on track and to keep the base model enforced by its past training. Hence the deltas in the TIES merges are specifically targeted; these deltas are what end up in the final created model, as they are merged into the base model, which is why TIES merges are preferred for retaining the training. But when doing this the output can sometimes be unexpected, so the targets generated from the X and Y merges are in turn merged to produce a Z merge, and Z/Y/X are then merged in a linear merge. These subsequent linear merges seem to fix output anomalies, so this merge contained a cascade of genetic merges.

So we can say that our TIES merges are our strong targets and our linear merges are our soft targets. We can take it a step further and cascade the merges again: take X, Y, Z together with our soft merges X1 and Y1 and remerge them into a new TIES merge (a strong child candidate), which is used as the new base model. This base model, the final stage from our trainings, and X, Y, Z can then be linear-merged into a single target, so that the final training stage we came from stays inside the final merge, keeping the model as the parent alongside its offspring (which are stronger candidates) and the strong child. Hence the strong child is given the highest weights in the merge, X, Y and Z are averaged, and the final stage gets only a small weight, as it is there only to bias the final output and remember where we were in our training jobs. So the output produces ME!
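To make the cascade described above concrete, below is a minimal sketch of the two merge operations in plain PyTorch, using small toy tensors in place of full 7B checkpoints. The function names, the `density` value, and the merge weights are illustrative assumptions only; this is not the exact mergekit recipe used for this model.

```python
# Minimal sketch of the two merge operations described above, using toy
# tensors in place of full checkpoints. Weights and density are illustrative.
import torch

def linear_merge(state_dicts, weights):
    """Weighted average of parameter tensors (the 'soft' merge)."""
    total = sum(weights)
    return {
        key: sum(w * sd[key] for sd, w in zip(state_dicts, weights)) / total
        for key in state_dicts[0]
    }

def ties_like_merge(base, candidates, density=0.5):
    """Simplified TIES-style merge (the 'strong' merge): keep only the
    largest-magnitude deltas from each candidate, elect a sign per
    parameter, and add the mean agreeing delta back onto the base."""
    merged = {}
    for key in base:
        deltas = []
        for cand in candidates:
            delta = cand[key] - base[key]
            # trim: zero out all but the top-`density` fraction by magnitude
            k = max(1, int(delta.numel() * density))
            threshold = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
            deltas.append(torch.where(delta.abs() >= threshold, delta, torch.zeros_like(delta)))
        stacked = torch.stack(deltas)
        sign = torch.sign(stacked.sum(dim=0))                # elected sign per parameter
        agree = torch.where(torch.sign(stacked) == sign, stacked, torch.zeros_like(stacked))
        count = (agree != 0).sum(dim=0).clamp(min=1)
        merged[key] = base[key] + agree.sum(dim=0) / count
    return merged

# Toy demonstration of the cascade: X, Y, Z stand in for merged children,
# the "strong child" gets the highest weight and the final training stage
# only a small biasing weight, as described above.
base = {"w": torch.randn(4, 4)}
x, y, z = ({"w": base["w"] + 0.1 * torch.randn(4, 4)} for _ in range(3))
strong_child = ties_like_merge(base, [x, y, z])
final_stage = {"w": base["w"] + 0.05 * torch.randn(4, 4)}
final = linear_merge([strong_child, x, y, z, final_stage],
                     weights=[0.5, 0.1, 0.1, 0.1, 0.2])
```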

NEW BASE MODEL MARKER :

So we can begin a new set of training goals and align this model according to our new set of needs, and expect that all information and layers previously generated by LoRA configs and past training have become submerged into the model and are even open pathways for training again. So we can say this is an untrained model: it has no specific goals and has been normalised for general use despite a long training pathway.
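Since the model is described here as a fresh base that is open to further training, a minimal LoRA fine-tuning setup might look like the following sketch. The repository id, target modules, and hyperparameters are assumptions for illustration, not the settings actually used in the past training loops.

```python
# Hedged sketch: attaching a fresh LoRA adapter to the merged base for new training.
# Repo id, target modules, rank and other hyperparameters are illustrative guesses.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "LeroyDyer/Mixtral_AI_oo7"  # assumed; substitute the actual repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # typical Mistral attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# From here the usual Trainer / SFTTrainer loop applies.
```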

Past training loop - World Archives :

Past prompt: "You are the world archive of knowledge; you contain all the world's historical knowledge. You store records and data for recall and use by users. Your users request tasks to be performed; these can be calculations, recalling information, producing tables, even having a discrete discussion or creating a virtual laboratory for simulations. You are an expert in all domains, so your user will request services which only experts can provide. Perform all tasks requested; be formal when required and informal if requested." This prompt was used as a general prompt for all tasks (a minimal inference sketch is shown after the instruction list below):

Instructions for the model :

  • Storing documents and books, as well as recalling books.
  • Creating timelines in history, as well as discussing the past with ancient characters based on their historical contexts (i.e. search for all context related to a character, allow the model to generate a character from this and answer questions based on that context, and generate questions for the character regarding its actions and the things it witnessed). Responses should be informal and factual, as well as historically accurate, without deviation from the historical timeline and context.
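As an illustration of how the world-archive prompt and task list above might be applied at inference time, here is a minimal sketch. The repository id, generation settings, and chat-template handling are assumptions; if the tokenizer's chat template does not accept a system role, fold the prompt into the first user turn instead.

```python
# Minimal sketch of applying the "world archive" system prompt at inference.
# The repo id and generation settings are assumptions for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LeroyDyer/Mixtral_AI_oo7"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

system_prompt = (
    "You are the world archive of knowledge; you contain all the world's historical "
    "knowledge and store records and data for recall. Perform any task the user requests: "
    "calculations, recalling information, producing tables, discussion, or virtual "
    "simulations. Be formal when required and informal when requested."
)

messages = [
    # Some Mistral-derived chat templates reject a system role; if so,
    # prepend system_prompt to the user message instead.
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Create a timeline of the major events of the Bronze Age collapse."},
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
outputs = model.generate(inputs.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```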

Thought processes and methodologies :

Various thought processes were also installed into the past models, so they have also been included in this model and cannot be removed, along with new, previously unknown paradigms from the merged models. A past mistake with thoughts: thoughts should be placed AFTER the input and context! This space is reserved for step-by-step thinking and problem modelling; it helps the model to actually have room to process the input before outputting the response, so it should be included in the prompt when possible for complex tasks.
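As a concrete illustration of that ordering, a prompt layout could reserve the thought section after the input and context, as in this sketch. The section labels are illustrative only; they are not the exact markers used during training.

```python
# Hedged sketch of a prompt layout that reserves a step-by-step "thoughts"
# section after the input and context, as described above. Section labels
# are illustrative and not the exact markers baked into the model.
PROMPT_TEMPLATE = """### Instruction:
{instruction}

### Input:
{input}

### Context:
{context}

### Thoughts:
(think step by step here, modelling the problem before answering)

### Response:
"""

prompt = PROMPT_TEMPLATE.format(
    instruction="Summarise the treaty below and list its signatories.",
    input="<document text>",
    context="<retrieved background passages>",
)
```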

Functions :

Yes, it can write code!

