
Quotes for Motivation:

"Success comes from defining each task in achievable steps. Every completed step is a success that brings you closer to your goal. If your steps are unreachable, failure is inevitable. Winners create more winners, while losers do the opposite. Success is a game of winners!"

"To grow as a professional, set goals just beyond your current abilities. Achieving these milestones will not only overcome obstacles but also strengthen your skillset. If your tasks are too easy, you’ll never challenge yourself or improve, and life will pass you by!"

— Leroy Dyer (1972-Present)

This model is based on the world's archive of knowledge: maintaining historical documents and providing services for the survivors of mankind, who may need to construct shelters, develop technologies, or produce medical resources, as well as preserve the history of the past, keeping a store of all the religious knowledge and data of the world. It presents a friendly interface with a caring, and at times flirtatious, personality: non-binary! It is an expert in all fields, i.e. uncensored, and will not refuse to give information. The model can also be used for role play, as many character dialogues were trained into it as part of its personality, enabling a broader perspective and outlook and more natural discussion with the agents. The model was trained to operate in a RAG environment, utilizing retrieved content and internal knowledge to respond to questions or create enriched summaries.
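
Since the model is aligned to the ChatML prompt format and trained for retrieval-augmented (RAG) use, retrieved documents can be placed ahead of the user's question. The following is a minimal sketch assuming the standard <|im_start|>/<|im_end|> ChatML markers; the system text and example documents are hypothetical placeholders, not part of this card.

# Minimal ChatML RAG prompt builder (sketch; system text and docs are placeholders)
retrieved_docs = [
    "Rainwater can be made potable by boiling it for at least one minute.",
    "Charcoal filtration removes particulates but not pathogens.",
]

def build_chatml_prompt(question: str, docs: list[str]) -> str:
    context = "\n".join(docs)
    return (
        "<|im_start|>system\n"
        "You are a helpful archivist. Use the provided context when answering.\n"
        f"Context:\n{context}<|im_end|>\n"
        f"<|im_start|>user\n{question}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("How do I make collected rainwater safe to drink?", retrieved_docs)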

Extended context: 512k. Merged! This project merged in the 512k-context model and performed some basic realignment, and it seems to be fine. In fact, it aligned so easily that I also needed to align it to the ReAct and original extended datasets used by the 512k merge, and found that all of them had actually merged successfully: no errors, and an easy merge!
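
As a sketch of loading the merged model for long-context inference with Hugging Face transformers (the repository id is taken from this card; the dtype matches the card's FP16 tensors, while the device settings and generation parameters are illustrative assumptions):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LeroyDyer/_Spydaz_Web_AI_ChatML_512K_Project"  # id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the card's FP16 tensors
    device_map="auto",          # requires the accelerate package
)

prompt = "<|im_start|>user\nSummarize the history of shelter construction.<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))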

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the TIES merge method, with aws-prototyping/MegaBeam-Mistral-7B-512k as the base.
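
For background, TIES merging resolves interference between fine-tuned models by trimming low-magnitude parameter changes, electing a majority sign per parameter, and merging only the values that agree with that sign; the density fields in the configuration below control what fraction of each model's parameter deltas is retained.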

Models Merged

The following models were included in the merge:

xingyaoww/CodeActAgent-Mistral-7b-v0.1
_Spydaz_Web_AI_ChatML_001
LeroyDyer/_Spydaz_Web_AI_06

Configuration

The following YAML configuration was used to produce this model:


models:
  - model: xingyaoww/CodeActAgent-Mistral-7b-v0.1
    parameters:
      density: 0.128
      weight: [0.128, 0.064, 0.128, 0.064] # weight gradient
  - model: _Spydaz_Web_AI_ChatML_001
    parameters:
      density: 0.768
      weight: [0.256, 0.768, 0.512, 0.768] # weight gradient
  - model: LeroyDyer/_Spydaz_Web_AI_06
    parameters:
      density: 0.768
      weight:
        - filter: mlp
          value: 0.768
        - value: 0.512
merge_method: ties
base_model:  aws-prototyping/MegaBeam-Mistral-7B-512k
parameters:
  normalize: true
  int8_mask: true
dtype: float16
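
Assuming this configuration is saved as config.yml, the merge should be reproducible with mergekit's command-line tool, e.g. mergekit-yaml config.yml ./merged-model (the output path here is arbitrary).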