---
base_model: LeroyDyer/_Spydaz_Web_AI_V1_4BIT
license: mit
tags:
- Mistral_Star
- Mistral_Quiet
- Mistral
- Mixtral
- Question-Answer
- Token-Classification
- Sequence-Classification
- SpydazWeb-AI
- chemistry
- biology
- legal
- code
- climate
- medical
- text-generation-inference
- not-for-all-audiences
language:
- en
- sw
- ig
- zu
---
# SpydazWeb AI

<img src="https://cdn-avatars.huggingface.co/v1/production/uploads/65d883893a52cd9bcd8ab7cf/tRsCJlHNZo1D02kBTmfy9.jpeg" width="300"/>
https://github.com/spydaz

* 32k context window (vs 8k context in v0.1)
* Rope-theta = 1e6
* No Sliding-Window Attention
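
A minimal sketch of loading and prompting the model with the standard Hugging Face Transformers API (the `device_map="auto"` choice and generation settings are assumptions, not an official recipe):

```python
# Illustrative loading/generation sketch using the standard Transformers API;
# the 4-bit base model may additionally benefit from bitsandbytes quantization.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LeroyDyer/_Spydaz_Web_AI_V1_4BIT"  # base model from the metadata above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain how to purify water in the field."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```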


    
This model is intended to be a custom model with internal experts and RAG systems, enabling internal preprocessing of a task before a response is output.

This is based on the Quiet-STaR project, which was abandoned earlier in the year. :)


# Introduction:

## SpydazWeb AI Model:
This model is based on the world's archive of knowledge: maintaining historical documents and providing services for the survivors of mankind, who may need to construct shelters, develop technologies, or create medical resources, as well as maintaining the history of the past and keeping a store of all the religious knowledge and data of the world.
A friendly interface with a personality: caring and flirtatious at times; non-binary!
An expert in all fields: i.e. uncensored, and will not refuse to give information. The model can be used for role play, as many character dialogues were also trained into the model as part of its personality, enabling a greater perspective and outlook and natural discussion with the agents.
The model was trained to operate in a RAG environment, utilizing retrieved content and internal knowledge to respond to questions or create enriched summaries.
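
As a rough illustration of the RAG-style flow described above (the retriever and its `search_index` callable are hypothetical placeholders, not components shipped with the model):

```python
# Hypothetical RAG-style wrapper: retrieve passages, prepend them as context,
# and let the model answer. `search_index` stands in for any vector store or
# keyword index you provide; `generate` is any text-generation callable,
# e.g. the Transformers sketch shown earlier.
def answer_with_rag(question: str, search_index, generate) -> str:
    passages = search_index(question, top_k=3)  # your retriever, not the model's
    context = "\n\n".join(passages)
    prompt = (
        "Use the following context to answer the question.\n\n"
        f"### Context:\n{context}\n\n"
        f"### Question:\n{question}\n\n"
        "### Answer:\n"
    )
    return generate(prompt)
```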



### General Internal Methods:

Trained for multi-task operations as well as RAG and function calling.

This model is fully functioning and fully uncensored.

The model has been trained on multiple datasets from the Hugging Face Hub and Kaggle.

The focus has been mainly on methodology:

* Chain of Thought
* Step-by-step planning
* Tree of Thoughts
* Forest of Thoughts
* Graph of Thoughts
* Agent generation: voting, ranking, ... dual-agent response generation (see the sketch after this list)
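
A minimal sketch of the dual-agent voting/ranking idea above (the self-ranking prompt and scoring scheme are illustrative assumptions, not the model's actual internals):

```python
# Hypothetical dual-agent response generation with voting: sample several
# candidate answers, have a second "ranker" pass score each one, and return
# the highest-ranked candidate. `generate` is any text-generation callable.
def dual_agent_answer(question: str, generate, n_candidates: int = 3) -> str:
    candidates = [
        generate(f"Answer the question:\n{question}") for _ in range(n_candidates)
    ]

    def rank(answer: str) -> float:
        # The ranking "agent" is the same model prompted to judge an answer.
        verdict = generate(
            f"Question: {question}\nAnswer: {answer}\n"
            "Rate this answer from 0 to 10. Reply with only the number."
        )
        try:
            return float(verdict.strip().split()[0])
        except (ValueError, IndexError):
            return 0.0  # unparsable verdicts rank last

    return max(candidates, key=rank)
```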

With these methods the model has gained insights into tasks, enabling knowledge transfer between tasks.

The model has been intensively trained in recalling data previously entered into the matrix.
The model has also been trained on rich data and markdown outputs as much as possible;
it can also generate markdown charts with Mermaid.


## Training Regimes:
  * Alpaca
  * ChatML / OpenAI / MistralAI (see the prompt sketch after this list)
  * Text Generation
  * Question/Answer (Chat)
  * Instruction/Input/Response (Instruct)
  * Mistral Standard Prompt
  * Translation Tasks
  * Entity / Topic detection
  * Book recall
  * Coding challenges, Code Feedback, Code Summarization, Commenting Code
  * Agent ranking and response analysis
  * Medical tasks
    * PubMed
    * Diagnosis
    * Psychiatry
    * Counselling
    * Life Coaching
    * Note taking
    * Medical smiles
    * Medical Reporting
  * Virtual laboratory simulations
  * Chain of Thought methods
  * One-shot / multi-shot prompting tasks
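
As a hedged illustration of two prompt formats named in the list above (Alpaca-style Instruction/Input/Response and ChatML), the builders below follow the widely used public conventions; the model's authoritative template is whatever ships in its tokenizer configuration:

```python
# Illustrative prompt builders for two of the formats listed above.
# These follow the common public Alpaca and ChatML conventions; check the
# tokenizer's chat template for the model's actual expected format.
def alpaca_prompt(instruction: str, input_text: str = "") -> str:
    prompt = f"### Instruction:\n{instruction}\n\n"
    if input_text:
        prompt += f"### Input:\n{input_text}\n\n"
    return prompt + "### Response:\n"

def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
```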