Text Generation
Transformers
Safetensors
4 languages
mistral
Mistral_Star
Mistral_Quiet
Mistral
Mixtral
Question-Answer
Token-Classification
Sequence-Classification
SpydazWeb-AI
chemistry
biology
legal
code
climate
medical
text-generation-inference
Not-For-All-Audiences
Inference Endpoints
4-bit precision
bitsandbytes
Update README.md
README.md
CHANGED
@@ -32,12 +32,7 @@ https://github.com/spydaz
 * 32k context window (vs 8k context in v0.1)
 * Rope-theta = 1e6
 * No Sliding-Window Attention
-
-* Pre-Thoughts - enables pre-generation steps that produce potential artifacts for task solving:
-  * Generates plans for step-by-step thinking
-  * Generates Python code artifacts for future tasks
-  * Recalls context for the task internally, to be used as a reference for the task
-  * Shows thoughts or hidden thought usage (similar to Self-RAG)
+
 
 
 This model will be a custom model with internal experts and RAG systems
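The "Rope-theta = 1e6" line above sets the base frequency for rotary position embeddings; raising it from the common 1e4 default is what lets attention stay coherent over the 32k context window. A minimal stdlib sketch of how the per-dimension rotary frequencies follow from that base (the `head_dim` value here is illustrative, not taken from this model's config):

```python
import math

def rope_inv_freq(head_dim, theta=1e6):
    """Inverse frequencies for rotary position embeddings:
    inv_freq[i] = theta ** (-2*i / head_dim), one per pair of dimensions."""
    return [theta ** (-(2 * i) / head_dim) for i in range(head_dim // 2)]

# Illustrative head dimension; a larger theta (1e6 vs the common 1e4)
# slows the frequency decay, stretching positional resolution to long contexts.
freqs = rope_inv_freq(head_dim=128, theta=1e6)
print(freqs[0])   # 1.0 -- the first pair always rotates at the base rate
print(freqs[-1])  # the slowest frequency, much smaller with theta=1e6
```

This is a sketch of the standard RoPE frequency schedule, not code from this repository; the actual values are handled internally by the model's `transformers` configuration.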