Text Generation
Transformers
Safetensors
mistral
text-generation-inference
unsloth
Mistral_Star
Mistral_Quiet
Mistral
Mixtral
Question-Answer
Token-Classification
Sequence-Classification
SpydazWeb-AI
chemistry
biology
legal
code
climate
medical
LCARS_AI_StarTrek_Computer
chain-of-thought
tree-of-knowledge
forest-of-thoughts
visual-spacial-sketchpad
alpha-mind
knowledge-graph
entity-detection
encyclopedia
wikipedia
stack-exchange
Reddit
Cyber-series
MegaMind
Cybertron
SpydazWeb
Spydaz
LCARS
star-trek
mega-transformers
Mulit-Mega-Merge
Multi-Lingual
Afro-Centric
African-Model
Ancient-One
Inference Endpoints
Update README.md
README.md
CHANGED
@@ -335,6 +335,19 @@ This design is flexible and reusable for various file types, making it a robust


# PROMPT
+This prompt enables intelligence as well as role play! Finding a balance between concepts, we attempt to guide the model to be both outgoing and a deep thinker, shaping the model's responses as well as its train of thought:
+We give some example outputs for basic empathetic responses. These can even be replaced with more formal or informal examples:
+We attempt to give the model some inner dialogue as well as a role in which it can begin to generate behaviors and personalities. I advise adding to or adjusting the prompt rather than removing it; this way the model stays activated for its tasks.
+The prompt also gives the model an example of structuring responses with key elements as well as thought processes. These could be agents, if tools and agents are specified in its additional settings:
+Hence we give the model an example of the ReAct process, where possible, to structure tool usage as well as an overall method for general responses:
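As an aside, the ReAct structuring mentioned above can be sketched in plain Python. This is a minimal illustration, not the model's actual prompt machinery: the `calculator` tool, the `TOOLS` registry, and the scripted Thought/Action/Answer trace are all hypothetical stand-ins (in practice the model itself emits those lines).

```python
def calculator(expression: str) -> str:
    """A disposable, user-unseen tool: evaluate simple arithmetic."""
    return str(eval(expression, {"__builtins__": {}}))

# Hypothetical tool registry; real deployments would list the model's tools.
TOOLS = {"calculator": calculator}

# Scripted Thought/Action/Answer trace standing in for real model output.
SCRIPT = [
    ("Thought", "I need to compute 6 * 7 before answering."),
    ("Action", "calculator: 6 * 7"),
    ("Answer", "The result is {obs}."),
]

def react_loop(script):
    """Walk a ReAct-style trace, running a tool on each Action step."""
    observation = ""
    for step, content in script:
        if step == "Action":
            tool_name, _, arg = content.partition(":")
            observation = TOOLS[tool_name.strip()](arg.strip())
            print(f"{step}: {content}\nObservation: {observation}")
        elif step == "Answer":
            return content.format(obs=observation)
        else:
            print(f"{step}: {content}")

print(react_loop(SCRIPT))  # The result is 42.
```

The point of the sketch is only the loop shape: Thought, then Action, then an Observation fed back before the final Answer.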
+Planning, examining potential outputs, and effective searching for information within the model are also key in this prompt, as we do not expect to be connected to tools or outside resources. In fact the opposite: we expect the model to generate tools and plans as required, even if they are disposable tools unseen by the user.
+This can even be base64 image translation, for which the model has already been trained on vast amounts of images in base64 format, so it learned different file types for base64 conversion. Likewise for entity detection: it was trained on named entity recognition tasks, so such tasks would be expected to be performed intuitively, internally, without tools, despite being overfit for the task on some very good datasets such as the open mind and the WordNet and LIWC behaviour-identification entity recognition data.
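The "different file types" part of the base64 training described above boils down to something checkable in stdlib Python: after decoding, the file type is visible in the leading magic bytes. A minimal sketch (the magic-byte table is a small illustrative subset, not the model's training data):

```python
import base64

# A few common magic-byte prefixes; real file-type sniffers know many more.
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
    b"GIF87a": "gif",
    b"GIF89a": "gif",
    b"%PDF": "pdf",
}

def sniff_base64(payload: str) -> str:
    """Decode a base64 string and guess the file type from its header."""
    raw = base64.b64decode(payload)
    for magic, kind in MAGIC.items():
        if raw.startswith(magic):
            return kind
    return "unknown"

# Round-trip a minimal PNG-like header through base64.
fake_png = base64.b64encode(b"\x89PNG\r\n\x1a\n" + b"\x00" * 16).decode()
print(sniff_base64(fake_png))  # png
```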
+So these lists or data were basically overfit ... i.e. this embeds the basic data as a truthful base from which the model begins building embeddings and similarities.
+Even for audio recognition it was trained on different tasks, from speech-to-text, to spectrogram-to-text, to image, etc., doing various conversions in Python to create the dataset, as well as tool usage for the same tasks.
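The spectrogram-style conversion mentioned above can be sketched with the standard library alone: frame the signal, then take a DFT of each frame. This is only an illustration of the transform (a real dataset build would use numpy or librosa, and the sample rate, tone, and frame size below are made-up values):

```python
import cmath
import math

def dft_magnitudes(frame):
    """Naive DFT: magnitude of each frequency bin up to Nyquist."""
    n = len(frame)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(frame))) for k in range(n // 2)]

def spectrogram(signal, frame_size=64):
    """Split the signal into frames and transform each one."""
    frames = [signal[i:i + frame_size]
              for i in range(0, len(signal) - frame_size + 1, frame_size)]
    return [dft_magnitudes(f) for f in frames]

# A 1 kHz tone sampled at 8 kHz should peak in bin 1000 * 64 / 8000 = 8.
sr, tone = 8000, 1000.0
signal = [math.sin(2 * math.pi * tone * t / sr) for t in range(256)]
spec = spectrogram(signal)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(peak_bin)  # 8
```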
+So I would say the model has all kinds of hidden potential for base64: even HTML websites can be correctly recreated and their images described, as the media is often in base64 format!
+So for this task (untrained), I would expect some reasonable response .. transferable skills.
+
+


```python