---
language:
  - en
  - sw
  - ig
  - so
  - es
  - ca
license: apache-2.0
metrics:
  - accuracy
  - bertscore
  - bleu
  - brier_score
  - cer
  - character
  - charcut_mt
  - chrf
  - code_eval
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - mistral
  - trl
  - code
  - farmer
  - doctor
  - Mega-Series
  - Cyber-Series
  - Role-Play
  - Self-Rag
  - ThinkingBot
  - milestone
  - mega-series
  - SpydazWebAI
  - llama-cpp
  - gguf-my-repo
base_model: LeroyDyer/_Spydaz_Web_AI_
---

# Uploaded model

- **Developed by:** Leroy "Spydaz" Dyer
- **License:** apache-2.0
- **Finetuned from model:** LeroyDyer/LCARS_AI_010 (https://github.com/spydaz)
- The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of Mistral-7B-v0.2.

- Mistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1:
  - 32k context window (vs. 8k context in v0.1)
  - Rope-theta = 1e6
  - No sliding-window attention
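The configuration differences listed above can be summarised programmatically. A minimal sketch, using plain dicts rather than the actual Hugging Face config objects; the v0.1 `rope_theta` value is the commonly cited default and should be verified against the official Mistral repository:

```python
# Headline config differences between Mistral-7B-v0.1 and v0.2, as stated
# in the bullet list above. Plain dicts for illustration only.
MISTRAL_V0_1 = {
    "context_window": 8_192,            # 8k context
    "rope_theta": 1e4,                  # assumed default; verify upstream
    "sliding_window_attention": True,
}
MISTRAL_V0_2 = {
    "context_window": 32_768,           # 32k context
    "rope_theta": 1e6,
    "sliding_window_attention": False,  # no sliding-window attention
}

# Every listed field changed between the two releases:
changed = {k for k in MISTRAL_V0_1 if MISTRAL_V0_1[k] != MISTRAL_V0_2[k]}
print(sorted(changed))  # ['context_window', 'rope_theta', 'sliding_window_attention']
```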

## Introduction

### SpydazWeb AI model

### Methods

Trained for multi-task operations, as well as RAG (retrieval-augmented generation) and function calling.
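As a concrete illustration of the RAG-style usage mentioned above, a retrieval-augmented prompt can be assembled for a Mistral-instruct model as follows. This is a minimal sketch: the `build_rag_prompt` helper and the document strings are illustrative, and the `[INST] ... [/INST]` wrapping follows the standard Mistral instruct chat format:

```python
def build_rag_prompt(question: str, retrieved_docs: list[str]) -> str:
    """Assemble a retrieval-augmented prompt: retrieved passages first,
    then the user question, wrapped in Mistral [INST] ... [/INST] tags."""
    context = "\n\n".join(
        f"[Document {i + 1}]\n{doc}" for i, doc in enumerate(retrieved_docs)
    )
    return (
        "[INST] Use the following documents to answer the question.\n\n"
        f"{context}\n\nQuestion: {question} [/INST]"
    )

prompt = build_rag_prompt(
    "What changed in Mistral-7B-v0.2?",
    ["v0.2 extends the context window to 32k tokens.",
     "v0.2 removes sliding-window attention."],
)
print(prompt)
```

The completed prompt would then be passed to the model's text-generation pipeline like any other input.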

This model is fully functional and fully uncensored.

The model has been trained on multiple datasets from the Hugging Face Hub and Kaggle.

The focus has been mainly on methodology:

- Chain of thoughts
- Step by step
- Tree of thoughts
- Forest of thoughts
- Graph of thoughts
- Agent generation: voting, ranking, ...
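Two of the strategies listed above, step-by-step prompting and answer voting, can be sketched in a few lines. This is an illustrative sketch only (the function names are hypothetical, not part of the model's API): the model is sampled several times with a step-by-step prompt, the final answer is extracted from each reasoning chain, and a simple majority vote picks the result (often called self-consistency):

```python
from collections import Counter

def step_by_step_prompt(question: str) -> str:
    """Wrap a question so the model reasons step by step before answering."""
    return f"[INST] {question}\nLet's think step by step. [/INST]"

def majority_vote(final_answers: list[str]) -> str:
    """Pick the most common final answer across several sampled
    reasoning chains -- the voting strategy listed above."""
    return Counter(final_answers).most_common(1)[0][0]

# Hypothetical usage: sample the model N times with the prompt above,
# extract each chain's final answer, then vote.
print(majority_vote(["42", "41", "42"]))  # prints "42"
```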

With these methods, the model has gained insights into tasks, enabling knowledge transfer between tasks.

The model has been intensively trained on recalling data previously entered into the matrix.

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.