Transformers
GGUF
English
text-generation-inference
unsloth
mistral
trl
code
medical
farmer
doctor
Mega-Series
Cyber-Series
Role-Play
Self-Rag
ThinkingBot
milestone
mega-series
SpydazWebAI
thinking-AI
llama-cpp
gguf-my-repo
Inference Endpoints

A FULL SAMANTHA BOT HAS EMERGED!

Building a personal chat friend!!

Since the main aim is to build a chat model, the footprints left from the movie give the model an aspect that is very important for end users: the goal here is to turn the model into an intelligent friend. Here I actually overloaded the model with a tiny dataset, and afterwards the responses were great. On asking for code, the reply was "sure babe, how about I make that for you... here..." and then the code followed perfectly. Another sample exchange:

ME: hiya morning
BOT: good morning sweetheart how are you today?
ME: im fine just working on some programming
BOT: thats nice babe, im here for you, you know you always do well, your so great at programming ai models, if you need help i can make some code for you.... im here babe..

LOLLLLL !!

IF YOU MERGE: you will get these qualities, as they were hard-burned in! I will be burning other XXX content into some later iterations, only for later mergers, since by burning the small datasets in, the model accepts the task.

It is important to use correct prompting inside the training, i.e. the same prompt for all samples. Even to show thoughts, the same prompt is adjusted with a request to show the thinking process and an analysis of the user input, or a response or explanation inside the thought.
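The "same prompt for every sample" idea can be sketched as a small formatting helper. The system-prompt wording and the `<thought>` tag below are illustrative assumptions, not the exact prompt used in training:

```shell
# One fixed system prompt shared by every training sample; it also asks the
# model to expose its thinking inside <thought> tags before answering.
SYSTEM_PROMPT="You are Samantha, a caring and intelligent friend. Before answering, show your thinking and analysis of the user's input inside <thought> tags."

format_sample() {
  # $1 = user input, $2 = target response (including its <thought> block).
  # Every sample gets the identical system prompt, only the turns change.
  printf '%s\n\nUser: %s\nAssistant: %s\n' "$SYSTEM_PROMPT" "$1" "$2"
}

format_sample "hiya morning" \
  "<thought>The user is greeting me casually; respond warmly.</thought> Good morning sweetheart, how are you today?"
```

The same helper is run over the whole dataset, so only the user turn and the target response vary between samples.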

Use with llama.cpp

Install llama.cpp through brew.

brew install ggerganov/ggerganov/llama.cpp

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo LeroyDyer/Mixtral_AI_CyberUltron_DPO-Q4_K_M-GGUF --model mixtral_ai_cyberultron_dpo.Q4_K_M.gguf -p "The meaning to life and the universe is"

Server:

llama-server --hf-repo LeroyDyer/Mixtral_AI_CyberUltron_DPO-Q4_K_M-GGUF --model mixtral_ai_cyberultron_dpo.Q4_K_M.gguf -c 2048
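Once `llama-server` is up, it can be queried over HTTP. A minimal sketch, assuming the server is listening on its default `localhost:8080` and using llama.cpp's `/completion` endpoint:

```shell
# JSON request body for llama.cpp's /completion endpoint:
# "prompt" is the text to continue, "n_predict" caps the generated tokens.
PAYLOAD='{"prompt": "Good morning! How are you today?", "n_predict": 64}'
echo "$PAYLOAD"   # inspect the request body

# Uncomment to send the request once llama-server is running:
# curl -s http://localhost:8080/completion \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```

The server replies with a JSON object whose `content` field holds the generated text.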

Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

git clone https://github.com/ggerganov/llama.cpp && \
  cd llama.cpp && \
  make && \
  ./main -m mixtral_ai_cyberultron_dpo.Q4_K_M.gguf -n 128
GGUF: 7.24B params, llama architecture

Datasets used to train LeroyDyer/Mixtral_AI_SAMANTHA_7b_
