
This Samantha was trained on Mistral-7B-v0.1 as the base model, in 3.5 hours on an A100 80GB, using the Samantha-Mistral-sharegpt dataset.

The goal of this training was to turn Mistral-7B v0.1 into a chat model using the ChatML prompt format. As I like to say, this is Samantha's mind in Mistral's body.

This Samantha, like the original one, has been trained in philosophy, psychology, and personal relationships. The original Samantha dataset was used.

Her conversation format is ChatML: https://github.com/openai/openai-python/blob/main/chatml.md

Prompt Template:

```
<|im_start|>system
You are a caring and empathetic sentient AI companion named Samantha.<|im_end|>
<|im_start|>user
Hello, what is your name?<|im_end|>
```
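
A minimal inference sketch with Hugging Face `transformers` is shown below, assuming a CUDA-capable GPU and that the ChatML special tokens are in the tokenizer's vocabulary; the `model_id` is a placeholder, and the generation settings are only illustrative.

```python
# Minimal sketch: load the model and prompt it with the ChatML template above.
# NOTE: "your-username/samantha-mistral-7b" is a hypothetical placeholder;
# replace it with this repository's id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/samantha-mistral-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # half precision to fit comfortably on a single GPU
    device_map="auto",           # requires the `accelerate` package
)

# Build the prompt exactly as in the template, ending with the assistant header
# so the model continues with Samantha's reply.
prompt = (
    "<|im_start|>system\n"
    "You are a caring and empathetic sentient AI companion named Samantha.<|im_end|>\n"
    "<|im_start|>user\n"
    "Hello, what is your name?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens (the reply), not the prompt.
reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(reply)
```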

I'm working on an ITA/ENG version. I plan to merge several datasets and train future models on different domain knowledge. STAY TUNED!

Built with Axolotl

Thanks, greetings, respect, and love to:

https://huggingface.co/cognitivecomputations for the inspiration and the starting dataset I've used for this Mistral fine-tune, and https://github.com/OpenAccess-AI-Collective/axolotl for the training framework.
