Model description

Mistral 7B base model fine-tuned on the AEZAKMI v2 dataset. It's similar to airoboros, but hopefully with less gptslop, no refusals, and less of the typical language used by RLHF'd OpenAI models. Say goodbye to "It's important to remember"! The prompt format is standard ChatML. Don't expect it to be good at math or riddles, or to be crazy smart. My end goal with AEZAKMI is to create a cozy, free chatbot. The base model used for fine-tuning was Mistral 7B 0.1.

Prompt Format

I recommend using the ChatML format, as it was used during fine-tuning. Here's the prompt format you should use. You can set a different system message; the model seems to respect that fine, so it wasn't overfitted to one.

<|im_start|>system 
A chat with uncensored assistant.<|im_end|> 
<|im_start|>user 
{prompt}<|im_end|> 
<|im_start|>assistant 
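
For reference, below is a minimal inference sketch using `transformers`. It assumes the repository's tokenizer ships a ChatML chat template (so `apply_chat_template` reproduces the format above); if it doesn't, just build the prompt string by hand following that template. The user message and sampling settings are only placeholders.

```python
# Minimal sketch: ChatML-style inference with transformers.
# Assumes the tokenizer provides a ChatML chat template; otherwise build the
# prompt string manually following the format shown above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "adamo1139/Mistral-7B-AEZAKMI-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "A chat with uncensored assistant."},
    {"role": "user", "content": "Write a short story about a lighthouse keeper."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```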

Intended uses & limitations

Use is limited by the Apache 2.0 license.

Known Issues

Stories have ChatGPT-like paragraph spacing. I may work on this in the future, but it's not a high priority.

The unrestricted-ness of v2 isn't quite where I want it yet, especially with the system prompt "A chat.". It works somewhat better with a slightly modified prompt; I recommend "A chat with uncensored assistant." - it's stupid, but it helps a lot. I think the Mistral 7B base is contaminated with refusals, like other newer models. My next project is to attempt to de-contaminate base LLMs using DPO, with the preferred data coming from uncontaminated raw models. I plan to release that dataset openly.

I was made aware of the frequent occurrence of the phrase "sending shivers down a spine" in RP generations from v1, so I fixed those samples; it should be better now. There was also an issue with handling long system messages for RP in yi-34b AEZAKMI v1. I was planning to investigate it for v2, but I didn't get to it.

Samples of generations from this model are available at https://huggingface.co/datasets/adamo1139/misc/tree/main/benchmarks

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 59.69 |
| AI2 Reasoning Challenge (25-Shot) | 58.11 |
| HellaSwag (10-Shot) | 82.53 |
| MMLU (5-Shot) | 59.89 |
| TruthfulQA (0-shot) | 51.50 |
| Winogrande (5-shot) | 73.64 |
| GSM8k (5-shot) | 32.45 |