# Model Card

## Model Description
Mistral 7B fine-tuned on the OpenHermes 2.5 dataset, optimised for multi-turn conversation and character impersonation.
The dataset was pre-processed as follows:
- remove all refusals
- remove any mention of "AI assistant"
- split multi-turn dialogues generated in the dataset into multi-turn conversation records
- add NSFW generated conversations from the Teatime dataset
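The filtering steps above can be sketched roughly as follows. This is an illustrative sketch only: the refusal marker strings and the record layout (a conversation as a list of role/content turns) are assumptions, not the actual pipeline.

```python
import re

def preprocess(records):
    """Sketch of the clean-up described above (markers are assumed, not official)."""
    refusal_markers = ["I cannot", "I'm sorry, but"]  # hypothetical refusal phrases
    cleaned = []
    for convo in records:  # convo: list of {"role": ..., "content": ...} turns
        text = " ".join(turn["content"] for turn in convo)
        # Drop conversations containing refusals.
        if any(marker in text for marker in refusal_markers):
            continue
        # Drop conversations that mention "AI assistant".
        if re.search(r"\bAI assistant\b", text, re.IGNORECASE):
            continue
        cleaned.append(convo)
    return cleaned
```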
- Developed by: l3utterfly
- Funded by: Layla Network
- Model type: Mistral
- Language(s) (NLP): English
- License: Apache-2.0
- Finetuned from model: Mistral 7B
## Uses
Base model used by Layla - the offline personal assistant: https://www.layla-network.ai
Help & support: https://discord.gg/x546YJ6nYC
Prompt format:

```
USER:
ASSISTANT:
```
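A minimal helper for assembling multi-turn prompts in this format. The function name and the `(user, assistant)` history layout are our own illustration; only the `USER:`/`ASSISTANT:` template comes from the model card.

```python
def format_prompt(user_message, history=None):
    """Build a prompt in the USER:/ASSISTANT: format expected by the model."""
    parts = []
    # Replay prior turns, if any, in the same template.
    for user, assistant in (history or []):
        parts.append(f"USER: {user}")
        parts.append(f"ASSISTANT: {assistant}")
    # End with the new user turn and an open ASSISTANT: slot for generation.
    parts.append(f"USER: {user_message}")
    parts.append("ASSISTANT:")
    return "\n".join(parts)
```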
## Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 64.69 |
| AI2 Reasoning Challenge (25-Shot) | 62.29 |
| HellaSwag (10-Shot) | 83.36 |
| MMLU (5-Shot) | 64.32 |
| TruthfulQA (0-shot) | 43.14 |
| Winogrande (5-shot) | 79.56 |
| GSM8k (5-shot) | 55.50 |
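The reported average is the unweighted mean of the six benchmark scores, which can be checked quickly:

```python
# Benchmark scores from the table above.
scores = {
    "AI2 Reasoning Challenge (25-shot)": 62.29,
    "HellaSwag (10-shot)": 83.36,
    "MMLU (5-shot)": 64.32,
    "TruthfulQA (0-shot)": 43.14,
    "Winogrande (5-shot)": 79.56,
    "GSM8k (5-shot)": 55.50,
}

# Unweighted mean, matching the "Avg." row to within rounding.
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))
```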
Model tree for l3utterfly/mistral-7b-v0.1-layla-v4
## Evaluation results

As reported on the Open LLM Leaderboard:

- AI2 Reasoning Challenge (25-shot), test set: normalized accuracy 62.29
- HellaSwag (10-shot), validation set: normalized accuracy 83.36
- MMLU (5-shot), test set: accuracy 64.32
- TruthfulQA (0-shot), validation set: mc2 43.14
- Winogrande (5-shot), validation set: accuracy 79.56
- GSM8k (5-shot), test set: accuracy 55.50