Mistral 7B Merges
Merges that may or may not be worth using. All credit goes to Maxime Labonne's course (https://github.com/mlabonne/llm-course) and to mergekit.
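All of these merges were built with mergekit from a YAML recipe. As a rough sketch of the workflow (the model names, densities, and weights below are placeholders, not any of the actual recipes used for these repos), a TIES config can be written and handed to the mergekit CLI like this:

```python
import yaml  # pip install pyyaml

# Illustrative TIES recipe in mergekit's YAML schema; all model IDs and
# parameter values here are placeholders.
config = {
    "models": [
        {"model": "mistralai/Mistral-7B-v0.1"},  # base model, no extra parameters
        {"model": "example/finetune-a", "parameters": {"density": 0.5, "weight": 0.5}},
        {"model": "example/finetune-b", "parameters": {"density": 0.5, "weight": 0.3}},
    ],
    "merge_method": "ties",  # trim task vectors by density, elect signs, then sum
    "base_model": "mistralai/Mistral-7B-v0.1",
    "parameters": {"normalize": True},
    "dtype": "float16",
}

with open("config.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)

# The merge itself is then produced with the mergekit CLI, e.g.:
#   mergekit-yaml config.yaml ./merged-model --copy-tokenizer
```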
Note: This one is pretty good IMO. EDIT: Highest Open LLM Leaderboard score among my 7B merges.

Open LLM Leaderboard📑 results:

| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|--------:|------:|----------:|------:|-----------:|-----------:|------:|
| 75.36 | 73.38 | 88.5 | 64.94 | 71.5 | 83.58 | 70.28 |
jsfs11/WONMSeverusDevil-TIES-7B
Note: LLM AutoEval📑 results for WONMSeverusDevil-TIES-7B:

| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|--------------------------|--------:|--------:|-----------:|---------:|--------:|
| WONMSeverusDevil-TIES-7B | 45.26 | 77.07 | 72.47 | 48.85 | 60.91 |
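For a quick local sanity check of any of these merges, a minimal sketch using the transformers library (the prompt and generation settings are placeholders; device_map="auto" assumes accelerate is installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jsfs11/WONMSeverusDevil-TIES-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = "Explain model merging in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```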
jsfs11/WestOrcaNeuralMarco-DPO-v2-DARETIES-7B
Note: Open LLM Leaderboard📑 results:

| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|--------:|------:|----------:|------:|-----------:|-----------:|------:|
| 73.98 | 71.93 | 88.06 | 64.99 | 62.96 | 82.79 | 70.13 |
jsfs11/TurdusTrixBeagle-DARETIES-7B
Note: Open LLM Leaderboard📑 results:

| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|--------:|------:|----------:|------:|-----------:|-----------:|------:|
| 75.2 | 73.46 | 88.61 | 64.89 | 68.81 | 85.16 | 70.28 |
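Both WestOrcaNeuralMarco-DPO-v2-DARETIES-7B and TurdusTrixBeagle-DARETIES-7B are DARE-TIES merges. A minimal sketch of what such a configuration looks like in mergekit's schema; the constituent models, densities, and weights below are illustrative, not the actual recipes:

```python
import yaml

# DARE randomly drops and rescales task-vector deltas; dare_ties additionally
# applies TIES-style sign election before summing.
dare_ties_config = {
    "models": [
        {"model": "mistralai/Mistral-7B-v0.1"},  # base model
        {"model": "example/model-a", "parameters": {"density": 0.53, "weight": 0.4}},
        {"model": "example/model-b", "parameters": {"density": 0.53, "weight": 0.3}},
    ],
    "merge_method": "dare_ties",
    "base_model": "mistralai/Mistral-7B-v0.1",
    "parameters": {"int8_mask": True},
    "dtype": "bfloat16",
}
print(yaml.safe_dump(dare_ties_config, sort_keys=False))
```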
jsfs11/WildMBXMarconi-SLERP-7B
Note: Open LLM Leaderboard📑 results:

| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|--------:|------:|----------:|------:|-----------:|-----------:|------:|
| 75.09 | 73.29 | 88.49 | 64.9 | 68.98 | 83.98 | 70.89 |
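WildMBXMarconi-SLERP-7B is a SLERP merge, which interpolates between exactly two models. A sketch of the usual two-model SLERP configuration; the layer ranges and interpolation schedule follow the common llm-course pattern, not this model's exact recipe:

```python
import yaml

slerp_config = {
    "slices": [{
        "sources": [
            {"model": "example/model-a", "layer_range": [0, 32]},
            {"model": "example/model-b", "layer_range": [0, 32]},
        ],
    }],
    "merge_method": "slerp",
    "base_model": "example/model-a",
    "parameters": {
        "t": [
            {"filter": "self_attn", "value": [0, 0.5, 0.3, 0.7, 1]},  # interpolation factor per layer bucket for attention
            {"filter": "mlp", "value": [1, 0.5, 0.7, 0.3, 0]},        # and for MLP blocks
            {"value": 0.5},                                           # default for all remaining tensors
        ],
    },
    "dtype": "bfloat16",
}
print(yaml.safe_dump(slerp_config, sort_keys=False))
```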
jsfs11/WestLakeSeverusV2-DPO-7B-DARE-TA
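This last one combines DARE with plain task-arithmetic weighting. Assuming the "DARE-TA" suffix maps to mergekit's dare_linear method (an assumption based on the name; it skips TIES sign election), a configuration sketch with placeholder models and weights:

```python
import yaml

dare_ta_config = {
    "models": [
        {"model": "mistralai/Mistral-7B-v0.1"},  # base model
        {"model": "example/model-a", "parameters": {"density": 0.6, "weight": 0.5}},
        {"model": "example/model-b", "parameters": {"density": 0.6, "weight": 0.5}},
    ],
    "merge_method": "dare_linear",  # DARE drop/rescale, then a weighted sum of task vectors
    "base_model": "mistralai/Mistral-7B-v0.1",
    "dtype": "bfloat16",
}
print(yaml.safe_dump(dare_ta_config, sort_keys=False))
```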