---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- merge
model-index:
- name: Mayonnaise-4in1-02
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 73.38
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaitchup/Mayonnaise-4in1-02
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 88.51
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaitchup/Mayonnaise-4in1-02
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.89
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaitchup/Mayonnaise-4in1-02
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 69.04
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaitchup/Mayonnaise-4in1-02
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 84.37
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaitchup/Mayonnaise-4in1-02
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 71.04
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaitchup/Mayonnaise-4in1-02
      name: Open LLM Leaderboard
---
# Model Card for Mayonnaise-4in1-02

This is a TIES merge of several Mistral-7B-based models created with [mergekit](https://github.com/arcee-ai/mergekit), based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
## Model Details

The model was created using a recipe detailed in this article: *The Mayonnaise: Rank First on the Open LLM Leaderboard with TIES-Merging*.
### Model Description

- **Developed by:** The Kaitchup
- **Model type:** Causal language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
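The card declares `library_name: transformers` with a text-generation task, so the model loads through the standard `AutoModelForCausalLM` API. A minimal usage sketch follows; the prompt and sampling settings are illustrative assumptions, not recommendations from the original card:

```python
# Minimal usage sketch with the transformers text-generation API.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kaitchup/Mayonnaise-4in1-02"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # loads in the checkpoint's dtype (float16 here)
    device_map="auto",   # requires `accelerate`; drop it to load on CPU
)

prompt = "The Open LLM Leaderboard ranks models by"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```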
### Model Sources

Created with mergekit using the following configuration:
```yaml
models:
  - model: mncai/mistral-7b-dpo-v5
    # no parameters necessary for base model
  - model: flemmingmiguel/MBX-7B
    parameters:
      density: 0.5
      weight: 0.3
  - model: BarryFutureman/NeuralTurdusVariant1-7B
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: mncai/mistral-7b-dpo-v5
parameters:
  normalize: true
dtype: float16
```
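To reproduce the merge, mergekit provides a `mergekit-yaml` CLI and a Python API. The sketch below assumes the Python entry points documented in mergekit's README (`MergeConfiguration`, `run_merge`, `MergeOptions`) and that the configuration above is saved as `config.yaml`; names may differ across mergekit versions:

```python
# Sketch of running the merge via mergekit's Python API (pip install mergekit).
# Entry points follow mergekit's README at the time of writing and may change.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Mayonnaise-4in1-02",  # where the merged weights are written
    options=MergeOptions(
        cuda=False,           # set True to run the merge on GPU
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
    ),
)
```

The command-line equivalent is roughly `mergekit-yaml config.yaml ./Mayonnaise-4in1-02`.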
## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaitchup/Mayonnaise-4in1-02).
| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 75.21 |
| AI2 Reasoning Challenge (25-Shot) | 73.38 |
| HellaSwag (10-Shot)               | 88.51 |
| MMLU (5-Shot)                     | 64.89 |
| TruthfulQA (0-shot)               | 69.04 |
| Winogrande (5-shot)               | 84.37 |
| GSM8k (5-shot)                    | 71.04 |
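These scores come from EleutherAI's evaluation harness as run by the Open LLM Leaderboard. As a rough local sanity check, here is a sketch using lm-evaluation-harness's Python interface; it assumes the v0.4 `simple_evaluate` API, and exact numbers depend on the harness revision and prompt format the leaderboard pins, so small gaps are expected:

```python
# Sketch: locally re-running the 25-shot ARC-Challenge evaluation
# with EleutherAI's lm-evaluation-harness (pip install lm-eval).
# The simple_evaluate signature follows the harness's v0.4 API and may change.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=kaitchup/Mayonnaise-4in1-02,dtype=float16",
    tasks=["arc_challenge"],  # AI2 Reasoning Challenge
    num_fewshot=25,           # the leaderboard evaluates ARC at 25-shot
    batch_size=8,
)
print(results["results"]["arc_challenge"])  # acc_norm should land near 73.38
```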