# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the Model Stock merge method, with YOYO-AI/Qwen2.5-14B-it-restore as the base model.
### Models Merged
The following models were included in the merge:
- marcuscedricridia/Cheng-2-Ingredient4
- marcuscedricridia/Cheng-2-Ingredient1
- arcee-ai/Virtuoso-Small-v2
- marcuscedricridia/Cheng-2-Ingredient2
- marcuscedricridia/Cheng-2-Ingredient3
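Model Stock derives an interpolation ratio from how consistently the fine-tuned checkpoints moved away from the base: the more their weight deltas agree, the more weight the fine-tuned average receives. Below is a minimal per-tensor numpy sketch of that idea, not mergekit's actual implementation; `model_stock_merge` is a hypothetical helper name.

```python
import numpy as np

def model_stock_merge(base, finetuned):
    """Simplified per-tensor sketch of a Model Stock-style merge.

    base: base model tensor; finetuned: list of fine-tuned tensors.
    Interpolates between the base tensor and the average of the
    fine-tuned tensors, with a ratio derived from the mean pairwise
    cosine similarity of the fine-tuning deltas.
    """
    deltas = [f - base for f in finetuned]
    n = len(deltas)
    # Mean pairwise cosine similarity between fine-tuning deltas.
    cosines = []
    for i in range(n):
        for j in range(i + 1, n):
            a, b = deltas[i].ravel(), deltas[j].ravel()
            denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
            cosines.append(a @ b / denom)
    cos_theta = float(np.mean(cosines))
    # Interpolation ratio t = N*cos / (1 + (N-1)*cos): agreeing deltas
    # (cos -> 1) push t -> 1, conflicting deltas (cos -> 0) push t -> 0.
    t = n * cos_theta / (1 + (n - 1) * cos_theta)
    avg = np.mean(finetuned, axis=0)
    return t * avg + (1 - t) * base
```

When all fine-tuned checkpoints share the same delta, the ratio approaches 1 and the merge keeps the fine-tuned average; when the deltas are orthogonal, the merge falls back toward the base weights.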
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: model_stock
base_model: YOYO-AI/Qwen2.5-14B-it-restore
models:
  - model: marcuscedricridia/Cheng-2-Ingredient1
  - model: marcuscedricridia/Cheng-2-Ingredient2
  - model: marcuscedricridia/Cheng-2-Ingredient3
  - model: marcuscedricridia/Cheng-2-Ingredient4
  - model: arcee-ai/Virtuoso-Small-v2
dtype: bfloat16
tokenizer_source: base
int8_mask: true
normalize: true
name: Cheng-2
```
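To reproduce a merge like this locally, the configuration can be saved to a file and passed to mergekit's `mergekit-yaml` command. A sketch, assuming mergekit is installed (`pip install mergekit`) and with illustrative file and output paths:

```shell
# Write the merge configuration from this card to a local file.
cat > cheng-2.yaml <<'EOF'
merge_method: model_stock
base_model: YOYO-AI/Qwen2.5-14B-it-restore
models:
  - model: marcuscedricridia/Cheng-2-Ingredient1
  - model: marcuscedricridia/Cheng-2-Ingredient2
  - model: marcuscedricridia/Cheng-2-Ingredient3
  - model: marcuscedricridia/Cheng-2-Ingredient4
  - model: arcee-ai/Virtuoso-Small-v2
dtype: bfloat16
tokenizer_source: base
int8_mask: true
normalize: true
name: Cheng-2
EOF

# Run the merge if mergekit is available (downloads the listed models,
# so this needs substantial disk space and bandwidth).
if command -v mergekit-yaml >/dev/null 2>&1; then
  mergekit-yaml cheng-2.yaml ./Cheng-2
else
  echo "mergekit-yaml not installed; skipping merge"
fi
```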
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here! Summarized results can be found here!
| Metric | Value (%) |
|---|---|
| Average | 42.85 |
| IFEval (0-Shot) | 83.37 |
| BBH (3-Shot) | 49.98 |
| MATH Lvl 5 (4-Shot) | 54.38 |
| GPQA (0-Shot) | 12.75 |
| MuSR (0-Shot) | 12.02 |
| MMLU-PRO (5-Shot) | 44.59 |