---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- FelixChao/WestSeverus-7B-DPO-v2
- bardsai/jaskier-7b-dpo-v5.6
- AbacusResearch/haLLAwa3
- cognitivecomputations/WestLake-7B-v2-laser
model-index:
- name: jaLLAbi2-7b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 71.67
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/jaLLAbi2-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 88.29
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/jaLLAbi2-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.92
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/jaLLAbi2-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 70.16
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/jaLLAbi2-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 83.35
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/jaLLAbi2-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 71.95
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/jaLLAbi2-7b
      name: Open LLM Leaderboard
---

# jaLLAbi2-7b

jaLLAbi2-7b is a DARE-TIES merge of the following models using [mergekit](https://github.com/cg123/mergekit), with [eren23/ogno-monarch-jaskier-merge-7b](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b) as the base model:
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
* [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6)
* [AbacusResearch/haLLAwa3](https://huggingface.co/AbacusResearch/haLLAwa3)
* [cognitivecomputations/WestLake-7B-v2-laser](https://huggingface.co/cognitivecomputations/WestLake-7B-v2-laser)

## 🧩 Configuration

```yaml
models:
  - model: eren23/ogno-monarch-jaskier-merge-7b
    # No parameters necessary for base model
  - model: FelixChao/WestSeverus-7B-DPO-v2
    # Emphasize the beginning of Vicuna format models
    parameters:
      weight: 0.2
      density: 0.59
  - model: bardsai/jaskier-7b-dpo-v5.6
    parameters:
      weight: 0.2
      density: 0.55 # Vicuna format
  - model: AbacusResearch/haLLAwa3
    parameters:
      weight: 0.3
      density: 0.55
  - model: cognitivecomputations/WestLake-7B-v2-laser
    parameters:
      weight: 0.3
      density: 0.55
merge_method: dare_ties
base_model: eren23/ogno-monarch-jaskier-merge-7b
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
```
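## 💻 Usage

A minimal inference sketch with 🤗 Transformers. The prompt, sampling parameters, and the assumption that the merged tokenizer ships a chat template are illustrative, not settings prescribed by this card:

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "AbacusResearch/jaLLAbi2-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the tokenizer's chat template
# (assumes the merged tokenizer defines one).
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Load the merged weights for generation; bfloat16 matches the merge dtype.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```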
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_AbacusResearch__jaLLAbi2-7b).

| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |75.06|
|AI2 Reasoning Challenge (25-Shot)|71.67|
|HellaSwag (10-Shot)              |88.29|
|MMLU (5-Shot)                    |64.92|
|TruthfulQA (0-shot)              |70.16|
|Winogrande (5-shot)              |83.35|
|GSM8k (5-shot)                   |71.95|
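The leaderboard computes these scores with [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). Below is a rough local-reproduction sketch for the ARC-Challenge number, assuming the harness's v0.4 Python API; the leaderboard pins its own harness revision and prompt settings, so local figures may not match to the decimal:

```python
import lm_eval

# 25-shot ARC-Challenge, mirroring the leaderboard's few-shot setting.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=AbacusResearch/jaLLAbi2-7b,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])  # includes acc_norm
```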