---
library_name: transformers
tags:
  - mergekit
  - merge
base_model:
  - anthracite-org/magnum-12b-v2
  - nothingiisreal/MN-12B-Celeste-V1.9
model-index:
  - name: MN-12B-Starcannon-v3
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 38.07
            name: strict accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nothingiisreal/MN-12B-Starcannon-v3
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 30.87
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nothingiisreal/MN-12B-Starcannon-v3
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 6.57
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nothingiisreal/MN-12B-Starcannon-v3
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 3.13
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nothingiisreal/MN-12B-Starcannon-v3
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 9.85
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nothingiisreal/MN-12B-Starcannon-v3
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 25.16
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nothingiisreal/MN-12B-Starcannon-v3
          name: Open LLM Leaderboard
---

# Mistral Nemo 12B Starcannon v3

This is a merge of pre-trained language models created using mergekit.

Quantized versions:

- Static GGUF (by Mradermacher)
- Imatrix GGUF (by Mradermacher)
- EXL2 (by kingbri of RoyalLab)
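If you want to run the unquantized weights directly, a minimal loading sketch with Hugging Face transformers might look like the following (assumptions: a recent transformers release with Mistral-Nemo support, a GPU with enough memory for bf16, and that the repository ships a chat template; the prompt is purely illustrative):

```python
# Minimal sketch: load the merged model in bfloat16 and generate a reply.
# Assumes a recent `transformers` with Mistral-Nemo support and a bf16-capable GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nothingiisreal/MN-12B-Starcannon-v3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Uses the chat template shipped with the repo, if one is present.
messages = [{"role": "user", "content": "Write a short scene on a starship bridge."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```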

## Merge Details

### Merge Method

This model was merged using the TIES merge method, with nothingiisreal/MN-12B-Celeste-V1.9 as the base.
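TIES-merging works on task vectors: each fine-tune's delta from the base model is trimmed to its largest-magnitude entries (the `density` fraction), a per-parameter sign is elected across the surviving deltas, and only deltas that agree with the elected sign are averaged back onto the base. The following is a toy, single-tensor sketch for illustration only (it is not mergekit's implementation; `density` and `weight` mirror the parameters in the config below):

```python
# Toy illustration of TIES-merging on one tensor: trim -> elect sign -> disjoint merge.
# Not mergekit's code; mergekit also applies weighting/normalization options not shown here.
import torch

def ties_merge(base, finetuned, densities, weights):
    deltas = []
    for ft, density, weight in zip(finetuned, densities, weights):
        delta = ft - base                                  # task vector
        k = max(1, int(density * delta.numel()))           # keep top-k entries by magnitude
        threshold = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
        trimmed = torch.where(delta.abs() >= threshold, delta, torch.zeros_like(delta))
        deltas.append(weight * trimmed)                    # scale by merge weight
    stacked = torch.stack(deltas)
    elected_sign = torch.sign(stacked.sum(dim=0))          # majority-sign election
    agree = (torch.sign(stacked) == elected_sign) & (stacked != 0)
    merged_delta = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged_delta

base = torch.randn(4, 4)
merged = ties_merge(
    base,
    [base + 0.1 * torch.randn(4, 4), base + 0.1 * torch.randn(4, 4)],
    densities=[0.3, 0.7],   # as in the config below
    weights=[0.5, 0.5],
)
```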

### Merge Fodder

The following models were included in the merge:

- anthracite-org/magnum-12b-v2
- nothingiisreal/MN-12B-Celeste-V1.9 (also the merge base)

### Configuration

The following YAML configuration was used to produce this model:

models:
    - model: anthracite-org/magnum-12b-v2
      parameters:
        density: 0.3
        weight: 0.5
    - model: nothingiisreal/MN-12B-Celeste-V1.9
      parameters:
        density: 0.7
        weight: 0.5

merge_method: ties
base_model: nothingiisreal/MN-12B-Celeste-V1.9
parameters:
    normalize: true
    int8_mask: true
dtype: bfloat16
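Reproducing the merge is a matter of handing a config like the one above to mergekit. A minimal sketch, assuming mergekit is installed (`pip install mergekit`) and the YAML is saved as `starcannon-v3.yml`; the flag set is illustrative, so check `mergekit-yaml --help` for current options:

```python
# Minimal sketch: run the merge by shelling out to mergekit's CLI entry point.
# Assumes `pip install mergekit`, the config above saved as starcannon-v3.yml,
# and enough disk/RAM for two 12B checkpoints.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",           # mergekit's command-line entry point
        "starcannon-v3.yml",       # the merge configuration shown above
        "./MN-12B-Starcannon-v3",  # output directory for the merged weights
        "--lazy-unpickle",         # lower peak memory while loading shards
        "--cuda",                  # do the tensor arithmetic on GPU if available
    ],
    check=True,
)
```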

## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nothingiisreal/MN-12B-Starcannon-v3).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 18.94 |
| IFEval (0-Shot)     | 38.07 |
| BBH (3-Shot)        | 30.87 |
| MATH Lvl 5 (4-Shot) |  6.57 |
| GPQA (0-shot)       |  3.13 |
| MuSR (0-shot)       |  9.85 |
| MMLU-PRO (5-shot)   | 25.16 |
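For reference, the leaderboard's "Avg." is the unweighted mean of the six benchmark scores above; a quick check reproduces the 18.94 in the table:

```python
# Sanity check: the leaderboard "Avg." is the arithmetic mean of the six scores.
scores = {
    "IFEval (0-Shot)": 38.07,
    "BBH (3-Shot)": 30.87,
    "MATH Lvl 5 (4-Shot)": 6.57,
    "GPQA (0-shot)": 3.13,
    "MuSR (0-shot)": 9.85,
    "MMLU-PRO (5-shot)": 25.16,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 18.94
```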