---
language:
  - en
license: mit
datasets:
  - Open-Orca/SlimOrca
  - beaugogh/openorca-multiplechoice-10k
metrics:
  - accuracy
model-index:
  - name: llama2_7b_merge_orcafamily
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 56.91
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=yeen214/llama2_7b_merge_orcafamily
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 81.17
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=yeen214/llama2_7b_merge_orcafamily
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 51.49
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=yeen214/llama2_7b_merge_orcafamily
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 49.68
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=yeen214/llama2_7b_merge_orcafamily
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 75.93
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=yeen214/llama2_7b_merge_orcafamily
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 23.12
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=yeen214/llama2_7b_merge_orcafamily
          name: Open LLM Leaderboard
---

This model uses Llama 2 7B as its backbone; three models fine-tuned on various Orca-family datasets were merged to produce it.
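To try the merged model, here is a minimal loading sketch with transformers (the prompt and generation settings are illustrative only, not a recommendation from the authors):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yeen214/llama2_7b_merge_orcafamily"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Q: Why does the Moon show phases? A:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```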

The three models were combined, with the highest merge weight given to the one with the best ARC and MMLU performance; a minimal weighted-merge sketch follows.
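The card does not publish the merge weights or the component checkpoints, so both are placeholders below; this is a minimal sketch of weighted parameter averaging across three fine-tunes of the same architecture:

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical component paths and placeholder weights (not published in the card);
# the first component gets the highest weight, per the description above.
components = [
    ("path/to/orca-mc-neftune-7b", 0.5),  # best ARC/MMLU -> highest weight
    ("path/to/slimorca-7b", 0.3),
    ("path/to/orca-mc-7b", 0.2),
]

merged = AutoModelForCausalLM.from_pretrained(components[0][0], torch_dtype=torch.float32)
merged_state = {k: v * components[0][1] for k, v in merged.state_dict().items()}

for path, weight in components[1:]:
    state = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.float32).state_dict()
    for k in merged_state:
        merged_state[k] += weight * state[k]  # weights sum to 1.0

merged.load_state_dict(merged_state)
merged.save_pretrained("llama2_7b_merge_orcafamily")
```

Dedicated tools such as mergekit implement this kind of linear merge with less memory overhead.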

The three components:

First: Llama 2 7B fine-tuned on beaugogh/openorca-multiplechoice-10k using the NEFTune method (see the sketch after this list).

Second: Llama 2 7B fine-tuned on the Open-Orca/SlimOrca dataset.

Third: Llama 2 7B fine-tuned on beaugogh/openorca-multiplechoice-10k without NEFTune.
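NEFTune, used for the first component, adds uniform noise to the embedding outputs during training only. Here is a minimal sketch via a PyTorch forward hook (alpha=5.0 is a common value from the NEFTune paper, not necessarily the one used for this model):

```python
import math
import torch

def add_neftune_hook(model, alpha=5.0):
    """Add NEFTune-style uniform noise to the input-embedding outputs (training only)."""
    embed = model.get_input_embeddings()

    def hook(module, inputs, output):
        if module.training:
            seq_len, dim = output.shape[-2], output.shape[-1]
            # Noise ~ Uniform(-1, 1), scaled by alpha / sqrt(seq_len * dim).
            scale = alpha / math.sqrt(seq_len * dim)
            return output + torch.empty_like(output).uniform_(-1.0, 1.0) * scale
        return output

    return embed.register_forward_hook(hook)

# handle = add_neftune_hook(model); train as usual; handle.remove() before inference.
```

Recent versions of transformers also expose a `neftune_noise_alpha` option on `TrainingArguments`, which applies the same trick without a manual hook.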

We will add the official evaluation results once they are available.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=yeen214/llama2_7b_merge_orcafamily).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 56.38 |
| AI2 Reasoning Challenge (25-Shot) | 56.91 |
| HellaSwag (10-Shot)               | 81.17 |
| MMLU (5-Shot)                     | 51.49 |
| TruthfulQA (0-shot)               | 49.68 |
| Winogrande (5-shot)               | 75.93 |
| GSM8k (5-shot)                    | 23.12 |
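The scores above were produced by the Open LLM Leaderboard, which runs EleutherAI's lm-evaluation-harness. A minimal sketch of reproducing one row with the harness's Python API (shown for harness v0.4; the leaderboard pins its own harness version and settings, so numbers may differ slightly):

```python
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=yeen214/llama2_7b_merge_orcafamily",
    tasks=["arc_challenge"],  # AI2 Reasoning Challenge
    num_fewshot=25,           # matches the 25-shot leaderboard setting
)
print(results["results"]["arc_challenge"])
```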