---
language:
  - en
license: mit
library_name: transformers
model-index:
  - name: facebook-opt-125m-qcqa-ub-6-best-for-q-loss
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 23.29
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xformAI/facebook-opt-125m-qcqa-ub-6-best-for-q-loss
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 25.57
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xformAI/facebook-opt-125m-qcqa-ub-6-best-for-q-loss
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 23.15
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xformAI/facebook-opt-125m-qcqa-ub-6-best-for-q-loss
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 49.03
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xformAI/facebook-opt-125m-qcqa-ub-6-best-for-q-loss
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 49.17
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xformAI/facebook-opt-125m-qcqa-ub-6-best-for-q-loss
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 0
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xformAI/facebook-opt-125m-qcqa-ub-6-best-for-q-loss
          name: Open LLM Leaderboard
---

This is a QCQA version of the original facebook/opt-125m model. The original MHA architecture is preserved, but instead of having a single K/V head per group, the different K/V heads that belong to the same group share the same mean-pooled K or V values. There are up to 6 groups of KV heads per layer instead of the original 12 KV heads of the MHA implementation. This implementation is intended to be more efficient than the corresponding GQA one, and this checkpoint is the variant optimized for quality loss.
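
The grouping described above amounts to averaging the K/V projections of the heads that share a group and giving every head in that group the pooled copy, while keeping the full MHA head count. Below is a minimal, illustrative sketch of that mean-pooling step; the helper name, weight layout, and the fixed 6×2 grouping are assumptions for illustration only, not the checkpoint's actual per-layer groups (those were selected by the QCQA procedure).

```python
# Illustrative sketch: mean-pool K/V projection weights within head groups while
# keeping all 12 heads (hypothetical helper; the real per-layer group assignments
# come from the QCQA search, not this fixed split).
import torch

def mean_pool_kv_heads(weight: torch.Tensor, num_heads: int, groups: list) -> torch.Tensor:
    """weight: (hidden_size, hidden_size) K or V projection matrix of an OPT-125m attention layer."""
    hidden_size = weight.shape[0]
    head_dim = hidden_size // num_heads
    # View the rows as (num_heads, head_dim, hidden_size) so each head's slice can be pooled.
    per_head = weight.view(num_heads, head_dim, hidden_size).clone()
    for group in groups:
        pooled = per_head[group].mean(dim=0)  # average the heads in this group
        per_head[group] = pooled              # every head in the group now shares the pooled weights
    return per_head.reshape(hidden_size, hidden_size)

# Example: 12 heads folded into 6 groups of 2 (OPT-125m has 12 heads of dimension 64).
groups = [[2 * g, 2 * g + 1] for g in range(6)]
k_proj_weight = torch.randn(768, 768)
pooled_k_proj = mean_pool_kv_heads(k_proj_weight, num_heads=12, groups=groups)
```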

# Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 28.37 |
| AI2 Reasoning Challenge (25-Shot) | 23.29 |
| HellaSwag (10-Shot)               | 25.57 |
| MMLU (5-Shot)                     | 23.15 |
| TruthfulQA (0-shot)               | 49.03 |
| Winogrande (5-shot)               | 49.17 |
| GSM8k (5-shot)                    |  0.00 |
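
If the checkpoint loads with the standard transformers OPT implementation (the card lists library_name: transformers and a text-generation task), usage would look roughly like the sketch below. This is an assumption about how the weights are packaged, not a recipe confirmed by the model authors.

```python
# Minimal usage sketch, assuming the repo loads as a standard causal LM checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xformAI/facebook-opt-125m-qcqa-ub-6-best-for-q-loss"  # repo id taken from the leaderboard links above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```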