---
language:
  - en
license: apache-2.0
library_name: transformers
datasets:
  - teknium/openhermes
  - argilla/ultrafeedback-binarized-preferences
  - Intel/orca_dpo_pairs
model-index:
  - name: DPOpenHermes-11B
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 66.55
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=openaccess-ai-collective/DPOpenHermes-11B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 84.8
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=openaccess-ai-collective/DPOpenHermes-11B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 64.02
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=openaccess-ai-collective/DPOpenHermes-11B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 57.34
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=openaccess-ai-collective/DPOpenHermes-11B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 76.95
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=openaccess-ai-collective/DPOpenHermes-11B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 51.33
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=openaccess-ai-collective/DPOpenHermes-11B
          name: Open LLM Leaderboard
---

# DPOpenHermes 11B

This is a [mergekit](https://github.com/cg123/mergekit) passthrough merge of [DPOpenHermes-7B](https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B), stacking overlapping layer ranges from two separate revisions of that model:

```yaml
slices:
  - sources:
    - model: openaccess-ai-collective/DPOpenHermes-7B
      revision: dpo-v0
      layer_range: [0, 24]
  - sources:
    - model: openaccess-ai-collective/DPOpenHermes-7B
      layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
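
For convenience, here is a minimal usage sketch with 🤗 Transformers. It is illustrative rather than an official example: the bfloat16 dtype follows the merge config above, and the ChatML prompt format is an assumption carried over from DPOpenHermes-7B.

```python
# Minimal usage sketch (assumption: standard causal LM loading; not an
# official example from the model authors).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openaccess-ai-collective/DPOpenHermes-11B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

# Assumption: the merge inherits DPOpenHermes-7B's ChatML prompt format.
prompt = "<|im_start|>user\nWhat is a passthrough merge?<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```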

# Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=openaccess-ai-collective/DPOpenHermes-11B).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 66.83 |
| AI2 Reasoning Challenge (25-Shot) | 66.55 |
| HellaSwag (10-Shot)               | 84.80 |
| MMLU (5-Shot)                     | 64.02 |
| TruthfulQA (0-shot)               | 57.34 |
| Winogrande (5-shot)               | 76.95 |
| GSM8k (5-shot)                    | 51.33 |
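
These scores come from EleutherAI's lm-evaluation-harness, which backs the Open LLM Leaderboard. Below is a hedged sketch of re-running one row locally; the `simple_evaluate` API and task name assume a recent harness release (`pip install lm-eval`, v0.4+), and scores will not match the leaderboard exactly unless you pin the exact harness revision it uses.

```python
# Sketch: reproduce the ARC-Challenge (25-shot) row with lm-evaluation-harness.
# Assumption: lm-eval >= 0.4; the leaderboard pins an older revision, so
# expect small score differences.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=openaccess-ai-collective/DPOpenHermes-11B,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])  # acc_norm should land near 66.55
```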