---
language:
  - en
license: apache-2.0
datasets:
  - gsm8k
model-index:
  - name: Marcoroni-neural-chat-7B-v2_gsm8k_merged
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 65.78
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fzzhang/Marcoroni-neural-chat-7B-v2_gsm8k_merged
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 85.26
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fzzhang/Marcoroni-neural-chat-7B-v2_gsm8k_merged
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 64.26
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fzzhang/Marcoroni-neural-chat-7B-v2_gsm8k_merged
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 53.18
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fzzhang/Marcoroni-neural-chat-7B-v2_gsm8k_merged
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 78.93
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fzzhang/Marcoroni-neural-chat-7B-v2_gsm8k_merged
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 61.33
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fzzhang/Marcoroni-neural-chat-7B-v2_gsm8k_merged
          name: Open LLM Leaderboard
---

# Marcoroni-neural-chat-7B-v2_gsm8k

This model is a fine-tuned version of [Toten5/Marcoroni-neural-chat-7B-v2](https://huggingface.co/Toten5/Marcoroni-neural-chat-7B-v2) on the GSM8K dataset.
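
Since the card ships no usage snippet, here is a minimal, hedged loading sketch. It assumes the merged weights load through the standard `transformers` causal-LM API; the repo id is taken from the leaderboard links in the metadata above:

```python
# Minimal usage sketch; repo id inferred from the leaderboard links in this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fzzhang/Marcoroni-neural-chat-7B-v2_gsm8k_merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# GSM8K-style word problem as a smoke test.
prompt = "Q: A train travels 60 miles per hour for 3 hours. How far does it go?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```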

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
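
The metadata above does list the `gsm8k` dataset (`main` config), so a minimal loading sketch, purely for reference:

```python
# Loading the GSM8K dataset named in the card metadata ("main" config).
from datasets import load_dataset

gsm8k = load_dataset("gsm8k", "main")
print(gsm8k["train"][0]["question"])
```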

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
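
These values map directly onto Hugging Face `TrainingArguments`; a minimal reconstruction sketch follows. Treating the batch sizes as per-device values is an assumption the card does not confirm:

```python
# Sketch of the listed hyperparameters as TrainingArguments.
# Batch sizes are taken as per-device values (an assumption).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Marcoroni-neural-chat-7B-v2_gsm8k",
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```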

### Training results

### Framework versions

- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.1
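
The PEFT dependency and the `_merged` suffix in the repo id suggest the GSM8K fine-tune was trained as an adapter and then merged into the base weights. A hedged sketch of that merge step; the adapter path is hypothetical and only the pattern is illustrated:

```python
# Hedged sketch of merging a PEFT adapter into the base model.
# "path/to/gsm8k-adapter" is a hypothetical placeholder.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Toten5/Marcoroni-neural-chat-7B-v2")
model = PeftModel.from_pretrained(base, "path/to/gsm8k-adapter")
merged = model.merge_and_unload()
merged.save_pretrained("Marcoroni-neural-chat-7B-v2_gsm8k_merged")
```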

## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fzzhang/Marcoroni-neural-chat-7B-v2_gsm8k_merged).

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 68.13 |
| AI2 Reasoning Challenge (25-Shot) | 65.78 |
| HellaSwag (10-Shot)               | 85.26 |
| MMLU (5-Shot)                     | 64.26 |
| TruthfulQA (0-shot)               | 53.18 |
| Winogrande (5-shot)               | 78.93 |
| GSM8k (5-shot)                    | 61.33 |
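
A hedged sketch of reproducing the GSM8K (5-shot) number locally with EleutherAI's lm-evaluation-harness Python API. The leaderboard pins its own harness fork, so local scores may differ slightly:

```python
# Hedged reproduction sketch using lm-evaluation-harness (v0.4+ API).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=fzzhang/Marcoroni-neural-chat-7B-v2_gsm8k_merged",
    tasks=["gsm8k"],
    num_fewshot=5,
)
print(results["results"]["gsm8k"])
```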