---
language:
  - en
license: llama3.2
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - llama
  - llama-3
  - trl
  - sft
base_model: unsloth/Llama-3.2-3B-Instruct-bnb-4bit
datasets:
  - mlabonne/FineTome-100k
model-index:
  - name: FineTome-Llama3.2-3B-1002
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 54.74
            name: strict accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-3B-1002
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 19.52
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-3B-1002
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 5.29
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-3B-1002
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 0.11
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-3B-1002
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 3.96
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-3B-1002
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 15.96
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-3B-1002
          name: Open LLM Leaderboard

---

# Notice

This model was submitted to the Open LLM Leaderboard for full evaluation.

# IMPORTANT

If you get the following error:

```
exception: data did not match any variant of untagged enum modelwrapper at line 1251003 column 3
```

Please upgrade your `transformers` package:

```bash
pip install --upgrade "transformers>=4.45"
```
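
After upgrading, a quick way to confirm the installed version (plain Python, nothing model-specific assumed):

```python
import transformers

# The fix needs transformers >= 4.45; older releases bundle a tokenizers
# version that cannot parse this repo's tokenizer.json, which is the
# usual cause of the "untagged enum" error quoted above.
print(transformers.__version__)
```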

# Uploaded model

- **Developed by:** NotASI
- **License:** llama3.2
- **Finetuned from model:** unsloth/Llama-3.2-3B-Instruct-bnb-4bit

# Details

This model was trained on mlabonne/FineTome-100k for 2 epochs with rsLoRA + QLoRA, reaching a final training loss of 0.5964.
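
For orientation, here is a minimal sketch of what such a run could look like with Unsloth and TRL. The values marked as assumptions (sequence length, LoRA rank/alpha, batch size, learning rate) are illustrative placeholders, not the settings actually used for this checkpoint:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

# Load the 4-bit quantized base model (the "Q" in QLoRA).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct-bnb-4bit",
    max_seq_length=2048,  # assumption
    load_in_4bit=True,
)

# Attach LoRA adapters; use_rslora=True enables rank-stabilized LoRA.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,           # assumption
    lora_alpha=16,  # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_rslora=True,
)

# FineTome-100k is stored in ShareGPT format: a "conversations" column of
# {"from": ..., "value": ...} turns. Render each conversation to a plain
# "text" field using the model's own chat template.
role_map = {"human": "user", "gpt": "assistant", "system": "system"}

def to_text(example):
    messages = [{"role": role_map[turn["from"]], "content": turn["value"]}
                for turn in example["conversations"]]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = load_dataset("mlabonne/FineTome-100k", split="train").map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        num_train_epochs=2,             # per the card
        per_device_train_batch_size=2,  # assumption
        gradient_accumulation_steps=4,  # assumption
        learning_rate=2e-4,             # assumption
        output_dir="outputs",
    ),
)
trainer.train()
```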

This model uses the same chat template as the base model.
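
A minimal inference sketch with Transformers, assuming the uploaded weights load directly via `AutoModelForCausalLM` and the tokenizer ships the base model's chat template (the prompt and sampling settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NotASI/FineTome-Llama3.2-3B-1002"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain rsLoRA in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# max_new_tokens / temperature are illustrative defaults, not tuned values.
outputs = model.generate(inputs, max_new_tokens=256,
                         temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:],
                       skip_special_tokens=True))
```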

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

# Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-3B-1002).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 16.60 |
| IFEval (0-Shot)     | 54.74 |
| BBH (3-Shot)        | 19.52 |
| MATH Lvl 5 (4-Shot) |  5.29 |
| GPQA (0-shot)       |  0.11 |
| MuSR (0-shot)       |  3.96 |
| MMLU-PRO (5-shot)   | 15.96 |