---
language:
  - en
license: apache-2.0
library_name: transformers
tags:
  - moe
  - olmo
  - olmoe
datasets:
  - allenai/OLMoE-mix-0924
co2_eq_emissions: 1
model-index:
  - name: OLMoE-1B-7B-0924
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 21.85
            name: strict accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allenai/OLMoE-1B-7B-0924
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 8.31
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allenai/OLMoE-1B-7B-0924
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 1.13
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allenai/OLMoE-1B-7B-0924
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 0
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allenai/OLMoE-1B-7B-0924
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 3.57
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allenai/OLMoE-1B-7B-0924
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 8.22
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allenai/OLMoE-1B-7B-0924
          name: Open LLM Leaderboard
---
*OLMoE logo.*

# Model Summary

OLMoE-1B-7B is a Mixture-of-Experts LLM with 1B active and 7B total parameters, released in September 2024 (0924). It yields state-of-the-art performance among models with a similar cost (1B active parameters) and is competitive with much larger models like Llama2-13B. OLMoE is 100% open-source.

This information and more can also be found on the [OLMoE GitHub repository](https://github.com/allenai/OLMoE).
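For orientation, the MoE layout (the paper uses 64 experts with 8 routed per token) can be read off the model config without downloading weights. A minimal sketch; the attribute names follow the `OlmoeConfig` in `transformers` and are worth double-checking against your installed version:

```python
from transformers import AutoConfig

# Fetch only the config, not the weights.
config = AutoConfig.from_pretrained("allenai/OLMoE-1B-7B-0924")

# Attribute names per transformers' OlmoeConfig; verify on your version.
print(config.num_experts)          # total experts per MoE layer (64)
print(config.num_experts_per_tok)  # experts activated per token (8)
```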

# Use

Install `transformers` from source (until a release after this PR), e.g. `pip install git+https://github.com/huggingface/transformers.git`, along with `torch`, and run:

```python
from transformers import OlmoeForCausalLM, AutoTokenizer
import torch

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# Load different ckpts via passing e.g. `revision="step10000-tokens41B"`
model = OlmoeForCausalLM.from_pretrained("allenai/OLMoE-1B-7B-0924").to(DEVICE)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMoE-1B-7B-0924")
inputs = tokenizer("Bitcoin is", return_tensors="pt")
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}
out = model.generate(**inputs, max_length=64)
print(tokenizer.decode(out[0]))
# > # Bitcoin is a digital currency that is created and held electronically. No one controls it. Bitcoins aren’t printed, like dollars or euros – they’re produced by people and businesses running computers all around the world, using software that solves mathematical
```
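Since the published weights on `main` are BF16 (see the `fp32` branch note under Important branches below), you can load them directly in that dtype to reduce memory. A minimal sketch using the standard `torch_dtype` argument of `from_pretrained`:

```python
import torch
from transformers import OlmoeForCausalLM

# Weights on `main` are already BF16, so this avoids upcasting them to fp32 in memory.
model = OlmoeForCausalLM.from_pretrained(
    "allenai/OLMoE-1B-7B-0924",
    torch_dtype=torch.bfloat16,
).to("cuda" if torch.cuda.is_available() else "cpu")
```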

You can list all revisions/branches by installing huggingface-hub & running:

```python
from huggingface_hub import list_repo_refs

# Note: the repo lives under the `allenai` org.
out = list_repo_refs("allenai/OLMoE-1B-7B-0924")
branches = [b.name for b in out.branches]
```

Important branches:

- `step1200000-tokens5033B`: Pretraining checkpoint used for annealing (see the loading sketch after this list). There are a few more checkpoints after this one, but we did not use them.
- `main`: Checkpoint annealed from `step1200000-tokens5033B` for an additional 100B tokens (23,842 steps). We use this checkpoint for our adaptation ([OLMoE-1B-7B-0924-SFT](https://huggingface.co/allenai/OLMoE-1B-7B-0924-SFT) & [OLMoE-1B-7B-0924-Instruct](https://huggingface.co/allenai/OLMoE-1B-7B-0924-Instruct)).
- `fp32`: FP32 version of `main`. The model weights were stored in FP32 during training, but we did not observe any performance drop from casting them to BF16 after training, so we upload all weights in BF16. If you want the original FP32 checkpoint for `main`, you can use this one. You will find that it yields slightly different results but should perform around the same on benchmarks.
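A minimal sketch for loading one of these branches via the standard `revision` argument of `from_pretrained` (branch name taken from the list above):

```python
from transformers import OlmoeForCausalLM

# Load the pre-annealing checkpoint instead of the default `main` branch.
model = OlmoeForCausalLM.from_pretrained(
    "allenai/OLMoE-1B-7B-0924",
    revision="step1200000-tokens5033B",
)
```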

# Evaluation Snapshot

| Model | Active Params | Open Data | MMLU | HellaSwag | ARC-Chall. | ARC-Easy | PIQA | WinoGrande |
|---|---|---|---|---|---|---|---|---|
| **LMs with ~1B active parameters** | | | | | | | | |
| **OLMoE-1B-7B** | 1.3B | ✅ | 54.1 | 80.0 | 62.1 | 84.2 | 79.8 | 70.2 |
| DCLM-1B | 1.4B | ✅ | 48.5 | 75.1 | 57.6 | 79.5 | 76.6 | 68.1 |
| TinyLlama-1B | 1.1B | ✅ | 33.6 | 60.8 | 38.1 | 69.5 | 71.7 | 60.1 |
| OLMo-1B (0724) | 1.3B | ✅ | 32.1 | 67.5 | 36.4 | 53.5 | 74.0 | 62.9 |
| Pythia-1B | 1.1B | ✅ | 31.1 | 48.0 | 31.4 | 63.4 | 68.9 | 52.7 |
| **LMs with ~2-3B active parameters** | | | | | | | | |
| Qwen1.5-3B-14B | 2.7B | ❌ | 62.4 | 80.0 | 77.4 | 91.6 | 81.0 | 72.3 |
| Gemma2-3B | 2.6B | ❌ | 53.3 | 74.6 | 67.5 | 84.3 | 78.5 | 71.8 |
| JetMoE-2B-9B | 2.2B | ❌ | 49.1 | 81.7 | 61.4 | 81.9 | 80.3 | 70.7 |
| DeepSeek-3B-16B | 2.9B | ❌ | 45.5 | 80.4 | 53.4 | 82.7 | 80.1 | 73.2 |
| StableLM-2B | 1.6B | ❌ | 40.4 | 70.3 | 50.6 | 75.3 | 75.6 | 65.8 |
| OpenMoE-3B-9B | 2.9B | ✅ | 27.4 | 44.4 | 29.3 | 50.6 | 63.3 | 51.9 |
| **LMs with ~7-9B active parameters** | | | | | | | | |
| Gemma2-9B | 9.2B | ❌ | 70.6 | 87.3 | 89.5 | 95.5 | 86.1 | 78.8 |
| Llama3.1-8B | 8.0B | ❌ | 66.9 | 81.6 | 79.5 | 91.7 | 81.1 | 76.6 |
| DCLM-7B | 6.9B | ✅ | 64.4 | 82.3 | 79.8 | 92.3 | 80.1 | 77.3 |
| Mistral-7B | 7.3B | ❌ | 64.0 | 83.0 | 78.6 | 90.8 | 82.8 | 77.9 |
| OLMo-7B (0724) | 6.9B | ✅ | 54.9 | 80.5 | 68.0 | 85.7 | 79.3 | 73.2 |
| Llama2-7B | 6.7B | ❌ | 46.2 | 78.9 | 54.2 | 84.0 | 77.5 | 71.7 |

# Citation

```bibtex
@misc{muennighoff2024olmoeopenmixtureofexpertslanguage,
      title={OLMoE: Open Mixture-of-Experts Language Models},
      author={Niklas Muennighoff and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Jacob Morrison and Sewon Min and Weijia Shi and Pete Walsh and Oyvind Tafjord and Nathan Lambert and Yuling Gu and Shane Arora and Akshita Bhagia and Dustin Schwenk and David Wadden and Alexander Wettig and Binyuan Hui and Tim Dettmers and Douwe Kiela and Ali Farhadi and Noah A. Smith and Pang Wei Koh and Amanpreet Singh and Hannaneh Hajishirzi},
      year={2024},
      eprint={2409.02060},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2409.02060},
}
```

# Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allenai/OLMoE-1B-7B-0924).

| Metric | Value |
|---|---|
| Avg. | 7.18 |
| IFEval (0-Shot) | 21.85 |
| BBH (3-Shot) | 8.31 |
| MATH Lvl 5 (4-Shot) | 1.13 |
| GPQA (0-shot) | 0.00 |
| MuSR (0-shot) | 3.57 |
| MMLU-PRO (5-shot) | 8.22 |