---
license: apache-2.0
language:
- en
tags:
- moe
- olmo
- olmoe
co2_eq_emissions: 1
datasets:
- allenai/tulu-v3.1-mix-preview-4096-OLMoE
base_model: allenai/OLMoE-1B-7B-0924
---
# Model Summary
This model is an intermediate training checkpoint during post-training, after the Supervised Fine-Tuning (SFT) step. For best performance, we recommend you use the OLMoE-Instruct version.
- Paper: https://arxiv.org/abs/2409.02060
- Pretraining Checkpoints, Code, Data and Logs.
- SFT (Supervised Fine-Tuning) Checkpoints, Code, Data and Logs.
- DPO/KTO (Direct Preference Optimization/Kahneman-Tversky Optimization), Checkpoints, Preference Data, DPO code, KTO code and Logs.
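This is a standard `transformers` checkpoint, so it can be loaded and prompted like any other causal LM. The sketch below assumes the repository id `allenai/OLMoE-1B-7B-0924-SFT` and that the SFT checkpoint ships a chat template; both are assumptions rather than details stated on this card.

```python
# Minimal usage sketch (assumed repo id; adjust if the repository is named differently).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "allenai/OLMoE-1B-7B-0924-SFT"  # assumption: SFT checkpoint of the base model above

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Format a single-turn conversation with the checkpoint's chat template (assumed to exist).
messages = [{"role": "user", "content": "Explain what a Mixture-of-Experts language model is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```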
Branches:
- `main`: Instruction tuned / supervised finetuned (SFT) model of https://hf.co/allenai/OLMoE-1B-7B-0924 (`main` branch)
- `load-balancing`: Ablation with load balancing loss during SFT
- `non-annealed`: Ablation starting from the checkpoint prior to annealing (branch `step1200000-tokens5033B` of https://hf.co/allenai/OLMoE-1B-7B-0924) rather than the annealed checkpoint (branch `main` of https://hf.co/allenai/OLMoE-1B-7B-0924)
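Because these ablations live on branches of the same repository, a specific variant can be selected with the `revision` argument of `from_pretrained`. A short sketch, using the same assumed repository id as above:

```python
# Load the load-balancing ablation by pointing `revision` at its branch.
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO = "allenai/OLMoE-1B-7B-0924-SFT"  # assumed repository id
BRANCH = "load-balancing"              # one of: main, load-balancing, non-annealed

tokenizer = AutoTokenizer.from_pretrained(REPO, revision=BRANCH)
model = AutoModelForCausalLM.from_pretrained(REPO, revision=BRANCH)
```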
# Citation
@misc{muennighoff2024olmoeopenmixtureofexpertslanguage,
title={OLMoE: Open Mixture-of-Experts Language Models},
author={Niklas Muennighoff and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Jacob Morrison and Sewon Min and Weijia Shi and Pete Walsh and Oyvind Tafjord and Nathan Lambert and Yuling Gu and Shane Arora and Akshita Bhagia and Dustin Schwenk and David Wadden and Alexander Wettig and Binyuan Hui and Tim Dettmers and Douwe Kiela and Ali Farhadi and Noah A. Smith and Pang Wei Koh and Amanpreet Singh and Hannaneh Hajishirzi},
year={2024},
eprint={2409.02060},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2409.02060},
}