Tags: GGUF · English · Mixture of Experts · olmo · olmoe · Inference Endpoints

GGUF version of https://huggingface.co/allenai/OLMoE-1B-7B-0924
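A minimal usage sketch for loading this GGUF checkpoint with `llama-cpp-python` (not part of the original card; the quantization filename pattern is an assumption, so check the repo's file list for the exact name):

```python
# Sketch: run the GGUF model via llama-cpp-python (pip install llama-cpp-python).
# The filename glob below is an assumption; pick any quant file present in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="allenai/OLMoE-1B-7B-0924-GGUF",
    filename="*q4_k_m.gguf",  # assumed 4-bit variant; see the repo for actual names
)

out = llm("Mixture-of-experts models route tokens to", max_tokens=32)
print(out["choices"][0]["text"])
```

`Llama.from_pretrained` downloads the matching file from the Hub and caches it locally; subsequent runs reuse the cached weights.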

@misc{muennighoff2024olmoeopenmixtureofexpertslanguage,
      title={OLMoE: Open Mixture-of-Experts Language Models}, 
      author={Niklas Muennighoff and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Jacob Morrison and Sewon Min and Weijia Shi and Pete Walsh and Oyvind Tafjord and Nathan Lambert and Yuling Gu and Shane Arora and Akshita Bhagia and Dustin Schwenk and David Wadden and Alexander Wettig and Binyuan Hui and Tim Dettmers and Douwe Kiela and Ali Farhadi and Noah A. Smith and Pang Wei Koh and Amanpreet Singh and Hannaneh Hajishirzi},
      year={2024},
      eprint={2409.02060},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2409.02060}, 
}
Format: GGUF
Model size: 6.92B params
Architecture: olmoe

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.
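To fetch just one quantization level rather than the whole repo, the Hub CLI's include filter can be used (a sketch; the `Q4_K_M` filename suffix is an assumption about how the quant files are named):

```shell
# Sketch: download only the assumed 4-bit (Q4_K_M) file from the repo.
# Adjust the --include glob to match the actual filenames listed on the Hub.
huggingface-cli download allenai/OLMoE-1B-7B-0924-GGUF \
  --include "*Q4_K_M.gguf" \
  --local-dir ./olmoe-gguf
```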

