arXiv:2404.14619

OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework

Published on Apr 22
· Featured in Daily Papers on Apr 24

Abstract

The reproducibility and transparency of large language models are crucial for advancing open research, ensuring the trustworthiness of results, and enabling investigations into data and model biases, as well as potential risks. To this end, we release OpenELM, a state-of-the-art open language model. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. For example, with a parameter budget of approximately one billion parameters, OpenELM exhibits a 2.36% improvement in accuracy compared to OLMo while requiring 2× fewer pre-training tokens. Diverging from prior practices that only provide model weights and inference code, and pre-train on private datasets, our release includes the complete framework for training and evaluation of the language model on publicly available datasets, including training logs, multiple checkpoints, and pre-training configurations. We also release code to convert models to the MLX library for inference and fine-tuning on Apple devices. This comprehensive release aims to empower and strengthen the open research community, paving the way for future open research endeavors. Our source code along with pre-trained model weights and training recipes is available at https://github.com/apple/corenet. Additionally, OpenELM models can be found on HuggingFace at: https://huggingface.co/apple/OpenELM.
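
The layer-wise scaling strategy is easiest to see as a schedule. The sketch below is a minimal illustration of the idea described in the abstract, not the paper's implementation: an attention-width multiplier and an FFN expansion ratio are interpolated linearly across the depth of the network, so a fixed parameter budget is spread non-uniformly over layers. All names and numeric bounds (`alpha_min`, `beta_max`, `head_dim`, and so on) are illustrative assumptions; the actual schedule and constants live in the released configurations at https://github.com/apple/corenet.

```python
# Minimal sketch of a layer-wise scaling schedule (illustrative, not the
# paper's actual configuration): per-layer attention and FFN widths are
# linearly interpolated between assumed bounds across the network's depth.

def layerwise_config(
    num_layers: int = 28,     # assumed depth
    model_dim: int = 2048,    # assumed embedding dimension
    head_dim: int = 64,       # assumed per-head dimension
    alpha_min: float = 0.5,   # assumed attention-width multiplier, first layer
    alpha_max: float = 1.0,   # assumed attention-width multiplier, last layer
    beta_min: float = 0.5,    # assumed FFN expansion ratio, first layer
    beta_max: float = 4.0,    # assumed FFN expansion ratio, last layer
):
    """Return per-layer (num_heads, ffn_dim) pairs under linear scaling."""
    configs = []
    for i in range(num_layers):
        t = i / (num_layers - 1)                     # normalized depth in [0, 1]
        alpha = alpha_min + t * (alpha_max - alpha_min)
        beta = beta_min + t * (beta_max - beta_min)
        num_heads = max(1, round(alpha * model_dim / head_dim))
        ffn_dim = round(beta * model_dim / 16) * 16  # multiple of 16 for hardware
        configs.append((num_heads, ffn_dim))
    return configs

for layer, (heads, ffn) in enumerate(layerwise_config()):
    print(f"layer {layer:2d}: heads={heads:3d}, ffn_dim={ffn}")
```

Under a fixed budget, such a schedule varies capacity across depth instead of repeating one uniform layer; the abstract credits this layer-wise allocation for the 2.36% accuracy gain over OLMo at the roughly one-billion-parameter scale. Since the abstract also points to checkpoints on HuggingFace, a loading example follows; the repository id and the `trust_remote_code=True` flag are assumptions based on how custom architectures are typically hosted on the Hub:

```python
from transformers import AutoModelForCausalLM

# Hypothetical repository id; OpenELM ships custom modeling code, so loading
# through transformers is assumed to require trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-1_1B", trust_remote_code=True
)
```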

Community

Great to see developments in the small language model family.

“I sense a great disturbance in the source, as if millions of developers suddenly cried out in excitement and were suddenly empowered. I fear something remarkable has happened. The ‘Apples’ and ‘Metas’ of the tech empire have opened their vaults, joining the open-source Resistance. This is the beginning of a new collaboration, a new hope for innovation.” 🌌🍏💻 May the source be with you


Apple realized their golden age of being considered the king of innovation is long over. Now they're lagging far behind in the generative AI race.

Got a plain-English rewrite of the paper here if anyone is interested: https://www.aimodels.fyi/papers/arxiv/openelm-efficient-language-model-family-open-source


Cool service @mikelabs!

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API.

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any Paper on Hugging Face, check out this Space.

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper: 13

Datasets citing this paper: 0

Spaces citing this paper: 6

Collections including this paper: 49