arxiv:2503.00865

Babel: Open Multilingual Large Language Models Serving Over 90% of Global Speakers

Published on Mar 2
· Submitted by Yiran0924 on Mar 6
#1 Paper of the day

Abstract

Large language models (LLMs) have revolutionized natural language processing (NLP), yet open-source multilingual LLMs remain scarce, and existing models are often limited in language coverage. Such models typically prioritize well-resourced languages, while widely spoken but under-resourced languages are overlooked. To address this disparity, we introduce Babel, an open multilingual LLM that covers the top 25 languages by number of speakers, supports over 90% of the global population, and includes many languages neglected by other open multilingual LLMs. Unlike traditional continued pretraining approaches, Babel expands its parameter count through a layer extension technique that raises its performance ceiling. We introduce two variants: Babel-9B, designed for efficient inference and fine-tuning, and Babel-83B, which sets a new standard for open multilingual LLMs. Extensive evaluations on multilingual tasks demonstrate its superior performance compared to open LLMs of comparable size. In addition, using open-source supervised fine-tuning datasets, Babel achieves remarkable performance, with Babel-9B-Chat leading among 10B-sized LLMs and Babel-83B-Chat setting a new standard for multilingual tasks, reaching the level of commercial models.
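
For intuition, here is a minimal sketch of one common layer-extension recipe: duplicating every k-th decoder layer and inserting the copy right after its original, so the extended stack starts close to the base model's behavior before continued pretraining. This is an illustrative assumption, not the paper's confirmed procedure; the value of `insert_every` and the exact insertion and initialization scheme are placeholders.

```python
# Illustrative sketch only: the paper's exact layer-extension recipe
# (insertion points, initialization) may differ from this duplication scheme.
import copy
import torch.nn as nn

def extend_layers(layers: nn.ModuleList, insert_every: int = 4) -> nn.ModuleList:
    """Return a deeper decoder stack with a duplicate after every
    `insert_every`-th layer."""
    extended = []
    for i, layer in enumerate(layers, start=1):
        extended.append(layer)
        if i % insert_every == 0:
            # The copy starts as an exact duplicate, so the extended model
            # initially approximates the base model; the new layers are then
            # trained during continued pretraining on multilingual data.
            extended.append(copy.deepcopy(layer))
    return nn.ModuleList(extended)
```

Starting the new layers as exact copies keeps the extended model's initial outputs close to the base model's, which is why this style of extension can raise the performance ceiling without destabilizing training.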

Community

Paper author · Paper submitter

🌟 Key Highlights:
1️⃣ Covering 90% of the global population: supporting the top 25 languages, prioritizing widely spoken but previously underexplored languages among open multilingual models.

2️⃣ Innovative architecture: unlike traditional continued pretraining approaches, Babel expands its parameter count through model extension, raising its performance ceiling.

3️⃣ Two powerful variants (a hedged loading sketch follows below):
💡 Babel-9B: designed for efficient inference and fine-tuning.
💡 Babel-83B: a new benchmark for open multilingual LLMs.
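
To try the chat variant, the sketch below loads it with Hugging Face transformers. The repo id "Tower-Babel/Babel-9B-Chat" is an assumption; verify the exact id against the model list on this page.

```python
# Hedged usage sketch: the Hub repo id below is an assumption; check the
# "Models citing this paper" list for the actual id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Tower-Babel/Babel-9B-Chat"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello! Please reply in Swahili."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```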

Models citing this paper 4

Datasets citing this paper 0

Spaces citing this paper 0

Collections including this paper 3