arXiv:2406.18266

"Vorbeşti Româneşte?" A Recipe to Train Powerful Romanian LLMs with English Instructions

Published on Jun 26, 2024

Abstract

In recent years, Large Language Models (LLMs) have achieved almost human-like performance on various tasks. While some LLMs have been trained on multilingual data, most of the training data is in English; hence, their performance in English greatly exceeds their performance in other languages. To our knowledge, we are the first to collect and translate a large collection of texts, instructions, and benchmarks and to train, evaluate, and release open-source LLMs tailored for Romanian. We evaluate our methods on four categories of benchmarks, including academic benchmarks, a manually translated MT-Bench, and a professionally built historical, cultural, and social benchmark adapted to Romanian. We argue for the usefulness and high performance of our Romanian LLMs (RoLLMs) by obtaining state-of-the-art results across the board. We publicly release all resources (i.e., data, training and evaluation code, models) to support and encourage research on Romanian LLMs, while concurrently providing a generalizable recipe adequate for other low- or less-resourced languages.
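
The released models can be loaded with the standard Hugging Face transformers API. Below is a minimal sketch, assuming "OpenLLM-Ro/RoLlama2-7b-Instruct" is one of the released instruction-tuned checkpoints (the exact model id is an assumption here; consult the paper's released resources for the authoritative list):

from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint id; check the released resources for the exact names.
model_id = "OpenLLM-Ro/RoLlama2-7b-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A Romanian prompt: "What is the capital of Romania?"
prompt = "Care este capitala României?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))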

Models citing this paper: 38
Datasets citing this paper: 18
Spaces citing this paper: 5
