---
license: llama2
datasets:
  - HiTZ/euscrawl
language:
  - eu
  - en
metrics:
  - accuracy
  - f1
  - perplexity
pipeline_tag: text-generation
---

# **Model Card for Basque Llama 7B**

Basque LLaMA is a collection of foundation models specifically tuned for Basque. Based on Meta’s LLaMA 2 model family, these models were further trained with a highly curated Basque corpus, EusCrawl ([Artetxe et al., 2022](https://aclanthology.org/2022.emnlp-main.499/)). Ranging from 7 billion to 70 billion parameters, these models are currently the biggest and best-performing LLMs built for Basque. This is the 7B repository; links to the other models can be found in the index at the bottom.

# **Model Details**

## **Model Description**

Basque LLaMA is a family of Large Language Models (LLM) based on Meta’s [LLaMA models](https://huggingface.co/meta-llama). Current LLMs exhibit incredible performance for high-resource languages such as English, but, in the case of Basque and other low-resource languages, their performance is close to that of a random guesser. These limitations widen the gap between high- and low-resource languages when it comes to digital development. We present Basque LLaMA to overcome these limitations and promote the development of LLM-based technology and research for the Basque language. Basque LLaMA models follow the same architecture as their original counterparts and were further trained on EusCrawl v1 ([Artetxe et al., 2022](https://aclanthology.org/2022.emnlp-main.499/)), a high-quality Basque corpus. The models are released in three sizes: 7B, 13B and 70B.

* **Developed by:** HiTZ Research Center & IXA Research group (University of the Basque Country UPV/EHU)
* **Model type:** Language model
* **Language(s) (NLP):** en, eu
* **License:** llama2
* **Parent Model:** meta-llama/Llama-2-7B
* **Resources for more information:** [PAPER/BLOG/POST link]
* **Contact:** hitz@ehu.eus

## **Getting started**

Use the code below to get started with the model.

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="HiTZ/basque-llama-2-7b-v1")

text = "Donosti da Euskal Herriko lekurik"

pipe(text, max_new_tokens=40)
>> [
    {
        'generated_text': 'Donosti da Euskal Herriko lekurik garestiena alokairuan bizitzeko,'
        ' eta Donostiako alokairuaren prezioa %11,3 igo da azken urtean'
    }
]
```

# **Uses**

Basque LLaMA models are intended to be used with Basque data; for any other language the performance is not guaranteed. As with the original models, Basque LLaMA inherits the [LLaMA-2 License](https://ai.meta.com/llama/license/), which allows for commercial and research use.

## **Direct Use**

Basque LLaMA family models are pre-trained LLMs without any task-specific or instruction fine-tuning. That is, the model can either be prompted to perform a specific task or further fine-tuned for specific use cases.
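The Getting started example above already covers prompt-based use through the `pipeline` API. For the fine-tuning path, the sketch below shows a minimal way to load the checkpoint with the standard `transformers` causal-LM classes; the dtype and device settings are illustrative assumptions, not official recommendations.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HiTZ/basque-llama-2-7b-v1"

# Load the pre-trained checkpoint with the standard causal-LM classes.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory use (assumption, not a requirement)
    device_map="auto",          # requires the `accelerate` package
)

# From here the model can be prompted directly (see the pipeline example above)
# or plugged into any standard causal-LM fine-tuning loop, e.g. the Hugging Face
# `Trainer`, using a task-specific Basque dataset.
```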
## **Out-of-Scope Use**

The model was not fine-tuned to follow instructions or to work as a chat assistant; therefore, this kind of usage is neither tested nor recommended.

# **Bias, Risks, and Limitations**

In an effort to alleviate potentially disturbing or harmful content, Basque LLaMA has been trained on carefully selected and processed data, which comes mainly from local media, national/regional newspapers, encyclopedias and blogs (see EusCrawl below). Still, the model is based on the LLaMA models and can potentially carry the same biases, risks and limitations. Please see LLaMA’s _Ethical Considerations and Limitations_ for further information.

# **Training Details**

## **Training Data**

The models were trained on EusCrawl v1, a high-quality corpus for Basque comprising 1.72M documents and 288M words, totalling 2.1GiB of uncompressed text. EusCrawl was built using ad-hoc scrapers to extract text from 33 Basque websites with high-quality content, resulting in cleaner text compared to general-purpose approaches. See more details in the [EusCrawl](https://huggingface.co/datasets/HiTZ/euscrawl) dataset card.

Additionally, 100K documents of English data randomly selected from the [Pile](https://huggingface.co/datasets/EleutherAI/pile) dataset were also included to avoid catastrophic forgetting.

## **Training Procedure**

The models were trained using the GPT-NeoX library on the CINECA HPC computing cluster. All models were trained with an effective batch size of approximately 2M tokens per step for 1000 to 2000 steps.

| Model            | Steps | Sequence length | Effective batch size | Total tokens | GPU hours  |
| ---------------- | ----- | --------------- | -------------------- | ------------ | ---------- |
| Basque LLaMA 7B  | 2000  | 4096            | 2M tokens/step       | 4B           | 359.2h     |
| Basque LLaMA 13B | 1000  | 4096            | 2M tokens/step       | 2B           | 468.8h     |
| Basque LLaMA 70B | 1680  | 4096            | 2M tokens/step       | 3.4B         | \*6475.52h |

\* indicates the time for the entire training process (2000 steps); however, the weights of step 1680 are shared, as it is the best checkpoint according to validation loss.

# **Evaluation**

We evaluated the models in zero-shot and few-shot settings on generative, multiple-choice and classification tasks. We used the Basque partitions of each dataset.

## **Testing Data, Factors & Metrics**

### **Testing Data**

* **Belebele** ([Bandarkar et al.](https://arxiv.org/abs/2308.16884)): Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. We evaluated the model in a 5-shot fashion.
    * Data card: [https://huggingface.co/datasets/facebook/belebele](https://huggingface.co/datasets/facebook/belebele)
* **X-StoryCloze** ([Lin et al.](https://aclanthology.org/2022.emnlp-main.616.pdf)): X-StoryCloze consists of professional translations of the English Story Cloze dataset into 10 non-English languages. Story Cloze is a commonsense reasoning dataset which consists of choosing the correct ending to a four-sentence story. We evaluated the model in a 0-shot fashion.
    * Data card: [https://huggingface.co/datasets/juletxara/xstory_cloze](https://huggingface.co/datasets/juletxara/xstory_cloze)
* **BasqueGLUE** ([Urbizu et al.](https://aclanthology.org/2022.lrec-1.172.pdf)): BasqueGLUE is an NLU benchmark for Basque. Data card: [https://huggingface.co/datasets/orai-nlp/basqueGLUE](https://huggingface.co/datasets/orai-nlp/basqueGLUE). We evaluated the model in a 5-shot fashion on the following tasks:
    * **BEC2016eu**: Sentiment analysis on tweets about the 2016 Basque election campaign.
    * **VaxxStance**: Stance detection on tweets around the anti-vaccine movement.
    * **BHTCv2**: Topic classification of news extracts with 12 categories.
    * **EpecKorrefBin**: Coreference detection task similar to WSC.
    * **QNLIeu**: Q&A NLI built from the Basque Wikipedia.
    * **WiCeu**: Basque Word-in-Context task.

### **Metrics**

* **Accuracy**: Belebele, X-StoryCloze, EpecKorrefBin, QNLIeu, and WiCeu
* **Micro F1**: BEC2016eu and BHTCv2
* **Macro F1**: VaxxStance (favor & against)

## **Results**

The model was evaluated using the LM Evaluation Harness library from EleutherAI. To reproduce our results, please refer to our [fork](https://github.com/naiarapm/lm-evaluation-harness/tree/basqueglue), which includes the implementations of the datasets mentioned above.
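As a rough guide, the sketch below shows how such an evaluation can be launched programmatically. It assumes the fork follows the pre-0.4.0 `lm_eval` interface (e.g. v0.3.0, with the `hf-causal` model type and `simple_evaluate`); the task name `xstory_cloze_eu` is the upstream identifier for the Basque X-StoryCloze split, while the BasqueGLUE task names are defined in the fork itself and should be looked up there.

```python
# Minimal evaluation sketch, assuming the fork exposes the pre-0.4.0 lm_eval API.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=HiTZ/basque-llama-2-7b-v1",
    tasks=["xstory_cloze_eu"],  # illustrative; the BasqueGLUE task names are registered in the fork
    num_fewshot=0,              # 0-shot for X-StoryCloze, 5-shot for the remaining tasks
    batch_size=8,
    device="cuda:0",
)

# Print the per-task metrics (accuracy / F1, as listed above).
print(evaluator.make_table(results))
```

The fork's command-line entry point can be used equivalently; see its README for the registered task names.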
| Model            | Belebele | X-StoryCloze | BEC   | Vaxx  | BHTC  | coref | QNLI  | WiC   | Average |
| ---------------- | -------- | ------------ | ----- | ----- | ----- | ----- | ----- | ----- | ------- |
| Random           | 25.00    | 50.00        | 33.33 | 33.33 | 8.33  | 50.00 | 50.00 | 50.00 | 37.50   |
| LLaMA 2 7B       | 26.22    | 50.43        | 41.63 | 18.60 | 20.06 | 50.94 | 48.32 | 49.64 | 38.23   |
| LLaMA 2 13B      | 32.00    | 50.63        | 41.09 | 18.25 | 27.35 | 49.23 | 48.74 | 49.21 | 39.56   |
| LLaMA 2 70B      | 33.56    | 51.62        | 47.47 | 21.01 | 31.01 | 52.98 | 51.26 | 51.57 | 42.56   |
| BLOOM 7B         | 27.00    | 57.18        | 37.94 | 20.72 | 39.10 | 48.21 | 47.48 | 47.57 | 40.65   |
| XGLM 7B          | 23.88    | 57.71        | 39.94 | 21.58 | 36.73 | 50.94 | 50.42 | 49.21 | 41.30   |
| Basque LLaMA 7B  | 35.67    | 63.13        | 55.61 | 45.93 | 44.44 | 50.43 | 55.04 | 50.14 | 50.05   |
| Basque LLaMA 13B | 53.56    | 65.85        | 53.23 | 48.66 | 53.61 | 62.52 | 57.14 | 54.21 | 56.10   |
| Basque LLaMA 70B | 71.78    | 67.57        | 63.52 | 48.95 | 49.51 | 79.90 | 58.82 | 55.50 | 61.94   |

# **Environmental Impact**

Carbon emissions are estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

* **Hardware Type:** HPC Cluster, 4x A100 64GB nodes
* **Hours used:** 359.2h + 468.8h + 6475.52h = 7303.52h
* **Compute cluster:** CINECA HPC
* **Compute Region:** Italy
* **Carbon Emitted:** 673.75 kg CO2 eq

# **Acknowledgements**

This work has been partially supported by the Basque Government (IKER-GAITU project). The models were trained on the Leonardo supercomputer at CINECA under the EuroHPC Joint Undertaking, project EHPC-EXT-2023E01-013.