---
license: llama2
datasets:
- HiTZ/latxa-corpus-v1.1
language:
- eu
- en
metrics:
- accuracy
- f1
- perplexity
pipeline_tag: text-generation
model-index:
- name: Latxa-7b-v1.2
  results:
    - task:
        type: multiple-choice
      dataset:
        name: xstory_cloze
        type: XStory
      metrics:
        - name: Accuracy (0-shot)
          type: Accuracy (0-shot)
          value: 65.45
      source:
        name: Paper
        url: https://arxiv.org/abs/2403.20266
    - task:
        type: multiple-choice
      dataset:
        name: belebele
        type: Belebele
      metrics:
        - name: Accuracy (5-shot)
          type: Accuracy (5-shot)
          value: 37.33
      source:
        name: Paper
        url: https://arxiv.org/abs/2403.20266
    - task:
        type: mix
      dataset:
        name: basque_glue
        type: BasqueGLUE
      metrics:
        - name: Average scores (5-shot)
          type: Average scores (5-shot)
          value: 52.56
      source:
        name: Paper
        url: https://arxiv.org/abs/2403.20266
    - task:
        type: multiple_choice
      dataset:
        name: eus_proficiency
        type: EusProficiency
      metrics:
        - name: Accuracy (5-shot)
          type: Accuracy (5-shot)
          value: 30.26
      source:
        name: Paper
        url: https://arxiv.org/abs/2403.20266
    - task:
        type: multiple_choice
      dataset:
        name: eus_reading
        type: EusReading
      metrics:
        - name: Accuracy (5-shot)
          type: Accuracy (5-shot)
          value: 25.00
      source:
        name: Paper
        url: https://arxiv.org/abs/2403.20266
    - task:
        type: multiple_choice
      dataset:
        name: eus_trivia
        type: EusTrivia
      metrics:
        - name: Accuracy (5-shot)
          type: Accuracy (5-shot)
          value: 42.16
      source:
        name: Paper
        url: https://arxiv.org/abs/2403.20266
    - task:
        type: multiple_choice
      dataset:
        name: eus_exams
        type: EusExams
      metrics:
        - name: Accuracy (5-shot)
          type: Accuracy (5-shot)
          value: 33.82
      source:
        name: Paper
        url: https://arxiv.org/abs/2403.20266
---

# **Model Card for Latxa 7b**

<p align="center">
  <img src="https://github.com/hitz-zentroa/latxa/blob/b9aa705f60ee2cc03c9ed62fda82a685abb31b07/assets/latxa_round.png?raw=true" style="height: 350px;">
</p>

We introduce Latxa, a family of large language models for Basque ranging from 7 to 70 billion parameters. Latxa is based on Llama 2, which we continue pretraining on a new Basque corpus comprising 4.3M documents and 4.2B tokens. In our extensive evaluation, Latxa outperforms all previous open models we compare to by a large margin. In addition, it is competitive with GPT-4 Turbo in language proficiency and understanding, despite lagging behind in reading comprehension and knowledge-intensive tasks. Both the Latxa family of models and our new pretraining corpora and evaluation datasets are publicly available under open licenses. Our suite enables reproducible research on methods to build LLMs for low-resource languages.

- 📒 Blog Post: [Latxa: An Open Language Model and Evaluation Suite for Basque](https://www.hitz.eus/en/node/340)
- 📖 Paper: [Latxa: An Open Language Model and Evaluation Suite for Basque](https://arxiv.org/abs/2403.20266)
- 💻 Code: [hitz-zentroa/latxa](https://github.com/hitz-zentroa/latxa)

# **Model Details**


## **Model Description**

Latxa is a family of Large Language Models (LLMs) based on Meta’s [LLaMA models](https://huggingface.co/meta-llama). Current LLMs exhibit impressive performance for high-resource languages such as English, but, in the case of Basque and other low-resource languages, their performance is close to random. These limitations widen the gap between high- and low-resource languages when it comes to digital development. We present Latxa to overcome these limitations and to promote the development of LLM-based technology and research for the Basque language. Latxa models follow the same architecture as their original counterparts and were further trained on [Latxa Corpus v1.1](https://huggingface.co/datasets/HiTZ/latxa-corpus-v1.1), a high-quality Basque corpus.

The models are released in three sizes: 7B, 13B and 70B.



* **Developed by:** HiTZ Research Center & IXA Research group (University of the Basque Country UPV/EHU)
* **Model type:** Language model
* **Language(s) (NLP):** en, eu
* **License:** llama2
* **Parent Model:** meta-llama/Llama-2-7b
* **Contact:** hitz@ehu.eus 


## **Getting started**

Use the code below to get started with the model.

```python

from transformers import pipeline

pipe = pipeline("text-generation", model="HiTZ/latxa-7b-v1.2")

text = "Euskara adimen artifizialera iritsi da!"

pipe(text, max_new_tokens=50, num_beams=5)

>> [
 {
  'generated_text': 'Euskara adimen artifizialera iritsi da!\nEuskararen eta adimen artifizialaren arteko harremana aspaldikoa da,'
  ' baina azken urteotan aurrerapauso handiak eman dira arlo horretan'
 }
]

```
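
The model can also be loaded directly with `AutoModelForCausalLM`. The snippet below is a minimal sketch; the dtype and device settings are illustrative assumptions and should be adapted to your hardware.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HiTZ/latxa-7b-v1.2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# bfloat16 and device_map="auto" are illustrative choices, not requirements
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

inputs = tokenizer("Euskara adimen artifizialera iritsi da!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```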


# **Uses**

Latxa models are intended to be used with Basque data; for any other language the performance is not guaranteed. As with the original models, Latxa inherits the [LLaMA-2 License](https://ai.meta.com/llama/license/), which allows for commercial and research use.


## **Direct Use**

Latxa family models are pre-trained LLMs without any task-specific or instruction fine-tuning. That is, the model can either be prompted to perform a specific task or further fine-tuned for specific use cases.
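
For instance, prompting the base model for a specific task can be done with a simple few-shot prompt. The sketch below is illustrative only: the Basque examples and labels are made up, and this is not the evaluation setup used in the paper.

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="HiTZ/latxa-7b-v1.2")

# Hypothetical few-shot examples for sentiment classification (made up for illustration)
few_shot = (
    "Esaldia: Film hau zoragarria da.\nSentimendua: positiboa\n\n"
    "Esaldia: Zerbitzua oso txarra izan da.\nSentimendua: negatiboa\n\n"
)
query = "Esaldia: Liburu hau asko gustatu zait.\nSentimendua:"

out = pipe(few_shot + query, max_new_tokens=5, do_sample=False)
print(out[0]["generated_text"])
```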


## **Out-of-Scope Use**

The model was not fine-tuned to follow instructions or to work as a chat assistant; therefore, this kind of usage is neither tested nor recommended.


# **Bias, Risks, and Limitations**

In an effort to mitigate potentially disturbing or harmful content, Latxa has been trained on carefully selected and processed data, which comes mainly from local media, national/regional newspapers, encyclopedias and blogs (see the Training Data section below). Still, the model is based on the LLaMA models and can potentially carry the same biases, risks and limitations.

Please see LLaMA’s _Ethical Considerations and Limitations_ for further information.


# **Training Details**


## **Training Data**

Our training corpus combines various existing datasets, as well as some new ones that we release with this work. We have prioritized quality over quantity when constructing the corpus, selecting high-quality data sources and applying a thorough deduplication and filtering process. In total, a corpus of 4.17B tokens is used to train the model.

See more details in the [Latxa Corpus](https://huggingface.co/datasets/HiTZ/latxa-corpus-v1.1) dataset card. 

In addition, 500K English documents randomly sampled from the [Pile](https://huggingface.co/datasets/EleutherAI/pile) dataset were included to avoid catastrophic forgetting.
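
Conceptually, this mix can be reproduced by interleaving the Basque corpus with a small proportion of English documents. The sketch below is only an illustration: the dataset configurations, the `text` field name and the 0.98/0.02 mixing probabilities are assumptions, not the exact recipe used to train Latxa.

```python
from datasets import load_dataset, interleave_datasets

# Stream both corpora so nothing is fully downloaded up front
basque = load_dataset("HiTZ/latxa-corpus-v1.1", split="train", streaming=True)
english = load_dataset("EleutherAI/pile", split="train", streaming=True)

# Illustrative mixing probabilities, not the exact proportions used for Latxa
mixed = interleave_datasets([basque, english], probabilities=[0.98, 0.02], seed=42)

for doc in mixed.take(3):
    print(doc["text"][:100])
```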


## **Training Procedure**

The training of Latxa was conducted using the [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) library. As infrastructure, we leveraged the CINECA HPC Leonardo computing cluster located in Italy, which is powered by 3,456 nodes, each containing 4x custom A100 64GB GPUs. The models were trained for 10k steps with a sequence length of 4,096 tokens and an effective batch size of 2M tokens, resulting in a total of 20B tokens (around 4 epochs). We used a cosine learning rate schedule, with a warm-up of 500 steps and a decay down to 3% of the peak learning rate, which was set to 1e-4. All other hyperparameters follow [Touvron et al. (2023)](https://arxiv.org/abs/2307.09288).
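
For reference, the learning rate schedule described above can be written down explicitly: a linear warm-up over 500 steps to a peak of 1e-4, followed by a cosine decay down to 3% of the peak over the remaining steps. The sketch below reproduces the schedule as described and is not the actual training code.

```python
import math

PEAK_LR = 1e-4
WARMUP_STEPS = 500
TOTAL_STEPS = 10_000
MIN_LR = 0.03 * PEAK_LR  # decay down to 3% of the peak

def learning_rate(step: int) -> float:
    """Cosine schedule with linear warm-up, as described above."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return MIN_LR + 0.5 * (PEAK_LR - MIN_LR) * (1 + math.cos(math.pi * progress))

print(learning_rate(0), learning_rate(500), learning_rate(10_000))
# 0.0 at step 0, 1e-4 at the end of warm-up, 3e-6 at the final step
```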



# **Evaluation**

We evaluated the models in zero-shot and few-shot settings on generative, multiple-choice and classification tasks. We used the Basque partitions of each dataset.


## **Testing Data, Factors & Metrics**


### **Testing Data**



* **Belebele** ([Bandarkar et al.](https://arxiv.org/abs/2308.16884)): Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. We evaluated the model in a 5-shot fashion.
    * Data card: [https://huggingface.co/datasets/facebook/belebele](https://huggingface.co/datasets/facebook/belebele)
* **X-StoryCloze** ([Lin et al.](https://arxiv.org/abs/2112.10668)): XStoryCloze consists of professionally translated versions of the English StoryCloze dataset into 10 non-English languages. StoryCloze is a commonsense reasoning dataset that consists of choosing the correct ending for a four-sentence story. We evaluated the model in a 0-shot fashion.
    * Data card: [https://huggingface.co/datasets/juletxara/xstory_cloze](https://huggingface.co/datasets/juletxara/xstory_cloze)
* **BasqueGLUE** ([Urbizu et al.](https://aclanthology.org/2022.lrec-1.172.pdf)): BasqueGLUE is an NLU benchmark for Basque. We evaluated the model in a 5-shot fashion on the following tasks:
    * Data card: [https://huggingface.co/datasets/orai-nlp/basqueGLUE](https://huggingface.co/datasets/orai-nlp/basqueGLUE).
    * Tasks:
        * **BEC2016eu**: Sentiment analysis on tweets about the 2016 Basque elections campaign.
        * **VaxxStance**: Stance detection on tweets around the anti-vaccine movement.
        * **BHTCv2**: Topic classification of news extracts with 12 categories.
        * **EpecKorrefBin**: Coreference detection task similar to WSC.
        * **QNLIeu**: Q&A NLI built from the Basque Wikipedia.
        * **WiCeu**: Basque Word-in-Context task.
* **EusProficiency** ([Etxaniz et al., 2024](https://arxiv.org/abs/2403.20266)): EusProficiency comprises 5,169 exercises on different topics from past EGA exams, the official C1-level certificate of proficiency in Basque.
    * Data card: [https://huggingface.co/datasets/HiTZ/EusProficiency](https://huggingface.co/datasets/HiTZ/EusProficiency)
* **EusReading** ([Etxaniz et al., 2024](https://arxiv.org/abs/2403.20266)): EusReading consists of 352 reading comprehension exercises (_irakurmena_) sourced from the same set of past EGA exams.
    * Data card: [https://huggingface.co/datasets/HiTZ/EusReading](https://huggingface.co/datasets/HiTZ/EusReading)
* **EusTrivia** ([Etxaniz et al., 2024](https://arxiv.org/abs/2403.20266)): EusTrivia consists of 1,715 trivia questions from multiple online sources. 56.3% of the questions are elementary level (grades 3-6), while the rest are considered challenging.
    * Data card: [https://huggingface.co/datasets/HiTZ/EusTrivia](https://huggingface.co/datasets/HiTZ/EusTrivia)
* **EusExams** ([Etxaniz et al., 2024](https://arxiv.org/abs/2403.20266)): EusExams is a collection of tests designed to prepare individuals for Public Service examinations conducted by several Basque institutions, including the public health system Osakidetza, the Basque Government, the City Councils of Bilbao and Gasteiz, and the University of the Basque Country (UPV/EHU).
    * Data card: [https://huggingface.co/datasets/HiTZ/EusExams](https://huggingface.co/datasets/HiTZ/EusExams)

### **Metrics**

For most of the tasks we used accuracy, as they are framed as multiple-choice questions. For the rest, particularly the tasks from the BasqueGLUE benchmark, we used the following (a short illustration follows the list):

* **Micro F1**: BEC2016eu and BHTCv2
* **Macro F1**: VaxxStance (favor & against)
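
As a quick illustration of the difference between the two averages, the sketch below computes them with scikit-learn on made-up predictions; for VaxxStance, the reported macro F1 is averaged over the favor and against classes only.

```python
from sklearn.metrics import f1_score

# Made-up gold labels and predictions, for illustration only
y_true = ["favor", "against", "none", "favor", "against", "none"]
y_pred = ["favor", "against", "favor", "favor", "none", "none"]

micro = f1_score(y_true, y_pred, average="micro")
# Macro F1 restricted to the favor and against classes, as reported for VaxxStance
macro_fa = f1_score(y_true, y_pred, labels=["favor", "against"], average="macro")

print(f"micro F1: {micro:.3f}, macro F1 (favor & against): {macro_fa:.3f}")
```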


## **Results**

The model was evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) library from EleutherAI.
To reproduce our results, please follow the instructions in Latxa's [GitHub repository](https://github.com/hitz-zentroa/latxa?tab=readme-ov-file#evaluation).
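
The repository contains the exact evaluation commands. As a rough illustration of what a run looks like with a recent version of the harness's Python API, the sketch below evaluates the model on the Basque XStoryCloze split; the task name and arguments are assumptions and may differ from the setup used in the paper.

```python
import lm_eval

# Rough sketch only: task names and arguments are assumptions; see the Latxa
# repository for the exact evaluation setup used in the paper.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=HiTZ/latxa-7b-v1.2",
    tasks=["xstory_cloze_eu"],
    num_fewshot=0,
)
print(results["results"])
```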


| Model            | Size | XStory | Belebele | BasGLUE | EusProf | EusRead | EusTrivia | EusExams | Avg   |
|------------------|------|--------|----------|---------|---------|---------|-----------|----------|-------|
| **Random**           |      | 50.00  | 25.00    | 37.50   | 25.00   | 25.83   | 26.55     | 25.00    | 30.70 |
|
| GPT 3.5 Turbo    | n/a  | --     | 57.33    | 48.62   | 31.24   | 36.65   | 46.71     | 42.42    | --    |
| GPT 4 Turbo      | n/a  | --     | **90.67**| **62.90**| **56.70**| **75.85**| **73.12** | **70.22**| --    |
|
| XGLM             | 7B   | 57.71  | 23.88    | 41.47   | 22.96   | 24.43   | 26.53     | 24.59    | 32.51 |
| BLOOM            | 7B   | 57.18  | 27.00    | 40.17   | 25.34   | 28.41   | 27.17     | 25.07    | 33.86 |
| Mistral          | 7B   | 51.09  | **38.89**| 39.22   | 25.01   | **29.26**   | 34.58     | 32.15    | 35.94 |
| Llama 2          | 7B   | 50.43  | 26.22    | 38.20   | 24.09   | 27.27   | 29.50     | 28.84    | 32.51 |
| **Latxa v1.1**   | 7B   | **65.45**| 37.33    | **52.56**| **30.26**| 25.00| **42.16** | **33.82**| **40.94** |
|
| mGPT             | 13B  | 55.39  | 25.00    | 37.56   | 25.00   | 24.15   | 27.17     | 25.73    | 32.14 |
| Llama 2          | 13B  | 50.63  | 32.00    | 38.98   | 25.90   | 28.98   | 33.53     | 29.66    | 34.36 |
| **Latxa v1.1**   | 13B  | **66.51**| **53.89**  | **53.36**     | **44.11**| **32.67**    | **56.38** | **43.66**| **50.08** |
|
| Mixtral          | 8x7B | 52.55  | 50.44    | 45.00   | 26.43   | 37.50   | 42.51     | 39.87    | 41.97 |
| Yi               | 34B  | 52.22  | 54.56    | 43.90   | 27.30   | 34.66   | 42.57     | 39.68    | 42.05 |
| Llama 2          | 70B  | 51.62  | 33.56    | 42.55   | 24.16   | 27.84   | 38.43     | 33.08    | 35.47 |
| **Latxa v1.1**   | 70B  | **70.55**| **71.67** | **59.74**| **60.65**| **50.57**| **62.45** | **51.90**| **61.08** |


# **Environmental Impact**

Carbon emissions are estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

| Model      | Size | Time (GPU Hours)  | Carbon Emitted (kg CO2 eq)  |
|------------|------|-------------------|----------------------------|
| Latxa v1.1 | 7B   | 952.5h            | 124.47kg                   |
| Latxa v1.1 | 13B  | 2,518.0h          | 329.06kg                   |
| Latxa v1.1 | 70B  | 30,266.0h         | 3,955.17kg                 |
| Total      |  -   | 33,636.5h         | 4,408.7kg                  |


* **Hardware Type:** HPC cluster, nodes with 4x A100 64GB GPUs
* **Hours used:** 33,636.5h 
* **Compute cluster:** CINECA HPC
* **Compute Region:** Italy
* **Carbon Emitted:** 4,408.7 kg CO<sub>2</sub> eq


# **Acknowledgements**

This work has been partially supported by the Basque Government (IKER-GAITU project). 
It has also been partially supported by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project with reference 2022/TL22/00215335.
The models were trained on the Leonardo supercomputer at CINECA under the EuroHPC Joint Undertaking, project EHPC-EXT-2023E01-013.

# **Citation**
To cite our work, please use:
```bibtex
@misc{etxaniz2024latxa,
      title={{L}atxa: An Open Language Model and Evaluation Suite for {B}asque}, 
      author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa},
      year={2024},
      eprint={2403.20266},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

```