---
license: llama2
datasets:
- HiTZ/euscrawl
language:
- eu
- en
metrics:
- accuracy
- f1
- perplexity
pipeline_tag: text-generation
---

# **Model Card for Latxa 7B**

<p align="center">
  <img src="https://github.com/hitz-zentroa/latxa/blob/b9aa705f60ee2cc03c9ed62fda82a685abb31b07/assets/latxa_round.png?raw=true" style="height: 350px;">
</p>

<span style="color: red; font-weight: bold">IMPORTANT:</span> This model is outdated and is made available publicly for reproducibility purposes only. Please use the most recent version from [our HuggingFace collection](https://huggingface.co/collections/HiTZ/latxa-65a697e6838b3acc53677304).



Latxa is a collection of foundation models specifically tuned for Basque. Based on Meta’s LLaMA 2 model family, these models were further trained on EusCrawl, a highly curated Basque corpus ([Artetxe et al., 2022](https://aclanthology.org/2022.emnlp-main.499/)). Ranging from 7 billion to 70 billion parameters, these models are currently the biggest and best-performing LLMs built for Basque. This is the 7B repository; links to the other models can be found in the [Latxa Collection](https://huggingface.co/collections/HiTZ/latxa-65a697e6838b3acc53677304).

Read more about Latxa on our [website](https://www.hitz.eus/en/node/340) or on [LinkedIn](https://www.linkedin.com/pulse/presenting-latxa-largest-language-model-built-basque-hitz-zentroa-63qdf)!

# **Model Details**


## **Model Description**

Latxa is a family of Large Language Models (LLM) based on Meta’s [LLaMA models](https://huggingface.co/meta-llama). Current LLMs exhibit incredible performance for high-resource languages such as English, but, in the case of Basque and other low-resource languages, their performance is close to that of a random guesser. These limitations widen the gap between high- and low-resource languages when it comes to digital development. We present Latxa to overcome these limitations and promote the development of LLM-based technology and research for the Basque language. Latxa models follow the same architecture as their original counterparts and were further trained on EusCrawl v1 ([Artetxe et al., 2022](https://aclanthology.org/2022.emnlp-main.499/)), a high-quality Basque corpus.

The models are released in three sizes: 7B, 13B and 70B.



* **Developed by:** HiTZ Research Center & IXA Research group (University of the Basque Country UPV/EHU)
* **Model type:** Language model
* **Language(s) (NLP):** en, eu
* **License:** llama2
* **Parent Model:** meta-llama/Llama-2-7b
* **Contact:** hitz@ehu.eus 


## **Getting started**

Use the code below to get started with the model.

```python

from transformers import pipeline

pipe = pipeline("text-generation", model="HiTZ/latxa-7b-v1")

text = "Euskara adimen artifizialera iritsi da!"

pipe(text, max_new_tokens=50, num_beams=5)

>> [
 {
  'generated_text': 'Euskara adimen artifizialera iritsi da!\nEuskararen eta adimen artifizialaren arteko harremana aspaldikoa da,'
  ' baina azken urteotan aurrerapauso handiak eman dira arlo horretan'
 }
]

```
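
If you need more control over generation (precision, device placement, decoding parameters), the model can also be loaded directly with `AutoModelForCausalLM`. The snippet below is a minimal sketch assuming a single CUDA GPU and the `accelerate` package for `device_map="auto"`; adjust the dtype and device settings to your hardware.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HiTZ/latxa-7b-v1"

# float16 and device_map="auto" are illustrative choices for a single-GPU
# setup, not requirements of the model.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Euskara adimen artifizialera iritsi da!"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Same decoding settings as the pipeline example above.
outputs = model.generate(**inputs, max_new_tokens=50, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```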


# **Uses**

Latxa models are intended to be used with Basque data; for any other language the performance is not guaranteed. Like the original models, Latxa inherits the [LLaMA-2 License](https://ai.meta.com/llama/license/), which allows for commercial and research use.


## **Direct Use**

Latxa family models are pre-trained LLMs without any task-specific or instruction fine-tuning. That is, the model can either be prompted to perform a specific task or further fine-tuned for specific use cases.
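
As a hedged illustration of the prompting route, the sketch below packs a few demonstrations into the prompt and lets the base model continue the pattern. The translation format and example pairs are our own, not an official Latxa evaluation template.

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="HiTZ/latxa-7b-v1")

# Hypothetical few-shot prompt: demonstrations and format are illustrative only.
prompt = (
    "Euskara: Kaixo, zer moduz?\n"
    "English: Hello, how are you?\n"
    "Euskara: Eskerrik asko.\n"
    "English: Thank you very much.\n"
    "Euskara: Euskara adimen artifizialera iritsi da!\n"
    "English:"
)

out = pipe(prompt, max_new_tokens=20, do_sample=False, return_full_text=False)
print(out[0]["generated_text"].strip())
```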


## **Out-of-Scope Use**

The model was not fine-tuned to follow instructions or to work as a chat assistant; therefore, this kind of usage is neither tested nor recommended.


# **Bias, Risks, and Limitations**

In an effort to mitigate potentially disturbing or harmful content, Latxa has been trained on carefully selected and processed data, which comes mainly from local media, national/regional newspapers, encyclopedias and blogs (see EusCrawl below). Still, the model is based on LLaMA models and can potentially carry the same biases, risks and limitations.

Please see LLaMA’s _Ethical Considerations and Limitations_ for further information.


# **Training Details**


## **Training Data**

The models were trained on EusCrawl v1, a high-quality corpus for Basque comprising 1.72M documents and 288M words, totalling 2.1 GiB of uncompressed text. EusCrawl was built using ad-hoc scrapers to extract text from 33 Basque websites with high-quality content, resulting in cleaner text compared to general-purpose approaches.

See more details in the [EusCrawl](https://huggingface.co/datasets/HiTZ/euscrawl) dataset card. 

Additionally, 100K documents of English data, randomly selected from the [Pile](https://huggingface.co/datasets/EleutherAI/pile) dataset, were included to avoid catastrophic forgetting.
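
A rough sketch of this kind of corpus mixing with the Hugging Face `datasets` library is shown below. It is illustrative only: the actual training was done with GPT-NeoX on pre-processed data (see below), the EusCrawl column name (`plain_text`) is an assumption about the dataset schema, and the snippet assumes the Pile is still downloadable from the linked hub path.

```python
from itertools import islice
from datasets import Dataset, concatenate_datasets, load_dataset

# Basque corpus (EusCrawl v1).
euscrawl = load_dataset("HiTZ/euscrawl", split="train")

# 100K English documents from the Pile, streamed; islice stands in for the
# random selection described above.
pile_stream = load_dataset("EleutherAI/pile", split="train", streaming=True)
pile_subset = Dataset.from_list(
    [{"text": ex["text"]} for ex in islice(pile_stream, 100_000)]
)

# Reduce EusCrawl to a single text column before concatenating; the column
# name "plain_text" is an assumed field name, check the dataset card.
euscrawl_text = Dataset.from_dict({"text": euscrawl["plain_text"]})

mixture = concatenate_datasets([euscrawl_text, pile_subset]).shuffle(seed=42)
print(mixture)
```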


## **Training Procedure**

The models were trained using the GPT-NeoX library on the CINECA HPC computing cluster. All models were trained with an effective batch size of approximately 2M tokens, for 1,000 to 2,000 steps.


| Model | Steps | Sequence length | Effective batch size | Total tokens | GPU hours |
|---|---:|---:|---:|---:|---:|
| Latxa 7B | 2000 | 4096 | 2M tokens/step | 4B | 359.2h |
| Latxa 13B | 1000 | 4096 | 2M tokens/step | 2B | 468.8h |
| Latxa 70B | 1680 | 4096 | 2M tokens/step | 3.4B | 6475.52h* |


* The time indicated corresponds to the entire training process (2,000 steps); however, the weights of step 1,680 are shared, as it is the best checkpoint according to validation loss.
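
As a back-of-the-envelope check of the table (an illustration, not part of the original training logs): at a 4,096-token sequence length, 2M tokens per step corresponds to roughly 500 sequences per optimizer step (exactly 512 if 2M is read as 2^21), and multiplying steps by tokens per step recovers the total-token column.

```python
# Sanity-check arithmetic for the training table above.
tokens_per_step = 2_000_000   # effective batch size in tokens
seq_len = 4096                # sequence length

print(tokens_per_step / seq_len)   # ~488 sequences/step (512 if 2M means 2**21)

for name, steps in [("Latxa 7B", 2000), ("Latxa 13B", 1000), ("Latxa 70B", 1680)]:
    print(name, round(steps * tokens_per_step / 1e9, 2), "B tokens")
# Latxa 7B 4.0 B tokens, Latxa 13B 2.0 B tokens, Latxa 70B 3.36 B tokens (~3.4B)
```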


# **Evaluation**

We evaluated the models in zero-shot and few-shot settings on generative, multiple-choice and classification tasks. We used the Basque partitions of each dataset.


## **Testing Data, Factors & Metrics**


### **Testing Data**



* **Belebele** ([Bandarkar et al.](https://arxiv.org/abs/2308.16884)): Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. We evaluated the model in a 5-shot fashion.
    * Data card: [https://huggingface.co/datasets/facebook/belebele](https://huggingface.co/datasets/facebook/belebele)
* **X-StoryCloze** ([Lin et al.](https://arxiv.org/abs/2112.10668)): X-StoryCloze consists of professionally translated versions of the English StoryCloze dataset in 10 non-English languages. StoryCloze is a commonsense reasoning dataset that consists of choosing the correct ending for a four-sentence story. We evaluated the model in a 0-shot fashion.
    * Data card: [https://huggingface.co/datasets/juletxara/xstory_cloze](https://huggingface.co/datasets/juletxara/xstory_cloze)
* **BasqueGLUE** ([Urbizu et al.](https://aclanthology.org/2022.lrec-1.172.pdf)): BasqueGLUE is an NLU benchmark for Basque. We evaluated the model in a 5-shot fashion on the following tasks:
    * Data card: [https://huggingface.co/datasets/orai-nlp/basqueGLUE](https://huggingface.co/datasets/orai-nlp/basqueGLUE).
    * Tasks:
        * **BEC2016eu**: Sentiment analysis on tweets about the 2016 Basque elections campaign.
        * **VaxxStance**: Stance detection on tweets around the anti-vaccine movement.
        * **BHTCv2**: Topic classification of news extracts with 12 categories.
        * **EpecKorrefBin**: Coreference detection task similar to WSC.
        * **QNLIeu**: Q&A NLI built from the Basque Wikipedia.
        * **WiCeu**: Basque Word-in-Context task.


### **Metrics**



* **Accuracy**: Belebele, X-StoryCloze, EpecKorrefBin, QNLIeu, and WiCeu
* **Micro F1**: BEC2016eu and BHTCv2
* **Macro F1**: VaxxStance (favor & against)


## **Results**

The model was evaluated using the LM Evaluation Harness library from EleutherAI.
To reproduce our results, please follow the instructions in Latxa's [GitHub repository](https://github.com/hitz-zentroa/latxa?tab=readme-ov-file#evaluation).
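
For orientation, here is a hedged sketch of what such an evaluation call looks like with the harness's Python API (lm-evaluation-harness v0.4+). The task name below is an illustrative placeholder; use the exact harness version, task configurations and few-shot settings documented in the repository.

```python
import lm_eval

# Illustrative only: the task name and settings are placeholders, not the
# exact configuration used for the results reported below.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=HiTZ/latxa-7b-v1,dtype=float16",
    tasks=["xstorycloze_eu"],  # assumed name for the Basque X-StoryCloze split
    num_fewshot=0,
    batch_size=8,
)

print(results["results"])
```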


| Model | Belebele | X-StoryCloze | BEC | Vaxx | BHTC | coref | QNLI | WiC | Average |
|---|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| Random | 25.00 | 50.00 | 33.33 | 33.33 | 8.33 | 50.00 | 50.00 | 50.00 | 37.50 |
| LLaMA 2 7B | 26.22 | 50.43 | 41.63 | 18.60 | 20.06 | 50.94 | 48.32 | 49.64 | 38.23 |
| LLaMA 2 13B | 32.00 | 50.63 | 41.09 | 18.25 | 27.35 | 49.23 | 48.74 | 49.21 | 39.56 |
| LLaMA 2 70B | 33.56 | 51.62 | 47.47 | 21.01 | 31.01 | 52.98 | 51.26 | 51.57 | 42.56 |
| BLOOM 7B | 27.00 | 57.18 | 37.94 | 20.72 | 39.10 | 48.21 | 47.48 | 47.57 | 40.65 |
| XGLM 7B | 23.88 | 57.71 | 39.94 | 21.58 | 36.73 | 50.94 | 50.42 | 49.21 | 41.30 |
| **Latxa 7B** | 35.67 | 63.13 | 55.61 | 45.93 | 44.44 | 50.43 | 55.04 | 50.14 | 50.05 |
| **Latxa 13B** | 53.56 | 65.85 | 53.23 | 48.66 | **53.61** | 62.52 | 57.14 | 54.21 | 56.10 |
| **Latxa 70B** | **71.78** | **67.57** | **63.52** | **48.95** | 49.51 | **79.90** | **58.82** | **55.50** | **61.94** |



# **Environmental Impact**

Carbon emissions are estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).



* **Hardware Type:** HPC Cluster, 4x A100 64GB nodes
* **Hours used:** 359.2h + 468.8h + 6475.52h = 7303.52h
* **Compute cluster:** CINECA HPC
* **Compute Region:** Italy
* **Carbon Emitted:** 673.75kg CO<sub>2</sub> eq
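
For reference, the calculator's estimate is essentially GPU energy multiplied by the grid's carbon intensity. The sketch below reproduces that structure with illustrative parameter values; the GPU power draw and carbon-intensity figures are assumptions, not numbers from this card, so the result only approximates the 673.75 kg reported above.

```python
# Rough reconstruction of the carbon estimate with assumed parameters.
gpu_hours = 359.2 + 468.8 + 6475.52   # from this card: 7303.52 GPU hours
gpu_power_kw = 0.4                    # assumed A100 board power (400 W)
grid_kg_co2_per_kwh = 0.23            # assumed carbon intensity for the region

emissions_kg = gpu_hours * gpu_power_kw * grid_kg_co2_per_kwh
print(round(emissions_kg, 2))         # ~672 kg CO2eq, close to the reported 673.75 kg
```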


# **Acknowledgements**

This work has been partially supported by the Basque Government (IKER-GAITU project). The models were trained on the Leonardo supercomputer at CINECA under the EuroHPC Joint Undertaking, project EHPC-EXT-2023E01-013.