---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
license: llama2
pipeline_tag: text-generation
---
# MaLA-500

MaLA-500 is a large language model designed to cover 534 languages. It builds on LLaMA 2 7B and combines continued pretraining, vocabulary extension to 260,164 tokens, and LoRA low-rank adaptation.
- **Continued Pretraining:** Enhances the model's ability to adapt to a wide range of languages.
- **LoRA Low-Rank Adaptation:** Lightweight low-rank adapters are trained on top of the base model, keeping adaptation efficient.
- **Vocabulary Extension:** MaLA-500 has an extended vocabulary size of 260,164 (a quick check is sketched after this list).
- **Multilingual Proficiency:** Trained on Glot500-c, covering 534 languages.
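
The extended vocabulary ships with the released tokenizer; as an optional sanity check (a sketch, not part of the official usage instructions), you can load the tokenizer and print its length:

```python
from transformers import AutoTokenizer

# Optional check: the MaLA-500 tokenizer exposes the extended vocabulary
tokenizer = AutoTokenizer.from_pretrained('MaLA-LM/mala-500')
print(len(tokenizer))  # expected to match the extended vocabulary size, 260164
```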
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the LLaMA 2 7B base model and resize its embeddings to the extended vocabulary
base_model = AutoModelForCausalLM.from_pretrained('meta-llama/Llama-2-7b-hf')
base_model.resize_token_embeddings(260164)

# Load the extended tokenizer and apply the MaLA-500 LoRA adapter
tokenizer = AutoTokenizer.from_pretrained('MaLA-LM/mala-500')
model = PeftModel.from_pretrained(base_model, 'MaLA-LM/mala-500')
```
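
Once the adapter is loaded, the wrapped model can be used like any other causal LM. The snippet below is a minimal generation sketch; the prompt and generation settings are illustrative, not prescribed by the model card:

```python
# Minimal generation sketch (prompt and max_new_tokens are illustrative)
inputs = tokenizer('The capital of Finland is', return_tensors='pt')
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```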
## Citation
```
@misc{lin2024mala500,
title={MaLA-500: Massive Language Adaptation of Large Language Models},
author={Peiqin Lin and Shaoxiong Ji and Jörg Tiedemann and André F. T. Martins and Hinrich Schütze},
year={2024},
eprint={2401.13303},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```