|
|
|
--- |
|
license: gemma |
|
language: |
|
- si |
|
base_model: google/gemma-2-9b |
|
library_name: transformers |
|
--- |
|
# Gemma2 9B for Sinhala: 100 target vocabulary size + Random target vocabulary initialization + 2x2LS/MTP/512 training |
|
|
|
This model is built on top of Gemma2 9B and adapted for Sinhala using 30K target-language sentences sampled from CC-100.
|
|
|
## Model Details |
|
|
|
* **Vocabulary**: This model adds 100 target-language (Sinhala) tokens on top of the original Gemma2 9B vocabulary.
|
* **Target vocabulary initialization**: The embedding weights for the added target tokens were randomly initialized (a minimal sketch of this setup follows the list).
|
* **Training**: The model was further pre-trained on 30K target-language sentences sampled from CC-100, using the 2x2LS/MTP/512 strategies introduced in the paper.
|
|
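As a minimal sketch of the setup above (not the authors' exact code), vocabulary expansion with random initialization can be illustrated as follows; the token strings and the initialization scale are hypothetical placeholders:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

base = "google/gemma-2-9b"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical placeholder tokens standing in for the 100 Sinhala subwords
# that were actually added to this model's vocabulary.
new_tokens = ["ශ්‍රී", "ලංකාව"]
num_added = tokenizer.add_tokens(new_tokens)

# Grow the embedding matrix to cover the new tokens, then overwrite the
# newly added rows with random values (the 0.02 scale is an arbitrary choice).
model.resize_token_embeddings(len(tokenizer))
with torch.no_grad():
    embeddings = model.get_input_embeddings().weight
    embeddings[-num_added:] = torch.randn_like(embeddings[-num_added:]) * 0.02
```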
|
## Model Description |
|
|
|
- **Language:** Sinhala |
|
- **License:** Gemma Terms of Use |
|
- **Fine-tuned from model:** google/gemma-2-9b |
|
|
|
|
|
## Model Sources |
|
|
|
- **Repository:** https://github.com/gucci-j/lowres-cve |
|
- **Paper:** https://arxiv.org/abs/2406.11477 |
|
|
|
## How to Get Started with the Model |
|
Use the code below to get started with the model. |
|
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "atsuki-yamaguchi/gemma-2-9b-si-30K-rand"
)
tokenizer = AutoTokenizer.from_pretrained(
    "atsuki-yamaguchi/gemma-2-9b-si-30K-rand"
)
```
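
Once loaded, the model can be used for standard causal language-model generation. The snippet below is an illustrative example reusing `model` and `tokenizer` from above; the Sinhala prompt and decoding settings are arbitrary:

```python
# Illustrative generation example; the prompt (roughly "Sri Lanka" in Sinhala)
# and the decoding settings are arbitrary choices.
inputs = tokenizer("ශ්‍රී ලංකාව", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```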
|
|
|
|
|
## Citation |
|
``` |
|
@article{yamaguchi-etal-2024-effectively, |
|
title={How Can We Effectively Expand the Vocabulary of LLMs with 0.01GB of Target Language Text?}, |
|
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras}, |
|
year={2024}, |
|
journal={ArXiv}, |
|
volume={abs/2406.11477}, |
|
url={https://arxiv.org/abs/2406.11477}, |
|
} |
|
``` |
|
|
|
|
|
|