---
license: llama2
language:
  - si
base_model: meta-llama/Llama-2-7b-hf
library_name: transformers
---

# Llama2 7B for Sinhala: 100 target vocabulary size + Align target vocabulary initialization + 2 Stage training

This model is built on top of Llama 2 7B and adapted to Sinhala using 30K target-language sentences sampled from CC-100.

## Model Details

- **Vocabulary**: This model has an additional 100 target-language (Sinhala) tokens in its vocabulary (a quick sanity check follows this list).
- **Target vocabulary initialization**: The embedding and LM head weights for the target tokens were initialized with Align initialization.
- **Training**: The model was further pre-trained on 30K target-language sentences sampled from CC-100, following the 2-stage training strategy introduced in the paper.
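As a quick sanity check on the expanded vocabulary, the sketch below (not part of the original card) compares the adapted tokenizer with the base Llama-2 tokenizer. The repository names are the ones used elsewhere in this card, and the expected difference of 100 tokens follows from the vocabulary size stated above.

```python
# Minimal sanity-check sketch (illustrative, not from the original card).
# Assumes access to both repositories on the Hugging Face Hub.
from transformers import AutoTokenizer

base_tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
target_tok = AutoTokenizer.from_pretrained(
    "atsuki-yamaguchi/Llama-2-7b-hf-si-30K-align-2stage"
)

print(len(base_tok))                    # 32000 for Llama-2
print(len(target_tok))                  # expected: 32100
print(len(target_tok) - len(base_tok))  # expected: 100 added Sinhala tokens
```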

## Model Description

- **Language**: Sinhala
- **License**: Llama 2 Community License Agreement
- **Fine-tuned from model**: meta-llama/Llama-2-7b-hf

## Model Sources

- **Paper**: https://arxiv.org/abs/2406.11477

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModelForCausalLM

# Load the base model with the expanded embedding and LM head
model = AutoModelForCausalLM.from_pretrained(
    "atsuki-yamaguchi/Llama-2-7b-hf-si-30K-align-2stage"
)
# Attach the PEFT adapter trained during language adaptation
model = PeftModelForCausalLM.from_pretrained(
    model,
    "atsuki-yamaguchi/Llama-2-7b-hf-si-30K-align-2stage"
)
# Merge the adapter weights into the base model for standalone inference
model = model.merge_and_unload()
# Load the tokenizer with the additional 100 Sinhala tokens
tokenizer = AutoTokenizer.from_pretrained(
    "atsuki-yamaguchi/Llama-2-7b-hf-si-30K-align-2stage"
)
```

## Citation

```bibtex
@article{yamaguchi-etal-2024-effectively,
    title={How Can We Effectively Expand the Vocabulary of LLMs with 0.01GB of Target Language Text?},
    author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
    year={2024},
    journal={ArXiv},
    volume={abs/2406.11477},
    url={https://arxiv.org/abs/2406.11477},
}
```