---
license: llama3
language:
  - my
base_model: meta-llama/Meta-Llama-3-8B
library_name: transformers
---

# Llama3 8B for Burmese: 5000 target vocabulary size + Mean target vocabulary initialization + 2x2LS/MTP/512 training

This model is built on top of Llama3 8B and adapted for Burmese using 30K target language sentences sampled from CC-100.

## Model Details

- Vocabulary: This model has an additional target vocabulary of 5,000 tokens.
- Target vocabulary initialization: The target rows of the embedding and LM head were initialized using mean initialization (see the sketch after this list).
- Training: This model was further pre-trained on 30K target language sentences sampled from CC-100, using the 2x2LS/MTP/512 strategies introduced in the paper.
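As a rough illustration of what mean initialization involves, the sketch below appends new tokens to the tokenizer, resizes the embedding matrices, and sets each new row of the input embedding and LM head to the mean of the original rows. The token strings here are placeholders rather than the actual 5,000 learned tokens, and this is only a simplified illustration, not the exact procedure used for this model.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Illustrative sketch: extend the base vocabulary and mean-initialize the new rows.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

new_tokens = ["<new_token_1>", "<new_token_2>"]  # placeholders for the target vocabulary
num_added = tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))

with torch.no_grad():
    input_emb = model.get_input_embeddings().weight    # token embedding matrix
    output_emb = model.get_output_embeddings().weight  # LM head matrix
    # Initialize each new row with the mean of the original (pre-expansion) rows
    input_emb[-num_added:] = input_emb[:-num_added].mean(dim=0)
    output_emb[-num_added:] = output_emb[:-num_added].mean(dim=0)
```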

### Model Description

- Language: Burmese
- License: Llama 3 Community License Agreement
- Fine-tuned from model: meta-llama/Meta-Llama-3-8B

### Model Sources

- Paper: https://arxiv.org/abs/2406.11477

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the Burmese-adapted model and its extended tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "atsuki-yamaguchi/Llama-3-8B-my-30K-5000-mean"
)
tokenizer = AutoTokenizer.from_pretrained(
    "atsuki-yamaguchi/Llama-3-8B-my-30K-5000-mean"
)
```
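Once loaded, the model can be used like any causal LM in Transformers. The prompt and generation settings below are purely illustrative.

```python
# Illustrative generation example (prompt and settings are placeholders)
inputs = tokenizer("မင်္ဂလာပါ", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```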

## Citation

```bibtex
@article{yamaguchi-etal-2024-effectively,
    title={How Can We Effectively Expand the Vocabulary of LLMs with 0.01GB of Target Language Text?},
    author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
    year={2024},
    journal={ArXiv},
    volume={abs/2406.11477},
    url={https://arxiv.org/abs/2406.11477},
}
```