---
license: llama2
language:
  - el
base_model: meta-llama/Llama-2-7b-hf
library_name: transformers
---

Llama2 7B for Greek: 100 target vocabulary size + Align target vocabulary initialization + 2x2LS/MTP training

This model is built on top of Llama 2 7B and adapted for Greek using 30K target-language sentences sampled from CC-100.

Model Details

  • Vocabulary: This model has an additional target-language vocabulary of 100 tokens (a quick way to verify the expanded vocabulary is sketched after this list).
  • Target vocabulary initialization: The target weights of the embedding and LM head were initialized using Align initialization.
  • Training: This model was additionally pre-trained on 30K target language sentences sampled from CC-100. The training was conducted with the 2x2LS/MTP strategies introduced in the paper.
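
As a quick sanity check, the following sketch (not part of the original card) compares the size of the adapted tokenizer against the base Llama 2 tokenizer; the difference should be roughly 100 tokens. It assumes you have been granted access to the gated meta-llama/Llama-2-7b-hf repository.

from transformers import AutoTokenizer

# Base Llama 2 tokenizer (gated repository; access must be requested on the Hub).
base = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Greek-adapted tokenizer shipped with this model.
adapted = AutoTokenizer.from_pretrained(
    "atsuki-yamaguchi/Llama-2-7b-hf-el-30K-align-2x2ls-mtp"
)

# The adapted tokenizer should report about 100 more tokens than the base one.
print(len(base), len(adapted), len(adapted) - len(base))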

Model Description

  • Language: Greek
  • License: Llama 2 Community License Agreement
  • Fine-tuned from model: meta-llama/Llama-2-7b-hf

Model Sources

  • Paper: https://arxiv.org/abs/2406.11477

How to Get Started with the Model

Use the code below to get started with the model.

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the Greek-adapted Llama 2 7B model and its expanded tokenizer.
model = AutoModelForCausalLM.from_pretrained(
    "atsuki-yamaguchi/Llama-2-7b-hf-el-30K-align-2x2ls-mtp"
)
tokenizer = AutoTokenizer.from_pretrained(
    "atsuki-yamaguchi/Llama-2-7b-hf-el-30K-align-2x2ls-mtp"
)
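
A short generation example follows. It is a minimal sketch rather than part of the original card; the Greek prompt and the generation settings are illustrative only.

# Illustrative usage: generate a continuation for a Greek prompt.
prompt = "Η Αθήνα είναι"  # "Athens is" (example prompt)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))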

Citation

@article{yamaguchi-etal-2024-effectively,
    title={How Can We Effectively Expand the Vocabulary of LLMs with 0.01GB of Target Language Text?},
    author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
    journal={ArXiv},
    volume={abs/2406.11477},
    year={2024},
    url={https://arxiv.org/abs/2406.11477},
}