Could you please share the code for training this model (Llama 2) to extend its context length and fine-tune it on a new language?

#3 opened by Teera

A Git repo or any other pointer would be appreciated. Thank you!

MaLA-LM org

The code is based on the Hugging Face examples and lm-eval-harness. We also referred to the code of BLOOM+1, Chinese-LLaMA, and BigScience's fork of lm-eval-harness. We plan to release a package combining all of these sources in the future; please refer to the mentioned repos for now. Thanks.
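As a rough illustration of one common context-extension trick referenced in work like Chinese-LLaMA (linear RoPE position interpolation: positions are compressed by a scale factor so a model pretrained on a context of length L can attend over roughly L × scale tokens before further fine-tuning), here is a minimal self-contained sketch. The function name `rope_angles` and its signature are hypothetical, not from any of the repos mentioned above:

```python
def rope_angles(pos, dim, base=10000.0, scale=1.0):
    """Rotary-embedding angles for a single position.

    scale > 1 implements linear position interpolation: positions are
    divided by `scale`, so position 4096 with scale=2 produces the same
    angles the model saw for position 2048 during pretraining.
    """
    # One rotation angle per pair of hidden dimensions.
    return [
        (pos / scale) * base ** (-2 * i / dim)
        for i in range(dim // 2)
    ]

# With scale=2, unseen position 4096 maps onto the familiar angles of
# position 2048, which is why a short fine-tuning run can adapt the
# model to the longer context.
assert rope_angles(4096, 8, scale=2.0) == rope_angles(2048, 8, scale=1.0)
```

In practice this scaling is applied inside the attention layers (e.g. via a model config option rather than a standalone function), and is followed by continued pretraining on long sequences.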
