---
datasets:
- tatsu-lab/alpaca
- ewof/alpaca-instruct-unfiltered
- databricks/databricks-dolly-15k
- teknium/GPTeacher-General-Instruct
- garage-bAInd/Open-Platypus
- Honkware/oasst1-alpaca-json
- GAIR/lima
- infCapital/viet-llama2-ft-tiny
language:
- vi
---
LLaMA-2 7B Chat model with the vocabulary extended to 44,800 tokens for better Vietnamese understanding.
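
A minimal sketch of how such a vocabulary extension is typically done with the `transformers` library; the base checkpoint name and the token-list file below are assumptions for illustration, not the exact artifacts used for this model.

```python
# Hypothetical sketch: extend the LLaMA-2 tokenizer with Vietnamese tokens and
# grow the embedding matrix to match. "viet_tokens.txt" is an assumed file of
# new tokens; the base checkpoint is the public LLaMA-2 7B Chat repo.
from transformers import LlamaForCausalLM, LlamaTokenizer

base = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = LlamaTokenizer.from_pretrained(base)
model = LlamaForCausalLM.from_pretrained(base)

with open("viet_tokens.txt", encoding="utf-8") as f:
    new_tokens = [line.strip() for line in f if line.strip()]

tokenizer.add_tokens(new_tokens)               # extend the vocabulary
model.resize_token_embeddings(len(tokenizer))  # e.g. up to 44,800 entries
```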
Continual pre-training on 2B Vietnamese tokens drawn from the VnNews corpus, 10K vnthuquan books, and wikipedia_vi.
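
A rough sketch of the continual pre-training stage using the Hugging Face `Trainer`; the corpus file, checkpoint names, and hyperparameters are illustrative assumptions rather than the exact setup used here.

```python
# Illustrative continual pre-training loop; "viet_corpus.txt" stands in for the
# concatenated VnNews / vnthuquan / wikipedia_vi text, and the checkpoint name
# is a placeholder for the vocab-extended model above.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("viet-llama2-extended")  # placeholder
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("viet-llama2-extended")  # placeholder

data = load_dataset("text", data_files={"train": "viet_corpus.txt"})
data = data.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
                batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="viet-llama2-cpt",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16,
                           num_train_epochs=1,
                           bf16=True),
    train_dataset=data["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```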
Fine-tuned on the infCapital/viet-llama2-ft-tiny dataset, a combination of various instruction datasets translated into Vietnamese using OpenAI GPT-3.
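
For inference, the fine-tuned model can be loaded like any other causal LM checkpoint; the repository ID below is an assumption, so replace it with the actual ID of this model.

```python
# Hypothetical usage example; MODEL_ID is a placeholder for this model's repo ID.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "infCapital/viet-llama2-7b-chat"  # assumed ID, adjust as needed

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.float16, device_map="auto")

prompt = "Hãy giới thiệu ngắn gọn về Việt Nam."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```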
For more information, email me at duyhunghd6@gmail.com or visit http://fb.com/hungbui2013