|
# PhoGPT: Generative Pre-training for Vietnamese |
|
We open-source a state-of-the-art 4B-parameter generative model series for Vietnamese, which includes the base pre-trained monolingual model PhoGPT-4B and its chat variant, PhoGPT-4B-Chat. The base model, PhoGPT-4B, with exactly 3.7B parameters, is pre-trained from scratch on a Vietnamese corpus of 102B tokens, using an 8192 context length and a vocabulary of 20480 token types. The chat variant, PhoGPT-4B-Chat, is obtained by fine-tuning PhoGPT-4B on a dataset of 70K instructional prompts and their responses, along with an additional 290K conversations. We demonstrate its strong performance compared to previous closed-source and open-source 7B-parameter models. More details about the general architecture and the experimental results of PhoGPT can be found in our [technical report](https://arxiv.org/abs/2311.02945):
|
```
@article{PhoGPT,
  title   = {{PhoGPT: Generative Pre-training for Vietnamese}},
  author  = {Dat Quoc Nguyen and Linh The Nguyen and Chi Tran and Dung Ngoc Nguyen and Dinh Phung and Hung Bui},
  journal = {arXiv preprint},
  volume  = {arXiv:2311.02945},
  year    = {2023}
}
```
|
**Please CITE** our technical report when PhoGPT is used to help produce published results or is incorporated into other software. |
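For reference, below is a minimal inference sketch using the Hugging Face `transformers` library. The model identifier `vinai/PhoGPT-4B-Chat` and the instruction prompt template shown here are assumptions based on common conventions for released chat models; please consult the official model card on PhoGPT's homepage for the exact values and recommended generation settings.

```python
# Minimal inference sketch for PhoGPT-4B-Chat with Hugging Face transformers.
# NOTE: the model id "vinai/PhoGPT-4B-Chat" and the prompt template below are
# assumptions; check the official model card for the exact values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "vinai/PhoGPT-4B-Chat"  # assumed Hugging Face Hub id

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # load the model's custom modeling code, if any
)
model.eval()

# Assumed instruction-following prompt template for the chat variant.
PROMPT_TEMPLATE = "### Câu hỏi: {instruction}\n### Trả lời:"
instruction = "Viết bài văn nghị luận xã hội về an toàn giao thông"
input_prompt = PROMPT_TEMPLATE.format(instruction=instruction)

# Tokenize the prompt and generate a response.
inputs = tokenizer(input_prompt, return_tensors="pt")
outputs = model.generate(
    inputs=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.pad_token_id,
)
response = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(response)
```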
|
For further information or requests, please go to [PhoGPT's homepage](https://github.com/VinAIResearch/PhoGPT)! |