arXiv:2003.00744

PhoBERT: Pre-trained language models for Vietnamese

Published on Mar 2, 2020
Authors: Dat Quoc Nguyen, Anh Tuan Nguyen

Abstract

We present PhoBERT with two versions, PhoBERT-base and PhoBERT-large, the first public large-scale monolingual language models pre-trained for Vietnamese. Experimental results show that PhoBERT consistently outperforms the recent best pre-trained multilingual model XLM-R (Conneau et al., 2020) and improves the state-of-the-art in multiple Vietnamese-specific NLP tasks including Part-of-speech tagging, Dependency parsing, Named-entity recognition and Natural language inference. We release PhoBERT to facilitate future research and downstream applications for Vietnamese NLP. Our PhoBERT models are available at https://github.com/VinAIResearch/PhoBERT.
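
The sketch below (an addition, not from the paper) shows one common way to use the released models: loading them through the Hugging Face transformers library, assuming the base checkpoint is published on the Hub under the identifier vinai/phobert-base. Note that PhoBERT expects word-segmented input, i.e. multi-syllable Vietnamese words joined by underscores (for example as produced by VnCoreNLP's RDRSegmenter), since its BPE vocabulary was learned over word-segmented text.

import torch
from transformers import AutoModel, AutoTokenizer

# Hub identifier assumed here; swap in "vinai/phobert-large" for the large version.
tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
model = AutoModel.from_pretrained("vinai/phobert-base")

# Input must already be word-segmented: "nghiên_cứu_viên" is one Vietnamese word.
sentence = "Chúng_tôi là những nghiên_cứu_viên ."

inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Contextual embeddings for each subword token: (batch, seq_len, hidden_size)
print(outputs.last_hidden_state.shape)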
