
BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese

The pre-trained model vinai/bartpho-word-base is the "base" variant of BARTpho-word, following the "base" architecture and pre-training scheme of the sequence-to-sequence denoising autoencoder BART. The general architecture and experimental results of BARTpho can be found in our paper:

```
@article{bartpho,
  title   = {{BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese}},
  author  = {Nguyen Luong Tran and Duong Minh Le and Dat Quoc Nguyen},
  journal = {arXiv preprint},
  volume  = {arXiv:2109.09701},
  year    = {2021}
}
```

Please cite our paper when BARTpho is used to help produce published results or is incorporated into other software.

For further information or requests, please go to BARTpho's homepage!
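As a minimal usage sketch, the model can be loaded through the Hugging Face transformers library. Note that BARTpho-word operates on word-segmented text, so the example sentence below is illustrative pre-segmented input (multi-syllable words joined with underscores); in practice a Vietnamese word segmenter such as VnCoreNLP would produce it.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Download the pre-trained "base" BARTpho-word checkpoint from the Hub.
tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-word-base")
model = AutoModel.from_pretrained("vinai/bartpho-word-base")

# Illustrative word-segmented input ("We are researchers").
line = "Chúng_tôi là những nghiên_cứu_viên ."
inputs = tokenizer(line, return_tensors="pt")

with torch.no_grad():
    # features.last_hidden_state has shape (batch, sequence_length, hidden_size)
    features = model(**inputs)
```

The encoder output in `features.last_hidden_state` can then be used as contextual token representations, or the checkpoint can be fine-tuned for downstream generation tasks.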
