language: Chinese
datasets: CLUECorpusSmall
widget:
  - text: 中国的首都是[MASK]

Chinese RoBERTa-base-word Model

Model description

We use a SentencePiece model to tokenize the corpus and train this word-based RoBERTa-base model. You can download the model via HuggingFace from the link roberta-base-word-chinese-cluecorpussmall.

How to use

You can use this model directly with a pipeline for masked language modeling:

>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/roberta-base-word-chinese-cluecorpussmall')
>>> unmasker("中国的首都是[MASK]。")
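The pipeline returns a list of candidate fills for the [MASK] position, each given as a dict with the predicted word (token_str), its score, and the completed sequence.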

Since BertTokenizer does not support SentencePiece, we use AlbertTokenizer here.

Here is how to use this model to get the features of a given text in PyTorch:

from transformers import AlbertTokenizer, BertModel
tokenizer = AlbertTokenizer.from_pretrained('uer/roberta-base-word-chinese-cluecorpussmall')
model = BertModel.from_pretrained("uer/roberta-base-word-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
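
The returned object contains the encoder's hidden states. For example (a minimal sketch continuing the snippet above, with illustrative variable names), the last layer's output can be used as token-level features:

# Continuing from the PyTorch snippet above (illustrative only)
last_hidden = output.last_hidden_state   # tensor of shape (1, sequence_length, 768)
cls_vector = last_hidden[:, 0]           # embedding of the leading [CLS] token as a sentence feature
print(last_hidden.shape, cls_vector.shape)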

and in TensorFlow:

from transformers import AlbertTokenizer, TFBertModel
tokenizer = AlbertTokenizer.from_pretrained('uer/roberta-base-word-chinese-cluecorpussmall')
model = TFBertModel.from_pretrained("uer/roberta-base-word-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)

Training data

CLUECorpusSmall is used as training data.

Training procedure

We use Google's sentencepiece library to train the SentencePiece model.

>>> import sentencepiece as spm
>>> spm.SentencePieceTrainer.train(input='CLUEsmall_shuf.txt',
             model_prefix='clue_6',
             vocab_size=100000,
             max_sentence_length=1024,
             max_sentencepiece_length=6,
             user_defined_symbols=['[MASK]','[unused1]','[unused2]',
                '[unused3]','[unused4]','[unused5]','[unused6]',
                '[unused7]','[unused8]','[unused9]','[unused10]'],
             pad_id=0,
             pad_piece='[PAD]',
             unk_id=1,
             unk_piece='[UNK]',
             bos_id=2,
             bos_piece='[CLS]',
             eos_id=3,
             eos_piece='[SEP]',
             train_extremely_large_corpus=True
            )
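
After training, the resulting model can be loaded to segment text into word pieces. Below is a minimal sanity-check sketch; the file name clue_6.model follows from the model_prefix argument above:

>>> import sentencepiece as spm
>>> sp = spm.SentencePieceProcessor(model_file='clue_6.model')
>>> sp.encode('中国的首都是北京。', out_type=str)   # segment a sentence into word pieces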

The model is pre-trained by UER-py on Tencent Cloud TI-ONE. We pre-train for 1,000,000 steps with a sequence length of 128, and then for an additional 250,000 steps with a sequence length of 512.

Stage 1:

python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --spm_model_path models/clue_6.model \
                      --dataset_path cluecorpussmall_seq128_dataset.pt \
                      --processes_num 32 --seq_length 128 \
                      --dynamic_masking --target mlm
python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \
                    --spm_model_path models/clue_6.model \
                    --config_path models/bert/base_config.json \
                    --output_model_path models/cluecorpussmall_word_roberta_base_128.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
                    --learning_rate 1e-4 --batch_size 64 \
                    --embedding word_pos_seg --encoder transformer --mask fully_visible \
                    --target mlm --tie_weights

Stage 2:

python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --spm_model_path models/clue_6.model \
                      --dataset_path cluecorpussmall_seq512_dataset.pt \
                      --processes_num 32 --seq_length 512 \
                      --dynamic_masking --target mlm
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
                    --pretrained_model_path models/cluecorpussmall_word_roberta_base_128.bin-1000000 \
                    --spm_model_path models/clue_6.model \
                    --config_path models/bert/base_config.json \
                    --output_model_path models/cluecorpussmall_word_roberta_base_512.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
                    --learning_rate 5e-5 --batch_size 16 \
                    --embedding word_pos_seg --encoder transformer --mask fully_visible \
                    --target mlm --tie_weights

Finally, we convert the pre-trained model into Huggingface's format:

python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_word_roberta_base_512.bin-250000 \
                                                        --output_model_path pytorch_model.bin \
                                                        --layers_num 12 --target mlm
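
The converted pytorch_model.bin can then be loaded with the Transformers library. Here is a minimal sketch, assuming the converted weights are placed in a local directory (hypothetically named word_roberta_base/) together with the corresponding config.json and the SentencePiece file saved as spiece.model:

from transformers import AlbertTokenizer, BertForMaskedLM
# 'word_roberta_base' is a hypothetical local directory containing the converted files
tokenizer = AlbertTokenizer.from_pretrained('word_roberta_base')
model = BertForMaskedLM.from_pretrained('word_roberta_base')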

BibTeX entry and citation info

@article{zhao2019uer,
  title={UER: An Open-Source Toolkit for Pre-training Models},
  author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal={EMNLP-IJCNLP 2019},
  pages={241},
  year={2019}
}