---
language: Chinese
datasets: CLUECorpusSmall
widget:
- text: 中国的首都是extra0京
---
# Chinese T5-small Model

## Model description

The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. Building on this work, we release this Chinese T5-small model. You can download the model from the HuggingFace model hub via the link t5-small-chinese-cluecorpussmall.
## How to use

We provide two vocabularies for this model (vocab.txt and google_zh_with_sentinel_vocab.txt); google_zh_with_sentinel_vocab.txt is the one used to train the model. To make the Hosted Inference API work, we replaced sentinel tokens such as [extra_id_0] in google_zh_with_sentinel_vocab.txt with tokens such as extra0, so that the sentinels are not split by the tokenizer.
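As a quick check, the sketch below (not part of the original recipe, and assuming the tokenizer shipped with this repository) shows that extra0 survives tokenization as a single token:

```python
from transformers import BertTokenizer

# Minimal sketch: confirm that the sentinel token extra0 is kept intact.
tokenizer = BertTokenizer.from_pretrained("uer/t5-small-chinese-cluecorpussmall")

print(tokenizer.tokenize("中国的首都是extra0京"))
# Expected (roughly): ['中', '国', '的', '首', '都', '是', 'extra0', '京']
# A bracketed sentinel such as [extra_id_0] would instead be split into several pieces.
```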
You can use the model directly with a pipeline for text2text generation:
```python
>>> from transformers import BertTokenizer, T5ForConditionalGeneration, Text2TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/t5-small-chinese-cluecorpussmall")
>>> model = T5ForConditionalGeneration.from_pretrained("uer/t5-small-chinese-cluecorpussmall")
>>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
>>> text2text_generator("中国的首都是extra0京", max_length=50, do_sample=False)
```
## Training data
CLUECorpusSmall is used as training data.
## Training procedure

The model is pre-trained with UER-py on Tencent Cloud TI-ONE. We pre-train for 1,000,000 steps with a sequence length of 128 and then for an additional 250,000 steps with a sequence length of 512.
Stage1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --vocab_path models/google_zh_with_sentinel_vocab.txt \
                      --dataset_path cluecorpussmall_t5_seq128_dataset.pt \
                      --seq_length 128 --processes_num 32 \
                      --dynamic_masking --target t5
```

```
python3 pretrain.py --dataset_path cluecorpussmall_t5_seq128_dataset.pt \
                    --vocab_path models/google_zh_with_sentinel_vocab.txt \
                    --output_model_path models/cluecorpussmall_t5_seq128_model.bin \
                    --config_path models/t5/small_config.json \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
                    --learning_rate 1e-3 --batch_size 64 \
                    --embedding word --tgt_embedding word \
                    --remove_embedding_layernorm --relative_position_embedding \
                    --encoder transformer --decoder transformer \
                    --mask fully_visible --layernorm_positioning pre \
                    --target t5 --tie_weights \
                    --span_masking --span_max_length 5 --span_geo_prob 0.3
```
Stage2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --vocab_path models/google_zh_with_sentinel_vocab.txt \
                      --dataset_path cluecorpussmall_t5_seq512_dataset.pt \
                      --seq_length 512 --processes_num 32 --target t5 \
                      --dynamic_masking
```
```
python3 pretrain.py --dataset_path cluecorpussmall_t5_seq512_dataset.pt \
                    --pretrained_model_path models/cluecorpussmall_t5_seq128_model.bin-1000000 \
                    --vocab_path models/google_zh_with_sentinel_vocab.txt \
                    --output_model_path models/cluecorpussmall_t5_seq512_model.bin \
                    --config_path models/t5/small_config.json \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
                    --learning_rate 1e-3 --batch_size 16 \
                    --embedding word --tgt_embedding word \
                    --remove_embedding_layernorm --relative_position_embedding \
                    --encoder transformer --decoder transformer \
                    --mask fully_visible --layernorm_positioning pre \
                    --target t5 --tie_weights \
                    --span_masking --span_max_length 5 --span_geo_prob 0.3
```
Finally, we convert the pre-trained model into Hugging Face's format:
```
python3 scripts/convert_t5_from_uer_to_huggingface.py --input_model_path cluecorpussmall_t5_seq512_model.bin-250000 \
                                                      --output_model_path pytorch_model.bin \
                                                      --layers_num 6 \
                                                      --type t5
```
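As an optional sanity check (a sketch, assuming the converted pytorch_model.bin now sits in the current directory next to config.json and the vocabulary file), the converted checkpoint can be reloaded with Transformers:

```python
from transformers import BertTokenizer, T5ForConditionalGeneration

# Sketch: reload the converted checkpoint from the local directory.
model = T5ForConditionalGeneration.from_pretrained("./")
tokenizer = BertTokenizer.from_pretrained("./")

input_ids = tokenizer("中国的首都是extra0京", return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```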
## BibTeX entry and citation info
```
@article{zhao2019uer,
  title={UER: An Open-Source Toolkit for Pre-training Models},
  author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal={EMNLP-IJCNLP 2019},
  pages={241},
  year={2019}
}
```