---
language: Chinese
datasets: CLUECorpusSmall
widget:
- text: "作为电子extra0的平台,京东绝对是领先者。如今的刘强extra1已经是身价过extra2的老板。"
---

# Chinese T5 Version 1.1

## Model description

This is the set of Chinese T5 Version 1.1 models pre-trained by [UER-py](https://arxiv.org/abs/1909.05658).

**Version 1.1**

Chinese T5 Version 1.1 includes the following improvements over our original Chinese T5 model:

- GEGLU activation in the feed-forward hidden layer, rather than ReLU (see the sketch after this list)
- Dropout turned off in pre-training
- No parameter sharing between the embedding and classifier layers
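
For readers unfamiliar with GEGLU, the following is a minimal PyTorch sketch of a gated-GELU feed-forward block in the style of T5 Version 1.1; the class and attribute names are illustrative and do not come from UER-py or Transformers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GegluFeedForward(nn.Module):
    """Gated-GELU feed-forward: GELU(x W_gate) * (x W_value), then project back."""

    def __init__(self, hidden_size: int, feedforward_size: int):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, feedforward_size, bias=False)
        self.value_proj = nn.Linear(hidden_size, feedforward_size, bias=False)
        self.out_proj = nn.Linear(feedforward_size, hidden_size, bias=False)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Gate the value projection with a GELU-activated projection of the input.
        gated = F.gelu(self.gate_proj(hidden_states)) * self.value_proj(hidden_states)
        return self.out_proj(gated)
```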

|                   |              Link              |
| ----------------- | :----------------------------: |
| **T5-v1_1-Small** | [**L=8/H=512 (Small)**][small] |
| **T5-v1_1-Base**  | [**L=12/H=768 (Base)**][base]  |

In T5 Version 1.1, spans of the input sequence are masked by so-called sentinel tokens. Each sentinel token represents a unique mask for the input sequence; they are numbered `<extra_id_0>`, `<extra_id_1>`, … up to `<extra_id_99>`. However, `<extra_id_xxx>` is split into multiple pieces by Huggingface's Hosted inference API, so we replace `<extra_id_xxx>` with `extraxxx` in the vocabulary, and BertTokenizer treats `extraxxx` as a single sentinel token.
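
As a quick sanity check of this replacement, you can inspect the tokenizer directly; the snippet below assumes the `uer/t5-v1_1-small-chinese-cluecorpussmall` checkpoint and should show that `extra0` is kept as a single piece rather than being split.

```python
>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("uer/t5-v1_1-small-chinese-cluecorpussmall")
>>> "extra0" in tokenizer.get_vocab()   # the sentinel is a single vocabulary entry
True
>>> tokenizer.tokenize("中国的首都是extra0京")   # Chinese characters are split one by one, "extra0" stays whole
['中', '国', '的', '首', '都', '是', 'extra0', '京']
```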

## How to use

You can use this model directly with a pipeline for text2text generation (taking T5-v1_1-Small as an example):

```python
>>> from transformers import BertTokenizer, MT5ForConditionalGeneration, Text2TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/t5-v1_1-small-chinese-cluecorpussmall")
>>> model = MT5ForConditionalGeneration.from_pretrained("uer/t5-v1_1-small-chinese-cluecorpussmall")
>>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
>>> text2text_generator("中国的首都是extra0京", max_length=50, do_sample=False)
[{'generated_text': 'extra0 北 extra1 extra2 extra3 extra4 extra5'}]
```
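
The generated text contains only the spans the model fills in, delimited by the sentinel tokens. A minimal post-processing sketch (assuming the single-sentinel prompt above) for splicing the prediction back into the input:

```python
import re

input_text = "中国的首都是extra0京"
generated = "extra0 北 extra1 extra2 extra3 extra4 extra5"

# Take everything between extra0 and extra1 as the span predicted for extra0,
# drop the whitespace inserted between characters, and substitute it back.
match = re.search(r"extra0\s*(.*?)\s*extra1", generated)
predicted_span = match.group(1).replace(" ", "") if match else ""
print(input_text.replace("extra0", predicted_span))  # -> 中国的首都是北京
```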

## Training data

[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data.

## Training procedure

The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train for 1,000,000 steps with a sequence length of 128, and then for an additional 250,000 steps with a sequence length of 512. We use the same hyper-parameters across the different model sizes.

Taking the case of T5-v1_1-Small:

Stage 1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --vocab_path models/google_zh_with_sentinel_vocab.txt \
                      --dataset_path cluecorpussmall_t5-v1_1_seq128_dataset.pt \
                      --processes_num 32 --seq_length 128 \
                      --dynamic_masking --target t5
```

```
python3 pretrain.py --dataset_path cluecorpussmall_t5-v1_1_seq128_dataset.pt \
                    --vocab_path models/google_zh_with_sentinel_vocab.txt \
                    --config_path models/t5-v1_1/small_config.json \
                    --output_model_path models/cluecorpussmall_t5-v1_1_small_seq128_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
                    --learning_rate 1e-3 --batch_size 64 \
                    --span_masking --span_geo_prob 0.3 --span_max_length 5 \
                    --embedding word --relative_position_embedding --remove_embedding_layernorm --tgt_embedding word \
                    --encoder transformer --mask fully_visible --layernorm_positioning pre \
                    --feed_forward gated --decoder transformer \
                    --target t5 --tie_weights
```

Stage 2:

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --vocab_path models/google_zh_with_sentinel_vocab.txt \
                      --dataset_path cluecorpussmall_t5-v1_1_seq512_dataset.pt \
                      --processes_num 32 --seq_length 512 \
                      --dynamic_masking --target t5
```

```
python3 pretrain.py --dataset_path cluecorpussmall_t5-v1_1_seq512_dataset.pt \
                    --pretrained_model_path models/cluecorpussmall_t5-v1_1_small_seq128_model.bin-1000000 \
                    --vocab_path models/google_zh_with_sentinel_vocab.txt \
                    --config_path models/t5-v1_1/small_config.json \
                    --output_model_path models/cluecorpussmall_t5-v1_1_small_seq512_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
                    --learning_rate 5e-4 --batch_size 16 \
                    --span_masking --span_geo_prob 0.3 --span_max_length 5 \
                    --embedding word --relative_position_embedding --remove_embedding_layernorm --tgt_embedding word \
                    --encoder transformer --mask fully_visible --layernorm_positioning pre \
                    --feed_forward gated --decoder transformer \
                    --target t5 --tie_weights
```

Finally, we convert the pre-trained model into Huggingface's format:

```
python3 scripts/convert_t5_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_t5-v1_1_small_seq512_model.bin-250000 \
                                                      --output_model_path pytorch_model.bin \
                                                      --layers_num 8 \
                                                      --type t5-v1_1
```
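
After conversion, a quick way to verify the exported weights is to load them locally; the directory layout below is hypothetical (the converted `pytorch_model.bin` placed next to a matching `config.json`, with the sentinel vocabulary saved as `vocab.txt`).

```python
from transformers import BertTokenizer, MT5ForConditionalGeneration

# Hypothetical local directory containing pytorch_model.bin, config.json and vocab.txt
model_dir = "./t5-v1_1-small-chinese-cluecorpussmall"

tokenizer = BertTokenizer.from_pretrained(model_dir)
model = MT5ForConditionalGeneration.from_pretrained(model_dir)

# Rough sanity check: the parameter count should match the Small configuration.
print(sum(p.numel() for p in model.parameters()))
```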

### BibTeX entry and citation info

```
@article{2020t5,
  title   = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
  author  = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
  journal = {Journal of Machine Learning Research},
  pages   = {1-67},
  year    = {2020}
}

@article{zhao2019uer,
  title   = {UER: An Open-Source Toolkit for Pre-training Models},
  author  = {Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal = {EMNLP-IJCNLP 2019},
  pages   = {241},
  year    = {2019}
}
```

[small]:https://huggingface.co/uer/t5-v1_1-small-chinese-cluecorpussmall
[base]:https://huggingface.co/uer/t5-v1_1-base-chinese-cluecorpussmall