---
language: Chinese
datasets: CLUECorpusSmall
widget:
- text: "作为电子extra0的平台,京东绝对是领先者。如今的刘强extra1已经是身价过extra2的老板。"
---


# Chinese T5 Version 1.1

## Model description

This is the set of Chinese T5 Version 1.1 models pre-trained by [UER-py](https://arxiv.org/abs/1909.05658).

**Version 1.1**

Chinese T5 Version 1.1 includes the following improvements compared with our original Chinese T5 model:

- GEGLU activation in the feed-forward hidden layer, rather than ReLU (see the sketch after this list)
- Dropout was turned off in pre-training
- No parameter sharing between the embedding and classifier layers

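Below is a minimal PyTorch-style sketch of a gated-GELU (GEGLU) feed-forward block, included only to illustrate the first bullet; the module and layer names are assumptions for this sketch, not the exact UER-py implementation.

```python
import torch.nn as nn
import torch.nn.functional as F

class GEGLUFeedForward(nn.Module):
    """Illustrative gated-GELU (GEGLU) feed-forward block, in the spirit of T5 v1.1."""

    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        self.wi_0 = nn.Linear(hidden_size, intermediate_size, bias=False)  # gate projection
        self.wi_1 = nn.Linear(hidden_size, intermediate_size, bias=False)  # value projection
        self.wo = nn.Linear(intermediate_size, hidden_size, bias=False)    # output projection

    def forward(self, hidden_states):
        # GEGLU: GELU of the gate path, multiplied elementwise with the linear value path
        return self.wo(F.gelu(self.wi_0(hidden_states)) * self.wi_1(hidden_states))
```
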
|                   |              Link              |
| ----------------- | :----------------------------: |
| **T5-v1_1-Small** | [**L=8/H=512 (Small)**][small] |
| **T5-v1_1-Base**  | [**L=12/H=768 (Base)**][base]  |

In T5 Version 1.1, spans of the input sequence are masked by so-called sentinel tokens. Each sentinel token represents a unique mask token for the input sequence and is written `<extra_id_0>`, `<extra_id_1>`, … up to `<extra_id_99>`. However, `<extra_id_xxx>` is split into multiple pieces by Huggingface's Hosted inference API, so we replace `<extra_id_xxx>` with `extraxxx` in the vocabulary, and BertTokenizer treats `extraxxx` as a single sentinel token.

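As a quick sanity check (a minimal sketch; the exact token list is not reproduced here), you can confirm that the tokenizer keeps `extra0` as a single sentinel token rather than splitting it into word pieces:

```python
>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("uer/t5-v1_1-small-chinese-cluecorpussmall")
>>> tokenizer.tokenize("中国的首都是extra0京")  # 'extra0' should appear as one token in the returned list
```
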
## How to use

You can use this model directly with a pipeline for text2text generation (taking the case of T5-v1_1-Small):

```python
>>> from transformers import BertTokenizer, MT5ForConditionalGeneration, Text2TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/t5-v1_1-small-chinese-cluecorpussmall")
>>> model = MT5ForConditionalGeneration.from_pretrained("uer/t5-v1_1-small-chinese-cluecorpussmall")
>>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
>>> text2text_generator("中国的首都是extra0京", max_length=50, do_sample=False)
[{'generated_text': 'extra0 北 extra1 extra2 extra3 extra4 extra5'}]
```

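If you prefer not to use the pipeline wrapper, a roughly equivalent sketch (the generation settings simply mirror the pipeline call above) calls `generate` directly:

```python
>>> from transformers import BertTokenizer, MT5ForConditionalGeneration
>>> tokenizer = BertTokenizer.from_pretrained("uer/t5-v1_1-small-chinese-cluecorpussmall")
>>> model = MT5ForConditionalGeneration.from_pretrained("uer/t5-v1_1-small-chinese-cluecorpussmall")
>>> input_ids = tokenizer("中国的首都是extra0京", return_tensors="pt").input_ids
>>> outputs = model.generate(input_ids, max_length=50, do_sample=False)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)  # decoded sentinel sequence, analogous to the pipeline output
```
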
## Training data

[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data.

## Training procedure

The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train for 1,000,000 steps with a sequence length of 128 and then pre-train for 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters across the different model sizes.

Taking the case of T5-v1_1-Small:

Stage 1:

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --vocab_path models/google_zh_with_sentinel_vocab.txt \
                      --dataset_path cluecorpussmall_t5-v1_1_seq128_dataset.pt \
                      --processes_num 32 --seq_length 128 \
                      --dynamic_masking --target t5
```

```
python3 pretrain.py --dataset_path cluecorpussmall_t5-v1_1_seq128_dataset.pt \
                    --vocab_path models/google_zh_with_sentinel_vocab.txt \
                    --config_path models/t5-v1_1/small_config.json \
                    --output_model_path models/cluecorpussmall_t5-v1_1_small_seq128_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
                    --learning_rate 1e-3 --batch_size 64 \
                    --span_masking --span_geo_prob 0.3 --span_max_length 5 \
                    --embedding word --relative_position_embedding --remove_embedding_layernorm --tgt_embedding word \
                    --encoder transformer --mask fully_visible --layernorm_positioning pre \
                    --feed_forward gated --decoder transformer --target t5
```

Stage 2:

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --vocab_path models/google_zh_with_sentinel_vocab.txt \
                      --dataset_path cluecorpussmall_t5-v1_1_seq512_dataset.pt \
                      --processes_num 32 --seq_length 512 \
                      --dynamic_masking --target t5
```

```
python3 pretrain.py --dataset_path cluecorpussmall_t5-v1_1_seq512_dataset.pt \
                    --pretrained_model_path models/cluecorpussmall_t5-v1_1_small_seq128_model.bin-1000000 \
                    --vocab_path models/google_zh_with_sentinel_vocab.txt \
                    --config_path models/t5-v1_1/small_config.json \
                    --output_model_path models/cluecorpussmall_t5-v1_1_small_seq512_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
                    --learning_rate 5e-4 --batch_size 16 \
                    --span_masking --span_geo_prob 0.3 --span_max_length 5 \
                    --embedding word --relative_position_embedding --remove_embedding_layernorm --tgt_embedding word \
                    --encoder transformer --mask fully_visible --layernorm_positioning pre \
                    --feed_forward gated --decoder transformer --target t5
```

Finally, we convert the pre-trained model into Huggingface's format:

```
python3 scripts/convert_t5_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_t5-v1_1_small_seq512_model.bin-250000 \
                                                      --output_model_path pytorch_model.bin \
                                                      --layers_num 8 \
                                                      --type t5-v1_1
```

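As a sanity check (a minimal sketch; it assumes the converted `pytorch_model.bin` is placed in a hypothetical local directory `./converted` together with the matching `config.json` and vocabulary files for this model), the converted weights can be loaded back with the same classes used above:

```python
>>> from transformers import BertTokenizer, MT5ForConditionalGeneration
>>> tokenizer = BertTokenizer.from_pretrained("./converted")          # "./converted" is an assumed local path
>>> model = MT5ForConditionalGeneration.from_pretrained("./converted")
```
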
### BibTeX entry and citation info

```
@article{2020t5,
  title   = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
  author  = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
  journal = {Journal of Machine Learning Research},
  pages   = {1-67},
  year    = {2020}
}

@article{zhao2019uer,
  title   = {UER: An Open-Source Toolkit for Pre-training Models},
  author  = {Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal = {EMNLP-IJCNLP 2019},
  pages   = {241},
  year    = {2019}
}
```

[small]:https://huggingface.co/uer/t5-v1_1-small-chinese-cluecorpussmall
[base]:https://huggingface.co/uer/t5-v1_1-base-chinese-cluecorpussmall