uer committed
Commit
c2687df
1 Parent(s): bf639ed

Update README.md

Files changed (1)
  1. README.md +129 -1
README.md CHANGED
---
language: Chinese
datasets: CLUECorpusSmall
widget:
- text: "作为电子extra0的平台,京东绝对是领先者。如今的刘强extra1已经是身价过extra2的老板。"
---

# Chinese T5

## Model description

This is the set of Chinese T5 models pre-trained by [UER-py](https://arxiv.org/abs/1909.05658).

The Text-to-Text Transfer Transformer (T5) leverages a unified text-to-text format and attains state-of-the-art results on a wide variety of English-language NLP tasks. Following their work, we released a series of Chinese T5 models.

|              |              Link              |
| ------------ | :----------------------------: |
| **T5-Small** | [**L=6/H=512 (Small)**][small] |
| **T5-Base**  | [**L=12/H=768 (Base)**][base]  |

In T5, spans of the input sequence are masked by so-called sentinel tokens. Each sentinel token represents a unique mask token for the input sequence and takes the form `<extra_id_0>`, `<extra_id_1>`, …, up to `<extra_id_99>`. However, `<extra_id_xxx>` is split into multiple pieces by Hugging Face's hosted inference API. Therefore, we replace `<extra_id_xxx>` with `extraxxx` in the vocabulary, so that BertTokenizer regards `extraxxx` as a single sentinel token.

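
To see the effect of this vocabulary trick, the snippet below (an illustrative check, not part of the original card) tokenizes a sentence containing `extra0`; the sentinel is expected to come out as a single token rather than several word pieces:

```python
>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("uer/t5-small-chinese-cluecorpussmall")
>>> # "extra0" stands in for the first sentinel token <extra_id_0>
>>> tokenizer.tokenize("中国的首都是extra0京")
>>> # each sentinel should map to a single vocabulary id
>>> tokenizer.convert_tokens_to_ids(["extra0", "extra1"])
```
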
## How to use

You can use this model directly with a pipeline for text2text generation (taking the case of T5-Small):

```python
>>> from transformers import BertTokenizer, T5ForConditionalGeneration, Text2TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/t5-small-chinese-cluecorpussmall")
>>> model = T5ForConditionalGeneration.from_pretrained("uer/t5-small-chinese-cluecorpussmall")
>>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
>>> text2text_generator("中国的首都是extra0京", max_length=50, do_sample=False)
    [{'generated_text': 'extra0 北 extra1 extra2 extra3 extra4 extra5'}]
```

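If you prefer to call the model directly rather than through the pipeline, roughly the same generation can be written as follows (an equivalent sketch, not from the original card; greedy decoding mirrors `do_sample=False` above and the exact output may differ):

```python
>>> from transformers import BertTokenizer, T5ForConditionalGeneration
>>> tokenizer = BertTokenizer.from_pretrained("uer/t5-small-chinese-cluecorpussmall")
>>> model = T5ForConditionalGeneration.from_pretrained("uer/t5-small-chinese-cluecorpussmall")
>>> inputs = tokenizer("中国的首都是extra0京", return_tensors="pt")
>>> # greedy decoding, as with do_sample=False in the pipeline call
>>> output_ids = model.generate(inputs["input_ids"], max_length=50)
>>> tokenizer.decode(output_ids[0], skip_special_tokens=True)
```
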
## Training data

[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data.

## Training procedure

The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters across the different model sizes.

Taking the case of T5-Small:

Stage 1:

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --vocab_path models/google_zh_with_sentinel_vocab.txt \
                      --dataset_path cluecorpussmall_t5_seq128_dataset.pt \
                      --processes_num 32 --seq_length 128 \
                      --dynamic_masking --target t5
```

```
python3 pretrain.py --dataset_path cluecorpussmall_t5_seq128_dataset.pt \
                    --vocab_path models/google_zh_with_sentinel_vocab.txt \
                    --config_path models/t5/small_config.json \
                    --output_model_path models/cluecorpussmall_t5_small_seq128_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
                    --learning_rate 1e-3 --batch_size 64 \
                    --span_masking --span_geo_prob 0.3 --span_max_length 5 \
                    --embedding word --relative_position_embedding --remove_embedding_layernorm --tgt_embedding word \
                    --encoder transformer --mask fully_visible --layernorm_positioning pre --decoder transformer \
                    --target t5 --tie_weights
```

Stage 2:

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --vocab_path models/google_zh_with_sentinel_vocab.txt \
                      --dataset_path cluecorpussmall_t5_seq512_dataset.pt \
                      --processes_num 32 --seq_length 512 \
                      --dynamic_masking --target t5
```

```
python3 pretrain.py --dataset_path cluecorpussmall_t5_seq512_dataset.pt \
                    --pretrained_model_path models/cluecorpussmall_t5_small_seq128_model.bin-1000000 \
                    --vocab_path models/google_zh_with_sentinel_vocab.txt \
                    --config_path models/t5/small_config.json \
                    --output_model_path models/cluecorpussmall_t5_small_seq512_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
                    --learning_rate 5e-4 --batch_size 16 \
                    --span_masking --span_geo_prob 0.3 --span_max_length 5 \
                    --embedding word --relative_position_embedding --remove_embedding_layernorm --tgt_embedding word \
                    --encoder transformer --mask fully_visible --layernorm_positioning pre --decoder transformer \
                    --target t5 --tie_weights
```

Finally, we convert the pre-trained model into Hugging Face's format:

```
python3 scripts/convert_t5_from_uer_to_huggingface.py --input_model_path cluecorpussmall_t5_small_seq512_model.bin-250000 \
                                                      --output_model_path pytorch_model.bin \
                                                      --layers_num 6 \
                                                      --type t5
```

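After conversion, a quick sanity check is to load the converted checkpoint with transformers (a hedged sketch: it assumes the matching `config.json` and vocabulary file have been placed alongside `pytorch_model.bin`, which the commands above do not spell out):

```python
>>> from transformers import BertTokenizer, T5ForConditionalGeneration
>>> # "." stands for the (hypothetical) directory holding pytorch_model.bin, config.json and vocab.txt
>>> model = T5ForConditionalGeneration.from_pretrained(".")
>>> tokenizer = BertTokenizer.from_pretrained(".")
>>> model.num_parameters()  # rough size check against the T5-Small configuration
```
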
### BibTeX entry and citation info

```
@article{2020t5,
  title   = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
  author  = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
  journal = {Journal of Machine Learning Research},
  pages   = {1-67},
  year    = {2020}
}

@article{zhao2019uer,
  title   = {UER: An Open-Source Toolkit for Pre-training Models},
  author  = {Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal = {EMNLP-IJCNLP 2019},
  pages   = {241},
  year    = {2019}
}
```

[small]:https://huggingface.co/uer/t5-small-chinese-cluecorpussmall
[base]:https://huggingface.co/uer/t5-base-chinese-cluecorpussmall