---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "中国的首都是extra0京"
---

# Chinese T5-small Model

## Model description

The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. Following this work, we release a Chinese T5-small model. You can download it from the HuggingFace model hub: [t5-small-chinese-cluecorpussmall](https://huggingface.co/uer/t5-small-chinese-cluecorpussmall).

## How to use

We provide two vocabulary files for this model (vocab.txt and google_zh_with_sentinel_vocab.txt); google_zh_with_sentinel_vocab.txt is the one used to train the model. To make the Hosted Inference API work, sentinel tokens such as [extra_id_0] in google_zh_with_sentinel_vocab.txt are replaced with tokens such as extra0, so that they are not split by the tokenizer.
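
As a quick check that this replacement works (a minimal sketch; the exact tokenization result is only illustrative), you can inspect how the tokenizer handles the sentinel token:

```python
>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("uer/t5-small-chinese-cluecorpussmall")
>>> # extra0 should come back as a single token rather than being split into sub-pieces
>>> tokenizer.tokenize("中国的首都是extra0京")
```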

You can use the model directly with a pipeline for text2text generation:

```python
>>> from transformers import BertTokenizer, T5ForConditionalGeneration, Text2TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/t5-small-chinese-cluecorpussmall")
>>> model = T5ForConditionalGeneration.from_pretrained("uer/t5-small-chinese-cluecorpussmall")
>>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
>>> text2text_generator("中国的首都是extra0京", max_length=50, do_sample=False)
```
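
If you prefer to call the model without the pipeline wrapper, a minimal sketch along the same lines (the decoding arguments are illustrative) is:

```python
>>> from transformers import BertTokenizer, T5ForConditionalGeneration
>>> tokenizer = BertTokenizer.from_pretrained("uer/t5-small-chinese-cluecorpussmall")
>>> model = T5ForConditionalGeneration.from_pretrained("uer/t5-small-chinese-cluecorpussmall")
>>> # Encode the input; the sentinel token extra0 marks the span the model should fill in
>>> inputs = tokenizer("中国的首都是extra0京", return_tensors="pt")
>>> output_ids = model.generate(inputs["input_ids"], max_length=50)
>>> print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```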

## Training data

[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data.

## Training procedure

The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud TI-ONE](https://cloud.tencent.com/product/tione/). We pre-train for 1,000,000 steps with a sequence length of 128, and then for 250,000 additional steps with a sequence length of 512.
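
Both stages use T5's span-corruption objective (the --span_masking, --span_max_length 5 and --span_geo_prob 0.3 options in the commands below): contiguous spans of the input are replaced with sentinel tokens, and the model learns to reconstruct the hidden tokens behind the matching sentinels. The sketch below is only a rough illustration of that idea, not UER-py's actual implementation:

```python
import random

# Rough illustration only (not UER-py's actual code): sample a span length from a
# truncated geometric distribution with p = 0.3 (--span_geo_prob), capped at 5 (--span_max_length).
def sample_span_length(geo_prob=0.3, max_length=5):
    length = 1
    while length < max_length and random.random() > geo_prob:
        length += 1
    return length

# Replace randomly chosen spans of a token sequence with sentinel tokens (extra0, extra1, ...).
# The corrupted source keeps the sentinels; the target lists each sentinel followed by the tokens it hides.
def span_corrupt(tokens, mask_prob=0.15):
    source, target = [], []
    i, sentinel_id = 0, 0
    while i < len(tokens):
        if random.random() < mask_prob:
            span = sample_span_length()
            sentinel = f"extra{sentinel_id}"
            source.append(sentinel)
            target.append(sentinel)
            target.extend(tokens[i:i + span])
            sentinel_id += 1
            i += span
        else:
            source.append(tokens[i])
            i += 1
    return source, target

# Example: span_corrupt(list("中国的首都是北京")) might yield
# source ['中', '国', '的', '首', '都', '是', 'extra0', '京'] and target ['extra0', '北'].
```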

Stage1:

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --vocab_path models/google_zh_with_sentinel_vocab.txt \
                      --dataset_path cluecorpussmall_t5_seq128_dataset.pt \
                      --seq_length 128 --processes_num 32 \
                      --dynamic_masking --target t5
```

```
python3 pretrain.py --dataset_path cluecorpussmall_t5_seq128_dataset.pt \
                    --vocab_path models/google_zh_with_sentinel_vocab.txt \
                    --output_model_path models/cluecorpussmall_t5_seq128_model.bin \
                    --config_path models/t5/small_config.json \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
                    --learning_rate 1e-3 --batch_size 64 \
                    --embedding word --tgt_embedding word \
                    --remove_embedding_layernorm --relative_position_embedding \
                    --encoder transformer --decoder transformer \
                    --mask fully_visible --layernorm_positioning pre \
                    --target t5 --tie_weights \
                    --span_masking --span_max_length 5 --span_geo_prob 0.3
```

Stage2:

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --vocab_path models/google_zh_with_sentinel_vocab.txt \
                      --dataset_path cluecorpussmall_t5_seq512_dataset.pt \
                      --seq_length 512 --processes_num 32 --target t5 \
                      --dynamic_masking
```

```
python3 pretrain.py --dataset_path cluecorpussmall_t5_seq512_dataset.pt \
                    --pretrained_model_path models/cluecorpussmall_t5_seq128_model.bin-1000000 \
                    --vocab_path models/google_zh_with_sentinel_vocab.txt \
                    --output_model_path models/cluecorpussmall_t5_seq512_model.bin \
                    --config_path models/t5/small_config.json \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
                    --learning_rate 1e-3 --batch_size 16 \
                    --embedding word --tgt_embedding word \
                    --remove_embedding_layernorm --relative_position_embedding \
                    --encoder transformer --decoder transformer \
                    --mask fully_visible --layernorm_positioning pre \
                    --target t5 --tie_weights \
                    --span_masking --span_max_length 5 --span_geo_prob 0.3
```

Finally, we convert the pre-trained model into Huggingface's format:

```
python3 scripts/convert_t5_from_uer_to_huggingface.py --input_model_path cluecorpussmall_t5_seq512_model.bin-250000 \
                                                      --output_model_path pytorch_model.bin \
                                                      --layers_num 12 \
                                                      --type t5
```
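
The converted weights can then be loaded with the transformers library; a minimal sketch, assuming the file is placed in a local directory (here the hypothetical ./t5-small-chinese) together with the corresponding config.json and vocabulary file:

```python
>>> from transformers import BertTokenizer, T5ForConditionalGeneration
>>> # "./t5-small-chinese" is a hypothetical local directory holding the converted
>>> # pytorch_model.bin plus config.json and the vocabulary file
>>> tokenizer = BertTokenizer.from_pretrained("./t5-small-chinese")
>>> model = T5ForConditionalGeneration.from_pretrained("./t5-small-chinese")
```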

### BibTeX entry and citation info

```
@article{zhao2019uer,
  title={UER: An Open-Source Toolkit for Pre-training Models},
  author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal={EMNLP-IJCNLP 2019},
  pages={241},
  year={2019}
}
```