uer committed on
Commit
f819596
1 Parent(s): 9cc724a

Update README.md

Files changed (1): README.md (+86, -1)
Removed line: 模型正在训练中,敬请期待~~ ("The model is still being trained, stay tuned~~")

New README.md content:
---
language: Chinese
datasets: CLUECorpusSmall
widget:
- text: "内容丰富、版式设计考究、图片华丽、印制精美。[MASK]纸箱内还放了充气袋用于保护。"

---

# Chinese Pegasus

## Model description

This model is pre-trained by [UER-py](https://arxiv.org/abs/1909.05658).

## How to use

You can use this model directly with a pipeline for text2text generation:

```python
>>> from transformers import BertTokenizer, PegasusForConditionalGeneration, Text2TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/pegasus-base-chinese-cluecorpussmall")
>>> model = PegasusForConditionalGeneration.from_pretrained("uer/pegasus-base-chinese-cluecorpussmall")
>>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
>>> text2text_generator("内容丰富、版式设计考究、图片华丽、印制精美。[MASK]纸箱内还放了充气袋用于保护。", max_length=50, do_sample=False)
[{'generated_text': '书 的 质 量 很 好 。'}]
```
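
The pipeline call above wraps tokenization, `generate()`, and decoding. For reference, the sketch below performs the same steps directly with the standard `transformers` API; it is only an illustration, and the decoded text may differ slightly from the pipeline output shown above.

```python
# Minimal sketch of the same generation without the pipeline wrapper
# (assumes the standard transformers tokenize / generate / decode API).
from transformers import BertTokenizer, PegasusForConditionalGeneration

tokenizer = BertTokenizer.from_pretrained("uer/pegasus-base-chinese-cluecorpussmall")
model = PegasusForConditionalGeneration.from_pretrained("uer/pegasus-base-chinese-cluecorpussmall")

text = "内容丰富、版式设计考究、图片华丽、印制精美。[MASK]纸箱内还放了充气袋用于保护。"
inputs = tokenizer(text, return_tensors="pt")          # encode the masked input
output_ids = model.generate(**inputs, max_length=50)   # greedy decoding, as in the pipeline call
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```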

## Training data

[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data.

## Training procedure

The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 512.

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --vocab_path models/google_zh_vocab.txt \
                      --dataset_path cluecorpussmall_bart_seq512_dataset.pt \
                      --processes_num 32 --seq_length 512 \
                      --dynamic_masking --target bart
```

```
python3 pretrain.py --dataset_path cluecorpussmall_bart_seq512_dataset.pt \
                    --vocab_path models/google_zh_vocab.txt \
                    --config_path models/bart/base_config.json \
                    --output_model_path models/cluecorpussmall_bart_base_seq512_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
                    --learning_rate 1e-4 --batch_size 16 \
                    --span_masking --span_max_length 3 \
                    --embedding word_pos --tgt_embedding word_pos \
                    --encoder transformer --mask fully_visible --decoder transformer \
                    --target bart --tie_weights --has_lmtarget_bias
```

Finally, we convert the pre-trained model into Hugging Face's format:

```
python3 scripts/convert_bart_from_uer_to_huggingface.py --input_model_path cluecorpussmall_bart_base_seq512_model.bin-250000 \
                                                        --output_model_path pytorch_model.bin \
                                                        --layers_num 6
```
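
As a quick sanity check (not part of the original conversion step), the converted weights can be loaded back with the same classes used in the usage example above, provided a matching `config.json` and vocabulary file sit next to `pytorch_model.bin`; the local directory name below is only a placeholder.

```python
# Hypothetical check that the converted checkpoint loads; assumes ./converted/
# contains pytorch_model.bin together with config.json and vocab.txt.
from transformers import BertTokenizer, PegasusForConditionalGeneration

tokenizer = BertTokenizer.from_pretrained("./converted")
model = PegasusForConditionalGeneration.from_pretrained("./converted")
print(f"Loaded model with {model.num_parameters():,} parameters")
```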

### BibTeX entry and citation info

```
@article{lewis2019bart,
  title={BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension},
  author={Lewis, Mike and Liu, Yinhan and Goyal, Naman and Ghazvininejad, Marjan and Mohamed, Abdelrahman and Levy, Omer and Stoyanov, Ves and Zettlemoyer, Luke},
  journal={arXiv preprint arXiv:1910.13461},
  year={2019}
}

@article{zhao2019uer,
  title={UER: An Open-Source Toolkit for Pre-training Models},
  author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal={EMNLP-IJCNLP 2019},
  pages={241},
  year={2019}
}
```