---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "内容丰富、版式设计考究、图片华丽、印制精美。[MASK]纸箱内还放了充气袋用于保护。"
---
# Chinese Pegasus
## Model description
This model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658). In addition, the model can also be pre-trained with [TencentPretrain](https://github.com/Tencent/TencentPretrain), introduced in [this paper](https://arxiv.org/abs/2212.06385), which inherits UER-py to support models with over one billion parameters and extends it to a multimodal pre-training framework.
You can download the set of Chinese PEGASUS models either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | Link |
| ----------------- | :----------------------------: |
| **PEGASUS-Base** | [**L=12/H=768 (Base)**][base] |
| **PEGASUS-Large** | [**L=16/H=1024 (Large)**][large] |
## How to use
You can use this model directly with a pipeline for text2text generation (taking PEGASUS-Base as an example):
```python
>>> from transformers import BertTokenizer, PegasusForConditionalGeneration, Text2TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/pegasus-base-chinese-cluecorpussmall")
>>> model = PegasusForConditionalGeneration.from_pretrained("uer/pegasus-base-chinese-cluecorpussmall")
>>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
>>> text2text_generator("内容丰富、版式设计考究、图片华丽、印制精美。[MASK]纸箱内还放了充气袋用于保护。", max_length=50, do_sample=False)
[{'generated_text': '书 的 质 量 很 好 。'}]
```
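Note that the model uses `BertTokenizer`, which tokenizes Chinese text at the character level, so the generated text comes back with spaces between characters. A small post-processing sketch (the helper name is ours, not part of the model card) joins them:

```python
# The BertTokenizer used by this model tokenizes Chinese character by
# character, so pipeline output contains spaces between characters.
def clean_generated(text: str) -> str:
    """Remove inter-character spaces from the pipeline output."""
    return text.replace(" ", "")

# Output from the example above:
result = [{"generated_text": "书 的 质 量 很 好 。"}]
print(clean_generated(result[0]["generated_text"]))  # 书的质量很好。
```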
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data.
## Training procedure
The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 512.
Taking PEGASUS-Base as an example:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall_bert.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_pegasus_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--data_processor gsg --sentence_selection_strategy random
```
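The `gsg` data processor implements PEGASUS's gap-sentence generation objective: with `--sentence_selection_strategy random`, sentences are chosen at random, replaced by a mask token in the source, and used as the generation target. A minimal sketch of the idea, simplified to a single gap sentence (illustrative only; the actual UER-py processor also handles tokenization, truncation, and the gap-sentence ratio):

```python
import random

def gsg_mask_random(sentences, mask_token="[MASK]", seed=None):
    """Gap-sentence generation, 'random' selection strategy (simplified):
    pick one sentence at random, replace it with the mask token in the
    source, and use the removed sentence as the generation target."""
    rng = random.Random(seed)
    idx = rng.randrange(len(sentences))
    target = sentences[idx]
    source = sentences[:idx] + [mask_token] + sentences[idx + 1:]
    return "".join(source), target

doc = ["内容丰富、版式设计考究、图片华丽、印制精美。",
       "书的质量很好。",
       "纸箱内还放了充气袋用于保护。"]
source, target = gsg_mask_random(doc, seed=0)
print(source)  # document with one sentence replaced by [MASK]
print(target)  # the removed sentence, to be generated by the model
```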
```
python3 pretrain.py --dataset_path cluecorpussmall_pegasus_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/pegasus/base_config.json \
--output_model_path models/cluecorpussmall_pegasus_base_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 8
```
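For reference, these flags give an effective global batch of `world_size × batch_size = 64` sequences per step. A quick sanity check of the token budget (assuming every sequence is packed to the full length):

```python
# Values taken from the pretrain.py flags above.
world_size = 8          # --world_size
per_gpu_batch = 8       # --batch_size (per GPU)
seq_length = 512        # --seq_length
total_steps = 1_000_000  # --total_steps

global_batch = world_size * per_gpu_batch       # 64 sequences per step
tokens_per_step = global_batch * seq_length     # 32,768 tokens per step
total_tokens = tokens_per_step * total_steps    # ~32.8B tokens over training
print(global_batch, tokens_per_step, total_tokens)
```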
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_pegasus_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_pegasus_base_seq512_model.bin-1000000 \
--output_model_path pytorch_model.bin \
--layers_num 12
```
### BibTeX entry and citation info
```
@inproceedings{zhang2020pegasus,
title={Pegasus: Pre-training with extracted gap-sentences for abstractive summarization},
author={Zhang, Jingqing and Zhao, Yao and Saleh, Mohammad and Liu, Peter},
booktitle={International Conference on Machine Learning},
pages={11328--11339},
year={2020},
organization={PMLR}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
@article{zhao2023tencentpretrain,
title={TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities},
author={Zhao, Zhe and Li, Yudong and Hou, Cheng and Zhao, Jing and others},
journal={ACL 2023},
pages={217},
  year={2023}
}
```
[base]:https://huggingface.co/uer/pegasus-base-chinese-cluecorpussmall
[large]:https://huggingface.co/uer/pegasus-large-chinese-cluecorpussmall