wanng committed on
Commit c48a375
1 Parent(s): 82892ea

Update README.md

Files changed (1): README.md (+43 −11)
README.md CHANGED
@@ -6,19 +6,33 @@ tags:
 inference: False
 ---
 
-IDEA-CCNL/Randeng-Pegasus-238M-Chinese model (Chinese),codes has merged into [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
-
-The 523M million parameter randeng_pegasus_large model, training with sampled gap sentence ratios on 180G Chinese data, and stochastically sample important sentences. The pretraining task just same as the paper [PEGASUS: Pre-training with Extracted Gap-sentences for
-Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf) mentioned.
-
-Different from the English version of pegasus, considering that the Chinese sentence piece is unstable, we use jieba and Bertokenizer as the tokenizer in chinese pegasus model.
-
-We also pretained a large model , available with [IDEA-CCNL/Randeng-Pegasus-523M-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-Pegasus-523M-Chinese)
-
-Task: Summarization
-
-## Usage
 ```python
 from transformers import PegasusForConditionalGeneration
 # Need to download tokenizers_pegasus.py and other Python script from Fengshenbang-LM github repo in advance,
@@ -41,13 +55,31 @@ tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenizat
 # model output: 截止昨日晚9点,包括北京梅赛德斯-奔驰销售服务有限公司东区总经理在内的多名管理人员仍留在上海办公室内
 ```
 
-## Citation
-If you find the resource is useful, please cite the following website in your paper.
-
 ```
 @misc{Fengshenbang-LM,
 title={Fengshenbang-LM},
 author={IDEA-CCNL},
-year={2022},
 howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
 }
-```
 
 inference: False
 ---
 
+# Randeng-Pegasus-238M-Chinese
+
+- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
+- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
+
+## 简介 Brief Introduction
+
+善于处理摘要任务的,中文版的PEGASUS-base。
+
+Good at solving text summarization tasks, Chinese PEGASUS-base.
+
+## 模型分类 Model Taxonomy
+
+| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
+| :----: | :----: | :----: | :----: | :----: | :----: |
+| 通用 General | 自然语言转换 NLT | 燃灯 Randeng | PEGASUS | 238M | Chinese |
+
+## 模型信息 Model Information
+
+参考论文:[PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf)
+
+为了解决中文的自动摘要任务,我们遵循PEGASUS的设计来训练中文的版本。我们使用了悟道语料库(180G版本)作为预训练数据集。此外,考虑到中文sentence piece不稳定,我们在Randeng-PEGASUS中同时使用了结巴分词和BERT分词器。我们也提供large的版本:[IDEA-CCNL/Randeng-Pegasus-523M-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-Pegasus-523M-Chinese)。
+
+To solve Chinese abstractive summarization tasks, we follow the PEGASUS design. We employ a version of the WuDao Corpora (180 GB version) as the pre-training dataset. In addition, considering that the Chinese sentence piece is unstable, we use both jieba and the BERT tokenizer in our Randeng-PEGASUS. We also provide a large version, available at [IDEA-CCNL/Randeng-Pegasus-523M-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-Pegasus-523M-Chinese).
+
+## 使用 Usage
 ```python
 from transformers import PegasusForConditionalGeneration
 # Need to download tokenizers_pegasus.py and other Python script from Fengshenbang-LM github repo in advance,
 
 # model output: 截止昨日晚9点,包括北京梅赛德斯-奔驰销售服务有限公司东区总经理在内的多名管理人员仍留在上海办公室内
 ```
 
+## 引用 Citation
+
+如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
+
+If you use this resource in your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
+
+```text
+@article{fengshenbang,
+author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
+title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
+journal = {CoRR},
+volume = {abs/2209.02970},
+year = {2022}
+}
+```
+
+也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
+
+You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
+
+```text
 @misc{Fengshenbang-LM,
 title={Fengshenbang-LM},
 author={IDEA-CCNL},
+year={2021},
 howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
 }
+```
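The gap-sentence pretraining the model card describes (important sentences are removed from the document and the model must regenerate them, with the gap-sentence ratio sampled stochastically) can be illustrated in miniature. This is a toy sketch only: the word-overlap scoring, the names `select_gap_sentences` and `make_pretraining_pair`, and the `<mask>` token are assumptions made for the illustration, not the scoring or identifiers used by the PEGASUS paper or the Fengshenbang-LM code.

```python
# Toy sketch of PEGASUS-style gap-sentence generation (GSG) data construction.
# Word-overlap scoring stands in for the ROUGE-based "principal sentence"
# scoring of the PEGASUS paper; all names here are illustrative assumptions.

def select_gap_sentences(sentences, gap_ratio=0.3):
    """Rank sentences by word overlap with the rest of the document and
    return the indices of the top `gap_ratio` fraction (the 'gap' sentences)."""
    def score(i):
        words = set(sentences[i].split())
        rest = {w for j, s in enumerate(sentences) if j != i for w in s.split()}
        return len(words & rest) / max(len(words), 1)

    n_gap = max(1, int(len(sentences) * gap_ratio))
    ranked = sorted(range(len(sentences)), key=score, reverse=True)
    return sorted(ranked[:n_gap])

def make_pretraining_pair(sentences, gap_ratio=0.3, mask_token="<mask>"):
    """Build one (source, target) pair: the source is the document with gap
    sentences masked out; the target is the concatenated gap sentences."""
    gaps = set(select_gap_sentences(sentences, gap_ratio))
    source = " ".join(mask_token if i in gaps else s
                      for i, s in enumerate(sentences))
    target = " ".join(s for i, s in enumerate(sentences) if i in gaps)
    return source, target

# Usage: the most "central" sentence becomes the pseudo-summary target.
doc = [
    "the cat sat on the mat",
    "the dog sat on the mat",
    "quantum flux harmonics",  # low overlap with the rest of the document
]
src, tgt = make_pretraining_pair(doc, gap_ratio=0.34)
# src: "<mask> the dog sat on the mat quantum flux harmonics"
# tgt: "the cat sat on the mat"
```

The actual pretraining objective scores candidate sentences with ROUGE1-F1 against the remainder of the document, as described in the PEGASUS paper linked in the Model Information section.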