---
language: zh
tags:
- summarization
inference: False
---

The Randeng_Pegasus_523M_Summary model (Chinese); its code has been merged into [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).

The 523M-parameter randeng_pegasus_large model was pretrained on 180G of Chinese data with sampled gap-sentence ratios, stochastically sampling important sentences. The pretraining objective is the same as described in the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf).

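The gap-sentence objective above can be sketched in plain Python. This is a toy illustration only: the unigram-overlap score is a crude stand-in for the ROUGE-based importance scoring in the PEGASUS paper, and the sentences and `gap_ratio` value are invented for the example.

```python
# Toy sketch of PEGASUS-style gap-sentence selection (GSG): score each
# sentence by unigram overlap with the rest of the document (a rough
# ROUGE-1-like proxy), then mask the top-scoring "important" sentences.
MASK = "[MASK1]"

def select_gap_sentences(sentences, gap_ratio=0.3):
    """Return (masked_document, targets) for a toy GSG objective."""
    def score(i):
        rest = set()
        for j, s in enumerate(sentences):
            if j != i:
                rest.update(s.split())
        words = sentences[i].split()
        if not words:
            return 0.0
        return sum(w in rest for w in words) / len(words)

    n_gap = max(1, int(len(sentences) * gap_ratio))
    # Indices of the highest-scoring sentences become the generation targets
    top = set(sorted(range(len(sentences)), key=score, reverse=True)[:n_gap])
    masked = [MASK if i in top else s for i, s in enumerate(sentences)]
    targets = [sentences[i] for i in sorted(top)]
    return " ".join(masked), " ".join(targets)

doc = [
    "pegasus pretrains with gap sentences",
    "important sentences overlap the document",
    "totally unrelated filler text here",
]
masked, target = select_gap_sentences(doc)
```

The model is then trained to generate the masked-out target sentences from the rest of the document, which makes the pretraining task shaped like abstractive summarization.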
Unlike the English version of PEGASUS, and because SentencePiece is unstable on Chinese text, we use jieba together with BertTokenizer as the tokenizer in the Chinese PEGASUS model.

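The two-stage idea (word segmentation first, then BERT-style WordPiece over each word) can be sketched as follows. Everything here is a toy stand-in: `segment` is a stub playing jieba's role, and the tiny vocabulary is invented for illustration, not the model's real vocab.

```python
# Toy sketch: segment Chinese text into words first (jieba's role), then
# greedily match each word against a WordPiece vocab (BertTokenizer's role).
VOCAB = {"北京", "冬", "##奥", "##会", "滑", "##雪"}

def segment(text):
    """Stub standing in for jieba.lcut: greedy longest-match over a toy word list."""
    toy_dict = ("冬奥会", "北京", "滑雪")
    words, i = [], 0
    while i < len(text):
        for w in toy_dict:
            if text.startswith(w, i):
                words.append(w)
                i += len(w)
                break
        else:
            words.append(text[i])  # fall back to a single character
            i += 1
    return words

def wordpiece(word, vocab):
    """Greedy longest-match-first WordPiece, as in BERT's tokenizer."""
    pieces, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while start < end:
            cand = word[start:end]
            if start > 0:
                cand = "##" + cand  # continuation pieces carry the ## prefix
            if cand in vocab:
                piece = cand
                break
            end -= 1
        if piece is None:
            return ["[UNK]"]
        pieces.append(piece)
        start = end
    return pieces

tokens = [p for w in segment("北京冬奥会滑雪") for p in wordpiece(w, VOCAB)]
```

Segmenting first keeps word boundaries that a character-level or SentencePiece split would not reliably recover for Chinese; WordPiece then only ever splits within a word.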
We also pretrained a large model, available at [Randeng_Pegasus_523M_Summary](https://huggingface.co/IDEA-CCNL/Randeng_Pegasus_523M_Summary).

Task: Summarization

## Usage
```python
from transformers import PegasusForConditionalGeneration
# Download tokenizers_pegasus.py and the other Python scripts from the
# Fengshenbang-LM GitHub repo in advance
from tokenizers_pegasus import PegasusTokenizer

model = PegasusForConditionalGeneration.from_pretrained("dongxq/randeng_pegasus_523M_summary")
tokenizer = PegasusTokenizer.from_pretrained("path/to/vocab.txt")

text = "在北京冬奥会自由式滑雪女子坡面障碍技巧决赛中,中国选手谷爱凌夺得银牌。祝贺谷爱凌!今天上午,自由式滑雪女子坡面障碍技巧决赛举行。决赛分三轮进行,取选手最佳成绩排名决出奖牌。第一跳,中国选手谷爱凌获得69.90分。在12位选手中排名第三。完成动作后,谷爱凌又扮了个鬼脸,甚是可爱。第二轮中,谷爱凌在道具区第三个障碍处失误,落地时摔倒。获得16.98分。网友:摔倒了也没关系,继续加油!在第二跳失误摔倒的情况下,谷爱凌顶住压力,第三跳稳稳发挥,流畅落地!获得86.23分!此轮比赛,共12位选手参赛,谷爱凌第10位出场。网友:看比赛时我比谷爱凌紧张,加油!"
inputs = tokenizer(text, max_length=1024, return_tensors="pt")

# Generate the summary
summary_ids = model.generate(inputs["input_ids"])
tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```

## Citation
If you find this resource useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2022},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```