---
language: zh
tags:
- summarization
- chinese
inference: False
---

Randeng_Pegasus_238M_Summary model (Chinese); the code has been merged into [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).

The 523M-parameter randeng_pegasus_large model was pretrained on 180 GB of Chinese data with sampled gap-sentence ratios, stochastically sampling important sentences. The pretraining task is the same as in the paper [PEGASUS: Pre-training with Extracted Gap-sentences for
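The gap-sentence generation objective mentioned above can be sketched as follows. This is a simplified, hypothetical illustration only (it scores sentence importance by crude word overlap rather than the ROUGE-based scoring used in the PEGASUS paper, and the function name `gap_sentence_mask` is our own); the actual pretraining code lives in Fengshenbang-LM.

```python
def gap_sentence_mask(sentences, ratio=0.25, mask_token="[MASK]"):
    """Select the top `ratio` fraction of "important" sentences and mask
    them out; the masked sentences become the generation targets.

    Importance here is a toy proxy: each sentence's word overlap with the
    rest of the document. PEGASUS uses ROUGE-1 F1 against the remainder.
    """
    def score(i):
        sent_words = set(sentences[i].split())
        rest_words = {w for j, s in enumerate(sentences) if j != i
                      for w in s.split()}
        if not sent_words:
            return 0.0
        return len(sent_words & rest_words) / len(sent_words)

    n_mask = max(1, round(len(sentences) * ratio))
    # Indices of the highest-scoring ("most important") sentences.
    masked = set(sorted(range(len(sentences)), key=score,
                        reverse=True)[:n_mask])
    inputs = [mask_token if i in masked else s
              for i, s in enumerate(sentences)]
    targets = [sentences[i] for i in sorted(masked)]
    return " ".join(inputs), " ".join(targets)
```

During pretraining the encoder sees the document with gap sentences replaced by a mask token, and the decoder is trained to generate the masked sentences, which makes the objective a close match to downstream abstractive summarization.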