---
language: 
- zh
license: "apache-2.0"
---

## A Chinese MRC model built on Chinese PERT-base

**Please use `BertForQuestionAnswering` to load this model!**

This is a Chinese machine reading comprehension (MRC) model built on PERT-base and fine-tuned on a mixture of Chinese MRC datasets.
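A minimal loading sketch is below. The Hub identifier `hfl/chinese-pert-base-mrc` and the example question/context strings are assumptions for illustration; substitute this repository's actual model id.

```python
from transformers import BertForQuestionAnswering, BertTokenizerFast, pipeline

# Assumed Hub identifier; replace with this repository's actual model id.
model_name = "hfl/chinese-pert-base-mrc"

# PERT reuses the BERT architecture at fine-tuning time,
# so the stock BERT classes are the right ones to load it with.
tokenizer = BertTokenizerFast.from_pretrained(model_name)
model = BertForQuestionAnswering.from_pretrained(model_name)

# Extractive QA: the pipeline returns the answer span found in the context.
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = qa(
    question="这个模型在哪些数据集上微调?",
    context="该模型基于PERT-base构建,并在多个中文机器阅读理解数据集的混合数据上微调。",
)
print(result["answer"], result["score"])
```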

PERT is a pre-trained model based on the permuted language model (PerLM) objective, which learns text semantics in a self-supervised manner without introducing the mask token [MASK]. It yields competitive results in tasks such as reading comprehension and sequence labeling.

Results on Chinese MRC datasets (EM/F1):

(We report results from the checkpoint with the best AVG score.)

|           | CMRC 2018 Dev | DRCD Dev  | SQuAD-Zen Dev (Answerable) |    AVG    |
| :-------: | :-----------: | :-------: | :------------------------: | :-------: |
| PERT-base |   73.2/90.6   | 88.7/94.1 |         59.7/76.5          | 73.9/87.1 |

Please visit our GitHub repo for more information: https://github.com/ymcui/PERT

You may also be interested in:

- Chinese Minority Languages CINO: https://github.com/ymcui/Chinese-Minority-PLM
- Chinese MacBERT: https://github.com/ymcui/MacBERT
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit TextBrewer: https://github.com/airaria/TextBrewer

More resources by HFL: https://github.com/ymcui/HFL-Anthology