---
language: 
- zh
license: "apache-2.0"
---

## A Chinese MRC model built on Chinese PERT-large

**Please use `BertForQuestionAnswering` to load this model!**

This is a Chinese machine reading comprehension (MRC) model built on PERT-large and fine-tuned on a mixture of Chinese MRC datasets.
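A minimal loading-and-inference sketch using the standard `transformers` API. The hub id `hfl/chinese-pert-large-mrc` is an assumption (verify it on the model page), and `best_span` is a simplified span-selection helper, not the official decoding code:

```python
def best_span(start_logits, end_logits):
    """Pick the highest-scoring (start, end) token pair with start <= end.
    Works on any 1-D sequence of floats (plain list or tensor row)."""
    best_score, pair = float("-inf"), (0, 0)
    for s in range(len(start_logits)):
        for e in range(s, len(end_logits)):
            score = start_logits[s] + end_logits[e]
            if score > best_score:
                best_score, pair = score, (s, e)
    return pair

def answer(question: str, context: str) -> str:
    """Extract the best answer span for `question` from `context`."""
    # Imported here so the pure helper above stays dependency-free.
    import torch
    from transformers import BertForQuestionAnswering, BertTokenizerFast

    model_id = "hfl/chinese-pert-large-mrc"  # assumed hub id -- verify it
    tokenizer = BertTokenizerFast.from_pretrained(model_id)
    model = BertForQuestionAnswering.from_pretrained(model_id)
    inputs = tokenizer(question, context, return_tensors="pt",
                       truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**inputs)
    s, e = best_span(out.start_logits[0].tolist(), out.end_logits[0].tolist())
    # decode() space-separates Chinese characters; join them back together
    return tokenizer.decode(inputs["input_ids"][0][s : e + 1]).replace(" ", "")
```

Note that `BertForQuestionAnswering` (not a PERT-specific class) is the right loader because the fine-tuned checkpoint follows the standard BERT extractive-QA head layout.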

PERT is a pre-trained model based on the permuted language model (PerLM). It learns text semantics in a self-supervised manner without introducing the mask token [MASK], and yields competitive results in tasks such as reading comprehension and sequence labeling.
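For intuition, the permutation objective can be sketched in a few lines: a fraction of token positions is shuffled in place, and the training label for each shuffled position is the original position of the token now sitting there, so no artificial [MASK] token ever enters the input. This toy function only illustrates the idea; the function name, sampling strategy, and fraction are made up, not PERT's actual pretraining code:

```python
import random

def perlm_permute(tokens, shuffle_frac=0.15, seed=0):
    """Toy sketch of the PerLM objective: shuffle a subset of positions
    and record, for each shuffled position, where its token came from."""
    rng = random.Random(seed)
    n = max(1, int(len(tokens) * shuffle_frac))
    positions = sorted(rng.sample(range(len(tokens)), n))
    shuffled = positions[:]
    rng.shuffle(shuffled)
    out = list(tokens)
    labels = {}  # position in permuted input -> original position
    for src, dst in zip(positions, shuffled):
        out[dst] = tokens[src]
        labels[dst] = src
    return out, labels
```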

Results on Chinese MRC datasets (EM/F1):

(We report the checkpoint with the best AVG score.)

|           | CMRC 2018 Dev | DRCD Dev  | SQuAD-Zen Dev (Answerable) |    AVG    |
| :-------: | :-----------: | :-------: | :------------------------: | :-------: |
| PERT-large |   73.5/90.8   | 91.2/95.7 |         63.0/79.3          | 75.9/88.6 |

Please visit our GitHub repo for more information: https://github.com/ymcui/PERT

You may also be interested in:

Chinese Minority Languages CINO: https://github.com/ymcui/Chinese-Minority-PLM    
Chinese MacBERT: https://github.com/ymcui/MacBERT  
Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm  
Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA  
Chinese XLNet: https://github.com/ymcui/Chinese-XLNet  
Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer  

More resources by HFL: https://github.com/ymcui/HFL-Anthology