xww033 committed on
Commit d5aec00
1 Parent(s): fc4a858

Update README.md

Files changed (1)
  1. README.md +4 -11
README.md CHANGED
@@ -1,16 +1,9 @@
 ---
 license: mit
 ---
- # From Clozing to Comprehending: Retrofitting Pre-trained Masked Language Model to Pre-trained Machine Reader
- Pre-trained Machine Reader (PMR) is pre-trained with 18 million Machine Reading Comprehension (MRC) examples constructed with Wikipedia Hyperlinks.
- It was introduced in the paper From Clozing to Comprehending: Retrofitting Pre-trained Masked Language Model to Pre-trained Machine Reader by
- Weiwen Xu, Xin Li, Wenxuan Zhang, Meng Zhou, Wai Lam, Luo Si, Lidong Bing
- and first released in [this repository](https://github.com/DAMO-NLP-SG/PMR).
 
-
-
- ## Model description
- This model is initialized with [PMR-large](https://huggingface.co/DAMO-NLP-SG/PMR-large) and further fine-tuned with 4 NER training data, namely [CoNLL](https://huggingface.co/datasets/conll2003), [WNUT17](https://huggingface.co/datasets/wnut_17), [ACE2004](https://paperswithcode.com/sota/nested-named-entity-recognition-on-ace-2004), and [ACE2005](https://paperswithcode.com/sota/nested-named-entity-recognition-on-ace-2005).
+ ## NER-PMR-large
+ NER-PMR-large is initialized with [PMR-large](https://huggingface.co/DAMO-NLP-SG/PMR-large) and further fine-tuned on 4 NER training datasets, namely [CoNLL](https://huggingface.co/datasets/conll2003), [WNUT17](https://huggingface.co/datasets/wnut_17), [ACE2004](https://paperswithcode.com/sota/nested-named-entity-recognition-on-ace-2004), and [ACE2005](https://paperswithcode.com/sota/nested-named-entity-recognition-on-ace-2005).
 
 The model performance on the test sets is:
 
@@ -19,11 +12,11 @@ The model performance on the test sets is:
 |RoBERTa-large (single-task model)| 92.8 | 57.1 | 86.3 | 87.0 |
 |PMR-large (single-task model)| 93.6 | 60.8 | 87.5 | 87.4 |
 |NER-PMR-large (multi-task model)| 92.9 | 54.7 | 87.8 | 88.4 |
+
 Note that RoBERTa-large and PMR-large are single-task fine-tuned models, while NER-PMR-large is a multi-task fine-tuned model.
 
 ### How to use
- You can try the codes from [this repo](https://github.com/DAMO-NLP-SG/PMR/NER).
-
+ You can try the code from [this repo](https://github.com/DAMO-NLP-SG/PMR/NER) for both training and inference.
 
 
 ### BibTeX entry and citation info
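
As a complement to the "How to use" pointer in the diff above, here is a minimal, non-authoritative loading sketch. It assumes the checkpoint is hosted under the hub id `DAMO-NLP-SG/NER-PMR-large` and that its tokenizer and encoder load through the standard `transformers` Auto classes; the MRC-style span-extraction head, the actual query templates, and the decoding logic are defined in the linked repo, so its scripts remain the reference for real training and inference.

```python
# Minimal loading sketch. Assumptions: the hub id and the query wording below are
# illustrative; the PMR repo defines the actual extractor head and the exact
# query/decoding format used for NER.
from transformers import AutoTokenizer, AutoModel

model_id = "DAMO-NLP-SG/NER-PMR-large"  # assumed hub id for this model card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)  # loads the RoBERTa-style encoder; task-specific weights may be skipped

# PMR casts NER as machine reading comprehension: the entity type serves as the
# query, and candidate answer spans are extracted from the context.
query = "Person. Find all person entities in the text."  # illustrative wording only
context = "Barack Obama was born in Hawaii."

inputs = tokenizer(query, context, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # contextual representations; span scoring is handled by the repo's code
```

If the checkpoint ships a custom architecture rather than a plain RoBERTa config, loading it through the model class provided in the PMR repo is the safer path.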