- This is the model checkpoint of our [ACL 2022](https://www.2022.aclweb.org/) paper "*Dict-BERT: Enhancing Language Model Pre-training with Dictionary*" [\[PDF\]](https://aclanthology.org/2022.findings-acl.150/).

In this paper, we propose DictBERT, a novel pre-trained language model that leverages rare-word definitions from English dictionaries (e.g., Wiktionary). DictBERT is based on the BERT architecture and is trained under the same settings as BERT. Please refer to our paper for more details.
- See the code for fine-tuning the model at https://github.com/wyu97/DictBERT.
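The repository above contains the official fine-tuning scripts. For orientation only, a plain BERT-style GLUE fine-tuning loop with the `transformers` Trainer looks roughly like the sketch below; the official DictBERT scripts may differ in detail (e.g., how dictionary definitions are injected), and the model id is again a placeholder:

```python
# Rough sketch of standard GLUE (SST-2) fine-tuning, NOT the official DictBERT
# recipe; see https://github.com/wyu97/DictBERT for the paper's actual scripts.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model_id = "<this-repo-id>"  # placeholder for the DictBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# Tokenize the SST-2 split of GLUE.
dataset = load_dataset("glue", "sst2")
dataset = dataset.map(lambda ex: tokenizer(ex["sentence"], truncation=True),
                      batched=True)

args = TrainingArguments(output_dir="dictbert-sst2",
                         per_device_train_batch_size=32,
                         num_train_epochs=3,
                         learning_rate=2e-5)
trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"],
                  tokenizer=tokenizer)
trainer.train()
```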
## Evaluation results

We show the performance of fine-tuning BERT and DictBERT on the GLUE benchmark tasks. CoLA is evaluated by Matthews correlation, STS-B by Pearson correlation, and the other tasks by accuracy. The models achieve the following results: