DictBERT model (uncased)
-- This is the model checkpoint of our ACL 2022 paper "Dict-BERT: Enhancing Language Model Pre-training with Dictionary" [PDF]. In this paper, we propose DictBERT, a pre-trained language model that leverages rare-word definitions from English dictionaries (e.g., Wiktionary). DictBERT is based on the BERT architecture and is trained under the same settings as BERT. Please refer to our paper for more details.
-- See the code for fine-tuning the model at https://github.com/wyu97/DictBERT
Evaluation results
We show the performance of fine-tuning BERT and DictBERT on the GLUE benchmark tasks. CoLA is evaluated by Matthews correlation, STS-B by Pearson correlation, and the other tasks by accuracy. The models achieve the following results:
| Model | MNLI | QNLI | QQP | SST-2 | CoLA | MRPC | RTE | STS-B | Average |
|---|---|---|---|---|---|---|---|---|---|
| BERT (HF) | 84.12 | 90.69 | 90.75 | 92.52 | 58.89 | 86.17 | 68.67 | 89.39 | 82.65 |
| DictBERT | 84.36 | 91.02 | 90.78 | 92.43 | 61.81 | 87.25 | 72.90 | 89.40 | 83.74 |
HF: the Hugging Face checkpoint for BERT-base uncased
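As a quick sanity check, the Average column in the table above can be reproduced as the unweighted mean of the eight per-task scores, rounded to two decimals (an assumption about how the average was computed; the paper does not state the weighting):

```python
# Reproduce the "Average" column of the GLUE table, assuming it is the
# unweighted mean of the eight per-task scores rounded to two decimals.
scores = {
    "BERT (HF)": [84.12, 90.69, 90.75, 92.52, 58.89, 86.17, 68.67, 89.39],
    "DictBERT": [84.36, 91.02, 90.78, 92.43, 61.81, 87.25, 72.90, 89.40],
}
averages = {model: round(sum(s) / len(s), 2) for model, s in scores.items()}
print(averages)  # {'BERT (HF)': 82.65, 'DictBERT': 83.74}
```

Both values match the reported averages, so the column is consistent with a simple mean over the eight tasks.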
Even if no dictionary is provided during fine-tuning (i.e., fine-tuning in exactly the same way as BERT), DictBERT still outperforms BERT:
| Model | MNLI | QNLI | QQP | SST-2 | CoLA | MRPC | RTE | STS-B | Average |
|---|---|---|---|---|---|---|---|---|---|
| DictBERT (w/o dict) | 84.24 | 90.99 | 90.80 | 92.51 | 60.50 | 87.04 | 73.75 | 89.37 | 83.69 |
BibTeX entry and citation info
```bibtex
@inproceedings{yu2022dict,
  title={Dict-BERT: Enhancing Language Model Pre-training with Dictionary},
  author={Yu, Wenhao and Zhu, Chenguang and Fang, Yuwei and Yu, Donghan and Wang, Shuohang and Xu, Yichong and Zeng, Michael and Jiang, Meng},
  booktitle={Findings of the Association for Computational Linguistics: ACL 2022},
  pages={1907--1918},
  year={2022}
}
```