update readme
README.md CHANGED

@@ -19,7 +19,7 @@ We show performance of fine-tuning BERT and DictBERT on the GLUE benchmark tasks
 
 HF: huggingface checkpoint for BERT-base uncased
 
-If no dictionary
+If no dictionary is provided during fine-tuning (i.e., the same as BERT fine-tuning), DictBERT can still achieve better performance than BERT.
 
 |      | MNLI | QNLI | QQP | SST-2 | CoLA | MRPC | RTE | STS-B | Average |
 |:----:|:----:|:----:|:---:|:-----:|:----:|:----:|:---:|:-----:|:-------:|