Update README.md
README.md CHANGED
@@ -68,7 +68,8 @@ Please use the following BibTeX entry if you use this model in your project:
 
 # Limitations
 
-Entropy-Attention Regularization mitigates lexical overfitting but does not completely remove it. We expect the model still to show biases, e.g., peculiar keywords that induce a specific prediction regardless of the context.
+Entropy-Attention Regularization mitigates lexical overfitting but does not completely remove it. We expect the model still to show biases, e.g., peculiar keywords that induce a specific prediction regardless of the context.
+
 Please refer to our paper for a quantitative evaluation of this mitigation.
 
 # License
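For context on the technique named in the changed paragraph, below is a minimal sketch of how an entropy-based attention regularizer of this kind is commonly formulated: a penalty that rewards higher-entropy (flatter) attention distributions, so predictions depend less on a few trigger keywords. This is an illustrative assumption, not this repository's actual implementation; the function name `attention_entropy_penalty`, the weight `ear_lambda`, and the usage lines are hypothetical.

```python
import torch

def attention_entropy_penalty(attentions, eps=1e-8):
    """Illustrative entropy-based attention regularizer (an assumption,
    not this repository's exact code).

    attentions: iterable of tensors shaped (batch, heads, query, key),
    e.g. the `attentions` tuple returned by Hugging Face transformers
    models when called with output_attentions=True.
    """
    penalty = 0.0
    for attn in attentions:
        # Entropy of each attention distribution over the key axis;
        # eps guards against log(0) for exactly-zero weights.
        entropy = -(attn * (attn + eps).log()).sum(dim=-1)
        # Accumulate negative mean entropy: minimizing the penalty
        # maximizes entropy, i.e. flattens the attention.
        penalty = penalty - entropy.mean()
    return penalty / len(attentions)

# Hypothetical use inside a training step:
# outputs = model(**batch, output_attentions=True)
# loss = outputs.loss + ear_lambda * attention_entropy_penalty(outputs.attentions)
```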