# Improving Black-box Robustness with In-Context Rewriting
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unspecified dataset. Per-epoch results on the evaluation set are reported in the training results table below.
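A minimal inference sketch, assuming the model is used through the standard `transformers` sequence-classification API; the repository id and label names are placeholders, since the card does not specify them:

```python
# Minimal inference sketch; the repo id and labels are placeholders, not given by this card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "your-username/your-finetuned-bert"  # placeholder: this card does not state the repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

inputs = tokenizer("Example input text to classify.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

pred_id = logits.argmax(dim=-1).item()
# Label names depend on the (unspecified) fine-tuning dataset.
print(model.config.id2label[pred_id])
```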
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The hyperparameters used during training are not recorded in this card.
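Because the actual hyperparameters are not listed, the sketch below only illustrates how a `bert-base-uncased` fine-tuning run of this shape is commonly set up with the `transformers` Trainer. Every value shown (learning rate, batch size, output directory) is an illustrative placeholder, not the configuration used for this model; the toy dataset stands in for the unspecified training data.

```python
# Illustrative Trainer setup only; all hyperparameter values are placeholders,
# not the ones actually used to train this model.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-uncased",
    num_labels=2,  # assumed binary labels; the dataset is unspecified
)

# Toy stand-in for the unspecified training/evaluation data.
raw = Dataset.from_dict(
    {"text": ["a positive example", "a negative example"], "label": [1, 0]}
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_dataset = raw.map(tokenize, batched=True)
eval_dataset = train_dataset

args = TrainingArguments(
    output_dir="bert-finetuned",        # placeholder
    learning_rate=2e-5,                 # placeholder
    per_device_train_batch_size=16,     # placeholder
    num_train_epochs=13,                # matches the 13 epochs in the table below; still an assumption
    evaluation_strategy="epoch",        # named `eval_strategy` in newer transformers releases
    logging_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
```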
### Training results

| Training Loss | Epoch | Step | F1     | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:------:|:--------:|:---------------:|
| 0.929         | 1.0   | 750  | 0.6983 | 0.8609   | 0.4073          |
| 0.481         | 2.0   | 1500 | 0.7179 | 0.8689   | 0.3605          |
| 0.3564        | 3.0   | 2250 | 0.7269 | 0.8703   | 0.3834          |
| 0.2369        | 4.0   | 3000 | 0.7006 | 0.8465   | 0.5631          |
| 0.1536        | 5.0   | 3750 | 0.7237 | 0.8591   | 0.6596          |
| 0.1228        | 6.0   | 4500 | 0.7285 | 0.8660   | 0.7316          |
| 0.0831        | 7.0   | 5250 | 0.7454 | 0.8817   | 0.6420          |
| 0.0687        | 8.0   | 6000 | 0.6955 | 0.8354   | 1.1172          |
| 0.0541        | 9.0   | 6750 | 0.7143 | 0.8479   | 1.0556          |
| 0.0465        | 10.0  | 7500 | 0.7473 | 0.8889   | 0.7691          |
| 0.0404        | 11.0  | 8250 | 0.7209 | 0.8636   | 1.0274          |
| 0.0315        | 12.0  | 9000 | 0.7082 | 0.8401   | 1.2706          |
| 0.0329        | 13.0  | 9750 | 0.7380 | 0.8773   | 0.9558          |
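For reference, a sketch of a `compute_metrics` callback that would produce the F1 and accuracy columns above, assuming scikit-learn; the F1 averaging scheme (`macro` here) is an assumption, since the card does not state how F1 was computed.

```python
# Sketch of a compute_metrics callback for the Trainer; the "macro" averaging
# scheme is an assumption, as the card does not specify how F1 was calculated.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),
        "acc": accuracy_score(labels, preds),
    }
```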
### Base model

[google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased)