Improving Black-box Robustness with In-Context Rewriting
This model is a fine-tuned version of bert-base-uncased on the ag_news dataset. Its per-epoch results on the evaluation set are reported in the training results table below.
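A minimal usage sketch with the transformers pipeline API is shown below. The repository id `your-username/bert-base-uncased-ag-news` is a placeholder, and the example sentence and predicted label are illustrative rather than taken from this model's actual output.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual location of this checkpoint.
model_id = "your-username/bert-base-uncased-ag-news"

# ag_news is a 4-way topic classification task (World, Sports, Business, Sci/Tech).
classifier = pipeline("text-classification", model=model_id)

print(classifier("NASA announces a new mission to study the outer planets."))
# e.g. [{'label': 'Sci/Tech', 'score': 0.98}] -- label names depend on the saved config.
```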
Model description: more information needed.
Intended uses & limitations: more information needed.
Training and evaluation data: more information needed.
The hyperparameters used during training are not listed in this card (more information needed). Per-epoch training results:
| Training Loss | Epoch | Step | F1     | Accuracy | Validation Loss |
|---------------|-------|------|--------|----------|-----------------|
| 0.8065        | 1.0   | 600  | 0.9060 | 0.9059   | 0.3013          |
| 0.2872        | 2.0   | 1200 | 0.9171 | 0.9170   | 0.2598          |
| 0.2156        | 3.0   | 1800 | 0.9178 | 0.9184   | 0.3117          |
| 0.1486        | 4.0   | 2400 | 0.9200 | 0.9197   | 0.3631          |
| 0.0683        | 5.0   | 3000 | 0.9202 | 0.9201   | 0.3782          |
| 0.0450        | 6.0   | 3600 | 0.9186 | 0.9188   | 0.4846          |
| 0.0218        | 7.0   | 4200 | 0.9155 | 0.9155   | 0.5898          |
| 0.0245        | 8.0   | 4800 | 0.9162 | 0.9162   | 0.6033          |
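Because the hyperparameter list is unavailable, the following is only a rough sketch of how a comparable fine-tuning run could be set up with the transformers Trainer; the learning rate, batch size, and evaluation split are illustrative assumptions, not the values actually used for this model.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Load ag_news (4 topic classes) and the bert-base-uncased tokenizer.
dataset = load_dataset("ag_news")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=4
)

# All hyperparameters below are assumptions for illustration only;
# the 8 epochs is the one value taken from the results table above.
args = TrainingArguments(
    output_dir="bert-base-uncased-ag-news",
    learning_rate=2e-5,                # assumed
    per_device_train_batch_size=32,    # assumed
    num_train_epochs=8,
    eval_strategy="epoch",             # older transformers versions call this evaluation_strategy
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],    # the actual eval split is not specified in the card
    tokenizer=tokenizer,               # enables dynamic padding via the default data collator
)
trainer.train()
```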
Base model: google-bert/bert-base-uncased
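The F1 and accuracy columns above can be recomputed with the evaluate library. The sketch below again uses a placeholder repo id, evaluates on the ag_news test split (which may not be the split this card used), and assumes macro-averaged F1, since the averaging method is not stated.

```python
import evaluate
from datasets import load_dataset
from transformers import pipeline

# Placeholder repo id -- substitute the actual location of this checkpoint.
classifier = pipeline("text-classification", model="your-username/bert-base-uncased-ag-news")

test = load_dataset("ag_news", split="test")

# Map predicted label names back to ag_news class ids via the model config.
label2id = classifier.model.config.label2id
preds = [label2id[p["label"]]
         for p in classifier(test["text"], batch_size=64, truncation=True)]

accuracy = evaluate.load("accuracy").compute(predictions=preds, references=test["label"])
# Macro averaging is an assumption; the card does not say how F1 was aggregated.
f1 = evaluate.load("f1").compute(predictions=preds, references=test["label"], average="macro")
print(accuracy, f1)
```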