# GQA_RoBERTa_German_legal_SQuAD_100
This model was trained from scratch on an unspecified dataset (no dataset metadata was recorded). It achieves the following results on the evaluation set:
- Loss: 0.8819
## Model description
More information needed
## Intended uses & limitations
More information needed
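Judging by the name, this appears to be an extractive question-answering model for German legal text trained in SQuAD format. Assuming that holds, a minimal inference sketch with the `transformers` question-answering pipeline might look like the following; the Hub repo id and the example question/context are placeholders, not taken from this card:

```python
from transformers import pipeline

# Hypothetical Hub repo id; replace with the actual path of this model.
qa = pipeline(
    "question-answering",
    model="<user>/GQA_RoBERTa_German_legal_SQuAD_100",
)

# Illustrative example, not from the training data:
# "Who bears the costs of the litigation?"
result = qa(
    question="Wer trägt die Kosten des Rechtsstreits?",
    context="Die Kosten des Rechtsstreits trägt die unterliegende Partei.",
)
print(result["answer"], result["score"])
```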
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a minimal reproduction sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
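These values map directly onto `transformers.TrainingArguments`. The sketch below reproduces the configuration under the assumption that the standard `Trainer` was used; `output_dir`, the model, and the dataset objects are placeholders, not specified in this card:

```python
from transformers import Trainer, TrainingArguments

# Sketch of the configuration listed above. output_dir and the model/dataset
# objects are placeholders; this card does not specify them.
args = TrainingArguments(
    output_dir="GQA_RoBERTa_German_legal_SQuAD_100",
    learning_rate=2e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    evaluation_strategy="epoch",  # assumption: matches the per-epoch losses below
    # Adam betas=(0.9, 0.999) and epsilon=1e-8 are the Trainer defaults,
    # so they need not be set explicitly.
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```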
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
No log | 1.0 | 3 | 4.0023 |
No log | 2.0 | 6 | 3.5126 |
No log | 3.0 | 9 | 2.9212 |
No log | 4.0 | 12 | 2.5720 |
No log | 5.0 | 15 | 2.2681 |
No log | 6.0 | 18 | 2.0376 |
No log | 7.0 | 21 | 1.7947 |
No log | 8.0 | 24 | 1.6217 |
No log | 9.0 | 27 | 1.4761 |
No log | 10.0 | 30 | 1.3091 |
No log | 11.0 | 33 | 1.2686 |
No log | 12.0 | 36 | 1.0582 |
No log | 13.0 | 39 | 1.0224 |
No log | 14.0 | 42 | 0.9295 |
No log | 15.0 | 45 | 0.8822 |
No log | 16.0 | 48 | 0.9108 |
No log | 17.0 | 51 | 0.7824 |
No log | 18.0 | 54 | 0.7688 |
No log | 19.0 | 57 | 0.7769 |
No log | 20.0 | 60 | 0.7093 |
No log | 21.0 | 63 | 0.7340 |
No log | 22.0 | 66 | 0.7588 |
No log | 23.0 | 69 | 0.7251 |
No log | 24.0 | 72 | 0.7637 |
No log | 25.0 | 75 | 0.7774 |
No log | 26.0 | 78 | 0.7714 |
No log | 27.0 | 81 | 0.7963 |
No log | 28.0 | 84 | 0.7883 |
No log | 29.0 | 87 | 0.7879 |
No log | 30.0 | 90 | 0.8032 |
No log | 31.0 | 93 | 0.8192 |
No log | 32.0 | 96 | 0.8438 |
No log | 33.0 | 99 | 0.8508 |
No log | 34.0 | 102 | 0.8582 |
No log | 35.0 | 105 | 0.8507 |
No log | 36.0 | 108 | 0.8469 |
No log | 37.0 | 111 | 0.8766 |
No log | 38.0 | 114 | 0.8956 |
No log | 39.0 | 117 | 0.9050 |
No log | 40.0 | 120 | 0.8936 |
No log | 41.0 | 123 | 0.8893 |
No log | 42.0 | 126 | 0.8863 |
No log | 43.0 | 129 | 0.8841 |
No log | 44.0 | 132 | 0.8710 |
No log | 45.0 | 135 | 0.8681 |
No log | 46.0 | 138 | 0.8886 |
No log | 47.0 | 141 | 0.8762 |
No log | 48.0 | 144 | 0.8697 |
No log | 49.0 | 147 | 0.8881 |
No log | 50.0 | 150 | 0.9220 |
No log | 51.0 | 153 | 0.9257 |
No log | 52.0 | 156 | 0.9059 |
No log | 53.0 | 159 | 0.9010 |
No log | 54.0 | 162 | 0.9085 |
No log | 55.0 | 165 | 0.9128 |
No log | 56.0 | 168 | 0.9034 |
No log | 57.0 | 171 | 0.8920 |
No log | 58.0 | 174 | 0.8910 |
No log | 59.0 | 177 | 0.8974 |
No log | 60.0 | 180 | 0.8969 |
No log | 61.0 | 183 | 0.8762 |
No log | 62.0 | 186 | 0.8602 |
No log | 63.0 | 189 | 0.8599 |
No log | 64.0 | 192 | 0.8621 |
No log | 65.0 | 195 | 0.8713 |
No log | 66.0 | 198 | 0.8793 |
No log | 67.0 | 201 | 0.8698 |
No log | 68.0 | 204 | 0.8604 |
No log | 69.0 | 207 | 0.8602 |
No log | 70.0 | 210 | 0.8600 |
No log | 71.0 | 213 | 0.8731 |
No log | 72.0 | 216 | 0.8828 |
No log | 73.0 | 219 | 0.8876 |
No log | 74.0 | 222 | 0.8857 |
No log | 75.0 | 225 | 0.8779 |
No log | 76.0 | 228 | 0.8786 |
No log | 77.0 | 231 | 0.8739 |
No log | 78.0 | 234 | 0.8649 |
No log | 79.0 | 237 | 0.8607 |
No log | 80.0 | 240 | 0.8558 |
No log | 81.0 | 243 | 0.8586 |
No log | 82.0 | 246 | 0.8645 |
No log | 83.0 | 249 | 0.8691 |
No log | 84.0 | 252 | 0.8724 |
No log | 85.0 | 255 | 0.8737 |
No log | 86.0 | 258 | 0.8749 |
No log | 87.0 | 261 | 0.8751 |
No log | 88.0 | 264 | 0.8757 |
No log | 89.0 | 267 | 0.8800 |
No log | 90.0 | 270 | 0.8844 |
No log | 91.0 | 273 | 0.8869 |
No log | 92.0 | 276 | 0.8855 |
No log | 93.0 | 279 | 0.8837 |
No log | 94.0 | 282 | 0.8803 |
No log | 95.0 | 285 | 0.8788 |
No log | 96.0 | 288 | 0.8789 |
No log | 97.0 | 291 | 0.8794 |
No log | 98.0 | 294 | 0.8804 |
No log | 99.0 | 297 | 0.8813 |
No log | 100.0 | 300 | 0.8819 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.0