
Misinformation-Covid-LowLearningRatebert-base-german-cased

This model is a fine-tuned version of bert-base-german-cased; the fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set (a loading sketch follows the metrics):

  • Loss: 0.5151
  • F1: 0.3793
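
This appears to be a sequence-classification checkpoint (the card reports a single F1 score), so it can be loaded with the standard transformers text-classification pipeline. A minimal sketch; the repo id below is a placeholder (the hosting account is not stated in the card), and the label names depend on how the classifier head was configured:

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual Hub path of this checkpoint.
classifier = pipeline(
    "text-classification",
    model="<user>/Misinformation-Covid-LowLearningRatebert-base-german-cased",
)

# German input, matching the bert-base-german-cased base model.
print(classifier("Dieses Hausmittel heilt COVID-19 in zwei Tagen."))
# -> [{'label': 'LABEL_1', 'score': ...}]  (label ids depend on the head config)
```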

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch reconstructing them follows the list):

  • learning_rate: 2e-07
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
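
The list above maps directly onto transformers TrainingArguments. A minimal sketch reconstructing it; output_dir and the per-epoch evaluation strategy are assumptions, not stated in the card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="misinformation-covid-bert",  # assumption: not stated in the card
    learning_rate=2e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption: the results table has one row per epoch
)
```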

Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6534        | 1.0   | 189  | 0.6298          | 0.1000 |
| 0.6467        | 2.0   | 378  | 0.6222          | 0.1379 |
| 0.6302        | 3.0   | 567  | 0.6121          | 0.0784 |
| 0.6259        | 4.0   | 756  | 0.6042          | 0.0870 |
| 0.6255        | 5.0   | 945  | 0.5987          | 0.0870 |
| 0.6091        | 6.0   | 1134 | 0.5922          | 0.0909 |
| 0.6237        | 7.0   | 1323 | 0.5881          | 0.1224 |
| 0.6019        | 8.0   | 1512 | 0.5826          | 0.1277 |
| 0.6038        | 9.0   | 1701 | 0.5779          | 0.2000 |
| 0.5996        | 10.0  | 1890 | 0.5730          | 0.1961 |
| 0.5858        | 11.0  | 2079 | 0.5678          | 0.2353 |
| 0.5794        | 12.0  | 2268 | 0.5636          | 0.2400 |
| 0.5806        | 13.0  | 2457 | 0.5587          | 0.2264 |
| 0.5586        | 14.0  | 2646 | 0.5548          | 0.2400 |
| 0.5682        | 15.0  | 2835 | 0.5514          | 0.2400 |
| 0.5631        | 16.0  | 3024 | 0.5471          | 0.2353 |
| 0.5603        | 17.0  | 3213 | 0.5425          | 0.2593 |
| 0.5437        | 18.0  | 3402 | 0.5393          | 0.2593 |
| 0.5439        | 19.0  | 3591 | 0.5368          | 0.2642 |
| 0.5470        | 20.0  | 3780 | 0.5329          | 0.2909 |
| 0.5408        | 21.0  | 3969 | 0.5297          | 0.3158 |
| 0.5327        | 22.0  | 4158 | 0.5270          | 0.3158 |
| 0.5194        | 23.0  | 4347 | 0.5256          | 0.3214 |
| 0.5206        | 24.0  | 4536 | 0.5227          | 0.3214 |
| 0.5160        | 25.0  | 4725 | 0.5205          | 0.3214 |
| 0.5103        | 26.0  | 4914 | 0.5191          | 0.3214 |
| 0.5037        | 27.0  | 5103 | 0.5172          | 0.3214 |
| 0.4974        | 28.0  | 5292 | 0.5180          | 0.3214 |
| 0.5116        | 29.0  | 5481 | 0.5156          | 0.3214 |
| 0.5006        | 30.0  | 5670 | 0.5150          | 0.3214 |
| 0.5090        | 31.0  | 5859 | 0.5141          | 0.3214 |
| 0.4832        | 32.0  | 6048 | 0.5150          | 0.3273 |
| 0.4877        | 33.0  | 6237 | 0.5133          | 0.3214 |
| 0.4900        | 34.0  | 6426 | 0.5131          | 0.3158 |
| 0.4827        | 35.0  | 6615 | 0.5143          | 0.3214 |
| 0.4986        | 36.0  | 6804 | 0.5125          | 0.3214 |
| 0.4794        | 37.0  | 6993 | 0.5131          | 0.3793 |
| 0.4809        | 38.0  | 7182 | 0.5137          | 0.3793 |
| 0.4929        | 39.0  | 7371 | 0.5114          | 0.3793 |
| 0.4650        | 40.0  | 7560 | 0.5135          | 0.3793 |
| 0.4867        | 41.0  | 7749 | 0.5121          | 0.3793 |
| 0.4685        | 42.0  | 7938 | 0.5129          | 0.3793 |
| 0.4643        | 43.0  | 8127 | 0.5142          | 0.3793 |
| 0.4804        | 44.0  | 8316 | 0.5144          | 0.3793 |
| 0.4779        | 45.0  | 8505 | 0.5141          | 0.3793 |
| 0.4701        | 46.0  | 8694 | 0.5139          | 0.3793 |
| 0.4619        | 47.0  | 8883 | 0.5146          | 0.3793 |
| 0.4558        | 48.0  | 9072 | 0.5151          | 0.3793 |
| 0.4824        | 49.0  | 9261 | 0.5152          | 0.3793 |
| 0.4758        | 50.0  | 9450 | 0.5151          | 0.3793 |
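
The F1 column was most likely produced by a compute_metrics callback passed to the Trainer. A minimal sketch, assuming integer class labels and scikit-learn's default binary-averaged F1 (the card does not state the averaging mode):

```python
import numpy as np
from sklearn.metrics import f1_score

def compute_metrics(eval_pred):
    # eval_pred.predictions are raw logits; take the argmax as the predicted class.
    predictions = np.argmax(eval_pred.predictions, axis=-1)
    # Default binary averaging is an assumption, not stated in the card.
    return {"f1": f1_score(eval_pred.label_ids, predictions)}
```

Note that F1 plateaus at 0.3793 from epoch 37 onward; combined with the very low learning rate of 2e-7 (hence the model name), this suggests slow convergence and possible underfitting.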

Framework versions

  • Transformers 4.32.1
  • PyTorch 2.1.2
  • Datasets 2.12.0
  • Tokenizers 0.13.3
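
To reproduce this environment, the listed versions can be pinned in a requirements file (`torch` is the PyPI name for PyTorch):

```
transformers==4.32.1
torch==2.1.2
datasets==2.12.0
tokenizers==0.13.3
```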