
BERT-evidence-types

This model is a fine-tuned version of bert-base-uncased on the evidence types dataset (a minimal usage sketch is shown after the results below). It achieves the following results on the evaluation set:

  • Loss: 2.8008
  • Macro f1: 0.4227
  • Weighted f1: 0.6976
  • Accuracy: 0.7154
  • Balanced accuracy: 0.3876
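
As a hedged usage sketch (not part of the original card), the fine-tuned checkpoint can presumably be loaded with the standard Transformers text-classification pipeline. The checkpoint path below is a placeholder; substitute the actual Hub repo id or a local directory containing the saved model.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

# Placeholder path: replace with the actual Hub repo id or local checkpoint directory.
checkpoint = "path/to/BERT-evidence-types"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

# Text-classification pipeline returns the predicted evidence-type label and its score.
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("According to a 2019 survey, 63% of respondents agreed with the proposal."))
```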

Training and evaluation data

The dataset, as well as the code used to fine-tune this model, can be found in the GitHub repository BA-Thesis-Information-Science-Persuasion-Strategies.

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch using them follows the list):

  • learning_rate: 3e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
  • mixed_precision_training: Native AMP
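
The sketch below shows how these hyperparameters map onto the Transformers TrainingArguments API; it is a minimal illustration, not the exact training script (which is in the linked GitHub repository). The number of labels, the per-epoch evaluation strategy, and the output directory name are assumptions; the Adam betas and epsilon listed above are the Trainer defaults and so are not set explicitly.

```python
from transformers import AutoModelForSequenceClassification, TrainingArguments

# Assumption: num_labels must be set to the number of evidence-type classes in the dataset.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=6)

# Hyperparameters taken from the list above; fp16=True corresponds to "Native AMP".
training_args = TrainingArguments(
    output_dir="BERT-evidence-types",   # assumed output directory name
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,
    evaluation_strategy="epoch",        # assumption: the results table reports one evaluation per epoch
)

# These arguments would then be passed to a Trainer together with the tokenized
# train/eval splits (omitted here; see the linked repository for the full script).
```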

Training results

| Training Loss | Epoch | Step | Validation Loss | Macro f1 | Weighted f1 | Accuracy | Balanced accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:-----------------:|
| 1.1148        | 1.0   | 125  | 1.0531          | 0.2566   | 0.6570      | 0.6705   | 0.2753            |
| 0.7546        | 2.0   | 250  | 0.9725          | 0.3424   | 0.6947      | 0.7002   | 0.3334            |
| 0.4757        | 3.0   | 375  | 1.1375          | 0.3727   | 0.7113      | 0.7184   | 0.3680            |
| 0.2637        | 4.0   | 500  | 1.3585          | 0.3807   | 0.6836      | 0.6910   | 0.3805            |
| 0.1408        | 5.0   | 625  | 1.6605          | 0.3785   | 0.6765      | 0.6872   | 0.3635            |
| 0.0856        | 6.0   | 750  | 1.9703          | 0.3802   | 0.6890      | 0.7047   | 0.3704            |
| 0.0502        | 7.0   | 875  | 2.1245          | 0.4067   | 0.6995      | 0.7169   | 0.3751            |
| 0.0265        | 8.0   | 1000 | 2.2676          | 0.3756   | 0.6816      | 0.6925   | 0.3647            |
| 0.0147        | 9.0   | 1125 | 2.4286          | 0.4052   | 0.6887      | 0.7062   | 0.3803            |
| 0.0124        | 10.0  | 1250 | 2.5773          | 0.4084   | 0.6853      | 0.7040   | 0.3695            |
| 0.0111        | 11.0  | 1375 | 2.5941          | 0.4146   | 0.6915      | 0.7085   | 0.3834            |
| 0.0076        | 12.0  | 1500 | 2.6124          | 0.4157   | 0.6936      | 0.7078   | 0.3863            |
| 0.0067        | 13.0  | 1625 | 2.7050          | 0.4139   | 0.6925      | 0.7108   | 0.3798            |
| 0.0087        | 14.0  | 1750 | 2.6695          | 0.4252   | 0.7009      | 0.7169   | 0.3920            |
| 0.0056        | 15.0  | 1875 | 2.7357          | 0.4257   | 0.6985      | 0.7161   | 0.3868            |
| 0.0054        | 16.0  | 2000 | 2.7389          | 0.4249   | 0.6955      | 0.7116   | 0.3890            |
| 0.0051        | 17.0  | 2125 | 2.7767          | 0.4197   | 0.6967      | 0.7146   | 0.3863            |
| 0.0040        | 18.0  | 2250 | 2.7947          | 0.4211   | 0.6977      | 0.7154   | 0.3876            |
| 0.0041        | 19.0  | 2375 | 2.8030          | 0.4204   | 0.6953      | 0.7131   | 0.3855            |
| 0.0042        | 20.0  | 2500 | 2.8008          | 0.4227   | 0.6976      | 0.7154   | 0.3876            |
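
The four evaluation metrics reported in the table (macro f1, weighted f1, accuracy, balanced accuracy) can be computed with scikit-learn inside a Trainer compute_metrics callback. The snippet below is a minimal sketch of that computation; the exact implementation used in the linked repository may differ.

```python
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score, f1_score

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair the Trainer passes at evaluation time.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "macro_f1": f1_score(labels, preds, average="macro"),
        "weighted_f1": f1_score(labels, preds, average="weighted"),
        "accuracy": accuracy_score(labels, preds),
        "balanced_accuracy": balanced_accuracy_score(labels, preds),
    }
```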

Framework versions

  • Transformers 4.19.2
  • Pytorch 1.11.0+cu113
  • Datasets 2.2.2
  • Tokenizers 0.12.1