---
tags:
  - generated_from_keras_callback
model-index:
  - name: US_politicians_covid_skepticism
    results: []
---

# US_politicians_covid_skepticism

This model is a fine-tuned version of [vinai/bertweet-covid19-base-uncased](https://huggingface.co/vinai/bertweet-covid19-base-uncased) on a dataset of 20,000 hand-coded tweets about COVID-19 policies sent by US legislators. The model is trained to identify tweets that either support COVID-19 policies (masks, social distancing, lockdowns, vaccine mandates) or oppose such policies. Before training, all URLs and @usernames were removed from the tweets. Accuracy is very high, most likely because US legislators tweet many of the same messages and frequently retweet one another. The model is uncased.
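
As a quick illustration, here is a minimal inference sketch. It is not part of the original card: the repository id and the exact preprocessing regexes are assumptions (the card only states that URLs and @usernames were removed), and the mapping from class index to support/oppose is not documented here, so check the hosted config before relying on it.

```python
# Minimal inference sketch (not from the original card).
# Assumptions: the Hub repository id below and the exact preprocessing regexes.
import re

import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

MODEL_ID = "z-dickson/US_politicians_covid_skepticism"  # assumed Hub repo id


def preprocess(tweet: str) -> str:
    """Approximate the training preprocessing: strip URLs and @usernames."""
    tweet = re.sub(r"https?://\S+", "", tweet)
    tweet = re.sub(r"@\w+", "", tweet)
    return tweet.strip()


tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = TFAutoModelForSequenceClassification.from_pretrained(MODEL_ID)

inputs = tokenizer(preprocess("Wear a mask and get vaccinated!"), return_tensors="tf")
logits = model(**inputs).logits
predicted_class = int(tf.argmax(logits, axis=-1)[0])
print(predicted_class)  # class index only; the support/oppose mapping is not documented in this card
```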

It achieves the following results on the evaluation set:

- Train Loss: 0.0141
- Train Sparse Categorical Accuracy: 0.9968
- Validation Loss: 0.0115
- Validation Sparse Categorical Accuracy: 0.9970
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- optimizer: {'name': 'Adam', 'learning_rate': 5e-07, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} (see the sketch below)
- training_precision: float32
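
For readers who want to reproduce this configuration, the dictionary above maps onto `tf.keras.optimizers.Adam` roughly as sketched below. This is an assumed reconstruction, not the author's training script; the compile call is commented out because the rest of the pipeline is not documented in this card.

```python
# Assumed reconstruction of the optimizer settings listed above (TensorFlow 2.x).
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(
    learning_rate=5e-07,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    decay=0.0,  # legacy learning-rate decay, listed as 0.0 in the card
)

# A typical compile call for a sparse-label classifier like the one reported here:
# model.compile(
#     optimizer=optimizer,
#     loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
#     metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
# )
```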

### Training results

| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.1240     | 0.9721                            | 0.0206          | 0.9957                                 | 0     |
| 0.0194     | 0.9957                            | 0.0117          | 0.9972                                 | 1     |
| 0.0141     | 0.9968                            | 0.0115          | 0.9970                                 | 2     |

### Framework versions

- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1