# roberta-ideology-classifier
This model is a fine-tuned version of roberta-base on a synthetically generated dataset (see the note below). It achieves the following results on the evaluation set:
- Loss: 0.0033
- Accuracy: 1.0
- F1: 1.0
## Model description
**Note:** This model might look fine on paper, but it is extremely flawed, and intentionally so. The training data was generated synthetically with ChatGPT using the zero-shot prompt: "You have to create a dataset of 10,000 rows, including tweets from various people, and label them as politically aligned: Extreme Left, Left, Centre, Right, or Extreme Right".

RoBERTa fine-tuned on a custom dataset generated synthetically from GPT-4o. It classifies any given tweet/text into one of 5 predefined classes:
- Extreme Left
- Left
- Centre
- Right
- Extreme Right
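The model's raw output is a vector of five logits, one per class; the prediction is the argmax. A minimal sketch of that mapping step (the id-to-label ordering below is an assumption — verify it against the model's `config.json`):

```python
# Map a row of 5 logits to one of the ideology labels.
# Assumed id2label ordering -- check the model's config.json for the real one.
ID2LABEL = {
    0: "Extreme Left",
    1: "Left",
    2: "Centre",
    3: "Right",
    4: "Extreme Right",
}

def predict_label(logits: list[float]) -> str:
    """Return the label whose logit is largest (argmax)."""
    best_idx = max(range(len(logits)), key=lambda i: logits[i])
    return ID2LABEL[best_idx]

# Example: the third logit is the largest, so the prediction is "Centre".
print(predict_label([-1.2, 0.3, 4.1, 0.0, -2.5]))
```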
## Intended uses & limitations
Use: classifying tweets and other short texts into the classes listed above.

Limitation: because the synthetic data contained strong internal similarities, the model reaches 100% accuracy on the evaluation set. This reflects the artificial, repetitive nature of the data, not real-world performance.
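One way to sanity-check for this kind of inflation is to look for near-duplicates between training and evaluation texts, since overlap makes a perfect score meaningless. A hedged sketch using plain token-level Jaccard similarity (the helper names, threshold, and example texts are illustrative, not part of the original pipeline):

```python
def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two texts."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

def flag_near_duplicates(train_texts, eval_texts, threshold=0.7):
    """Return eval texts that nearly duplicate some training text."""
    return [
        t for t in eval_texts
        if any(jaccard(t, tr) >= threshold for tr in train_texts)
    ]

train = ["the economy needs lower taxes now"]
evals = ["the economy needs lower taxes today", "healthcare is a human right"]
print(flag_near_duplicates(train, evals))
```

If many evaluation rows are flagged, a 100% accuracy figure says more about the data generator than about the model.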
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
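The hyperparameters above map onto a `transformers.TrainingArguments` configuration roughly as follows (a sketch, assuming the standard `Trainer` API; `output_dir` is a placeholder, and the betas/epsilon shown are the default AdamW settings):

```python
from transformers import TrainingArguments

# Sketch of a TrainingArguments setup matching the reported hyperparameters.
training_args = TrainingArguments(
    output_dir="roberta-ideology-classifier",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=4,
    lr_scheduler_type="linear",
    adam_beta1=0.9,      # default Adam beta1
    adam_beta2=0.999,    # default Adam beta2
    adam_epsilon=1e-8,   # default Adam epsilon
)
```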
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:---|:---|:---|:---|:---|:---|
| 0.006 | 0.2 | 100 | 0.0033 | 1.0 | 1.0 |
| 0.0024 | 0.4 | 200 | 0.0014 | 1.0 | 1.0 |
| 0.0015 | 0.6 | 300 | 0.0008 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
## Model tree for kartiksrma/roberta-political-ideology-classifier

Base model: FacebookAI/roberta-base