---
license: mit
base_model: BogdanTurbal/model_gpt2_medium_d_political_bias_ep_1_sqn_a_p_100_v_11
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: model_gpt2_medium_d_political_bias_hate_bias_ep_1_3_a_sqn_a_b_p_100_5_v_11
results: []
---
# model_gpt2_medium_d_political_bias_hate_bias_ep_1_3_a_sqn_a_b_p_100_5_v_11
This model is a fine-tuned version of [BogdanTurbal/model_gpt2_medium_d_political_bias_ep_1_sqn_a_p_100_v_11](https://huggingface.co/BogdanTurbal/model_gpt2_medium_d_political_bias_ep_1_sqn_a_p_100_v_11) on an unknown dataset.
It achieves the following results on the evaluation set (a hedged usage sketch follows the list):
- Loss: 0.6665
- Accuracy: 0.8306
- F1 Micro: 0.8306
- AUC: 0.9135
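
The accuracy / F1 micro / AUC metrics suggest a text-classification head rather than free-form generation. Below is a minimal usage sketch under that assumption; the repository id is inferred from the card title, and the classification head is not confirmed by this card, so verify both before relying on it:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: the checkpoint carries a sequence-classification head
# (suggested by the accuracy/F1/AUC metrics, not confirmed by this card).
repo_id = "BogdanTurbal/model_gpt2_medium_d_political_bias_hate_bias_ep_1_3_a_sqn_a_b_p_100_5_v_11"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

inputs = tokenizer("Example input text.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Label names and their order are not documented in this card,
# so only class probabilities are printed here.
print(torch.softmax(logits, dim=-1))
```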
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
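
For reference, here is a sketch of how the values above would map onto the standard `transformers` `TrainingArguments`. The `output_dir` and any field not listed above are placeholders, and the "Adam" reported by the card most likely corresponds to the Trainer's default AdamW; the original training script is not included here:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; unlisted fields are placeholders.
training_args = TrainingArguments(
    output_dir="out",                  # placeholder, not taken from the card
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    adam_beta1=0.9,                    # "betas=(0.9,0.999)" above
    adam_beta2=0.999,
    adam_epsilon=1e-8,                 # "epsilon=1e-08" above
)
```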
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Micro | AUC |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|:------:|
| 1.2003 | 0.2632 | 10 | 0.9105 | 0.5099 | 0.5099 | 0.6179 |
| 0.706 | 0.5263 | 20 | 0.6271 | 0.6661 | 0.6661 | 0.7414 |
| 0.5644 | 0.7895 | 30 | 0.6324 | 0.7245 | 0.7245 | 0.8714 |
| 0.6951 | 1.0526 | 40 | 1.2089 | 0.6982 | 0.6982 | 0.8885 |
| 0.5193 | 1.3158 | 50 | 0.4731 | 0.8043 | 0.8043 | 0.9025 |
| 0.4362 | 1.5789 | 60 | 0.5781 | 0.7278 | 0.7278 | 0.8981 |
| 0.3432 | 1.8421 | 70 | 0.4461 | 0.7985 | 0.7985 | 0.9110 |
| 0.1877 | 2.1053 | 80 | 0.4417 | 0.8199 | 0.8199 | 0.9107 |
| 0.1681 | 2.3684 | 90 | 0.5411 | 0.8150 | 0.8150 | 0.9077 |
| 0.1213 | 2.6316 | 100 | 0.5569 | 0.8174 | 0.8174 | 0.9087 |
| 0.1969 | 2.8947 | 110 | 0.5222 | 0.8191 | 0.8191 | 0.9090 |
| 0.0464 | 3.1579 | 120 | 0.6640 | 0.8067 | 0.8067 | 0.9080 |
| 0.0454 | 3.4211 | 130 | 0.6253 | 0.8281 | 0.8281 | 0.9122 |
| 0.0351 | 3.6842 | 140 | 0.6577 | 0.8289 | 0.8289 | 0.9130 |
| 0.0219 | 3.9474 | 150 | 0.6665 | 0.8306 | 0.8306 | 0.9135 |
### Framework versions
- Transformers 4.44.0
- PyTorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1