---
license: mit
base_model: haryoaw/scenario-TCR_data-cardiffnlp_tweet_sentiment_multilingual_all_a
tags:
- generated_from_trainer
datasets:
- tweet_sentiment_multilingual
metrics:
- accuracy
- f1
model-index:
- name: scenario-KD-PO-CDF-ALL-D2_data-cardiffnlp_tweet_sentiment_multilingual_all_alpha
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# scenario-KD-PO-CDF-ALL-D2_data-cardiffnlp_tweet_sentiment_multilingual_all_alpha

This model is a fine-tuned version of [haryoaw/scenario-TCR_data-cardiffnlp_tweet_sentiment_multilingual_all_a](https://huggingface.co/haryoaw/scenario-TCR_data-cardiffnlp_tweet_sentiment_multilingual_all_a) on the tweet_sentiment_multilingual dataset.
It achieves the following results on the evaluation set (a minimal inference sketch follows the list):
- Loss: 3.4750
- Accuracy: 0.5637
- F1: 0.5640
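
If this checkpoint follows the standard Transformers sequence-classification layout of its base model, it can be loaded for inference as sketched below. The repository id (assumed here to live under the `haryoaw` namespace) and the label mapping are assumptions; consult the model config's `id2label` for the actual classes.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Assumed repo id; adjust if the model is hosted under a different namespace.
model_id = "haryoaw/scenario-KD-PO-CDF-ALL-D2_data-cardiffnlp_tweet_sentiment_multilingual_all_alpha"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I love this!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(pred)  # class index; map to a sentiment label via model.config.id2label
```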

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2222
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
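
For reference, the values above map onto `transformers.TrainingArguments` roughly as in the sketch below. This is not the training script that produced this model; the output directory and the evaluation settings are assumptions (the results table reports validation metrics every 500 steps).

```python
from transformers import TrainingArguments

# Hedged sketch mirroring the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="scenario-KD-PO-CDF-ALL-D2",   # placeholder output directory
    learning_rate=5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=2222,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="steps",               # assumed: metrics logged every 500 steps
    eval_steps=500,
)
```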

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 4.905         | 1.09  | 500   | 4.5461          | 0.4363   | 0.4113 |
| 4.0059        | 2.17  | 1000  | 3.6093          | 0.5058   | 0.5040 |
| 3.4109        | 3.26  | 1500  | 3.4190          | 0.5208   | 0.5131 |
| 3.0676        | 4.35  | 2000  | 3.2675          | 0.5490   | 0.5477 |
| 2.7673        | 5.43  | 2500  | 3.2746          | 0.5467   | 0.5412 |
| 2.5421        | 6.52  | 3000  | 3.1951          | 0.5475   | 0.5367 |
| 2.3609        | 7.61  | 3500  | 3.3137          | 0.5432   | 0.5410 |
| 2.1176        | 8.7   | 4000  | 3.5963          | 0.5451   | 0.5303 |
| 1.9583        | 9.78  | 4500  | 3.5109          | 0.5571   | 0.5583 |
| 1.8268        | 10.87 | 5000  | 3.3664          | 0.5471   | 0.5477 |
| 1.7388        | 11.96 | 5500  | 3.3858          | 0.5517   | 0.5528 |
| 1.5976        | 13.04 | 6000  | 3.4404          | 0.5617   | 0.5577 |
| 1.4912        | 14.13 | 6500  | 3.3307          | 0.5586   | 0.5585 |
| 1.4157        | 15.22 | 7000  | 3.5579          | 0.5432   | 0.5355 |
| 1.3536        | 16.3  | 7500  | 3.3542          | 0.5617   | 0.5603 |
| 1.2883        | 17.39 | 8000  | 3.6026          | 0.5571   | 0.5543 |
| 1.2443        | 18.48 | 8500  | 3.6866          | 0.5478   | 0.5458 |
| 1.1637        | 19.57 | 9000  | 3.6125          | 0.5536   | 0.5547 |
| 1.1391        | 20.65 | 9500  | 3.5456          | 0.5613   | 0.5574 |
| 1.1029        | 21.74 | 10000 | 3.4366          | 0.5513   | 0.5526 |
| 1.0417        | 22.83 | 10500 | 3.6791          | 0.5586   | 0.5585 |
| 1.0169        | 23.91 | 11000 | 3.6637          | 0.5656   | 0.5607 |
| 1.0107        | 25.0  | 11500 | 3.5452          | 0.5575   | 0.5578 |
| 0.9502        | 26.09 | 12000 | 3.4362          | 0.5748   | 0.5742 |
| 0.9455        | 27.17 | 12500 | 3.4865          | 0.5694   | 0.5703 |
| 0.9194        | 28.26 | 13000 | 3.4523          | 0.5737   | 0.5716 |
| 0.9053        | 29.35 | 13500 | 3.5411          | 0.5586   | 0.5572 |
| 0.8737        | 30.43 | 14000 | 3.6550          | 0.5586   | 0.5586 |
| 0.865         | 31.52 | 14500 | 3.5079          | 0.5594   | 0.5611 |
| 0.8444        | 32.61 | 15000 | 3.4885          | 0.5509   | 0.5526 |
| 0.8343        | 33.7  | 15500 | 3.5705          | 0.5710   | 0.5698 |
| 0.8122        | 34.78 | 16000 | 3.4910          | 0.5521   | 0.5519 |
| 0.8161        | 35.87 | 16500 | 3.5302          | 0.5559   | 0.5563 |
| 0.7923        | 36.96 | 17000 | 3.5031          | 0.5656   | 0.5632 |
| 0.7824        | 38.04 | 17500 | 3.4182          | 0.5594   | 0.5592 |
| 0.7658        | 39.13 | 18000 | 3.5265          | 0.5594   | 0.5586 |
| 0.7588        | 40.22 | 18500 | 3.4465          | 0.5706   | 0.5711 |
| 0.7541        | 41.3  | 19000 | 3.4879          | 0.5540   | 0.5534 |
| 0.7488        | 42.39 | 19500 | 3.4246          | 0.5687   | 0.5693 |
| 0.7412        | 43.48 | 20000 | 3.4806          | 0.5745   | 0.5750 |
| 0.7314        | 44.57 | 20500 | 3.5638          | 0.5590   | 0.5586 |
| 0.7283        | 45.65 | 21000 | 3.4212          | 0.5664   | 0.5667 |
| 0.7179        | 46.74 | 21500 | 3.4444          | 0.5556   | 0.5560 |
| 0.7168        | 47.83 | 22000 | 3.4104          | 0.5602   | 0.5606 |
| 0.7161        | 48.91 | 22500 | 3.3766          | 0.5667   | 0.5676 |
| 0.7052        | 50.0  | 23000 | 3.4750          | 0.5637   | 0.5640 |


### Framework versions

- Transformers 4.33.3
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.13.3