---
license: mit
base_model: w11wo/indonesian-roberta-base-posp-tagger
tags:
- generated_from_keras_callback
model-index:
- name: tunarebus/indonesian-roberta-base-posp-tagger-finetuned-tweet_pemilu2024
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# tunarebus/indonesian-roberta-base-posp-tagger-finetuned-tweet_pemilu2024

This model is a fine-tuned version of [w11wo/indonesian-roberta-base-posp-tagger](https://huggingface.co/w11wo/indonesian-roberta-base-posp-tagger) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.6139
- Validation Loss: 4.5342
- Epoch: 29

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -969, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
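
The optimizer config above combines a linear warmup (`warmup_steps: 1000`, `power: 1.0`) with a `PolynomialDecay` schedule toward `end_learning_rate: 0.0`. As a rough sketch, the effective learning-rate curve can be reproduced in plain Python as below. Note that the logged config records `decay_steps: -969`, which Keras' `PolynomialDecay` would not accept as-is; a positive `decay_steps=969` is assumed here purely for illustration.

```python
def warmup_polynomial_lr(step, initial_lr=2e-5, warmup_steps=1000,
                         decay_steps=969, end_lr=0.0, power=1.0):
    """Approximate LR at a given training step under the logged schedule.

    Linear warmup from 0 to initial_lr over warmup_steps, then
    polynomial (here linear, power=1.0) decay toward end_lr.
    decay_steps=969 is an assumption; the config logs -969.
    """
    if step < warmup_steps:
        # Warmup phase: lr scales with (step / warmup_steps) ** power.
        return initial_lr * (step / warmup_steps) ** power
    # Decay phase: steps past warmup are clipped at decay_steps.
    decay_step = min(step - warmup_steps, decay_steps)
    frac = 1.0 - decay_step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr
```

For example, the rate reaches its 2e-05 peak exactly at step 1000 and then falls off linearly, hitting `end_lr` once the decay budget is exhausted.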

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 11.7870    | 11.4348         | 0     |
| 10.8383    | 10.1366         | 1     |
| 9.6098     | 9.0621          | 2     |
| 8.7602     | 8.2954          | 3     |
| 8.0949     | 7.7276          | 4     |
| 7.6334     | 7.2756          | 5     |
| 7.3192     | 7.0363          | 6     |
| 7.1297     | 6.8447          | 7     |
| 6.8798     | 6.6169          | 8     |
| 6.6715     | 6.4639          | 9     |
| 6.5429     | 6.3752          | 10    |
| 6.4095     | 6.2620          | 11    |
| 6.2638     | 6.1581          | 12    |
| 6.1540     | 5.9689          | 13    |
| 6.0265     | 5.8920          | 14    |
| 5.8897     | 5.7454          | 15    |
| 5.8217     | 5.6647          | 16    |
| 5.6666     | 5.4978          | 17    |
| 5.5835     | 5.4511          | 18    |
| 5.4664     | 5.3607          | 19    |
| 5.4165     | 5.2142          | 20    |
| 5.2469     | 5.0818          | 21    |
| 5.2076     | 5.0844          | 22    |
| 5.0905     | 4.9672          | 23    |
| 4.9729     | 4.9139          | 24    |
| 4.8886     | 4.8487          | 25    |
| 4.8239     | 4.7208          | 26    |
| 4.7037     | 4.6975          | 27    |
| 4.6815     | 4.5338          | 28    |
| 4.6139     | 4.5342          | 29    |


### Framework versions

- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.0
- Tokenizers 0.15.0