---
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Labira/LabiraEdu-v1.0x
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# Labira/LabiraEdu-v1.0x

This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set (final epoch):
- Train Loss: 0.0206
- Validation Loss: 4.5266
- Epoch: 98
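The card was exported from Keras/TensorFlow (see the framework versions below). Assuming the checkpoint follows the standard Transformers Hub layout, it can be loaded with the TF auto classes; this is a sketch, not part of the original card, and since the task head is undocumented here the generic `TFAutoModel` class is used:

```python
def load_labiraedu(repo_id: str = "Labira/LabiraEdu-v1.0x"):
    """Fetch the tokenizer and TF weights from the Hugging Face Hub.

    Network access is required. The task head is not documented in this
    card, so the generic TFAutoModel class is used; swap in the
    task-specific auto class (e.g. TFAutoModelForSequenceClassification)
    once the intended task is known.
    """
    # Imports are kept inside the function so the sketch can be read
    # (and the function defined) without TensorFlow installed.
    from transformers import AutoTokenizer, TFAutoModel

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = TFAutoModel.from_pretrained(repo_id)
    return tokenizer, model
```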

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: Adam (beta_1: 0.9, beta_2: 0.999, epsilon: 1e-08, amsgrad: False, weight_decay: None, clipnorm: None, global_clipnorm: None, clipvalue: None, use_ema: False, ema_momentum: 0.99, ema_overwrite_frequency: None, jit_compile: True, is_legacy_optimizer: False)
- learning_rate schedule: PolynomialDecay (initial_learning_rate: 2e-05, decay_steps: 1100, end_learning_rate: 0.0, power: 1.0, cycle: False)
- training_precision: float32
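The learning-rate schedule in the optimizer config is a PolynomialDecay with power 1.0 and cycle disabled, i.e. a straight linear ramp from 2e-05 down to 0 over 1,100 steps. The helper below reproduces that formula in plain Python as a sketch (Keras's `PolynomialDecay` computes the same values for these settings):

```python
def polynomial_decay_lr(step: int,
                        initial_lr: float = 2e-05,
                        decay_steps: int = 1100,
                        end_lr: float = 0.0,
                        power: float = 1.0) -> float:
    """Learning rate at `step`, matching Keras PolynomialDecay with cycle=False."""
    # With cycle=False, steps past decay_steps are clamped, so the
    # learning rate stays at end_lr once decay is complete.
    step = min(step, decay_steps)
    fraction = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * fraction ** power + end_lr

# With power=1.0 the decay is linear:
# step 0    -> 2e-05 (initial_learning_rate)
# step 550  -> 1e-05 (halfway)
# step 1100 -> 0.0   (end_learning_rate)
```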

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.0565     | 3.9761          | 0     |
| 3.6621     | 3.2932          | 1     |
| 3.0961     | 3.2587          | 2     |
| 2.7357     | 3.2031          | 3     |
| 2.3059     | 3.2519          | 4     |
| 1.8933     | 3.4772          | 5     |
| 1.9076     | 3.1664          | 6     |
| 1.5492     | 3.4201          | 7     |
| 1.2578     | 3.5190          | 8     |
| 1.0478     | 3.4076          | 9     |
| 1.0130     | 3.5961          | 10    |
| 0.9073     | 3.4919          | 11    |
| 0.7071     | 3.5013          | 12    |
| 0.5616     | 4.0259          | 13    |
| 0.4798     | 3.9766          | 14    |
| 0.5938     | 3.8146          | 15    |
| 0.6476     | 3.7065          | 16    |
| 0.4264     | 4.1631          | 17    |
| 0.5290     | 3.7455          | 18    |
| 0.4637     | 3.6362          | 19    |
| 0.3826     | 3.8389          | 20    |
| 0.2876     | 3.7611          | 21    |
| 0.2221     | 4.0540          | 22    |
| 0.1752     | 4.0683          | 23    |
| 0.1544     | 4.0452          | 24    |
| 0.1600     | 4.0417          | 25    |
| 0.1390     | 4.0668          | 26    |
| 0.1134     | 4.0659          | 27    |
| 0.0965     | 4.0700          | 28    |
| 0.0820     | 4.2026          | 29    |
| 0.0810     | 4.3008          | 30    |
| 0.1166     | 4.0835          | 31    |
| 0.0776     | 4.0886          | 32    |
| 0.1033     | 4.1303          | 33    |
| 0.0512     | 4.1014          | 34    |
| 0.0484     | 4.1462          | 35    |
| 0.0565     | 4.2404          | 36    |
| 0.0652     | 4.2064          | 37    |
| 0.0538     | 4.1032          | 38    |
| 0.0516     | 4.0948          | 39    |
| 0.0611     | 4.2563          | 40    |
| 0.0523     | 4.3629          | 41    |
| 0.0571     | 4.3032          | 42    |
| 0.0479     | 4.3147          | 43    |
| 0.0308     | 4.3639          | 44    |
| 0.0370     | 4.3490          | 45    |
| 0.0406     | 4.3471          | 46    |
| 0.0300     | 4.4078          | 47    |
| 0.0270     | 4.4253          | 48    |
| 0.0283     | 4.4177          | 49    |
| 0.0228     | 4.4394          | 50    |
| 0.0538     | 4.4019          | 51    |
| 0.0342     | 4.3553          | 52    |
| 0.0249     | 4.3161          | 53    |
| 0.0657     | 4.4426          | 54    |
| 0.0309     | 4.5678          | 55    |
| 0.0467     | 4.4247          | 56    |
| 0.0356     | 4.5058          | 57    |
| 0.0431     | 4.4563          | 58    |
| 0.0366     | 4.5242          | 59    |
| 0.0624     | 4.3149          | 60    |
| 0.0471     | 4.3177          | 61    |
| 0.0248     | 4.3159          | 62    |
| 0.0388     | 4.3554          | 63    |
| 0.0262     | 4.3888          | 64    |
| 0.0360     | 4.4544          | 65    |
| 0.0319     | 4.4608          | 66    |
| 0.0269     | 4.4676          | 67    |
| 0.0373     | 4.3847          | 68    |
| 0.0205     | 4.3560          | 69    |
| 0.0223     | 4.3715          | 70    |
| 0.0306     | 4.3894          | 71    |
| 0.0235     | 4.4409          | 72    |
| 0.0189     | 4.4767          | 73    |
| 0.0280     | 4.5137          | 74    |
| 0.0165     | 4.5471          | 75    |
| 0.0098     | 4.5553          | 76    |
| 0.0173     | 4.5465          | 77    |
| 0.0234     | 4.5461          | 78    |
| 0.0231     | 4.5485          | 79    |
| 0.0237     | 4.5326          | 80    |
| 0.0158     | 4.5293          | 81    |
| 0.0178     | 4.5309          | 82    |
| 0.0225     | 4.5306          | 83    |
| 0.0191     | 4.5213          | 84    |
| 0.0213     | 4.5231          | 85    |
| 0.0144     | 4.5332          | 86    |
| 0.0191     | 4.5365          | 87    |
| 0.0188     | 4.5487          | 88    |
| 0.0272     | 4.5426          | 89    |
| 0.0126     | 4.5390          | 90    |
| 0.0224     | 4.5384          | 91    |
| 0.0218     | 4.5389          | 92    |
| 0.0083     | 4.5394          | 93    |
| 0.0246     | 4.5326          | 94    |
| 0.0199     | 4.5284          | 95    |
| 0.0174     | 4.5264          | 96    |
| 0.0130     | 4.5259          | 97    |
| 0.0206     | 4.5266          | 98    |


### Framework versions

- Transformers 4.41.2
- TensorFlow 2.15.0
- Datasets 2.19.2
- Tokenizers 0.19.1