btamm12 committed on
Commit
dd4d04c
1 Parent(s): 62688e4

update model card README.md

Files changed (1)
  1. README.md +16 -43
README.md CHANGED
```diff
@@ -15,7 +15,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [DTAI-KULeuven/robbert-2023-dutch-large](https://huggingface.co/DTAI-KULeuven/robbert-2023-dutch-large) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 2.0046
+- Loss: 1.8711
 
 ## Model description
 
@@ -34,58 +34,31 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.0001
+- learning_rate: 6e-05
 - train_batch_size: 32
 - eval_batch_size: 32
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 40
+- lr_scheduler_warmup_steps: 100
+- num_epochs: 12
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 2.4819        | 1.0   | 69   | 2.1984          |
-| 2.1774        | 2.0   | 138  | 2.1685          |
-| 2.0555        | 3.0   | 207  | 2.1070          |
-| 1.9779        | 4.0   | 276  | 2.0563          |
-| 1.9261        | 5.0   | 345  | 2.0365          |
-| 1.8381        | 6.0   | 414  | 2.0416          |
-| 1.8136        | 7.0   | 483  | 2.0397          |
-| 1.7682        | 8.0   | 552  | 2.0639          |
-| 1.7484        | 9.0   | 621  | 2.0264          |
-| 1.6742        | 10.0  | 690  | 2.0665          |
-| 1.6311        | 11.0  | 759  | 2.0448          |
-| 1.5907        | 12.0  | 828  | 2.0722          |
-| 1.5301        | 13.0  | 897  | 1.9631          |
-| 1.5052        | 14.0  | 966  | 2.0467          |
-| 1.4834        | 15.0  | 1035 | 1.9810          |
-| 1.4219        | 16.0  | 1104 | 2.0255          |
-| 1.4029        | 17.0  | 1173 | 2.0746          |
-| 1.3628        | 18.0  | 1242 | 1.9811          |
-| 1.3356        | 19.0  | 1311 | 2.0329          |
-| 1.3028        | 20.0  | 1380 | 2.0039          |
-| 1.2955        | 21.0  | 1449 | 1.9837          |
-| 1.2231        | 22.0  | 1518 | 1.9871          |
-| 1.2093        | 23.0  | 1587 | 2.0143          |
-| 1.1945        | 24.0  | 1656 | 1.9659          |
-| 1.1657        | 25.0  | 1725 | 2.0569          |
-| 1.1369        | 26.0  | 1794 | 1.9878          |
-| 1.0946        | 27.0  | 1863 | 2.0062          |
-| 1.063         | 28.0  | 1932 | 2.0421          |
-| 1.0521        | 29.0  | 2001 | 2.0320          |
-| 1.0443        | 30.0  | 2070 | 2.0580          |
-| 1.0325        | 31.0  | 2139 | 1.9606          |
-| 0.9804        | 32.0  | 2208 | 2.1121          |
-| 0.9674        | 33.0  | 2277 | 2.0156          |
-| 0.9563        | 34.0  | 2346 | 2.0292          |
-| 0.927         | 35.0  | 2415 | 2.0528          |
-| 0.9236        | 36.0  | 2484 | 1.9851          |
-| 0.9319        | 37.0  | 2553 | 2.0392          |
-| 0.8921        | 38.0  | 2622 | 2.0334          |
-| 0.8742        | 39.0  | 2691 | 2.0492          |
-| 0.8955        | 40.0  | 2760 | 1.9491          |
+| 2.3132        | 1.0   | 69   | 2.1302          |
+| 2.1372        | 2.0   | 138  | 2.0750          |
+| 2.0204        | 3.0   | 207  | 1.9985          |
+| 1.9231        | 4.0   | 276  | 1.9298          |
+| 1.8671        | 5.0   | 345  | 1.9031          |
+| 1.764         | 6.0   | 414  | 1.9241          |
+| 1.7491        | 7.0   | 483  | 1.9061          |
+| 1.7057        | 8.0   | 552  | 1.9247          |
+| 1.6751        | 9.0   | 621  | 1.8435          |
+| 1.5922        | 10.0  | 690  | 1.8714          |
+| 1.5859        | 11.0  | 759  | 1.8203          |
+| 1.5551        | 12.0  | 828  | 1.8823          |
 
 
 ### Framework versions
```
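The updated hyperparameters specify a `linear` scheduler with 100 warmup steps, and the results table implies 69 optimizer steps per epoch over 12 epochs (828 steps total). A minimal sketch of the learning-rate curve this implies — it mirrors the shape of a linear warmup/decay schedule, not the exact internals of any particular trainer:

```python
# Sketch of the linear LR schedule with warmup implied by the updated card:
# peak lr 6e-05, 100 warmup steps, 828 total steps (69 steps/epoch x 12 epochs).
# The step counts are read off the training-results table, not library output.

PEAK_LR = 6e-05
WARMUP_STEPS = 100
TOTAL_STEPS = 69 * 12  # 828 optimizer steps

def lr_at(step: int) -> float:
    """Learning rate after `step` optimizer steps."""
    if step < WARMUP_STEPS:
        # linear ramp from 0 up to the peak learning rate
        return PEAK_LR * step / WARMUP_STEPS
    # linear decay from the peak down to 0 over the remaining steps
    return PEAK_LR * max(0, TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS)

print(lr_at(0))    # schedule starts at zero
print(lr_at(100))  # peak reached at the end of warmup
print(lr_at(828))  # fully decayed at the final step
```

With these settings the peak rate is held only instantaneously at step 100; most of the run is spent on the decay leg, which is consistent with the later epochs showing slower training-loss improvement.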
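If the reported evaluation loss of 1.8711 is the mean masked-LM cross-entropy in nats (the usual convention for this kind of fine-tune), its exponential gives an approximate perplexity. The perplexity figure below is derived here for illustration, not reported in the card:

```python
import math

EVAL_LOSS = 1.8711  # evaluation loss reported in the updated card

# Assuming mean cross-entropy in nats: perplexity = exp(loss).
perplexity = math.exp(EVAL_LOSS)
print(f"approximate eval perplexity: {perplexity:.2f}")
```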