lamaabdulaziz committed on
Commit 260341e
1 Parent(s): f075d07

update model card README.md

Files changed (1)
  1. README.md +12 -11
README.md CHANGED
@@ -17,11 +17,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [aubmindlab/araelectra-base-discriminator](https://huggingface.co/aubmindlab/araelectra-base-discriminator) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.1577
-- Macro F1: 0.9379
-- Accuracy: 0.9398
-- Precision: 0.9372
-- Recall: 0.9386
+- Loss: 0.2410
+- Macro F1: 0.9174
+- Accuracy: 0.9197
+- Precision: 0.9159
+- Recall: 0.9191
 
 ## Model description
 
@@ -40,23 +40,24 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 2e-05
+- learning_rate: 3e-05
 - train_batch_size: 16
-- eval_batch_size: 32
+- eval_batch_size: 16
 - seed: 123
 - gradient_accumulation_steps: 2
 - total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 3
+- num_epochs: 4
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Macro F1 | Accuracy | Precision | Recall |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------:|:------:|
-| 0.3524 | 1.0 | 798 | 0.1748 | 0.9312 | 0.9335 | 0.9324 | 0.9300 |
-| 0.2299 | 2.0 | 1596 | 0.1678 | 0.9316 | 0.9340 | 0.9330 | 0.9303 |
-| 0.172 | 3.0 | 2394 | 0.1577 | 0.9379 | 0.9398 | 0.9372 | 0.9386 |
+| 0.1488 | 1.0 | 798 | 0.2533 | 0.9109 | 0.9125 | 0.9078 | 0.9179 |
+| 0.1264 | 2.0 | 1596 | 0.2410 | 0.9174 | 0.9197 | 0.9159 | 0.9191 |
+| 0.145 | 3.0 | 2394 | 0.2779 | 0.9083 | 0.9117 | 0.9107 | 0.9064 |
+| 0.0934 | 4.0 | 3192 | 0.2858 | 0.9171 | 0.9199 | 0.9179 | 0.9163 |
 
 
 ### Framework versions
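
The hyperparameters and the step counts in the diff above are internally consistent, which is a quick way to sanity-check a card like this. A minimal arithmetic sketch (all numbers come from the diff; the training-set size is inferred, not stated in the card, and the last batch may be partial):

```python
# Numbers taken from the updated hyperparameter list above.
train_batch_size = 16
gradient_accumulation_steps = 2

# The card's total_train_batch_size should equal the product of the two.
total_train_batch_size = train_batch_size * gradient_accumulation_steps

# The results table advances 798 optimizer steps per epoch
# (step column: 798, 1596, 2394, 3192), matching num_epochs: 4.
steps_per_epoch = 798
final_step = steps_per_epoch * 4

# Implied training-set size (approximate; assumes full batches).
approx_train_examples = steps_per_epoch * total_train_batch_size

print(total_train_batch_size, final_step, approx_train_examples)
```

If the implied example count did not stay constant across commits, that would hint the dataset (not just the hyperparameters) changed between the two training runs; here both versions report 798 steps per epoch, so only the hyperparameters differ.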