malduwais committed on
Commit
9183752
1 Parent(s): 947ea8c

update model card README.md

Files changed (1)
  1. README.md +10 -10
README.md CHANGED
@@ -19,7 +19,7 @@ model-index:
     metrics:
     - name: F1
       type: f1
-      value: 0.6837988826815643
+      value: 0.7116357504215851
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -29,8 +29,8 @@ should probably proofread and complete it, then remove this comment. -->

 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3984
-- F1: 0.6838
+- Loss: 0.3999
+- F1: 0.7116

 ## Model description

@@ -50,8 +50,8 @@ More information needed

 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 24
-- eval_batch_size: 24
+- train_batch_size: 8
+- eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
@@ -61,14 +61,14 @@ The following hyperparameters were used during training:

 | Training Loss | Epoch | Step | Validation Loss | F1     |
 |:-------------:|:-----:|:----:|:---------------:|:------:|
-| 1.1357        | 1.0   | 50   | 0.5871          | 0.4590 |
-| 0.5236        | 2.0   | 100  | 0.4412          | 0.6478 |
-| 0.3765        | 3.0   | 150  | 0.3984          | 0.6838 |
+| 0.9179        | 1.0   | 148  | 0.4641          | 0.6123 |
+| 0.4452        | 2.0   | 296  | 0.4123          | 0.6785 |
+| 0.2949        | 3.0   | 444  | 0.3999          | 0.7116 |


 ### Framework versions

 - Transformers 4.16.2
-- Pytorch 2.0.1+cu118
+- Pytorch 2.1.0+cu121
 - Datasets 1.16.1
-- Tokenizers 0.13.3
+- Tokenizers 0.15.0
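
For context, here is a minimal sketch of how the hyperparameters listed in the updated card could map onto `TrainingArguments` from transformers. The training script itself is not part of this commit, so the output directory, epoch count, and evaluation strategy below are assumptions; the Adam betas and epsilon in the card are the Trainer defaults and are therefore not set explicitly.

```python
from transformers import TrainingArguments

# Sketch of a configuration matching the hyperparameters in the updated card.
training_args = TrainingArguments(
    output_dir="finetuned-xlm-roberta-base-xtreme",  # placeholder name
    learning_rate=5e-5,
    per_device_train_batch_size=8,   # was 24 before this commit
    per_device_eval_batch_size=8,    # was 24 before this commit
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,              # inferred from the three-epoch results table
    evaluation_strategy="epoch",     # assumption: per-epoch eval, matching the table
)
```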
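And a sketch of loading the resulting checkpoint for inference. The commit does not state the final repository id, so `malduwais/<model-repo>` is a placeholder, and `AutoModelForTokenClassification` is an assumption based on the F1 metric and the xtreme dataset; substitute the appropriate Auto class if the model targets a different task.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

repo_id = "malduwais/<model-repo>"  # placeholder: actual repo id not given in this commit
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForTokenClassification.from_pretrained(repo_id)  # assumed task head
```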