muhammadravi251001 committed
Commit 4c634ba
Parent: 4fcb886

update model card README.md

Files changed (1)
  1. README.md +18 -21
README.md CHANGED
@@ -17,9 +17,9 @@ should probably proofread and complete it, then remove this comment. -->
  
  This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.5146
- - Accuracy: 0.8579
- - F1: 0.8583
+ - Loss: 0.4158
+ - Accuracy: 0.8600
+ - F1: 0.8612
  
  ## Model description
  
@@ -39,10 +39,10 @@ More information needed
  
  The following hyperparameters were used during training:
  - learning_rate: 1e-05
- - train_batch_size: 8
- - eval_batch_size: 8
+ - train_batch_size: 16
+ - eval_batch_size: 16
  - seed: 42
- - gradient_accumulation_steps: 16
+ - gradient_accumulation_steps: 8
  - total_train_batch_size: 128
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
@@ -50,24 +50,21 @@ The following hyperparameters were used during training:
  
  ### Training results
  
- | Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     |
- |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
- | 0.4787        | 0.5   | 1574  | 0.4285          | 0.8364   | 0.8358 |
- | 0.4418        | 1.0   | 3148  | 0.4040          | 0.8494   | 0.8496 |
- | 0.3942        | 1.5   | 4722  | 0.3971          | 0.8514   | 0.8505 |
- | 0.3722        | 2.0   | 6296  | 0.3835          | 0.8579   | 0.8581 |
- | 0.3206        | 2.5   | 7870  | 0.4139          | 0.8587   | 0.8586 |
- | 0.3229        | 3.0   | 9444  | 0.4033          | 0.8600   | 0.8602 |
- | 0.2616        | 3.5   | 11018 | 0.4457          | 0.8585   | 0.8591 |
- | 0.2862        | 4.0   | 12592 | 0.4319          | 0.8619   | 0.8617 |
- | 0.2261        | 4.5   | 14166 | 0.4859          | 0.8562   | 0.8570 |
- | 0.2215        | 5.0   | 15740 | 0.4728          | 0.8592   | 0.8599 |
- | 0.1874        | 5.5   | 17314 | 0.5146          | 0.8579   | 0.8583 |
+ | Training Loss | Epoch | Step  | Accuracy | F1     | Validation Loss |
+ |:-------------:|:-----:|:-----:|:--------:|:------:|:---------------:|
+ | 0.4647        | 0.5   | 1613  | 0.8396   | 0.8403 | 0.4262          |
+ | 0.4437        | 1.0   | 3226  | 0.8511   | 0.8522 | 0.4042          |
+ | 0.3956        | 1.5   | 4839  | 0.8604   | 0.8602 | 0.3783          |
+ | 0.3639        | 2.0   | 6452  | 0.8592   | 0.8600 | 0.3913          |
+ | 0.323         | 2.5   | 8065  | 0.8657   | 0.8659 | 0.3783          |
+ | 0.3186        | 3.0   | 9678  | 0.8626   | 0.8625 | 0.3850          |
+ | 0.2485        | 3.5   | 11291 | 0.8597   | 0.8592 | 0.4326          |
+ | 0.2509        | 4.0   | 12904 | 0.8600   | 0.8612 | 0.4158          |
  
  
  ### Framework versions
  
  - Transformers 4.26.1
- - Pytorch 1.13.1+cu117
+ - Pytorch 2.0.1+cu117
  - Datasets 2.2.0
- - Tokenizers 0.13.2
+ - Tokenizers 0.13.3
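
The updated hyperparameters are internally consistent: a per-device batch size of 16 with 8 gradient-accumulation steps gives the stated total train batch size of 16 × 8 = 128. Below is a minimal sketch of a `Trainer` setup matching these values, assuming the card was auto-generated by the Hugging Face `Trainer`; the dataset, label count, and metric wiring are placeholders, since the card excerpt does not specify them.

```python
# Sketch of a Trainer configuration matching the card's hyperparameters.
# Only the hyperparameter values come from the card; everything else
# (num_labels, eval strategy, datasets) is an assumption.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "xlm-roberta-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=3  # num_labels is an assumption, not from the card
)

training_args = TrainingArguments(
    output_dir="./results",
    learning_rate=1e-5,
    per_device_train_batch_size=16,  # train_batch_size: 16
    per_device_eval_batch_size=16,   # eval_batch_size: 16
    gradient_accumulation_steps=8,   # 16 x 8 = total_train_batch_size 128
    seed=42,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                  # Adam betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,               # epsilon=1e-08
    evaluation_strategy="steps",     # assumption: table evaluates every half epoch
)

trainer = Trainer(
    model=model,
    args=training_args,
    # train_dataset=..., eval_dataset=..., compute_metrics=...  (not in the card)
)
```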
 
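For reference, loading a fine-tuned checkpoint like this one for inference follows the standard transformers pattern. The excerpt names neither the hosted repo nor the task, so the repo id below is a placeholder and the sequence-classification head is an assumption based on the accuracy/F1 metrics.

```python
# Hedged usage sketch; "your-username/your-fine-tuned-model" is a
# placeholder repo id, not the actual model this commit belongs to.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "your-username/your-fine-tuned-model"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("Example input text", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index
```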