affahrizain committed on
Commit
ead426b
1 Parent(s): e541772

update model card README.md

Files changed (1)
  1. README.md +15 -15
README.md CHANGED
@@ -3,8 +3,8 @@ license: mit
 tags:
 - generated_from_trainer
 metrics:
-- f1
 - accuracy
+- f1
 model-index:
 - name: roberta-base-finetuned-jigsaw-toxic
   results: []
@@ -17,10 +17,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0412
-- F1: 0.7908
-- Roc Auc: 0.9048
-- Accuracy: 0.9257
+- Loss: 0.0859
+- Accuracy: 0.9747
+- F1: 0.9746
 
 ## Model description
 
@@ -40,24 +39,25 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 2e-05
-- train_batch_size: 48
-- eval_batch_size: 48
+- train_batch_size: 128
+- eval_batch_size: 128
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 2
+- num_epochs: 3
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | F1     | Roc Auc | Accuracy |
-|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
-| 0.0524        | 1.0   | 2774 | 0.0432          | 0.7805 | 0.8940  | 0.9254   |
-| 0.0348        | 2.0   | 5548 | 0.0412          | 0.7908 | 0.9048  | 0.9257   |
+| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
+|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
+| 0.1179        | 1.0   | 2116 | 0.0982          | 0.9694   | 0.9694 |
+| 0.0748        | 2.0   | 4232 | 0.0859          | 0.9747   | 0.9746 |
+| 0.0582        | 3.0   | 6348 | 0.0916          | 0.9750   | 0.9750 |
 
 
 ### Framework versions
 
-- Transformers 4.21.0
-- Pytorch 1.12.0+cu113
-- Datasets 2.4.0
+- Transformers 4.22.1
+- Pytorch 1.12.1+cu113
+- Datasets 2.5.1
 - Tokenizers 0.12.1
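The updated hyperparameters (`learning_rate: 2e-05`, `lr_scheduler_type: linear`, `num_epochs: 3`, 2116 optimizer steps per epoch per the results table) imply a learning-rate decay along these lines. This is a minimal sketch, assuming zero warmup steps, which the card does not state:

```python
# Sketch of the linear LR schedule implied by the card's hyperparameters:
# learning_rate=2e-05 decayed linearly to 0 over 3 epochs of 2116
# optimizer steps each (6348 total, matching the training results table).
# Warmup is assumed to be 0 here; the card does not give a warmup value.

LEARNING_RATE = 2e-05
STEPS_PER_EPOCH = 2116
NUM_EPOCHS = 3
TOTAL_STEPS = STEPS_PER_EPOCH * NUM_EPOCHS  # 6348

def linear_lr(step: int) -> float:
    """Learning rate after `step` optimizer steps under linear decay to 0."""
    remaining = max(0, TOTAL_STEPS - step)
    return LEARNING_RATE * remaining / TOTAL_STEPS

print(linear_lr(0))            # full 2e-05 at the start of training
print(linear_lr(TOTAL_STEPS))  # 0.0 at the end of epoch 3
```

Halfway through training (step 3174) the rate has decayed to 1e-05, which matches the standard behavior of Transformers' `linear` scheduler when warmup is zero.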