Antalela committed on
Commit
f19ffee
1 Parent(s): 1806efd

End of training

README.md CHANGED
@@ -3,6 +3,9 @@ license: mit
 base_model: roberta-base
 tags:
 - generated_from_trainer
+metrics:
+- accuracy
+- f1
 model-index:
 - name: roberta-base_disaster_tweets
   results: []
@@ -15,7 +18,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4884
+- Loss: 0.4446
+- Accuracy: 0.8262
+- F1: 0.8219
 
 ## Model description
 
@@ -34,29 +39,27 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-05
+- learning_rate: 2e-05
 - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- num_epochs: 5
+- num_epochs: 3
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| 0.5065        | 1.0   | 667  | 0.5845          |
-| 0.6939        | 2.0   | 1334 | 0.5068          |
-| 0.5018        | 3.0   | 2001 | 0.4925          |
-| 0.4037        | 4.0   | 2668 | 0.5137          |
-| 0.498         | 5.0   | 3335 | 0.4884          |
+| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
+|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
+| 0.4209        | 1.0   | 667  | 0.4367          | 0.8306   | 0.8269 |
+| 0.5695        | 2.0   | 1334 | 0.5604          | 0.8087   | 0.8093 |
+| 0.3804        | 3.0   | 2001 | 0.6302          | 0.8301   | 0.8294 |
 
 
 ### Framework versions
 
-- Transformers 4.40.2
+- Transformers 4.41.0
 - Pytorch 2.2.1+cu121
 - Datasets 2.19.1
 - Tokenizers 0.19.1
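The updated hyperparameters imply 2001 optimizer steps in total (3 epochs × 667 steps per epoch, matching the Step column in the results table) with a 500-step linear warmup to the 2e-05 peak learning rate. A minimal pure-Python sketch of that linear warmup-then-decay schedule (illustrative only, not the actual transformers implementation; the function name is made up):

```python
def linear_schedule_lr(step, base_lr=2e-05, warmup_steps=500, total_steps=2001):
    """Linearly warm up from 0 to base_lr over warmup_steps,
    then decay linearly back to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0, total_steps - step) / (total_steps - warmup_steps)

# The peak learning rate is reached exactly at the end of warmup:
print(linear_schedule_lr(500))   # 2e-05
print(linear_schedule_lr(2001))  # 0.0
```

With only 2001 training steps, the 500-step warmup covers roughly the first three quarters of epoch 1, which is worth keeping in mind when reading the per-epoch losses.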
logs/events.out.tfevents.1716208219.8a83622ab2eb.3312.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:34e1641a996ccacc92eb2b034bb8a11f5d772cf477cc287137f941080d8d97ad
-size 48154
+oid sha256:b0f7db4922e3c470368638c5d72715630c17e24fe04d093bd326ba95c12d27a7
+size 48508
logs/events.out.tfevents.1716208758.8a83622ab2eb.3312.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a028441722b2b579dcae06d258517918a9e76cb80e506e7610a6b3bed577a286
+size 409
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ed6f12ab42c27c967293bfa01ab90810b629d344da2462ee814dda4a5b1b5a2a
+oid sha256:68b311f27ac28db97cdfd20cb26ef461a4b3bce28b677665bc4a0611990f359d
 size 498612824
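The per-epoch numbers in the updated README's training-results table can be compared programmatically. A small sketch with the values copied from that table (the variable names are illustrative):

```python
# (epoch, val_loss, accuracy, f1) rows from the new training-results table
results = [
    (1, 0.4367, 0.8306, 0.8269),
    (2, 0.5604, 0.8087, 0.8093),
    (3, 0.6302, 0.8301, 0.8294),
]

best_by_loss = min(results, key=lambda r: r[1])  # lowest validation loss
best_by_f1 = max(results, key=lambda r: r[3])    # highest F1

print(best_by_loss[0])  # epoch 1
print(best_by_f1[0])    # epoch 3
```

Note the divergence: validation loss is lowest after epoch 1 while F1 peaks at epoch 3, so which checkpoint counts as "best" depends on the metric used for selection.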