SimoneJLaudani committed
Commit 82e22a3
1 Parent(s): 50a221b

End of training

README.md CHANGED
@@ -1,6 +1,6 @@
  ---
  license: apache-2.0
- base_model: distilbert-base-cased
+ base_model: distilbert-base-uncased
  tags:
  - generated_from_trainer
  metrics:
@@ -18,13 +18,13 @@ should probably proofread and complete it, then remove this comment. -->
 
  # trainer3
 
- This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
+ This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.0000
- - Precision: 1.0
- - Recall: 1.0
- - F1: 1.0
- - Accuracy: 1.0
+ - Loss: 0.7977
+ - Precision: 0.8653
+ - Recall: 0.8624
+ - F1: 0.8621
+ - Accuracy: 0.8624
 
  ## Model description
 
@@ -44,39 +44,20 @@ More information needed
 
  The following hyperparameters were used during training:
  - learning_rate: 5e-05
- - train_batch_size: 8
- - eval_batch_size: 8
+ - train_batch_size: 64
+ - eval_batch_size: 64
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - num_epochs: 10
+ - num_epochs: 20
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
- |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
- | 0.0053 | 0.57 | 30 | 0.0010 | 1.0 | 1.0 | 1.0 | 1.0 |
- | 0.0011 | 1.13 | 60 | 0.0004 | 1.0 | 1.0 | 1.0 | 1.0 |
- | 0.0006 | 1.7 | 90 | 0.0003 | 1.0 | 1.0 | 1.0 | 1.0 |
- | 0.0004 | 2.26 | 120 | 0.0002 | 1.0 | 1.0 | 1.0 | 1.0 |
- | 0.0003 | 2.83 | 150 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
- | 0.0002 | 3.4 | 180 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
- | 0.0002 | 3.96 | 210 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
- | 0.0002 | 4.53 | 240 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
- | 0.0001 | 5.09 | 270 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
- | 0.0001 | 5.66 | 300 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
- | 0.0001 | 6.23 | 330 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
- | 0.0001 | 6.79 | 360 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
- | 0.0001 | 7.36 | 390 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 |
- | 0.0001 | 7.92 | 420 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 |
- | 0.0001 | 8.49 | 450 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 |
- | 0.0001 | 9.06 | 480 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 |
- | 0.0001 | 9.62 | 510 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 |
 
 
  ### Framework versions
 
- - Transformers 4.38.2
+ - Transformers 4.39.3
  - Pytorch 2.2.1+cu121
  - Datasets 2.18.0
  - Tokenizers 0.15.2
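As a reference for reproducing the run described in the updated card, here is a minimal sketch of how the listed hyperparameters map onto `transformers.TrainingArguments` (Transformers 4.39.x). The card does not name the dataset, the task head, or an output directory, so those parts are assumptions; only the keyword arguments annotated "from the card" come from the diff above.

```python
# Minimal sketch: the updated card's hyperparameters as a TrainingArguments config.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="trainer3",           # assumed; matches the model name only
    learning_rate=5e-5,              # from the card
    per_device_train_batch_size=64,  # train_batch_size in the card
    per_device_eval_batch_size=64,   # eval_batch_size in the card
    num_train_epochs=20,             # from the card
    lr_scheduler_type="linear",      # from the card
    seed=42,                         # from the card
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the library's
    # default optimizer settings, so no extra optimizer arguments are needed.
)
```

This `training_args` object would then be passed to a `Trainer` together with the model and datasets, neither of which is specified in the card.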
runs/Apr17_19-30-29_ee976d1206e9/events.out.tfevents.1713382940.ee976d1206e9.1226.5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a55252516530b39260fc7cf410f1f28b850facfbc3c6731704bf8fe33a48c5aa
+ size 560
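The file added under `runs/` is stored as a Git LFS pointer (the `version`/`oid`/`size` lines above), not as the TensorBoard event data itself. As a rough sketch, assuming the repository has been cloned and the LFS objects pulled locally, the log can be read with TensorBoard's event accumulator; the directory path comes from the diff, everything else is illustrative.

```python
# Sketch: read the scalars from the added TensorBoard event log.
# Assumes a local clone with `git lfs pull` already run; the scalar tag names
# logged by the Trainer are not stated in the diff, so they are listed dynamically.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

log_dir = "runs/Apr17_19-30-29_ee976d1206e9"  # directory containing the event file
acc = EventAccumulator(log_dir)
acc.Reload()  # parse all event files found under log_dir

for tag in acc.Tags()["scalars"]:
    for event in acc.Scalars(tag):
        print(tag, event.step, event.value)
```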