SimoneJLaudani committed
Commit 504883a
Parent(s): 37b8e8e

End of training

Files changed (1):
  1. README.md +4 -25
README.md CHANGED
@@ -3,11 +3,6 @@ license: apache-2.0
  base_model: distilbert-base-uncased
  tags:
  - generated_from_trainer
- metrics:
- - precision
- - recall
- - f1
- - accuracy
  model-index:
  - name: trainer
    results: []
@@ -19,12 +14,6 @@ should probably proofread and complete it, then remove this comment. -->
  # trainer

  This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
- It achieves the following results on the evaluation set:
- - Loss: 4.5429
- - Precision: 0.6049
- - Recall: 0.5714
- - F1: 0.5559
- - Accuracy: 0.5714

  ## Model description

@@ -43,9 +32,9 @@ More information needed
  ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 5e-05
- - train_batch_size: 8
- - eval_batch_size: 8
+ - learning_rate: 2e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
@@ -53,21 +42,11 @@ The following hyperparameters were used during training:

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
- |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
- | 0.0126        | 0.57  | 30   | 3.5750          | 0.6388    | 0.6190 | 0.6167 | 0.6190   |
- | 0.0144        | 1.13  | 60   | 3.7088          | 0.6787    | 0.6548 | 0.6479 | 0.6548   |
- | 0.0364        | 1.7   | 90   | 3.5580          | 0.6614    | 0.6548 | 0.6488 | 0.6548   |
- | 0.0991        | 2.26  | 120  | 3.8208          | 0.6775    | 0.6429 | 0.6407 | 0.6429   |
- | 0.0           | 2.83  | 150  | 4.5110          | 0.6127    | 0.5833 | 0.5646 | 0.5833   |
- | 0.0           | 3.4   | 180  | 4.5298          | 0.6127    | 0.5833 | 0.5646 | 0.5833   |
- | 0.0           | 3.96  | 210  | 4.5318          | 0.6127    | 0.5833 | 0.5646 | 0.5833   |
- | 0.0           | 4.53  | 240  | 4.5429          | 0.6049    | 0.5714 | 0.5559 | 0.5714   |


  ### Framework versions

- - Transformers 4.38.2
+ - Transformers 4.39.3
  - Pytorch 2.2.1+cu121
  - Datasets 2.18.0
  - Tokenizers 0.15.2
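
For reference, a minimal sketch of how the updated hyperparameters in this commit could be expressed with `transformers.TrainingArguments`. The `output_dir` value is a placeholder, and the number of training epochs is not shown in this diff:

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed in the updated card.
training_args = TrainingArguments(
    output_dir="trainer",            # placeholder; matches the model name in the card
    learning_rate=2e-5,              # changed from 5e-5 in this commit
    per_device_train_batch_size=16,  # changed from 8
    per_device_eval_batch_size=16,   # changed from 8
    seed=42,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                  # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,               # and epsilon=1e-08
)
```

The Adam betas, epsilon, and linear scheduler listed in the card match the `TrainingArguments` defaults, so they are spelled out here only for clarity.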