bobbyw committed
Commit 8d3d6d8
Parent: 4fa75e1

End of training

Files changed (1): README.md (+7, -7)
README.md CHANGED
@@ -20,7 +20,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [bobbyw/deberta-v3-large_v1_no_entities_with_context](https://huggingface.co/bobbyw/deberta-v3-large_v1_no_entities_with_context) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0259
+- Loss: 0.0260
 - Accuracy: 0.0045
 - F1: 0.0090
 - Precision: 0.0045
@@ -44,7 +44,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.0002
+- learning_rate: 2e-05
 - train_batch_size: 3
 - eval_batch_size: 3
 - seed: 42
@@ -56,15 +56,15 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Rate |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
-| 0.0312 | 1.0 | 540 | 0.0264 | 0.0045 | 0.0090 | 0.0045 | 1.0 | 0.0002 |
-| 0.0302 | 2.0 | 1080 | 0.0266 | 0.0045 | 0.0090 | 0.0045 | 1.0 | 0.0001 |
-| 0.0315 | 3.0 | 1620 | 0.0259 | 0.0045 | 0.0090 | 0.0045 | 1.0 | 5e-05 |
-| 0.03 | 4.0 | 2160 | 0.0259 | 0.0045 | 0.0090 | 0.0045 | 1.0 | 0.0 |
+| 0.0296 | 1.0 | 540 | 0.0260 | 0.0045 | 0.0090 | 0.0045 | 1.0 | 0.0000 |
+| 0.0292 | 2.0 | 1080 | 0.0261 | 0.0045 | 0.0090 | 0.0045 | 1.0 | 1e-05 |
+| 0.0306 | 3.0 | 1620 | 0.0260 | 0.0045 | 0.0090 | 0.0045 | 1.0 | 5e-06 |
+| 0.0297 | 4.0 | 2160 | 0.0260 | 0.0045 | 0.0090 | 0.0045 | 1.0 | 0.0 |
 
 
 ### Framework versions
 
 - Transformers 4.40.1
 - Pytorch 2.2.1+cu121
-- Datasets 2.19.0
+- Datasets 2.19.1
 - Tokenizers 0.19.1
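
For readers scanning the updated metrics: the reported F1 is the harmonic mean of the listed precision and the recall shown in the results table, so the new numbers are internally consistent. The short check below is illustrative only and is not part of the model card.

```python
# Quick consistency check of the reported evaluation metrics.
# precision = 0.0045 (summary), recall = 1.0 (results table).
precision = 0.0045
recall = 1.0

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.009 -> matches the reported F1 of 0.0090
```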
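
The hyperparameter hunk only exposes the learning rate, batch sizes, and seed. Below is a minimal sketch of how those values might map onto `transformers.TrainingArguments` under the Transformers 4.40.1 release listed in the framework versions. The task head, epoch count (inferred from the four-epoch results table), output directory, and eval strategy are assumptions, not values taken from the card; settings not visible in this diff (optimizer, scheduler, and so on) are left out.

```python
# Minimal sketch: mapping the hyperparameters shown in this commit onto
# transformers.TrainingArguments (Transformers 4.40.1). Anything not visible
# in the diff is either omitted or marked as an assumption.
from transformers import AutoModelForSequenceClassification, AutoTokenizer, TrainingArguments

base = "bobbyw/deberta-v3-large_v1_no_entities_with_context"  # base checkpoint named in the card
tokenizer = AutoTokenizer.from_pretrained(base)
# The classification head is an assumption; the card does not state the task type.
model = AutoModelForSequenceClassification.from_pretrained(base)

args = TrainingArguments(
    output_dir="deberta-v3-large_v1_finetuned",  # hypothetical output path
    learning_rate=2e-05,            # updated in this commit (was 0.0002)
    per_device_train_batch_size=3,  # train_batch_size: 3
    per_device_eval_batch_size=3,   # eval_batch_size: 3
    seed=42,
    num_train_epochs=4,             # inferred from the 4-epoch results table
    evaluation_strategy="epoch",    # assumption, matching the per-epoch eval rows
)
```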