xshubhamx committed on
Commit
b65cc96
1 Parent(s): 2036dec

End of training

README.md CHANGED
@@ -1,8 +1,8 @@
  ---
  license: cc-by-sa-4.0
+ base_model: nlpaueb/legal-bert-base-uncased
  tags:
  - generated_from_trainer
- base_model: nlpaueb/legal-bert-base-uncased
  metrics:
  - accuracy
  - precision
@@ -19,21 +19,21 @@ should probably proofread and complete it, then remove this comment. -->

  This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 2.2259
- - Accuracy: 0.2455
- - Precision: 0.0603
- - Recall: 0.2455
- - Precision Macro: 0.0164
- - Recall Macro: 0.0667
- - Macro Fpr: 0.0667
- - Weighted Fpr: 0.1800
- - Weighted Specificity: 0.7545
- - Macro Specificity: 0.9333
- - Weighted Sensitivity: 0.2455
- - Macro Sensitivity: 0.0667
- - F1 Micro: 0.2455
- - F1 Macro: 0.0263
- - F1 Weighted: 0.0968
+ - Loss: 1.2564
+ - Accuracy: 0.8273
+ - Precision: 0.8292
+ - Recall: 0.8273
+ - Precision Macro: 0.7794
+ - Recall Macro: 0.7759
+ - Macro Fpr: 0.0153
+ - Weighted Fpr: 0.0147
+ - Weighted Specificity: 0.9772
+ - Macro Specificity: 0.9870
+ - Weighted Sensitivity: 0.8273
+ - Macro Sensitivity: 0.7759
+ - F1 Micro: 0.8273
+ - F1 Macro: 0.7741
+ - F1 Weighted: 0.8269

  ## Model description

@@ -59,21 +59,22 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - num_epochs: 10
+ - mixed_precision_training: Native AMP

  ### Training results

  | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | Precision Macro | Recall Macro | Macro Fpr | Weighted Fpr | Weighted Specificity | Macro Specificity | Weighted Sensitivity | Macro Sensitivity | F1 Micro | F1 Macro | F1 Weighted |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:---------------:|:------------:|:---------:|:------------:|:--------------------:|:-----------------:|:--------------------:|:-----------------:|:--------:|:--------:|:-----------:|
- | 2.2376 | 1.0 | 643 | 2.2455 | 0.2455 | 0.0603 | 0.2455 | 0.0164 | 0.0667 | 0.0667 | 0.1800 | 0.7545 | 0.9333 | 0.2455 | 0.0667 | 0.2455 | 0.0263 | 0.0968 |
- | 2.2504 | 2.0 | 1286 | 2.2412 | 0.2455 | 0.0603 | 0.2455 | 0.0164 | 0.0667 | 0.0667 | 0.1800 | 0.7545 | 0.9333 | 0.2455 | 0.0667 | 0.2455 | 0.0263 | 0.0968 |
- | 2.2292 | 3.0 | 1929 | 2.2300 | 0.2455 | 0.0603 | 0.2455 | 0.0164 | 0.0667 | 0.0667 | 0.1800 | 0.7545 | 0.9333 | 0.2455 | 0.0667 | 0.2455 | 0.0263 | 0.0968 |
- | 2.218 | 4.0 | 2572 | 2.2316 | 0.2455 | 0.0603 | 0.2455 | 0.0164 | 0.0667 | 0.0667 | 0.1800 | 0.7545 | 0.9333 | 0.2455 | 0.0667 | 0.2455 | 0.0263 | 0.0968 |
- | 2.2317 | 5.0 | 3215 | 2.2295 | 0.2455 | 0.0603 | 0.2455 | 0.0164 | 0.0667 | 0.0667 | 0.1800 | 0.7545 | 0.9333 | 0.2455 | 0.0667 | 0.2455 | 0.0263 | 0.0968 |
- | 2.2355 | 6.0 | 3858 | 2.2310 | 0.2455 | 0.0603 | 0.2455 | 0.0164 | 0.0667 | 0.0667 | 0.1800 | 0.7545 | 0.9333 | 0.2455 | 0.0667 | 0.2455 | 0.0263 | 0.0968 |
- | 2.2231 | 7.0 | 4501 | 2.2300 | 0.2455 | 0.0603 | 0.2455 | 0.0164 | 0.0667 | 0.0667 | 0.1800 | 0.7545 | 0.9333 | 0.2455 | 0.0667 | 0.2455 | 0.0263 | 0.0968 |
- | 2.2212 | 8.0 | 5144 | 2.2291 | 0.2455 | 0.0603 | 0.2455 | 0.0164 | 0.0667 | 0.0667 | 0.1800 | 0.7545 | 0.9333 | 0.2455 | 0.0667 | 0.2455 | 0.0263 | 0.0968 |
- | 2.2318 | 9.0 | 5787 | 2.2258 | 0.2455 | 0.0603 | 0.2455 | 0.0164 | 0.0667 | 0.0667 | 0.1800 | 0.7545 | 0.9333 | 0.2455 | 0.0667 | 0.2455 | 0.0263 | 0.0968 |
- | 2.2128 | 10.0 | 6430 | 2.2259 | 0.2455 | 0.0603 | 0.2455 | 0.0164 | 0.0667 | 0.0667 | 0.1800 | 0.7545 | 0.9333 | 0.2455 | 0.0667 | 0.2455 | 0.0263 | 0.0968 |
+ | 1.2105 | 1.0 | 643 | 0.7916 | 0.7761 | 0.7729 | 0.7761 | 0.6230 | 0.5920 | 0.0214 | 0.0202 | 0.9664 | 0.9828 | 0.7761 | 0.5920 | 0.7761 | 0.5703 | 0.7551 |
+ | 0.6521 | 2.0 | 1286 | 0.6834 | 0.8025 | 0.8067 | 0.8025 | 0.7779 | 0.7152 | 0.0180 | 0.0173 | 0.9721 | 0.9850 | 0.8025 | 0.7152 | 0.8025 | 0.7181 | 0.7983 |
+ | 0.513 | 3.0 | 1929 | 0.8107 | 0.8141 | 0.8142 | 0.8141 | 0.7859 | 0.7227 | 0.0168 | 0.0160 | 0.9740 | 0.9859 | 0.8141 | 0.7227 | 0.8141 | 0.7261 | 0.8083 |
+ | 0.2635 | 4.0 | 2572 | 0.8442 | 0.8249 | 0.8285 | 0.8249 | 0.8298 | 0.7733 | 0.0156 | 0.0149 | 0.9759 | 0.9867 | 0.8249 | 0.7733 | 0.8249 | 0.7812 | 0.8242 |
+ | 0.1821 | 5.0 | 3215 | 0.9549 | 0.8226 | 0.8287 | 0.8226 | 0.8135 | 0.7623 | 0.0157 | 0.0152 | 0.9766 | 0.9866 | 0.8226 | 0.7623 | 0.8226 | 0.7758 | 0.8233 |
+ | 0.1123 | 6.0 | 3858 | 1.0790 | 0.8273 | 0.8316 | 0.8273 | 0.7865 | 0.7758 | 0.0152 | 0.0147 | 0.9779 | 0.9870 | 0.8273 | 0.7758 | 0.8273 | 0.7671 | 0.8268 |
+ | 0.0465 | 7.0 | 4501 | 1.1538 | 0.8280 | 0.8324 | 0.8280 | 0.7857 | 0.8054 | 0.0152 | 0.0146 | 0.9780 | 0.9871 | 0.8280 | 0.8054 | 0.8280 | 0.7890 | 0.8285 |
+ | 0.0256 | 8.0 | 5144 | 1.2413 | 0.8180 | 0.8263 | 0.8180 | 0.7780 | 0.8012 | 0.0162 | 0.0156 | 0.9771 | 0.9863 | 0.8180 | 0.8012 | 0.8180 | 0.7792 | 0.8196 |
+ | 0.0166 | 9.0 | 5787 | 1.2510 | 0.8218 | 0.8222 | 0.8218 | 0.7782 | 0.7600 | 0.0159 | 0.0152 | 0.9755 | 0.9865 | 0.8218 | 0.7600 | 0.8218 | 0.7660 | 0.8210 |
+ | 0.0107 | 10.0 | 6430 | 1.2564 | 0.8273 | 0.8292 | 0.8273 | 0.7794 | 0.7759 | 0.0153 | 0.0147 | 0.9772 | 0.9870 | 0.8273 | 0.7759 | 0.8273 | 0.7741 | 0.8269 |


  ### Framework versions
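The card reports several less common metrics alongside accuracy and F1 (Macro Fpr, Macro Specificity, Weighted Specificity). The script that computed them is not part of this commit, so the exact definitions are unknown; under the usual one-vs-rest convention, though, each can be derived from the multiclass confusion matrix. A minimal sketch (the definitions below are an assumption, not taken from this repo):

```python
def ovr_counts(cm):
    """One-vs-rest TP/FP/FN/TN per class from a square confusion matrix (rows = true labels)."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    tp = [cm[i][i] for i in range(n)]
    fp = [sum(cm[r][i] for r in range(n)) - cm[i][i] for i in range(n)]
    fn = [sum(cm[i]) - cm[i][i] for i in range(n)]
    tn = [total - tp[i] - fp[i] - fn[i] for i in range(n)]
    return tp, fp, fn, tn

def macro_fpr(cm):
    """Unweighted mean of per-class false positive rates FP / (FP + TN)."""
    tp, fp, fn, tn = ovr_counts(cm)
    return sum(fp[i] / (fp[i] + tn[i]) for i in range(len(cm))) / len(cm)

def macro_specificity(cm):
    """Unweighted mean of per-class specificities TN / (TN + FP)."""
    tp, fp, fn, tn = ovr_counts(cm)
    return sum(tn[i] / (tn[i] + fp[i]) for i in range(len(cm))) / len(cm)

def weighted_specificity(cm):
    """Per-class specificity weighted by class support (row sums)."""
    tp, fp, fn, tn = ovr_counts(cm)
    support = [sum(row) for row in cm]
    return sum(s * tn[i] / (tn[i] + fp[i]) for i, s in enumerate(support)) / sum(support)
```

Under this convention macro specificity is exactly 1 − macro FPR per construction; the card's values (0.9870 vs 1 − 0.0153 = 0.9847) don't quite satisfy that identity, so the training script likely averages these quantities slightly differently.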
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:21f79c2749555502499dfe5674da29c3e782737419b23af6f8b562cfd98d3872
+ oid sha256:713f7f1922f60377afc55832a3f17ced0a3a0fb9b9ac8ba0d021522243a2abaa
  size 437998636
runs/Apr12_19-33-47_85119520632e/events.out.tfevents.1712951052.85119520632e.34.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3b409d9f4f9ce16376ee02aa1a1613954e71a11b45c5bacce1dcbffedcde53cd
- size 18549
+ oid sha256:fc727c10596807dfe766c5728521d2f4e24cf7ba4decd3d472060248698282d0
+ size 18903
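The README's hyperparameter section lists `lr_scheduler_type: linear` with `num_epochs: 10`, and the results table shows 643 optimizer steps per epoch (6430 total). A sketch of what that schedule does to the learning rate, assuming no warmup; the base learning rate below is a hypothetical placeholder, since the actual value is not shown in this diff:

```python
def linear_lr(step, total_steps, base_lr, warmup_steps=0):
    """Linear schedule: ramp up to base_lr over warmup_steps, then decay linearly to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, total_steps - step) / (total_steps - warmup_steps)

TOTAL_STEPS = 643 * 10   # 643 steps/epoch x 10 epochs, per the results table
BASE_LR = 2e-5           # hypothetical; not stated in this commit
```

Halfway through training (step 3215, end of epoch 5) the learning rate is half the base value, and it reaches 0 at step 6430, matching the final row of the results table.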