dtorber committed on
Commit 2086896 · verified · 1 Parent(s): 5d1e78b

Model save

Files changed (2):
  1. README.md +15 -17
  2. model.safetensors +1 -1
README.md CHANGED
@@ -17,14 +17,14 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.1222
- - F1 Macro: 0.9798
- - F1: 0.9852
- - F1 Neg: 0.9744
- - Acc: 0.9812
- - Prec: 0.9862
- - Recall: 0.9843
- - Mcc: 0.9596
+ - Loss: 0.1503
+ - F1 Macro: 0.9566
+ - F1: 0.9688
+ - F1 Neg: 0.9444
+ - Acc: 0.96
+ - Prec: 0.9612
+ - Recall: 0.9764
+ - Mcc: 0.9134
 
 ## Model description
 
@@ -50,23 +50,21 @@ The following hyperparameters were used during training:
 - distributed_type: multi-GPU
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
- - num_epochs: 5
+ - num_epochs: 3
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 | F1 Neg | Acc | Prec | Recall | Mcc |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:------:|:------:|:------:|:------:|
- | No log | 1.0 | 400 | 0.0777 | 0.9784 | 0.9843 | 0.9725 | 0.98 | 0.9824 | 0.9862 | 0.9568 |
- | 0.2063 | 2.0 | 800 | 0.1065 | 0.9784 | 0.9843 | 0.9726 | 0.98 | 0.9843 | 0.9843 | 0.9569 |
- | 0.0791 | 3.0 | 1200 | 0.1388 | 0.9743 | 0.9814 | 0.9672 | 0.9762 | 0.9766 | 0.9862 | 0.9487 |
- | 0.0298 | 4.0 | 1600 | 0.1222 | 0.9798 | 0.9852 | 0.9744 | 0.9812 | 0.9862 | 0.9843 | 0.9596 |
- | 0.0052 | 5.0 | 2000 | 0.1608 | 0.9732 | 0.9801 | 0.9663 | 0.975 | 0.9900 | 0.9705 | 0.9468 |
+ | No log | 1.0 | 400 | 0.1503 | 0.9566 | 0.9688 | 0.9444 | 0.96 | 0.9612 | 0.9764 | 0.9134 |
+ | 0.2729 | 2.0 | 800 | 0.1966 | 0.9494 | 0.9643 | 0.9345 | 0.9537 | 0.9469 | 0.9823 | 0.9000 |
+ | 0.0926 | 3.0 | 1200 | 0.1825 | 0.9565 | 0.9688 | 0.9443 | 0.96 | 0.9595 | 0.9783 | 0.9134 |
 
 
 ### Framework versions
 
- - Transformers 4.38.2
- - Pytorch 2.2.1+cu121
- - Datasets 2.18.0
+ - Transformers 4.37.2
+ - Pytorch 2.2.0+cu121
+ - Datasets 2.16.1
 - Tokenizers 0.15.2
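
The hyperparameter list in the hunk above maps fairly directly onto `TrainingArguments` from the Transformers library. The following is a minimal sketch of that mapping, not the author's actual training script; the output directory is a placeholder, and values not visible in this hunk (learning rate, batch sizes, seed) are left at library defaults.

```python
# Minimal sketch, not the author's training script: the hyperparameters listed
# in the diff above expressed as a transformers TrainingArguments object.
# output_dir is a placeholder; learning rate and batch sizes are not shown in
# this hunk, so they are omitted and fall back to library defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuned-bert-base-spanish-wwm-cased",  # placeholder path
    num_train_epochs=3,           # num_epochs: 3 (reduced from 5 in this commit)
    lr_scheduler_type="linear",   # lr_scheduler_type: linear
    adam_beta1=0.9,               # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,           #   and epsilon=1e-08
    fp16=True,                    # mixed_precision_training: Native AMP
)
```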
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:2c5a4f9b072d62ffbcc422fcbc54840f3f0b1db55b309aa20d16da0442c62906
+ oid sha256:837b92d1c909ca88a1689356d8edaeab375787162e2c2cf3298da70fa510eaf4
 size 439433208
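
Since only the Git LFS pointer for model.safetensors changes (i.e. the checkpoint weights), loading the updated model works the same way as before. Below is a hypothetical usage sketch; the repository id is a placeholder because the actual repo name is not shown on this commit page, and sequence classification is an assumption based on the reported F1/Precision/Recall metrics.

```python
# Hypothetical usage sketch, not taken from the repository: loading the
# checkpoint stored in model.safetensors from the Hub. The repo id is a
# placeholder, and a sequence-classification head is an assumption based on
# the metrics reported in the model card.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "dtorber/<model-repo>"  # placeholder: real repo id not shown here
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("Texto de ejemplo en español.", return_tensors="pt")
logits = model(**inputs).logits
print(logits)
```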