Mawiwawi committed
Commit 582b66a (1 parent: 4f0e0f5)

End of training

README.md CHANGED
@@ -1,4 +1,5 @@
 ---
+library_name: transformers
 base_model: dccuchile/albert-base-spanish-finetuned-ner
 tags:
 - generated_from_trainer
@@ -19,11 +20,11 @@ should probably proofread and complete it, then remove this comment. -->

 This model is a fine-tuned version of [dccuchile/albert-base-spanish-finetuned-ner](https://huggingface.co/dccuchile/albert-base-spanish-finetuned-ner) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3328
-- Precision: 0.8824
-- Recall: 0.8219
-- F1: 0.8511
-- Accuracy: 0.9517
+- Loss: 0.3012
+- Precision: 0.8356
+- Recall: 0.8356
+- F1: 0.8356
+- Accuracy: 0.9385

 ## Model description

@@ -54,31 +55,31 @@ The following hyperparameters were used during training:

 | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
-| No log | 1.0 | 13 | 1.8310 | 0.0 | 0.0 | 0.0 | 0.5913 |
-| No log | 2.0 | 26 | 1.4524 | 0.0 | 0.0 | 0.0 | 0.6561 |
-| No log | 3.0 | 39 | 1.1442 | 0.04 | 0.0274 | 0.0325 | 0.6964 |
-| No log | 4.0 | 52 | 0.9493 | 0.3333 | 0.2877 | 0.3088 | 0.8009 |
-| No log | 5.0 | 65 | 0.8080 | 0.3759 | 0.3425 | 0.3584 | 0.8214 |
-| No log | 6.0 | 78 | 0.6958 | 0.4214 | 0.4041 | 0.4126 | 0.8406 |
-| No log | 7.0 | 91 | 0.6084 | 0.5734 | 0.5616 | 0.5675 | 0.8697 |
-| No log | 8.0 | 104 | 0.5407 | 0.6405 | 0.6712 | 0.6555 | 0.8869 |
-| No log | 9.0 | 117 | 0.4802 | 0.7534 | 0.7534 | 0.7534 | 0.9187 |
-| No log | 10.0 | 130 | 0.4406 | 0.8214 | 0.7877 | 0.8042 | 0.9286 |
-| No log | 11.0 | 143 | 0.4134 | 0.8603 | 0.8014 | 0.8298 | 0.9299 |
-| No log | 12.0 | 156 | 0.3900 | 0.8551 | 0.8082 | 0.8310 | 0.9372 |
-| No log | 13.0 | 169 | 0.3727 | 0.85 | 0.8151 | 0.8322 | 0.9418 |
-| No log | 14.0 | 182 | 0.3585 | 0.8561 | 0.8151 | 0.8351 | 0.9431 |
-| No log | 15.0 | 195 | 0.3507 | 0.8633 | 0.8219 | 0.8421 | 0.9458 |
-| No log | 16.0 | 208 | 0.3431 | 0.8696 | 0.8219 | 0.8451 | 0.9497 |
-| No log | 17.0 | 221 | 0.3393 | 0.8824 | 0.8219 | 0.8511 | 0.9511 |
-| No log | 18.0 | 234 | 0.3355 | 0.8759 | 0.8219 | 0.8481 | 0.9517 |
-| No log | 19.0 | 247 | 0.3335 | 0.8824 | 0.8219 | 0.8511 | 0.9517 |
-| No log | 20.0 | 260 | 0.3328 | 0.8824 | 0.8219 | 0.8511 | 0.9517 |
+| No log | 1.0 | 13 | 1.8849 | 0.0 | 0.0 | 0.0 | 0.5939 |
+| No log | 2.0 | 26 | 1.4600 | 0.0 | 0.0 | 0.0 | 0.6687 |
+| No log | 3.0 | 39 | 1.1449 | 0.0 | 0.0 | 0.0 | 0.6832 |
+| No log | 4.0 | 52 | 0.9138 | 0.2857 | 0.2329 | 0.2566 | 0.8056 |
+| No log | 5.0 | 65 | 0.7441 | 0.4504 | 0.4041 | 0.4260 | 0.8399 |
+| No log | 6.0 | 78 | 0.6292 | 0.5310 | 0.5274 | 0.5292 | 0.875 |
+| No log | 7.0 | 91 | 0.5406 | 0.6786 | 0.6507 | 0.6643 | 0.9041 |
+| No log | 8.0 | 104 | 0.4747 | 0.7397 | 0.7397 | 0.7397 | 0.9259 |
+| No log | 9.0 | 117 | 0.4228 | 0.7945 | 0.7945 | 0.7945 | 0.9306 |
+| No log | 10.0 | 130 | 0.3900 | 0.8333 | 0.8219 | 0.8276 | 0.9332 |
+| No log | 11.0 | 143 | 0.3685 | 0.8392 | 0.8219 | 0.8304 | 0.9339 |
+| No log | 12.0 | 156 | 0.3487 | 0.8333 | 0.8219 | 0.8276 | 0.9339 |
+| No log | 13.0 | 169 | 0.3325 | 0.8219 | 0.8219 | 0.8219 | 0.9339 |
+| No log | 14.0 | 182 | 0.3227 | 0.8472 | 0.8356 | 0.8414 | 0.9339 |
+| No log | 15.0 | 195 | 0.3150 | 0.8531 | 0.8356 | 0.8443 | 0.9358 |
+| No log | 16.0 | 208 | 0.3094 | 0.8345 | 0.8288 | 0.8316 | 0.9358 |
+| No log | 17.0 | 221 | 0.3047 | 0.8414 | 0.8356 | 0.8385 | 0.9378 |
+| No log | 18.0 | 234 | 0.3027 | 0.8356 | 0.8356 | 0.8356 | 0.9385 |
+| No log | 19.0 | 247 | 0.3017 | 0.8414 | 0.8356 | 0.8385 | 0.9385 |
+| No log | 20.0 | 260 | 0.3012 | 0.8356 | 0.8356 | 0.8356 | 0.9385 |


 ### Framework versions

-- Transformers 4.40.2
-- Pytorch 2.3.0+cpu
-- Datasets 2.19.1
+- Transformers 4.44.2
+- Pytorch 2.4.1+cu118
+- Datasets 2.21.0
 - Tokenizers 0.19.1
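A quick way to sanity-check an updated checkpoint like the one described in this model card is the `transformers` token-classification pipeline. The sketch below is a minimal usage example, not part of this commit: the repo id is a placeholder (the actual repository name is not stated in the diff), and the precision/recall/F1 figures above are presumably entity-level (seqeval-style) scores while accuracy is token-level.

```python
# Minimal usage sketch for the fine-tuned Spanish NER checkpoint described above.
# NOTE: the repo id is a placeholder; substitute the actual model repository.
from transformers import pipeline

model_id = "Mawiwawi/albert-base-spanish-finetuned-ner"  # placeholder, not confirmed by this commit

ner = pipeline(
    "token-classification",
    model=model_id,
    aggregation_strategy="simple",  # merge word pieces back into whole entity spans
)

print(ner("Gabriel García Márquez nació en Aracataca, Colombia."))
# Expected shape: a list of dicts with entity_group, score, word, start, end.
```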
runs/Sep09_22-22-00_DESKTOP-97F14OL/events.out.tfevents.1725913323.DESKTOP-97F14OL.13800.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e5641940df8e92f52333ce7e5dc1a31e8cfe19d1f71ee72a6e9875e202594423
-size 15355
+oid sha256:e477f7dec25d09a3112dd6e7c4332854e706808687346b41b0e04ca1084583a1
+size 16181
runs/Sep09_22-22-00_DESKTOP-97F14OL/events.out.tfevents.1725913744.DESKTOP-97F14OL.13800.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:da270e33a91f7ed963cacd6b736aba2cc741234a25701fa70570b92de28b7109
+size 560
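The two `events.out.tfevents.*` files above are TensorBoard logs tracked with Git LFS, so the diff only shows their pointer files (oid and size). After fetching the real files (for example with `git lfs pull`), the logged scalars can be read back. This is a sketch under assumptions: it presumes TensorBoard is installed, the `runs/` directory has been pulled locally, and the Trainer used its usual `eval/loss` tag name, none of which is shown in this commit.

```python
# Sketch: read training scalars out of the TensorBoard event files in runs/.
# Assumes `git lfs pull` has replaced the LFS pointers with the real event files.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

log_dir = "runs/Sep09_22-22-00_DESKTOP-97F14OL"  # directory from this commit
acc = EventAccumulator(log_dir)
acc.Reload()  # parse every events.out.tfevents.* file in the directory

print(acc.Tags()["scalars"])          # available scalar tags (names assumed, not shown in the diff)
for event in acc.Scalars("eval/loss"):  # assumed Trainer tag name
    print(event.step, event.value)
```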