JoshuaAAX committed on
Commit 556b6b2
1 Parent(s): f88165e

Training complete

Files changed (1)
README.md +13 -20
README.md CHANGED
@@ -25,16 +25,16 @@ model-index:
     metrics:
     - name: Precision
       type: precision
-      value: 0.8412017167381974
+      value: 0.8347884486232371
     - name: Recall
       type: recall
-      value: 0.8556985294117647
+      value: 0.8568474264705882
     - name: F1
       type: f1
-      value: 0.8483881991115162
+      value: 0.8456741127111919
     - name: Accuracy
       type: accuracy
-      value: 0.9705709149516489
+      value: 0.9702609719811555
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -44,11 +44,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [NazaGara/NER-fine-tuned-BETO](https://huggingface.co/NazaGara/NER-fine-tuned-BETO) on the conll2002 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.2190
-- Precision: 0.8412
-- Recall: 0.8557
-- F1: 0.8484
-- Accuracy: 0.9706
+- Loss: 0.1589
+- Precision: 0.8348
+- Recall: 0.8568
+- F1: 0.8457
+- Accuracy: 0.9703
 
 ## Model description
 
@@ -73,22 +73,15 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 10
+- num_epochs: 3
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
-| 0.0512        | 1.0   | 521  | 0.1314          | 0.8328    | 0.8562 | 0.8443 | 0.9703   |
-| 0.0305        | 2.0   | 1042 | 0.1553          | 0.8340    | 0.8451 | 0.8395 | 0.9688   |
-| 0.0195        | 3.0   | 1563 | 0.1462          | 0.8483    | 0.8568 | 0.8525 | 0.9710   |
-| 0.0148        | 4.0   | 2084 | 0.1809          | 0.8395    | 0.8460 | 0.8428 | 0.9683   |
-| 0.0112        | 5.0   | 2605 | 0.1889          | 0.8394    | 0.8516 | 0.8454 | 0.9701   |
-| 0.0079        | 6.0   | 3126 | 0.1815          | 0.8431    | 0.8571 | 0.8500 | 0.9707   |
-| 0.0062        | 7.0   | 3647 | 0.2037          | 0.8410    | 0.8571 | 0.8490 | 0.9704   |
-| 0.0049        | 8.0   | 4168 | 0.2065          | 0.84      | 0.8541 | 0.8470 | 0.9706   |
-| 0.0038        | 9.0   | 4689 | 0.2189          | 0.8434    | 0.8539 | 0.8486 | 0.9697   |
-| 0.0032        | 10.0  | 5210 | 0.2190          | 0.8412    | 0.8557 | 0.8484 | 0.9706   |
+| 0.0499        | 1.0   | 521  | 0.1304          | 0.8278    | 0.8536 | 0.8405 | 0.9704   |
+| 0.0272        | 2.0   | 1042 | 0.1510          | 0.8355    | 0.8486 | 0.8420 | 0.9687   |
+| 0.0153        | 3.0   | 1563 | 0.1589          | 0.8348    | 0.8568 | 0.8457 | 0.9703   |
 
 
 ### Framework versions
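
The third hunk lists the optimizer and scheduler settings and changes `num_epochs` from 10 to 3. As a rough sketch only — the learning rate, batch sizes, and output directory are not shown in this diff, so the values marked as placeholders below are assumptions — the corresponding `transformers` `TrainingArguments` would look roughly like this:

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters visible in the diff.
# Anything marked "placeholder" is NOT in the diff and is only illustrative.
training_args = TrainingArguments(
    output_dir="beto-finetuned-ner",   # placeholder output path
    num_train_epochs=3,                # updated value (+ num_epochs: 3)
    seed=42,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                    # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    learning_rate=2e-5,                # placeholder: not shown in this hunk
    per_device_train_batch_size=16,    # placeholder: not shown in this hunk
    evaluation_strategy="epoch",       # matches the per-epoch validation rows
)
```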
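The Precision, Recall, F1, and Accuracy figures in the metadata block and in the training results table are the overall entity-level scores that the Hugging Face `seqeval` wrapper reports for token classification. The card does not spell out the evaluation code, so the snippet below is an illustrative sketch with toy IOB2 sequences rather than the actual script:

```python
import evaluate

# seqeval reports entity-level precision/recall/F1 plus token-level accuracy,
# i.e. the four metrics listed in the card.
seqeval = evaluate.load("seqeval")

# Toy IOB2 tag sequences in the conll2002 label style (PER/ORG/LOC/MISC).
references = [["B-PER", "I-PER", "O", "B-LOC", "O"]]
predictions = [["B-PER", "I-PER", "O", "O", "O"]]

results = seqeval.compute(predictions=predictions, references=references)
print(
    results["overall_precision"],
    results["overall_recall"],
    results["overall_f1"],
    results["overall_accuracy"],
)
```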
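Since the card describes a BETO-based NER model fine-tuned on conll2002 (presumably the Spanish subset, given that BETO is a Spanish BERT), the updated checkpoint can be loaded for inference with the `transformers` token-classification pipeline. The repo id below is hypothetical — the commit page does not name the repository this README belongs to — so substitute the actual model id:

```python
from transformers import pipeline

# Hypothetical repo id: replace with the actual model id hosting this README.
model_id = "JoshuaAAX/NER-fine-tuned-BETO"

# aggregation_strategy="simple" merges B-/I- subword predictions into entity spans.
ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

# BETO is a Spanish BERT, so a Spanish sentence is the natural test input.
for entity in ner("Gabriel García Márquez nació en Aracataca, Colombia."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```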