arubenruben committed
Commit 38d7f6a
1 Parent(s): 677fc8a

End of training

Files changed (2):
  1. README.md +20 -12
  2. model.safetensors +1 -1
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 license: mit
-base_model: neuralmind/bert-large-portuguese-cased
+base_model: PORTULAN/albertina-100m-portuguese-ptpt-encoder
 tags:
 - generated_from_trainer
 metrics:
@@ -9,22 +9,22 @@ metrics:
 - precision
 - recall
 model-index:
-- name: LVI_bert-large-portuguese-cased
+- name: LVI_albertina-100m-portuguese-ptpt-encoder
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# LVI_bert-large-portuguese-cased
+# LVI_albertina-100m-portuguese-ptpt-encoder
 
-This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on the None dataset.
+This model is a fine-tuned version of [PORTULAN/albertina-100m-portuguese-ptpt-encoder](https://huggingface.co/PORTULAN/albertina-100m-portuguese-ptpt-encoder) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0755
-- Accuracy: 0.9775
-- F1: 0.9775
-- Precision: 0.9758
-- Recall: 0.9793
+- Loss: 0.1867
+- Accuracy: 0.9802
+- F1: 0.9800
+- Precision: 0.9905
+- Recall: 0.9696
 
 ## Model description
 
@@ -53,9 +53,17 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
-| 0.1071        | 1.0   | 3217 | 0.0755          | 0.9775   | 0.9775 | 0.9758    | 0.9793 |
+| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     | Precision | Recall |
+|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
+| 0.1284        | 1.0   | 3217  | 0.1454          | 0.9581   | 0.9567 | 0.9882    | 0.9272 |
+| 0.0946        | 2.0   | 6434  | 0.1211          | 0.9737   | 0.9734 | 0.9864    | 0.9607 |
+| 0.0575        | 3.0   | 9651  | 0.1087          | 0.9776   | 0.9774 | 0.9892    | 0.9659 |
+| 0.0374        | 4.0   | 12868 | 0.1033          | 0.981    | 0.9809 | 0.9854    | 0.9765 |
+| 0.0311        | 5.0   | 16085 | 0.1154          | 0.981    | 0.9808 | 0.9896    | 0.9722 |
+| 0.0125        | 6.0   | 19302 | 0.1143          | 0.9830   | 0.9830 | 0.9833    | 0.9826 |
+| 0.0107        | 7.0   | 22519 | 0.1562          | 0.9807   | 0.9805 | 0.9910    | 0.9702 |
+| 0.0032        | 8.0   | 25736 | 0.1711          | 0.9808   | 0.9806 | 0.9892    | 0.9721 |
+| 0.0036        | 9.0   | 28953 | 0.1867          | 0.9802   | 0.9800 | 0.9905    | 0.9696 |
 
 
 ### Framework versions
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:52e9e4666d6d81e734958b4ada5253311063fa5efe81eabc409a358a226cbad7
+oid sha256:051dfb67a46d7af8d6b487d10c031643e56109817bf461ee71359f014e0263ec
 size 556799560
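As a quick offline sanity check (not part of the commit itself), the F1 values reported in both versions of the model card are consistent with F1 being the harmonic mean of the reported precision and recall. A minimal sketch using only the figures from the diff above:

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Old card (bert-large-portuguese-cased): P=0.9758, R=0.9793, reported F1=0.9775
old_f1 = f1_score(0.9758, 0.9793)

# New card (albertina-100m-portuguese-ptpt-encoder): P=0.9905, R=0.9696, reported F1=0.9800
new_f1 = f1_score(0.9905, 0.9696)

print(round(old_f1, 4), round(new_f1, 4))
```

Both recomputed values agree with the reported F1 scores to within about 1e-3; the small residual comes from the card's per-metric rounding to four decimal places.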