arubenruben committed on
Commit d3b802b
1 Parent(s): 35f00e6

End of training

Files changed (2):
  1. README.md +12 -9
  2. model.safetensors +1 -1
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 license: mit
-base_model: neuralmind/bert-large-portuguese-cased
+base_model: PORTULAN/albertina-100m-portuguese-ptpt-encoder
 tags:
 - generated_from_trainer
 metrics:
@@ -9,18 +9,18 @@ metrics:
 - precision
 - recall
 model-index:
-- name: LVI_bert-large-portuguese-cased
+- name: LVI_albertina-100m-portuguese-ptpt-encoder
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# LVI_bert-large-portuguese-cased
+# LVI_albertina-100m-portuguese-ptpt-encoder
 
-This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on the None dataset.
+This model is a fine-tuned version of [PORTULAN/albertina-100m-portuguese-ptpt-encoder](https://huggingface.co/PORTULAN/albertina-100m-portuguese-ptpt-encoder) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6945
+- Loss: 0.6932
 - Accuracy: 0.5
 - F1: 0.0
 - Precision: 0.0
@@ -55,10 +55,13 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
 |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
-| 0.2141 | 1.0 | 3217 | 0.2055 | 0.9405 | 0.9423 | 0.9140 | 0.9724 |
-| 0.7115 | 2.0 | 6434 | 0.6959 | 0.5 | 0.0 | 0.0 | 0.0 |
-| 0.7041 | 3.0 | 9651 | 0.6931 | 0.5 | 0.0 | 0.0 | 0.0 |
-| 0.7056 | 4.0 | 12868 | 0.6945 | 0.5 | 0.0 | 0.0 | 0.0 |
+| 0.5678 | 1.0 | 3217 | 0.6316 | 0.6653 | 0.5619 | 0.8128 | 0.4294 |
+| 0.6042 | 2.0 | 6434 | 0.6911 | 0.5 | 0.0 | 0.0 | 0.0 |
+| 0.6946 | 3.0 | 9651 | 0.6932 | 0.5 | 0.0 | 0.0 | 0.0 |
+| 0.694 | 4.0 | 12868 | 0.6932 | 0.5 | 0.6667 | 0.5 | 1.0 |
+| 0.6942 | 5.0 | 16085 | 0.6933 | 0.5 | 0.6667 | 0.5 | 1.0 |
+| 0.6936 | 6.0 | 19302 | 0.6937 | 0.5 | 0.6667 | 0.5 | 1.0 |
+| 0.6937 | 7.0 | 22519 | 0.6932 | 0.5 | 0.0 | 0.0 | 0.0 |
 
 
 ### Framework versions
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ab7cf2f1a0dcfabc1e683851ce3da20cdee8388658c44da0419e88f1c050f737
+oid sha256:5333189bc54454612d23d846c0995dfc4cb235496fc187f0d2f217687e6b99c0
 size 556799560
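
For reference, below is a minimal sketch of loading the updated checkpoint for inference with the `transformers` library. It assumes the Hub repository id is `arubenruben/LVI_albertina-100m-portuguese-ptpt-encoder` and that the fine-tuned head is a binary sequence-classification head (suggested by the accuracy/F1/precision/recall metrics in the card); neither detail is stated explicitly in this commit, so adjust both as needed.

```python
# Minimal sketch: load the fine-tuned checkpoint for inference.
# Assumptions (not confirmed by this commit): the Hub repo id below is
# hypothetical, and the model carries a binary sequence-classification head.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "arubenruben/LVI_albertina-100m-portuguese-ptpt-encoder"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

# Tokenize a Portuguese example sentence and run a forward pass.
inputs = tokenizer("Este é um exemplo de frase em português.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Report the predicted class index and the class probabilities.
predicted_class = logits.argmax(dim=-1).item()
print(predicted_class, logits.softmax(dim=-1).tolist())
```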