qfrodicio committed
Commit 0deac63
1 Parent(s): 52869dc

Training complete

README.md CHANGED
@@ -3,10 +3,10 @@ base_model: MMG/mlm-spanish-roberta-base
  tags:
  - generated_from_trainer
  metrics:
+ - accuracy
  - precision
  - recall
  - f1
- - accuracy
  model-index:
  - name: roberta-finetuned-intention-prediction-es
    results: []
@@ -19,11 +19,11 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [MMG/mlm-spanish-roberta-base](https://huggingface.co/MMG/mlm-spanish-roberta-base) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 1.8935
- - Precision: 0.6851
- - Recall: 0.6851
- - F1: 0.6851
- - Accuracy: 0.6745
+ - Loss: 1.8994
+ - Accuracy: 0.6775
+ - Precision: 0.6807
+ - Recall: 0.6775
+ - F1: 0.6699
 
  ## Model description
 
@@ -52,28 +52,28 @@ The following hyperparameters were used during training:
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
- |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
- | 2.3195 | 1.0 | 102 | 1.7653 | 0.4977 | 0.4977 | 0.4977 | 0.4881 |
- | 1.3397 | 2.0 | 204 | 1.3826 | 0.6064 | 0.6064 | 0.6064 | 0.5933 |
- | 0.884 | 3.0 | 306 | 1.2726 | 0.6495 | 0.6495 | 0.6495 | 0.6372 |
- | 0.5805 | 4.0 | 408 | 1.3527 | 0.6571 | 0.6571 | 0.6571 | 0.6444 |
- | 0.3923 | 5.0 | 510 | 1.3805 | 0.6732 | 0.6732 | 0.6732 | 0.6600 |
- | 0.2565 | 6.0 | 612 | 1.4492 | 0.6801 | 0.6801 | 0.6801 | 0.6687 |
- | 0.1782 | 7.0 | 714 | 1.4983 | 0.6766 | 0.6766 | 0.6766 | 0.6643 |
- | 0.1196 | 8.0 | 816 | 1.5517 | 0.6840 | 0.6840 | 0.6840 | 0.6726 |
- | 0.0922 | 9.0 | 918 | 1.5745 | 0.6777 | 0.6777 | 0.6777 | 0.6658 |
- | 0.0577 | 10.0 | 1020 | 1.6238 | 0.6866 | 0.6866 | 0.6866 | 0.6748 |
- | 0.042 | 11.0 | 1122 | 1.7542 | 0.6697 | 0.6697 | 0.6697 | 0.6578 |
- | 0.0298 | 12.0 | 1224 | 1.7861 | 0.6842 | 0.6842 | 0.6842 | 0.6730 |
- | 0.0201 | 13.0 | 1326 | 1.8079 | 0.6906 | 0.6906 | 0.6906 | 0.6812 |
- | 0.0147 | 14.0 | 1428 | 1.8380 | 0.6833 | 0.6833 | 0.6833 | 0.6732 |
- | 0.0109 | 15.0 | 1530 | 1.8730 | 0.6808 | 0.6808 | 0.6808 | 0.6708 |
- | 0.0079 | 16.0 | 1632 | 1.8702 | 0.6864 | 0.6864 | 0.6864 | 0.6763 |
- | 0.0067 | 17.0 | 1734 | 1.8907 | 0.6873 | 0.6873 | 0.6873 | 0.6766 |
- | 0.0061 | 18.0 | 1836 | 1.8998 | 0.6826 | 0.6826 | 0.6826 | 0.6719 |
- | 0.0057 | 19.0 | 1938 | 1.8974 | 0.6850 | 0.6850 | 0.6850 | 0.6741 |
- | 0.0051 | 20.0 | 2040 | 1.8935 | 0.6851 | 0.6851 | 0.6851 | 0.6745 |
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
+ | 2.2919 | 1.0 | 102 | 1.6955 | 0.5146 | 0.4756 | 0.5146 | 0.4444 |
+ | 1.3437 | 2.0 | 204 | 1.3834 | 0.5850 | 0.5693 | 0.5850 | 0.5589 |
+ | 0.8812 | 3.0 | 306 | 1.2719 | 0.6372 | 0.6285 | 0.6372 | 0.6200 |
+ | 0.5649 | 4.0 | 408 | 1.3629 | 0.6361 | 0.6518 | 0.6361 | 0.6253 |
+ | 0.39 | 5.0 | 510 | 1.4240 | 0.6439 | 0.6585 | 0.6439 | 0.6396 |
+ | 0.2704 | 6.0 | 612 | 1.4209 | 0.6536 | 0.6583 | 0.6536 | 0.6383 |
+ | 0.1785 | 7.0 | 714 | 1.4966 | 0.6647 | 0.6760 | 0.6647 | 0.6533 |
+ | 0.1208 | 8.0 | 816 | 1.5681 | 0.6623 | 0.6764 | 0.6623 | 0.6504 |
+ | 0.0802 | 9.0 | 918 | 1.6662 | 0.6612 | 0.6691 | 0.6612 | 0.6501 |
+ | 0.0565 | 10.0 | 1020 | 1.6779 | 0.6752 | 0.6721 | 0.6752 | 0.6653 |
+ | 0.0368 | 11.0 | 1122 | 1.6974 | 0.6735 | 0.6806 | 0.6735 | 0.6653 |
+ | 0.0267 | 12.0 | 1224 | 1.7552 | 0.6842 | 0.6860 | 0.6842 | 0.6754 |
+ | 0.0177 | 13.0 | 1326 | 1.8757 | 0.6714 | 0.6725 | 0.6714 | 0.6613 |
+ | 0.0142 | 14.0 | 1428 | 1.8640 | 0.6723 | 0.6745 | 0.6723 | 0.6622 |
+ | 0.0114 | 15.0 | 1530 | 1.8517 | 0.6795 | 0.6864 | 0.6795 | 0.6716 |
+ | 0.0081 | 16.0 | 1632 | 1.8696 | 0.6755 | 0.6780 | 0.6755 | 0.6668 |
+ | 0.0075 | 17.0 | 1734 | 1.8885 | 0.6752 | 0.6840 | 0.6752 | 0.6667 |
+ | 0.0059 | 18.0 | 1836 | 1.8893 | 0.6775 | 0.6809 | 0.6775 | 0.6698 |
+ | 0.0051 | 19.0 | 1938 | 1.9001 | 0.6775 | 0.6810 | 0.6775 | 0.6700 |
+ | 0.0048 | 20.0 | 2040 | 1.8994 | 0.6775 | 0.6807 | 0.6775 | 0.6699 |
 
 
  ### Framework versions
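In the removed table, Precision, Recall, and F1 are identical at every epoch, which is characteristic of a micro average; in the updated table they diverge, which is consistent with a weighted (or macro) average. The metric code itself is not part of this commit, so the sketch below is only an illustration of how columns like these are commonly produced with a Hugging Face `Trainer`; the weighted averaging and label handling are assumptions, not the author's actual code.

```python
# Illustrative only -- not the training code from this commit.  A typical
# compute_metrics for a Hugging Face Trainer that yields Accuracy, Precision,
# Recall and F1 columns like those in the table above.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # eval_pred carries the raw logits and the gold labels for the eval set.
    preds = np.argmax(eval_pred.predictions, axis=-1)
    labels = eval_pred.label_ids
    # "weighted" lets precision, recall and F1 diverge (as in the new table);
    # "micro" would make all three coincide (as in the old one).
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```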
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b3eb5f18ca3a146d4e410a24bffe39f5262bd494e30bd11bb524cc7d61a91e48
+ oid sha256:6c600231ce1a7e6b2bbf1c97471a896a1a0e51813489252dd6e568500cc16666
  size 501743200
runs/Jan18_21-11-04_9790f53cdeef/events.out.tfevents.1705612267.9790f53cdeef.253.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:cdd122de0bc91ec581f2d7b320a5ee401d59eb10e30e10390ad5be1de6e0edae
- size 20195
+ oid sha256:377a616f478ef2150852915c54f5a22e15adac031334d39967e6b5dc6a00b057
+ size 20549
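For completeness, a minimal inference sketch for the updated checkpoint. The repo id `qfrodicio/roberta-finetuned-intention-prediction-es` is inferred from the commit author and the model-index name, and the sequence-classification head is an assumption; if the checkpoint actually tags individual tokens, swap in `AutoModelForTokenClassification`.

```python
# Illustrative only: repo id inferred from the commit author and model-index
# name; the sequence-classification head is an assumption.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "qfrodicio/roberta-finetuned-intention-prediction-es"  # assumed

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("¿Me puedes ayudar con mi pedido?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred, pred))  # predicted intention label
```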