RafaelMayer committed
Commit 7310ca9
1 Parent(s): d7ee898

Training in progress epoch 1

Files changed (1)
  1. README.md +4 -13
README.md CHANGED
@@ -15,7 +15,7 @@ probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Train Loss: 12.5328
+- Train Loss: 12.0507
 - Validation Loss: 11.7955
 - Train Accuracy: 0.7647
 - Train Precision: [0. 0.76470588]
@@ -24,7 +24,7 @@ It achieves the following results on the evaluation set:
 - Train Recall W: 0.7647
 - Train F1: [0. 0.86666667]
 - Train F1 W: 0.6627
-- Epoch: 10
+- Epoch: 1
 
 ## Model description
 
@@ -43,23 +43,14 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -460, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 500, 'power': 1.0, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
+- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 35, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 5, 'power': 1.0, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
 - training_precision: float32
 
 ### Training results
 
 | Train Loss | Validation Loss | Train Accuracy | Train Precision | Train Precision W | Train Recall | Train Recall W | Train F1 | Train F1 W | Epoch |
 |:----------:|:---------------:|:--------------:|:-----------------------:|:-----------------:|:------------:|:--------------:|:-----------------------:|:----------:|:-----:|
-| 12.2918 | 11.7955 | 0.7647 | [0. 0.76470588] | 0.5848 | [0. 1.] | 0.7647 | [0. 0.86666667] | 0.6627 | 1 |
-| 12.2918 | 11.7955 | 0.7647 | [0. 0.76470588] | 0.5848 | [0. 1.] | 0.7647 | [0. 0.86666667] | 0.6627 | 2 |
-| 12.2918 | 11.7955 | 0.7647 | [0. 0.76470588] | 0.5848 | [0. 1.] | 0.7647 | [0. 0.86666667] | 0.6627 | 3 |
-| 12.2918 | 11.7955 | 0.7647 | [0. 0.76470588] | 0.5848 | [0. 1.] | 0.7647 | [0. 0.86666667] | 0.6627 | 4 |
-| 12.5328 | 11.7955 | 0.7647 | [0. 0.76470588] | 0.5848 | [0. 1.] | 0.7647 | [0. 0.86666667] | 0.6627 | 5 |
-| 12.0507 | 11.7955 | 0.7647 | [0. 0.76470588] | 0.5848 | [0. 1.] | 0.7647 | [0. 0.86666667] | 0.6627 | 6 |
-| 12.2918 | 11.7955 | 0.7647 | [0. 0.76470588] | 0.5848 | [0. 1.] | 0.7647 | [0. 0.86666667] | 0.6627 | 7 |
-| 12.0507 | 11.7955 | 0.7647 | [0. 0.76470588] | 0.5848 | [0. 1.] | 0.7647 | [0. 0.86666667] | 0.6627 | 8 |
-| 12.2918 | 11.7955 | 0.7647 | [0. 0.76470588] | 0.5848 | [0. 1.] | 0.7647 | [0. 0.86666667] | 0.6627 | 9 |
-| 12.5328 | 11.7955 | 0.7647 | [0. 0.76470588] | 0.5848 | [0. 1.] | 0.7647 | [0. 0.86666667] | 0.6627 | 10 |
+| 12.0507 | 11.7955 | 0.7647 | [0. 0.76470588] | 0.5848 | [0. 1.] | 0.7647 | [0. 0.86666667] | 0.6627 | 1 |
 
 
 ### Framework versions
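
The new optimizer config describes a WarmUp wrapper around a PolynomialDecay schedule: the learning rate ramps up linearly for `warmup_steps` and then, since `power` is 1.0, decays linearly over `decay_steps`. A minimal plain-Python sketch of that shape, with defaults taken from the new config (2e-05, warmup_steps=5, decay_steps=35) — an illustration of the schedule, not the Keras classes themselves:

```python
def lr_at_step(step, init_lr=2e-05, warmup_steps=5, decay_steps=35,
               end_lr=0.0, power=1.0):
    """Sketch of WarmUp + PolynomialDecay; defaults mirror the config above."""
    if step < warmup_steps:
        # Linear warmup from 0 up to the initial learning rate.
        return init_lr * step / warmup_steps
    # Polynomial decay (power=1.0 -> linear) from init_lr to end_lr,
    # clamped once the decay horizon is exhausted.
    decay_step = min(step - warmup_steps, decay_steps)
    frac = 1.0 - decay_step / decay_steps
    return (init_lr - end_lr) * frac ** power + end_lr
```

With 5 warmup plus 35 decay steps this peaks at 2e-05 right after warmup and reaches 0.0 at step 40, consistent with one short training epoch.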
 
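
The per-class arrays in the results ([0. 0.76470588] precision, [0. 1.] recall) say the model predicts class 1 for every evaluation example. Assuming an evaluation set of 17 examples with 13 positives — class counts inferred here purely to match the reported numbers, so treat them as an assumption — the weighted scores can be reproduced by hand:

```python
y_true = [0] * 4 + [1] * 13   # assumed label distribution (4 negatives, 13 positives)
y_pred = [1] * 17             # degenerate model: always predicts class 1

def per_class_prf(y_true, y_pred, classes=(0, 1)):
    """Per-class precision, recall, and F1 from raw label lists."""
    scores = {}
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores[c] = (prec, rec, f1)
    return scores

scores = per_class_prf(y_true, y_pred)
support = {c: y_true.count(c) for c in (0, 1)}
total = len(y_true)
# Support-weighted averages, as in the "W" columns of the card.
weighted_prec = sum(scores[c][0] * support[c] for c in scores) / total
weighted_f1 = sum(scores[c][2] * support[c] for c in scores) / total
```

Under this assumption, class-1 precision is 13/17 ≈ 0.7647, class-1 recall is 1.0, and the weighted precision and F1 come out to ≈ 0.5848 and ≈ 0.6627 — matching the card, and confirming that accuracy here just equals the majority-class rate.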