fermaat committed
Commit 7fde946
1 Parent(s): 04ce47d

Update README.md

Files changed (1): README.md +17 -15
README.md CHANGED
@@ -48,21 +48,6 @@ It's a straightforward and fast solution that creates a positive impact in the c
 
 By using gender-inclusive models we can help reduce gender bias in a language corpus by, for instance, adding data augmentation and creating different examples.
 
- ## Model specs
-
- This model is a fine-tuned version of [spanish-t5-small](https://huggingface.co/flax-community/spanish-t5-small) on the data described below.
- It achieves the following results on the evaluation set:
- - eval_bleu: 93.8347
- - eval_f1: 0.9904
-
- ## Training procedure
- ### Training hyperparameters
- The following hyperparameters were used during training:
- - learning_rate: 1e-04
- - train_batch_size: 32
- - seed: 42
- - num_epochs: 10
- - weight_decay: 0.01
 
 ## Training and evaluation data
 
@@ -97,6 +82,23 @@ The data used for the model training has been manually created from a compilatio
 [Nombra.en.red. En femenino y en masculino](https://www.inmujeres.gob.es/areasTematicas/educacion/publicaciones/serieLenguaje/docs/Nombra_en_red.pdf)
 
 
+ ## Model specs
+
+ This model is a fine-tuned version of [spanish-t5-small](https://huggingface.co/flax-community/spanish-t5-small) on the data described below.
+ It achieves the following results on the evaluation set:
+ - eval_bleu: 93.8347
+ - eval_f1: 0.9904
+
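+ A minimal inference sketch with `transformers` is shown below. The checkpoint id is only a placeholder for this repository, no task prefix is assumed (one may be needed depending on how fine-tuning was set up), and the generation settings are illustrative rather than those used for evaluation.
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+
+ # Placeholder id: replace with this model's actual Hub repository name.
+ model_id = "your-username/gender-inclusive-spanish-t5"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
+
+ # Example Spanish sentence to be rewritten in gender-inclusive form.
+ text = "Los niños juegan en el patio."
+ inputs = tokenizer(text, return_tensors="pt")
+ outputs = model.generate(**inputs, max_length=64, num_beams=4)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+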
+ ## Training procedure
+ ### Training hyperparameters
+ The following hyperparameters were used during training (a minimal sketch mapping them onto training arguments follows the list):
+ - learning_rate: 1e-04
+ - train_batch_size: 32
+ - seed: 42
+ - num_epochs: 10
+ - weight_decay: 0.01
+
+
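+ If training is reproduced with the `transformers` Trainer API (this card does not state which training loop was used), the values above map onto `Seq2SeqTrainingArguments` roughly as in the sketch below; `output_dir` and `predict_with_generate` are assumptions not taken from this card, and any option not listed was left at its default.
+
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+
+ # Sketch only: reproduces the hyperparameters listed in the card.
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="./gender-inclusive-t5",  # assumption: not specified in the card
+     learning_rate=1e-4,
+     per_device_train_batch_size=32,
+     num_train_epochs=10,
+     weight_decay=0.01,
+     seed=42,
+     predict_with_generate=True,          # assumption: evaluate BLEU on generated text
+ )
+ ```
+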
 ## Metrics
 
 For training, we used both BLEU (the sacrebleu implementation in HF) and BERTScore. The former, a standard metric in machine translation, was added to ensure the robustness of the newly generated data, while the latter was kept to preserve the expected semantic similarity.
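
Both metrics are available through the Hugging Face `evaluate` library; the snippet below is only a sketch with toy sentences and is not the exact evaluation code used for this model.

```python
import evaluate

# Load the two metrics referenced in the card.
sacrebleu = evaluate.load("sacrebleu")
bertscore = evaluate.load("bertscore")

# Toy prediction/reference pair, for illustration only.
predictions = ["Las niñas y los niños juegan en el patio."]
references = [["Las niñas y los niños juegan en el patio."]]

bleu = sacrebleu.compute(predictions=predictions, references=references)
bert = bertscore.compute(
    predictions=predictions,
    references=[r[0] for r in references],
    lang="es",  # selects a default multilingual model for Spanish
)

print(round(bleu["score"], 4))                      # corpus-level BLEU
print(round(sum(bert["f1"]) / len(bert["f1"]), 4))  # mean BERTScore F1
```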