---
language:
- es
license: apache-2.0  
tags:
- Text2Text Generation
- Inclusive Language
- Text Neutralization
- pytorch
# datasets:
#- {Pending}  # Example: common_voice. Use dataset id from https://hf.co/datasets
metrics:
- sacrebleu

model-index:
- name: es_nlp_text_neutralizer
  results:
  - task: 
      type: Text2Text Generation
      name: Neutralization of texts in Spanish
#     dataset:
#       type: {Pending}  # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
#      name: {handcrafted dataset}  # Optional. Example: Common Voice zh-CN
#       args: {es}         # Optional. Example: zh-CN
    metrics:
      - type: sacrebleu    # Required. Example: wer
        value: 93.8347  # Required. Example: 20.90
        name: sacrebleu    # Optional. Example: Test WER
      - type: bertscore    # Required. Example: wer
        value: 0.99
        name: BertScoreF1    # Optional. Example: Test WER
      - type: DiffBleu    # Required. Example: wer
        value: 0.38
        name: DiffBleu    # Optional. Example: Test WER
---
## Model objective

TBF

## Model specs

This model is a fine-tuned version of [spanish-t5-small](https://huggingface.co/flax-community/spanish-t5-small) on the data described below.
It achieves the following results on the evaluation set:
- eval_bleu: 93.8347
- eval_f1: 0.9904
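
As a minimal usage sketch (the hub id below is a placeholder for this repository's actual id, and the example sentence is illustrative, not from the evaluation set):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "<org>/es_nlp_text_neutralizer"  # placeholder: replace with this repo's id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Los profesores han hablado con las alumnas."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```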

## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto HF training arguments follows the list):
- learning_rate: 1e-04
- train_batch_size: 32
- seed: 42
- num_epochs: 10
- weight_decay: 0.01
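
A minimal sketch of how these settings map onto HF `Seq2SeqTrainingArguments`; `output_dir` and `predict_with_generate` are assumptions not stated in this card:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="es_nlp_text_neutralizer",  # assumption: any local path works
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    seed=42,
    num_train_epochs=10,
    weight_decay=0.01,
    predict_with_generate=True,  # assumption: needed to compute BLEU-style metrics during eval
)
```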

## Training and evaluation data

TBF

## Metrics

For training, we used both BLEU (the sacrebleu implementation in HF) and BertScore. The former, a standard in machine translation, was added to ensure the robustness of the newly generated text, while the latter is kept to check that the expected semantic similarity is preserved.
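
Both metrics can be computed with the HF `evaluate` library; a minimal sketch (the example strings are illustrative, not from the evaluation set):

```python
# pip install evaluate sacrebleu bert_score
import evaluate

sacrebleu = evaluate.load("sacrebleu")
bertscore = evaluate.load("bertscore")

predictions = ["el alumnado se ha quejado de la actitud del profesorado"]
references = ["el alumnado se ha quejado de la actitud del profesorado"]

# sacrebleu expects one list of reference strings per prediction
bleu = sacrebleu.compute(predictions=predictions, references=[[r] for r in references])
bert = bertscore.compute(predictions=predictions, references=references, lang="es")

print(bleu["score"])                      # corpus-level BLEU, on a 0-100 scale
print(sum(bert["f1"]) / len(bert["f1"]))  # mean BertScore F1
```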

However, given the intended use case, we expect generated segments to stay very close both to the input segments and to the label segments used in training. Consider the following example:

```text
inputSegment = "De acuerdo con las informaciones anteriores , las alumnas se han quejado de la actitud de los profesores en los exámenes finales. Los representantes estudiantiles son los alumnos Juanju y Javi."
expectedOutput (label) = "De acuerdo con las informaciones anteriores, el alumnado se ha quejado de la actitud del profesorado en los exámenes finales. Los representantes estudiantiles son los alumnos Juanju y Javi."
actualOutput = "De acuerdo con las informaciones anteriores, el alumnado se ha quejado de la actitud del profesorado en los exámenes finales. Los representantes estudiantiles son el alumnado Juanju y Javi."
```

As the example shows, the segments are very similar: only the gendered expressions change ("las alumnas" → "el alumnado", "los profesores" → "el profesorado"). Since plain BLEU or BertScore would therefore score high almost regardless of quality, we propose an alternative metric, DiffBleu:

$$\mathrm{DiffBleu} = \mathrm{BLEU}(\mathit{actualOutput} - \mathit{inputSegment},\ \mathit{labels} - \mathit{inputSegment})$$

where the minus signs denote set difference. This way, we also evaluate DiffBleu after the model has been trained, scoring only the portions of text that the neutralization actually changed. A minimal implementation sketch follows.
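
A minimal sketch of DiffBleu, assuming whitespace tokenization and a plain token-set difference (neither is specified above, so both are assumptions):

```python
import evaluate  # pip install evaluate sacrebleu

sacrebleu = evaluate.load("sacrebleu")

def set_difference(segment: str, input_segment: str) -> str:
    # Drop every token that already appears in the input segment,
    # keeping the remaining tokens in their original order.
    input_tokens = set(input_segment.split())
    return " ".join(t for t in segment.split() if t not in input_tokens)

def diff_bleu(actual_output: str, label: str, input_segment: str) -> float:
    prediction = set_difference(actual_output, input_segment)
    reference = set_difference(label, input_segment)
    return sacrebleu.compute(predictions=[prediction], references=[[reference]])["score"]
```

With the example above, both differences reduce to the newly introduced words, so DiffBleu focuses the score on the tokens the model actually changed.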


## Team Members

- Fernando Velasco (fermaat)
- Cibeles Redondo (CibelesR)
- Juan Julian Cea (Juanju)
- Magdalena Kujalowicz (MacadellaCosta)
- Javier Blasco (javiblasco)

Enjoy!