milmor committed
Commit 81d6ca1
1 Parent(s): 36edbdb

Update README.md

Files changed (1): README.md
---
license: apache-2.0
language:
- es
- nah
tags:
- translation
widget:
- text: "translate Spanish to Nahuatl: Mi hermano es un ajolote"

---

# t5-small-spanish-nahuatl
Nahuatl is the most widely spoken indigenous language in Mexico. However, training a neural network for neural machine translation is hard due to the lack of structured data. The most popular datasets, the Axolotl corpus and the bible-corpus, consist of only ~16,000 and ~7,000 samples, respectively. Moreover, there are multiple variants of Nahuatl, which makes the task even more difficult. For example, a single word from the Axolotl corpus can be found written in more than three different ways. Therefore, in this work we leverage the T5 text-to-text prefix training strategy to compensate for the lack of data. We first teach the multilingual model Spanish using English, and then make the transition to Spanish-Nahuatl. The resulting model successfully translates short sentences from Spanish to Nahuatl. We report ChrF and BLEU results.


## Model description
This model is a T5 Transformer ([t5-small](https://huggingface.co/t5-small)) fine-tuned on Spanish and Nahuatl sentences collected from the web. The dataset is normalized using the 'sep' normalization from [py-elotl](https://github.com/ElotlMX/py-elotl).

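For reference, the 'sep' normalization can be applied with py-elotl roughly as follows. This is a minimal sketch, not the exact preprocessing script used here; the `Normalizer` class and its `normalize` method are assumptions about the current py-elotl API and may differ across versions.

```python
# Hypothetical sketch of the 'sep' orthographic normalization step.
# Assumption: py-elotl exposes elotl.nahuatl.orthography.Normalizer;
# check the py-elotl documentation for your installed version.
import elotl.nahuatl.orthography

normalizer = elotl.nahuatl.orthography.Normalizer('sep')  # 'sep' normalization scheme
print(normalizer.normalize('tlahtolli'))  # prints the normalized Nahuatl spelling
```
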
## Usage
```python
from transformers import AutoModelForSeq2SeqLM
from transformers import AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained('hackathon-pln-es/t5-small-spanish-nahuatl')
tokenizer = AutoTokenizer.from_pretrained('hackathon-pln-es/t5-small-spanish-nahuatl')

model.eval()
sentence = 'muchas flores son blancas'
input_ids = tokenizer('translate Spanish to Nahuatl: ' + sentence, return_tensors='pt').input_ids
outputs = model.generate(input_ids)
outputs = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
# outputs == 'miak xochitl istak'
```

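Equivalently, the model can be called through the `text2text-generation` pipeline. This is a short sketch; the `max_length` value is illustrative, not a setting used in this work.

```python
from transformers import pipeline

# Load the model and tokenizer in a single pipeline call.
translator = pipeline('text2text-generation', model='hackathon-pln-es/t5-small-spanish-nahuatl')

# The task prefix must be included in the input text.
result = translator('translate Spanish to Nahuatl: muchas flores son blancas', max_length=64)
print(result[0]['generated_text'])
```
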
## Approach
### Dataset
Since the Axolotl corpus contains misalignments, we select only the best-aligned samples (12,207 sentence pairs). We also use the [bible-corpus](https://github.com/christos-c/bible-corpus) (7,821 samples).

| Axolotl best-aligned books                             |
|:------------------------------------------------------:|
| Anales de Tlatelolco                                   |
| Diario                                                 |
| Documentos nauas de la Ciudad de México del siglo XVI  |
| Historia de México narrada en náhuatl y español        |
| La tinta negra y roja (antología de poesía náhuatl)    |
| Memorial Breve (Libro las ocho relaciones)             |
| Método auto-didáctico náhuatl-español                  |
| Nican Mopohua                                          |
| Quinta Relación (Libro las ocho relaciones)            |
| Recetario Nahua de Milpa Alta D.F                      |
| Testimonios de la antigua palabra                      |
| Trece Poetas del Mundo Azteca                          |
| Una tortillita nomás - Se taxkaltsin saj               |
| Vida económica de Tenochtitlan                         |

To further increase the amount of data, we collected 3,000 extra samples from the web.

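As an illustration, the Axolotl corpus can be loaded and filtered by document with py-elotl. This is a rough sketch only: the row layout, the exact document-name strings, and the three books picked here are assumptions for illustration and should be checked against the corpus.

```python
import elotl.corpus

# Load the full Axolotl parallel corpus (~16,000 rows).
axolotl = elotl.corpus.load('axolotl')

# Hypothetical subset of the best-aligned documents listed in the table above.
best_books = {'Anales de Tlatelolco', 'Nican Mopohua', 'Vida económica de Tenochtitlan'}

# Assumption: each row is [spanish, nahuatl, variant, document_name].
selected = [row for row in axolotl if row[3] in best_books]
print(len(selected), 'sentence pairs kept')
```
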
### Model and training
We employ two training stages using a multilingual T5-small. This model was chosen because it can handle different vocabularies and task prefixes. T5-small is pretrained on different tasks and languages (French, Romanian, English, German).

### Training-stage 1 (learning Spanish)
In training stage 1 we first introduce Spanish to the model. The goal is to learn a new data-rich language (Spanish) without losing the previously acquired knowledge. We use the English-Spanish [Anki](https://www.manythings.org/anki/) dataset, which consists of 118,964 text pairs. We train the model until convergence, adding the prefix "Translate Spanish to English: ".

### Training-stage 2 (learning Nahuatl)
We use the pretrained Spanish-English model to learn Spanish-Nahuatl. Since the amount of Nahuatl pairs is limited, we also add 20,000 samples from the English-Spanish Anki dataset to our training set. This two-task training avoids overfitting and makes the model more robust, as sketched below.

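A minimal sketch of how the two tasks can be mixed into one training set. The prefixes follow the ones quoted above; the variable names, the toy example pairs, and the 20,000-sample cap shown here are illustrative.

```python
import random

# Hypothetical lists of (source, target) pairs from each corpus.
nahuatl_pairs = [('muchas flores son blancas', 'miak xochitl istak')]    # Spanish -> Nahuatl
anki_pairs = [('muchas flores son blancas', 'many flowers are white')]  # Spanish -> English

def make_examples(pairs, prefix):
    # Prepend the task prefix so a single model can serve both tasks.
    return [(prefix + src, tgt) for src, tgt in pairs]

train_set = (
    make_examples(nahuatl_pairs, 'translate Spanish to Nahuatl: ')
    + make_examples(random.sample(anki_pairs, k=min(20000, len(anki_pairs))),
                    'Translate Spanish to English: ')
)
random.shuffle(train_set)
```
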
### Training setup
We train the models on the same datasets for 660k steps with a batch size of 16 and a learning rate of 2e-5.

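The setup above maps roughly onto the following Hugging Face `Seq2SeqTrainingArguments`. This is a sketch only: the card specifies just the three values quoted above, so the output directory and every other argument are assumptions or library defaults, and the original training loop may not have used the Trainer at all.

```python
from transformers import Seq2SeqTrainingArguments

# Only max_steps, the batch size, and the learning rate come from this card;
# everything else keeps the library defaults.
training_args = Seq2SeqTrainingArguments(
    output_dir='t5-small-spanish-nahuatl',  # hypothetical output path
    max_steps=660_000,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)
```
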
## Evaluation results
For a fair comparison, the models are evaluated on the same 505 validation Nahuatl sentences. We report the results using the ChrF and SacreBLEU Hugging Face metrics:

| English-Spanish pretraining | Validation loss | BLEU | ChrF  |
|:---------------------------:|:---------------:|:----:|:-----:|
| False                       | 1.34            | 6.17 | 26.96 |
| True                        | 1.31            | 6.18 | 28.21 |

The English-Spanish pretraining improves BLEU and ChrF, and leads to faster convergence. You can reproduce the evaluation with the [eval.ipynb](https://github.com/milmor/spanish-nahuatl-translation/blob/main/eval.ipynb) notebook.

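The two metrics can be computed with the Hugging Face `evaluate` library roughly as follows. This is a sketch; the `predictions` and `references` lists stand in for the model outputs and the 505 validation sentences.

```python
import evaluate

chrf = evaluate.load('chrf')
sacrebleu = evaluate.load('sacrebleu')

# Hypothetical model outputs and gold Nahuatl references.
predictions = ['miak xochitl istak']
references = [['miak xochitl istak']]  # each prediction may have several references

print('ChrF:', chrf.compute(predictions=predictions, references=references)['score'])
print('BLEU:', sacrebleu.compute(predictions=predictions, references=references)['score'])
```
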
## References
- Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer.

- Ximena Gutierrez-Vasques, Gerardo Sierra, and Hernandez Isaac. 2016. Axolotl: a web accessible parallel corpus for Spanish-Nahuatl. In International Conference on Language Resources and Evaluation (LREC).

## Team members
- Emilio Alejandro Morales [(milmor)](https://huggingface.co/milmor)
- Rodrigo Martínez Arzate [(rockdrigoma)](https://huggingface.co/rockdrigoma)
- Luis Armando Mercado [(luisarmando)](https://huggingface.co/luisarmando)
- Jacobo del Valle [(jjdv)](https://huggingface.co/jjdv)