---
language:
- fr
thumbnail: "url to a thumbnail used in social sharing"
tags:
- text-generation
datasets:
- Marxav/frpron
metrics:
- loss/eval
- perplexity
widget:
- text: "salut:"
- text: "bonjour:"
- text: "comment ça va ?:"
- text: "bonjour le monde:"
- text: "anticonstitutionnellement:"
inference:
  parameters:
    temperature: 0.7
    return_full_text: false
    num_return_sequences: 3
---
|
# Fr-word to IPA pronunciation

Converting French words into their phonemic pronunciation

This model aims to predict the usual pronunciation of French words as given in the
[French Wiktionary](https://fr.wiktionary.org/).
More precisely, it aims to predict the ***IPA*** string contained in the {{pron|***IPA***|fr}} tag of a French Wiktionary entry.

To use it, simply give an input containing the word you want to transcribe, followed by ":", for example: "bonjour:".
Upon submission, the model will produce an output, for example: "bonjour:bɔ̃.ʒuʁ".
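
The hosted widget above runs this interactively; the snippet below is a minimal sketch of the same call made from Python with the `transformers` text-generation pipeline. The repo id `"Marxav/frpron"` is an assumption borrowed from the dataset name in the header, and the generation parameters simply mirror the inference settings declared there.

```python
# Minimal sketch: querying the model through the text-generation pipeline.
# NOTE: the repo id below is an assumption (taken from the dataset name);
# replace it with the actual model id if it differs.
from transformers import pipeline

generator = pipeline("text-generation", model="Marxav/frpron")

prompt = "bonjour:"  # the word to transcribe, followed by ":"

# These parameters mirror the inference settings in the YAML header above.
outputs = generator(
    prompt,
    do_sample=True,
    temperature=0.7,
    return_full_text=False,
    num_return_sequences=3,
)

for out in outputs:
    # With return_full_text=False, generated_text holds only the predicted
    # continuation, i.e. the IPA string such as "bɔ̃.ʒuʁ".
    print(prompt + out["generated_text"].strip())
```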
|

***This model has not yet been fully tuned, so it remains experimental.***

***The input length is currently limited to a maximum of 20 letters.***

|
## More information on the model, dataset, hardware, and environmental considerations

### **The training data**
The dataset used for training this model comes from the [French Wiktionary](https://fr.wiktionary.org/).

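To have a quick look at that data, the dataset listed in the header can be loaded with the `datasets` library. This is only a sketch: the split and column layout are not documented here, so the snippet just prints the overall structure.

```python
# Minimal sketch: inspecting the training dataset declared in the YAML header.
# Splits and column names are not documented here, so we only print the
# DatasetDict structure returned by load_dataset.
from datasets import load_dataset

ds = load_dataset("Marxav/frpron")
print(ds)  # shows the available splits, their columns and row counts
```
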
|
### **The model**
The model is built on [gpt2](https://huggingface.co/gpt2).
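
For reference, the checkpoint can also be driven with the lower-level `transformers` API instead of the pipeline. As above, the repo id `"Marxav/frpron"` is an assumption; since the model is built on gpt2, the GPT-2 classes resolved by the `Auto*` helpers apply.

```python
# Minimal sketch: generation with the low-level transformers API.
# NOTE: the repo id is assumed from the dataset name in the header.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Marxav/frpron"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "anticonstitutionnellement:"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        do_sample=True,
        temperature=0.7,
        max_new_tokens=40,
        pad_token_id=tokenizer.eos_token_id,
    )

# Keep only the predicted IPA string, mimicking return_full_text: false.
decoded = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(decoded[len(prompt):].strip())
```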