What is meant by '😕'?

#2
by StephanAkkerman - opened

I noticed this emoji a lot while taking a look at this dataset. Is this something that the Hugging Face dataset loader could not handle, or was this done intentionally?

Hi, this is documented (albeit somewhat hidden) in the GitHub repo's README:

Note that unknown/low frequency phonemes or letters are replaced with 😕.

There are some technical reasons for this: it allowed us to evaluate all the previous models on the same data.

Only in retrospect did we look at the statistics and find that it disproportionately penalizes languages using non-Latin scripts. For the evaluation it is not a big problem, because all models are compared against each other on the same train/dev data (and performance across languages seems to be roughly constant anyway; see Figure 3 here). For linguistic analysis it could be a problem, and a more principled solution is planned if a new version comes out.

Thanks, I am currently using charsiu/g2p_multilingual_byT5_small_100 to transcribe to IPA, and I think it does produce some of the characters you mentioned.

Could you elaborate on what your goal is, specifically? That way I can help you better. :-)

For the PWESuite it's still good to process these unknown phonemes. The 😕 is just a single-character "equivalent" of an [UNK] token.
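
To be clear, the replacement itself is simple. Here is a minimal sketch of the idea, with a made-up frequency threshold and character-level tokenization (the actual preprocessing in the repo may differ):

# Minimal sketch: mask phonemes below a (made-up) frequency threshold.
from collections import Counter

def mask_rare_phonemes(transcriptions, min_count=5):
    # Count every phoneme character across the whole corpus.
    counts = Counter(ch for t in transcriptions for ch in t)
    # Characters below the threshold become 😕, a single-character [UNK].
    return [
        "".join(ch if counts[ch] >= min_count else "😕" for ch in t)
        for t in transcriptions
    ]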

Thanks! So, my goal is to use phonetic embeddings to find a word in English that sounds similar to a given foreign word. This can then be applied to create mnemonics, which I will use for this project.
Let's say the user wants to learn Indonesian and wants to find a mnemonic for the word 'kucing' (cat in Indonesian). Then I want to find the top-x English words (or words in the user's native language) that sound similar to it, for instance ['cucci', 'ikeuchi', 'kikuchi', 'cuccio', 'micucci'] (apparently these words are in the dataset for English).
I'm having difficulty finding the best approach for this. My current idea is to convert a given foreign word to its IPA transcription using the g2p model mentioned above, as sketched below. Then, using the IPA transcription, I want to use one of the methods mentioned in your paper, for instance the count-based one, as that scored high on human similarity. I could also look into the model that scored highest on articulatory distance.
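
Concretely, I'm calling the g2p model roughly like this (adapted from the Charsiu examples; the '<ind>: ' prefix for Indonesian is my guess, so the exact language code should be checked against the model card):

# pip3 install transformers torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained('charsiu/g2p_multilingual_byT5_small_100')
tokenizer = AutoTokenizer.from_pretrained('google/byt5-small')

# Each word is prefixed with a language tag; '<ind>: ' for Indonesian
# is an assumption here, check the model card for the exact code.
batch = tokenizer(['<ind>: kucing'], padding=True, add_special_tokens=False, return_tensors='pt')
preds = model.generate(**batch, num_beams=1, max_length=50)
print(tokenizer.batch_decode(preds, skip_special_tokens=True))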

Let me know your thoughts!

Depending on your use case and speed requirements, you can skip embeddings altogether. Here's a snippet that uses articulatory feature distance.

# pip3 install panphon
import panphon.distance

# Articulatory feature edit distance between two IPA strings.
dst = panphon.distance.Distance().feature_edit_distance

# Add all English words here (ideally as IPA transcriptions;
# plain orthography only works when it is close enough to IPA).
WORDS = ['closing', 'cucci', 'gnocchi', 'kissing']

closest_word = min(WORDS, key=lambda x: dst("kucing", x))
# cucci

Unfortunately, this is not the contribution of this work. However, if you need speed and scale, embed all words into vectors and then perform maximum-inner-product search.
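
As a sketch of that (the embed function below is just a hashed-bigram placeholder; swap in one of the actual embedding models):

import numpy as np

def embed(ipa, dim=64):
    # Placeholder embedding: hashed character-bigram counts.
    # Swap in a real phonetic embedding model here.
    v = np.zeros(dim)
    for i in range(len(ipa) - 1):
        v[hash(ipa[i:i + 2]) % dim] += 1.0
    return v

# Hypothetical IPA transcriptions for the English vocabulary.
WORDS_IPA = {'cucci': 'kutʃi', 'kissing': 'kɪsɪŋ', 'closing': 'kloʊzɪŋ'}

# Precompute and normalize all embeddings once; cosine similarity
# then reduces to maximum inner product search.
words = list(WORDS_IPA)
emb = np.stack([embed(WORDS_IPA[w]) for w in words])
emb /= np.linalg.norm(emb, axis=1, keepdims=True) + 1e-9

query = embed('kutʃiŋ')  # 'kucing'
query /= np.linalg.norm(query) + 1e-9

top = np.argsort(emb @ query)[::-1][:3]
print([words[i] for i in top])

For large vocabularies, a library like FAISS would do the same search much faster.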

Thanks for the tip and snippet!

StephanAkkerman changed discussion status to closed
StephanAkkerman changed discussion status to open

I noticed there is a difference between the IPA and ARP transcriptions for words that appear in both main and human_similarity. For instance, screech has the values skriːtʃ and S K R IY1 CH when the purpose is main. However, when the purpose is human_similarity, the fields become sk😕it͡ʃ and S K 😕 IY T 😕 SH. Is there any reason for that?

  • Good catch, there was a discrepancy! A similar issue also extended to the cognates.
  • Should be fixed now. To get the new version, please run:
from datasets import load_dataset
data = load_dataset('zouharvi/pwesuite-eval', download_mode="force_redownload")
  • See this commit with acknowledgements.
  • There are still some discrepancies, but those come from the ORT2IPA or IPA2ARP transliteration (see the new test). As an example, screech has two different IPA transliterations. The second one leads to an unknown ARP character, so it is replaced by the unknown character:
> list(data["train"].filter(lambda x: x["token_ort"] == "screech"))
[
    {'token_ort': 'screech', 'token_ipa': 'skriːtʃ', 'token_arp': 'S K R IY1 CH', 'lang': 'en', 'purpose': 'main'},
    {'token_ort': 'screech', 'token_ipa': 'skɹit͡ʃ', 'token_arp': 'S K R IY T 😕 SH', 'lang': 'en', 'purpose': 'human_similarity'}
]

Great, thanks for the quick fix!

StephanAkkerman changed discussion status to closed
