Dev/test splits

#1
by haukurpj - opened

Hi Carlos,

Thanks for uploading this dataset to Huggingface and providing a good description of it!

I have a few questions/worries about the splits.

  1. Are all dialects represented in the dev/test splits? The description does not say.
  2. The description says that roughly 55% of the prompts in the dataset are unique. I assume that "prompts" means the normalized_text field, as it is called in the dataset. The description does not say whether the prompts in the dev/test sets also appear in the training set. Do they? Having the same prompts in the training set and in the dev/test sets causes a serious data leak: a model might start guessing the whole prompt when it hears something that sounds alike. Prompts selected for the dev/test sets should not appear in the training set.
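The overlap being asked about here is easy to measure. Below is a minimal sketch; the column name `normalized_text` comes from the dataset card, but the helper name `prompt_overlap` and the toy sentences are purely illustrative:

```python
def prompt_overlap(train_prompts, eval_prompts):
    """Fraction of eval prompts that also appear verbatim in the training set."""
    train_set = set(train_prompts)
    shared = sum(1 for p in eval_prompts if p in train_set)
    return shared / len(eval_prompts) if eval_prompts else 0.0

# Toy example: one of the two dev prompts leaks from the training set.
train = ["hola mundo", "buenos dias", "como estas"]
dev = ["buenos dias", "hasta luego"]
print(prompt_overlap(train, dev))  # 0.5
```

In practice one would pass the `normalized_text` columns of the actual train and dev/test splits instead of these toy lists.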

Hello Haukur,

Thanks for your nice questions.

  • "Are all dialects represented in the dev/test splits? The description does not say".

No, the main criteria to create the Dev and Test splits was the gender balancing of the speakers.

  • "I assume that prompts mean normalized_text as they are called in the dataset.":

Yes, that is correct.

  • " It does not say whether the prompts in the dev/test sets are present in the training set."

Most of the sentences in the corpus are repeated at least once by another speaker, so it is almost impossible to create a Dev or Test portion with unique sentences. One possible solution could be to select sentences with exactly n repetitions and put them in the Dev and Test portions. But then the problem would be that all portions would share speakers. So, we had to make a decision, which was: not to share speakers.
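The speaker-disjoint decision described above can be sketched as follows. This is not the actual splitting code used for the corpus, just a hypothetical illustration of assigning whole speakers (rather than individual utterances) to each portion:

```python
import random

def speaker_disjoint_split(utterances, dev_frac=0.1, test_frac=0.1, seed=0):
    """Split (speaker_id, normalized_text) pairs so that no speaker
    appears in more than one portion. Fractions apply to speakers,
    not to utterance counts."""
    speakers = sorted({spk for spk, _ in utterances})
    rng = random.Random(seed)
    rng.shuffle(speakers)
    n_dev = max(1, int(len(speakers) * dev_frac))
    n_test = max(1, int(len(speakers) * test_frac))
    dev_spk = set(speakers[:n_dev])
    test_spk = set(speakers[n_dev:n_dev + n_test])
    splits = {"train": [], "dev": [], "test": []}
    for spk, text in utterances:
        if spk in dev_spk:
            splits["dev"].append((spk, text))
        elif spk in test_spk:
            splits["test"].append((spk, text))
        else:
            splits["train"].append((spk, text))
    return splits
```

Note the trade-off Carlos describes: this keeps speakers disjoint, but repeated sentences can still end up in more than one portion.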

  • " Having the same prompts in the training set and then also in the dev/test causes a serious data leak as a model might start guessing the whole prompt when it hears something which sounds alike".

I do not completely agree with this. Every speaker is different, and due to the concept of "speech variability" it is possible for the model to learn acoustic features from the speakers in the training set while ignoring acoustic features of the speakers in the Dev and Test portions.

On the other hand, we warn users about the problems of using the training prompts in a language modeling task in the "Discussion of Biases" section of the dataset card.

Hope this helps,
Carlos

carlosdanielhernandezmena changed discussion status to closed
