{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:24:33.216428Z" }, "title": "Learning to Understand Child-directed and Adult-directed Speech", "authors": [ { "first": "Lieke", "middle": [], "last": "Gelderloos", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tilburg University", "location": {} }, "email": "l.j.gelderloos@uvt.nl" }, { "first": "Grzegorz", "middle": [], "last": "Chrupa\u0142a", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tilburg University", "location": {} }, "email": "g.chrupala@uvt.nl" }, { "first": "Afra", "middle": [], "last": "Alishahi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tilburg University", "location": {} }, "email": "a.alishahi@uvt.nl" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Speech directed to children differs from adultdirected speech in linguistic aspects such as repetition, word choice, and sentence length, as well as in aspects of the speech signal itself, such as prosodic and phonemic variation. Human language acquisition research indicates that child-directed speech helps language learners. This study explores the effect of child-directed speech when learning to extract semantic information from speech directly. We compare the task performance of models trained on adult-directed speech (ADS) and child-directed speech (CDS). We find indications that CDS helps in the initial stages of learning, but eventually, models trained on ADS reach comparable task performance, and generalize better. The results suggest that this is at least partially due to linguistic rather than acoustic properties of the two registers, as we see the same pattern when looking at models trained on acoustically comparable synthetic speech.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Speech directed to children differs from adultdirected speech in linguistic aspects such as repetition, word choice, and sentence length, as well as in aspects of the speech signal itself, such as prosodic and phonemic variation. Human language acquisition research indicates that child-directed speech helps language learners. This study explores the effect of child-directed speech when learning to extract semantic information from speech directly. We compare the task performance of models trained on adult-directed speech (ADS) and child-directed speech (CDS). We find indications that CDS helps in the initial stages of learning, but eventually, models trained on ADS reach comparable task performance, and generalize better. The results suggest that this is at least partially due to linguistic rather than acoustic properties of the two registers, as we see the same pattern when looking at models trained on acoustically comparable synthetic speech.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Speech directed to children (CDS) differs from adult-directed speech (ADS) in many aspects. Linguistic differences include the number of words per utterance, with utterances in CDS being considerably shorter than utterances in ADS, and repetition, which is more common in child-directed speech. 
There are also paralinguistic, acoustic factors that characterize child-directed speech: people speaking to children typically use a higher pitch and exaggerated intonation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "It has been argued that the properties of CDS help perception or comprehension. Kuhl et al. (1997) propose that CDS is optimized for learnability. Optimal learnability may, but does not necessarily, align with optimization for perception or comprehension. Although speech with lower variability may be easiest to learn to understand, higher variability may provide more learning opportunities, leading to more complete language knowledge.", "cite_spans": [ { "start": 80, "end": 98, "text": "Kuhl et al. (1997)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we explore how learning to extract meaning from speech differs when learning from CDS and ADS. We discuss task performance on the training register as well as generalization across registers. To tease apart the effect of acoustic and linguistic differences, we also report on models trained on synthesized speech, in which linguistic differences between the registers are retained, but the acoustic properties are similar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The characteristics of child-directed speech are a major topic of study in language acquisition research. For a comprehensive overview, see Soderstrom (2007) and Clark (2009, Ch. 2, p. 32-41). With regard to acoustics, CDS is reported to have exaggerated intonation and a slower speech rate (Fernald et al., 1989). Kuhl et al. (1997) show that CDS contains more 'extreme' realizations of vowels. McMurray et al. (2013) show that these increased means are accompanied by increased variance, and argue that any learning advantage of CDS due to extreme vowel realizations is counteracted by increased variance. However, it has also been argued that increased variance may be beneficial to learning in the long run, as it gives the learner a more complete set of examples for a category, which helps generalization. Guevara-Rukoz et al. (2018) show that word forms in child-directed speech are acoustically more diverse. At the utterance level, child-directed language consists of shorter sentences and simpler syntax (Newport et al., 1977; Fernald et al., 1989), and words more often appear in isolation (Ratner and Rooney, 2001).", "cite_spans": [ { "start": 140, "end": 157, "text": "Soderstrom (2007)", "ref_id": "BIBREF21" }, { "start": 162, "end": 191, "text": "Clark (2009, Ch. 2, p. 32-41)", "ref_id": null }, { "start": 293, "end": 315, "text": "(Fernald et al., 1989)", "ref_id": "BIBREF6" }, { "start": 318, "end": 336, "text": "Kuhl et al. (1997)", "ref_id": "BIBREF12" }, { "start": 399, "end": 421, "text": "McMurray et al. (2013)", "ref_id": "BIBREF14" }, { "start": 815, "end": 842, "text": "Guevara-Rukoz et al.
(2018)", "ref_id": "BIBREF7" }, { "start": 1016, "end": 1038, "text": "(Newport et al., 1977;", "ref_id": "BIBREF17" }, { "start": 1039, "end": 1060, "text": "Fernald et al., 1989)", "ref_id": "BIBREF6" }, { "start": 1104, "end": 1128, "text": "(Ratner and Rooney, 2001", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related work 2.1 Child directed speech and learnability", "sec_num": "2" }, { "text": "Studies on home recordings show that the availability of CDS input accounts for differences in vocabulary growth between learners, whereas overheard speech is unrelated (Hoff, 2003; Weisleder and Fernald, 2013) . This does not necessarily mean that it is easier to learn from CDS. Psycholinguistic research has shown that infants across the world show a CDS preference, paying more attention to it than to ADS (ManyBabies Consortium, 2020). Learning advantages of CDS in children may therefore simply be because they grant it more attention, rather than to properties of CDS that are advantageous for learning. Computational models, however, have no choice in where they allocate attention. Any learning advantages we find of either ADS or CDS in computational studies must be due to properties that make speech in that register more learnable to the model. There has been some computational work comparing learning from ADS and CDS at the level of word learning and phonetic learning. Studies on segmentability use algorithms that learn to identify word units, with some studies reporting higher segmentability for CDS (Batchelder, 2002; Daland and Pierrehumbert, 2011) , while Cristia et al. (2019) report mixed results. Kirchhoff and Schimmel (2005) train HMM-based speech recognition systems on CDS and ADS, and test on matched and crossed test sets. They find that both ADS and CDS trained systems perform best on the matching test set, but CDS trained systems perform better on ADS than systems trained on ADS peform on CDS. They show that this is likely caused by phonetic classes have larger overlaps in CDS.", "cite_spans": [ { "start": 169, "end": 181, "text": "(Hoff, 2003;", "ref_id": "BIBREF10" }, { "start": 182, "end": 210, "text": "Weisleder and Fernald, 2013)", "ref_id": "BIBREF22" }, { "start": 1120, "end": 1138, "text": "(Batchelder, 2002;", "ref_id": "BIBREF0" }, { "start": 1139, "end": 1170, "text": "Daland and Pierrehumbert, 2011)", "ref_id": "BIBREF4" }, { "start": 1179, "end": 1200, "text": "Cristia et al. (2019)", "ref_id": "BIBREF3" }, { "start": 1223, "end": 1252, "text": "Kirchhoff and Schimmel (2005)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related work 2.1 Child directed speech and learnability", "sec_num": "2" }, { "text": "To the authors' knowledge, the current work is the first to computationally explore learnability differences between ADS and CDS considering the process of speech comprehension as a whole: from audio to semantic information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work 2.1 Child directed speech and learnability", "sec_num": "2" }, { "text": "In recent years, several studies have worked on machine learning tasks in which models directly extract semantic information from speech, without feedback on the word, character, or phoneme level. Most prominently, work on 'weakly supervised' speech recognition includes work in which accompanying visual information is used as a proxy for semantic information. 
By grounding speech in visual information accompanying it, models can learn to extract visually relevant semantic information from speech, without needing symbolic annotation (Harwath et al., 2016; Harwath and Glass, 2017; Chrupa\u0142a et al., 2017; Merkx et al., 2019). The topic is of interest for automatic speech recognition, as it offers potential ways of training recognition systems without the need for vast amounts of annotation. The use of non-linguistic information as supervision is particularly useful for low-resource languages. For the purpose of this study, however, we are interested in this set of problems because of the parallel to human language acquisition. A language-learning child does not receive explicit feedback on the words or phonemes they perceive. Rather, they learn to infer these structural properties of language, with at their disposal only the speech signal itself and its weak and messy links to the outer world.", "cite_spans": [ { "start": 537, "end": 559, "text": "(Harwath et al., 2016;", "ref_id": "BIBREF9" }, { "start": 560, "end": 584, "text": "Harwath and Glass, 2017;", "ref_id": "BIBREF8" }, { "start": 585, "end": 607, "text": "Chrupa\u0142a et al., 2017;", "ref_id": "BIBREF1" }, { "start": 608, "end": 627, "text": "Merkx et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Speech recognition with non-linguistic supervision", "sec_num": "2.2" }, { "text": "The task is to match speech to a semantic representation of the language it contains, intuitively 'grounding' it in the semantic context. The design of this task is inspired by work in visual grounding. However, the availability of CDS data accompanied by visual data is very limited. Instead of visual representations, we use semantic sentence embeddings of the transcriptions. Rather than training our model to imagine the visual context accompanying an utterance, as in visual grounding, we train it to imagine the semantic content. Note that since the semantic embeddings are based on the transcriptions of the sentences themselves, they have a much closer relation to the sentences than visual context representations would have.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task", "sec_num": "3" }, { "text": "The semantic sentence representations were obtained using SBERT, a BERT-based architecture that yields sentence embeddings, which was fine-tuned on the STS benchmark of SemEval (Reimers and Gurevych, 2019). This particular encoding was chosen because it harnesses the semantic strength of BERT (Devlin et al., 2019) in an encoding of the sentence as a whole. Speech is converted to Mel-frequency cepstral coefficients (MFCCs).", "cite_spans": [ { "start": 176, "end": 203, "text": "(Reimers and Gurevych, 2019", "ref_id": "BIBREF20" }, { "start": 294, "end": 315, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Task", "sec_num": "3" }, { "text": "Since we are interested in the effect of learning from child- versus adult-directed speech, we select data that differs in register, but is otherwise as comparable as possible. We use the NewmanRatner corpus (Newman et al., 2016). This dataset is suitable for our set-up, as it contains a reasonable amount of transcribed CDS and ADS by the same speakers, which is rare; and it is in English, for which pretrained state-of-the-art language models such as (S)BERT (Devlin et al., 2019; Reimers and Gurevych, 2019) are readily available.
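To make the preprocessing concrete before describing the data in more detail, the following is a minimal sketch of how targets and inputs could be prepared. It assumes the sentence-transformers and librosa packages; the SBERT checkpoint name and MFCC settings shown are illustrative assumptions, not the exact configuration used here.

```python
# Minimal preprocessing sketch. The checkpoint name and MFCC settings
# are illustrative assumptions, not this paper's exact configuration.
import librosa
from sentence_transformers import SentenceTransformer

sbert = SentenceTransformer("stsb-bert-base")  # an STS-tuned SBERT model (assumed)

def semantic_target(transcription):
    # One fixed-size SBERT embedding per utterance transcription.
    return sbert.encode([transcription])[0]

def speech_input(wav_path, n_mfcc=13):
    # MFCC frames for the utterance audio, shaped (frames, n_mfcc).
    waveform, sample_rate = librosa.load(wav_path, sr=16000)
    return librosa.feature.mfcc(y=waveform, sr=sample_rate, n_mfcc=n_mfcc).T
```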
Child-directed speech in the NewmanRatner corpus takes place in free play between caregiver and child, whereas adult-directed speech is uttered in the context of an interview. Stretches of speech containing one or more utterances were transcribed. We selected only utterances by caregivers and excluded segments with multiple speakers. As the CDS portion of the corpus is larger than the ADS portion, we randomly selected 21,465 CDS segments, matching the number of ADS segments by caregivers. Validation and test sets of 1,000 segments were held out, while the remaining 19,465 segments were used for training. Table 1 lists some characteristic statistics of the CDS and ADS samples that were used. The ADS sample contains a larger vocabulary than the CDS sample. On average, ADS segments contain more than twice as many words, although they are only 88 milliseconds longer. The number of words per second is therefore twice as high in ADS as in CDS.", "cite_spans": [ { "start": 193, "end": 214, "text": "(Newman et al., 2016)", "ref_id": "BIBREF16" }, { "start": 448, "end": 469, "text": "(Devlin et al., 2019;", "ref_id": "BIBREF5" }, { "start": 470, "end": 497, "text": "Reimers and Gurevych, 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 1138, "end": 1145, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Natural speech: NewmanRatner corpus", "sec_num": "4.1" }, { "text": "To tease apart effects of the acoustic properties of speech and properties of the language itself, we repeat the experiment using synthesized versions of the ADS and CDS corpora. For this variant, we feed the transcriptions to the Google text2speech API, using the six available US English WaveNet voices (van den Oord et al., 2016). Note that the synthetic speech is much cleaner than the natural speech, which was recorded using a microphone attached to the caregiver's clothing, and contains considerable silence, noise, and fluctuation in speech volume.", "cite_spans": [ { "start": 302, "end": 329, "text": "(van den Oord et al., 2016)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Synthetic speech", "sec_num": "4.2" }, { "text": "Since synthetic speech for ADS and CDS is generated using the same pipeline, the acoustic properties of these samples are comparable, but linguistic differences between them are retained. Differences remain in the vocabulary size, number of words per utterance, and type-token ratio, but the number of words per second is now comparable. This means synthetic ADS utterances are much longer, since the average ADS sentence contains approximately twice as many words as the average CDS sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synthetic speech", "sec_num": "4.2" }, { "text": "The model and training set-up are based on Merkx et al. (2019). This model suits our task, as it learns to extract semantic information from speech by grounding it in another modality, without requiring the speech to be segmented. The speech encoder comprises a convolutional filter over the speech input, feeding into a stack of four bidirectional GRU layers followed by an attention operator. The difference in our set-up is the use of SBERT sentence embeddings instead of visual feature vectors. Using a margin loss, the model is trained to make the cosine distance between true pairs of speech segments and SBERT embeddings smaller than that between random counterparts.
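As a concrete illustration of the architecture and objective just described, here is a schematic PyTorch sketch; the layer sizes, margin value, and other hyperparameters are illustrative assumptions, not the exact settings of Merkx et al. (2019).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeechEncoder(nn.Module):
    # A convolutional filter over the MFCC input, a stack of four
    # bidirectional GRU layers, and attention pooling over time.
    # Sizes are illustrative assumptions.
    def __init__(self, n_mfcc=13, hidden=512, out_dim=768):
        super().__init__()
        self.conv = nn.Conv1d(n_mfcc, 64, kernel_size=6, stride=2, padding=2)
        self.gru = nn.GRU(64, hidden, num_layers=4,
                          bidirectional=True, batch_first=True)
        self.att = nn.Linear(2 * hidden, 1)          # scalar attention scores
        self.proj = nn.Linear(2 * hidden, out_dim)   # map to SBERT dimension

    def forward(self, mfcc):                         # (batch, frames, n_mfcc)
        x = self.conv(mfcc.transpose(1, 2)).transpose(1, 2)
        h, _ = self.gru(x)                           # (batch, frames', 2*hidden)
        w = torch.softmax(self.att(h), dim=1)        # attention over time
        return self.proj((w * h).sum(dim=1))         # (batch, out_dim)

def margin_loss(speech_emb, sbert_emb, margin=0.2):
    # True speech/SBERT pairs sit on the diagonal of the batch similarity
    # matrix; push them to be more similar than mismatched pairs by a margin.
    sims = F.normalize(speech_emb, dim=1) @ F.normalize(sbert_emb, dim=1).t()
    pos = sims.diag().unsqueeze(1)
    loss = torch.clamp(margin - pos + sims, min=0.0)
    loss = loss - torch.diag(loss.diag())            # zero out the true pairs
    return loss.mean()
```

In this formulation, every other utterance in the batch serves as a 'random counterpart' to the true speech-embedding pair.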
We train for 50 epochs and, following Merkx et al. (2019), use a cyclic learning rate schedule. 1", "cite_spans": [ { "start": 42, "end": 61, "text": "Merkx et al. (2019)", "ref_id": "BIBREF15" }, { "start": 725, "end": 744, "text": "Merkx et al. (2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "5" }, { "text": "Trained models are evaluated by ranking all SBERT embeddings in the test set by cosine distance to speech encodings. Reported metrics are recall@1, recall@5, and recall@10, which are the proportion of cases in which the correct SBERT embedding is among the top 1, 5, or 10 most similar ones; and the median rank of the correct SBERT embedding. Test results are reported for the training epoch for which recall@1 is highest on validation data. We trained three runs with different random initializations for each of the four datasets, and report the average scores on the test split of the dataset the model was trained on, as well as its CDS or ADS counterpart, and a combined test set, which is simply the union of the two.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance", "sec_num": "6.1" }, { "text": "As can be observed in table 2, on the combined test set, models trained on adult-directed speech slightly outperform models trained on child-directed speech. However, models in the two registers perform very similarly when we test them on the test set of the same register, with ADS having higher recall@1, but CDS scoring better on the other metrics. When we test ADS models on CDS, performance is lower than that of models that have been trained on CDS. However, the drop in performance on ADS when the model is trained on CDS rather than ADS is even larger. The better performance on the combined test set, then, seems to come from ADS models generalizing better to CDS than the other way around.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance", "sec_num": "6.1" }, { "text": "General performance of all models trained and tested on synthetic speech, which is much cleaner than the natural speech and more similar across registers, is much higher than performance on natural speech (see table 3). However, the same pattern can be observed: on the combined test set, ADS models perform better than CDS models. When tested on the register they were trained on, the models perform similarly, but models trained on ADS perform better when tested on CDS than the other way around.", "cite_spans": [], "ref_spans": [ { "start": 205, "end": 217, "text": "(see table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Performance", "sec_num": "6.1" }, { "text": "To summarize, models trained on ADS and CDS reach comparable scores when evaluated on the register they were trained on. However, training on ADS leads to knowledge that generalizes better than training on CDS does. This pattern holds even when training and evaluating on synthetic speech, where the two registers are acoustically similar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance", "sec_num": "6.1" }, { "text": "Learnability is not just about eventual attainment: it is also about the process of learning itself. Although ADS and CDS models eventually perform similarly, this is not necessarily the case during the training process. Figures 1 and 2 show the trajectory of recall performance on the validation set over the first 10 epochs of training.
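Before turning to these trajectories: for concreteness, the ranking evaluation described in Section 6.1, which also underlies the validation recall tracked here, could be computed as in the following sketch, assuming matched rows of L2-normalized speech encodings and SBERT embeddings:

```python
import numpy as np

def retrieval_metrics(speech_emb, sbert_emb, ks=(1, 5, 10)):
    # speech_emb, sbert_emb: (N, D) arrays of L2-normalized embeddings,
    # where row i of each array corresponds to the same utterance.
    sims = speech_emb @ sbert_emb.T                  # cosine similarities
    order = np.argsort(-sims, axis=1)                # best match first
    # Rank (1 = best) of the correct SBERT embedding for each speech segment.
    ranks = np.where(order == np.arange(len(sims))[:, None])[1] + 1
    metrics = {"recall@%d" % k: float(np.mean(ranks <= k)) for k in ks}
    metrics["median rank"] = float(np.median(ranks))
    return metrics
```

A median rank of 1.0, as in the synthetic-speech results, means the correct embedding is ranked first for at least half of the test segments.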
During these early stages of learning, the models trained on ADS (dotted lines) are outperformed by those trained on CDS (solid lines). This pattern is more pronounced in the models trained on synthetic speech, but also present for models trained on natural speech. After five epochs of training, average recall@1 is 0.12 for CDS models and 0.09 for ADS models. For models trained on synthetic speech, average recall@1 on validation data is 0.51 for ADS models and 0.59 for CDS models. In later stages of training, models trained on ADS outperform CDS models on validation data. At epoch 40, close to the optimally performing epoch for most models, average recall@1 is 0.31 for ADS models and 0.28 for CDS models, and 0.86 and 0.81 for the synthetic counterparts, respectively.", "cite_spans": [], "ref_spans": [ { "start": 221, "end": 236, "text": "Figures 1 and 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Learning trajectories", "sec_num": "6.2" }, { "text": "Although models trained on adult-directed speech eventually catch up with models trained on child-directed speech, CDS models learn more quickly at the start.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning trajectories", "sec_num": "6.2" }, { "text": "We find indications that learning to extract meaning from speech is initially faster when learning from child-directed speech, but learning from adult-directed speech eventually leads to similar task performance on the training register, and better generalization to the other register. The effect is present both in models trained on natural speech and in models trained on synthetic speech, suggesting that it is at least partly due to differences in the language itself, rather than acoustic properties of the speech register.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "Our finding that models trained on ADS generalize better to CDS than the other way around contrasts with the findings of Kirchhoff and Schimmel (2005). Our results also contrast with the idea that CDS is optimized to yield the most valuable knowledge, as it is the models trained on ADS that generalize better. Our finding that learning is initially faster for CDS is more in line with the idea of learnability as 'easy to learn'.", "cite_spans": [ { "start": 121, "end": 150, "text": "Kirchhoff and Schimmel (2005)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "The better generalization of models trained on ADS may be due to ADS having higher lexical and semantic variability, reflected in the larger vocabulary and higher number of words per utterance. Since there is simply more to learn, learning to perform the task is more difficult on ADS, but it leads to more valuable knowledge. It is also possible that SBERT is better suited to encode the semantic content of ADS, as ADS utterances are likely to be more similar to the sentences SBERT was trained on than CDS utterances are.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "We must be prudent in drawing conclusions from the apparent effects we see in this study, as the results on different datasets cannot be interpreted as being on the same scale.
Although all metrics are based on a ranking over the same number of competitors, the distribution of similarities and differences between the semantic representations of these competitors may differ across datasets. The combined test set scores are more directly comparable, but ideally, we would like to compare the generalization of both models on an independent test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "In future work, we intend to curate a test set with data from separate sources, which can serve as a benchmark for the models we study. We intend to explore how a curriculum of CDS followed by ADS affects learning trajectories and outcomes. We also intend to use tools for interpreting the knowledge encoded in neural networks (such as diagnostic classifiers and representational similarity analysis) to investigate the emergent representation of linguistic units such as phonemes and words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "Code is available on GitHub: https://github.com/lgelderloos/cds_ads", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Bootstrapping the lexicon: A computational model of infant speech segmentation", "authors": [ { "first": "Eleanor", "middle": [ "O" ], "last": "Batchelder", "suffix": "" } ], "year": 2002, "venue": "Cognition", "volume": "83", "issue": "2", "pages": "167--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eleanor O. Batchelder. 2002. Bootstrapping the lexicon: A computational model of infant speech segmentation. Cognition, 83(2):167-206.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Representations of language in a model of visually grounded speech signal", "authors": [ { "first": "Grzegorz", "middle": [], "last": "Chrupa\u0142a", "suffix": "" }, { "first": "Lieke", "middle": [], "last": "Gelderloos", "suffix": "" }, { "first": "Afra", "middle": [], "last": "Alishahi", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "613--622", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grzegorz Chrupa\u0142a, Lieke Gelderloos, and Afra Alishahi. 2017. Representations of language in a model of visually grounded speech signal. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 613-622.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "First language acquisition", "authors": [ { "first": "Eve", "middle": [ "V" ], "last": "Clark", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eve V. Clark. 2009. First language acquisition, 2nd edition.
Cambridge University Press, Cambridge.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Segmentability differences between child-directed and adult-directed speech: A systematic test with an ecologically valid corpus", "authors": [ { "first": "Alejandrina", "middle": [], "last": "Cristia", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Dupoux", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Bernstein Ratner", "suffix": "" }, { "first": "Melanie", "middle": [], "last": "Soderstrom", "suffix": "" } ], "year": 2019, "venue": "Open Mind", "volume": "3", "issue": "", "pages": "13--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alejandrina Cristia, Emmanuel Dupoux, Nan Bernstein Ratner, and Melanie Soderstrom. 2019. Segmentability differences between child-directed and adult-directed speech: A systematic test with an ecologically valid corpus. Open Mind, 3:13-22.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning diphone-based segmentation", "authors": [ { "first": "Robert", "middle": [], "last": "Daland", "suffix": "" }, { "first": "Janet", "middle": [ "B" ], "last": "Pierrehumbert", "suffix": "" } ], "year": 2011, "venue": "Cognitive science", "volume": "35", "issue": "1", "pages": "119--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Daland and Janet B. Pierrehumbert. 2011. Learning diphone-based segmentation. Cognitive science, 35(1):119-155.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A cross-language study of prosodic modifications in mothers' and fathers' speech to preverbal infants", "authors": [ { "first": "Anne", "middle": [], "last": "Fernald", "suffix": "" }, { "first": "Traute", "middle": [], "last": "Taeschner", "suffix": "" }, { "first": "Judy", "middle": [], "last": "Dunn", "suffix": "" }, { "first": "Mechthild", "middle": [], "last": "Papousek", "suffix": "" } ], "year": 1989, "venue": "Journal of child language", "volume": "16", "issue": "3", "pages": "477--501", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anne Fernald, Traute Taeschner, Judy Dunn, Mechthild Papousek, B\u00e9n\u00e9dicte de Boysson-Bardies, and Ikuko Fukui. 1989. A cross-language study of prosodic modifications in mothers' and fathers' speech to preverbal infants. Journal of child language, 16(3):477-501.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Are words easier to learn from infant- than adult-directed speech?
A quantitative corpus-based investigation", "authors": [ { "first": "Adriana", "middle": [], "last": "Guevara-Rukoz", "suffix": "" }, { "first": "Alejandrina", "middle": [], "last": "Cristia", "suffix": "" }, { "first": "Bogdan", "middle": [], "last": "Ludusan", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Thiolli\u00e8re", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Reiko", "middle": [], "last": "Mazuka", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Dupoux", "suffix": "" } ], "year": 2018, "venue": "Cognitive Science", "volume": "42", "issue": "5", "pages": "1586--1617", "other_ids": { "DOI": [ "10.1111/cogs.12616" ] }, "num": null, "urls": [], "raw_text": "Adriana Guevara-Rukoz, Alejandrina Cristia, Bogdan Ludusan, Roland Thiolli\u00e8re, Andrew Martin, Reiko Mazuka, and Emmanuel Dupoux. 2018. Are words easier to learn from infant- than adult-directed speech? A quantitative corpus-based investigation. Cognitive Science, 42(5):1586-1617.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Learning word-like units from joint audio-visual analysis", "authors": [ { "first": "David", "middle": [], "last": "Harwath", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "506--517", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Harwath and James Glass. 2017. Learning word-like units from joint audio-visual analysis. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 506-517.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Unsupervised learning of spoken language with visual context", "authors": [ { "first": "David", "middle": [], "last": "Harwath", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Torralba", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2016, "venue": "Advances in Neural Information Processing Systems", "volume": "29", "issue": "", "pages": "1858--1866", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Harwath, Antonio Torralba, and James Glass. 2016. Unsupervised learning of spoken language with visual context. In Advances in Neural Information Processing Systems 29, pages 1858-1866.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The specificity of environmental influence: Socioeconomic status affects early vocabulary development via maternal speech", "authors": [ { "first": "Erika", "middle": [], "last": "Hoff", "suffix": "" } ], "year": 2003, "venue": "Child Development", "volume": "74", "issue": "5", "pages": "1368--1378", "other_ids": { "DOI": [ "10.1111/1467-8624.00612" ] }, "num": null, "urls": [], "raw_text": "Erika Hoff. 2003. The specificity of environmental influence: Socioeconomic status affects early vocabulary development via maternal speech.
Child Development, 74(5):1368-1378.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Statistical properties of infant-directed versus adult-directed speech: Insights from speech recognition", "authors": [ { "first": "Katrin", "middle": [], "last": "Kirchhoff", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Schimmel", "suffix": "" } ], "year": 2005, "venue": "The Journal of the Acoustical Society of America", "volume": "117", "issue": "4", "pages": "2238--2246", "other_ids": { "DOI": [ "10.1121/1.1869172" ] }, "num": null, "urls": [], "raw_text": "Katrin Kirchhoff and Steven Schimmel. 2005. Statistical properties of infant-directed versus adult-directed speech: Insights from speech recognition. The Journal of the Acoustical Society of America, 117(4):2238-2246.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Cross-language analysis of phonetic units in language addressed to infants", "authors": [ { "first": "Patricia", "middle": [ "K" ], "last": "Kuhl", "suffix": "" }, { "first": "Jean", "middle": [ "E" ], "last": "Andruski", "suffix": "" }, { "first": "Inna", "middle": [ "A" ], "last": "Chistovich", "suffix": "" }, { "first": "Ludmilla", "middle": [ "A" ], "last": "Chistovich", "suffix": "" }, { "first": "Elena", "middle": [ "V" ], "last": "Kozhevnikova", "suffix": "" }, { "first": "Viktoria", "middle": [ "L" ], "last": "Ryskina", "suffix": "" }, { "first": "Elvira", "middle": [ "I" ], "last": "Stolyarova", "suffix": "" }, { "first": "Ulla", "middle": [], "last": "Sundberg", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Lacerda", "suffix": "" } ], "year": 1997, "venue": "Science", "volume": "277", "issue": "5326", "pages": "684--686", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patricia K. Kuhl, Jean E. Andruski, Inna A. Chistovich, Ludmilla A. Chistovich, Elena V. Kozhevnikova, Viktoria L. Ryskina, Elvira I. Stolyarova, Ulla Sundberg, and Francisco Lacerda. 1997. Cross-language analysis of phonetic units in language addressed to infants. Science, 277(5326):684-686.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Quantifying sources of variability in infancy research using the infant-directed-speech preference", "authors": [], "year": null, "venue": "Advances in Methods and Practices in Psychological Science", "volume": "3", "issue": "", "pages": "24--52", "other_ids": { "DOI": [ "10.1177/2515245919900809" ] }, "num": null, "urls": [], "raw_text": "The ManyBabies Consortium. 2020. Quantifying sources of variability in infancy research using the infant-directed-speech preference. Advances in Methods and Practices in Psychological Science, 3(1):24-52.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Infant directed speech and the development of speech perception: Enhancing development or an unintended consequence?", "authors": [ { "first": "Bob", "middle": [], "last": "McMurray", "suffix": "" }, { "first": "Kristine", "middle": [ "A" ], "last": "Kovack-Lesh", "suffix": "" }, { "first": "Dresden", "middle": [], "last": "Goodwin", "suffix": "" }, { "first": "William", "middle": [], "last": "McEchron", "suffix": "" } ], "year": 2013, "venue": "Cognition", "volume": "129", "issue": "2", "pages": "362--378", "other_ids": { "DOI": [ "10.1016/j.cognition.2013.07.015" ] }, "num": null, "urls": [], "raw_text": "Bob McMurray, Kristine A. Kovack-Lesh, Dresden Goodwin, and William McEchron. 2013.
Infant directed speech and the development of speech perception: Enhancing development or an unintended consequence? Cognition, 129(2):362-378.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Language Learning Using Speech to Image Retrieval", "authors": [ { "first": "Danny", "middle": [], "last": "Merkx", "suffix": "" }, { "first": "Stefan", "middle": [ "L" ], "last": "Frank", "suffix": "" }, { "first": "Mirjam", "middle": [], "last": "Ernestus", "suffix": "" } ], "year": 2019, "venue": "Proceedings of Interspeech 2019", "volume": "", "issue": "", "pages": "1841--1845", "other_ids": { "DOI": [ "10.21437/Interspeech.2019-3067" ] }, "num": null, "urls": [], "raw_text": "Danny Merkx, Stefan L. Frank, and Mirjam Ernestus. 2019. Language Learning Using Speech to Image Retrieval. In Proceedings of Interspeech 2019, pages 1841-1845.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Input and uptake at 7 months predicts toddler vocabulary: the role of child-directed speech and infant processing skills in language development", "authors": [ { "first": "Rochelle", "middle": [ "S" ], "last": "Newman", "suffix": "" }, { "first": "Meredith", "middle": [ "L" ], "last": "Rowe", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Bernstein Ratner", "suffix": "" } ], "year": 2016, "venue": "Journal of Child Language", "volume": "43", "issue": "5", "pages": "1158--1173", "other_ids": { "DOI": [ "10.1017/S0305000915000446" ] }, "num": null, "urls": [], "raw_text": "Rochelle S. Newman, Meredith L. Rowe, and Nan Bernstein Ratner. 2016. Input and uptake at 7 months predicts toddler vocabulary: the role of child-directed speech and infant processing skills in language development. Journal of Child Language, 43(5):1158-1173.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Mother, I'd rather do it myself: Some effects and noneffects of maternal speech style", "authors": [ { "first": "Elissa", "middle": [], "last": "Newport", "suffix": "" }, { "first": "Henry", "middle": [], "last": "Gleitman", "suffix": "" }, { "first": "Lila", "middle": [], "last": "Gleitman", "suffix": "" } ], "year": 1977, "venue": "Talking to children: Language input and acquisition", "volume": "", "issue": "", "pages": "109--149", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elissa Newport, Henry Gleitman, and Lila Gleitman. 1977. Mother, I'd rather do it myself: Some effects and noneffects of maternal speech style. In Catherine E. Snow and Charles A. Ferguson, editors, Talking to children: Language input and acquisition, pages 109-149.
Cambridge University Press, Cambridge.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "WaveNet: A generative model for raw audio", "authors": [ { "first": "A\u00e4ron", "middle": [], "last": "Van Den Oord", "suffix": "" }, { "first": "Sander", "middle": [], "last": "Dieleman", "suffix": "" }, { "first": "Heiga", "middle": [], "last": "Zen", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Simonyan", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Graves", "suffix": "" }, { "first": "Nal", "middle": [], "last": "Kalchbrenner", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Senior", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1609.03499" ] }, "num": null, "urls": [], "raw_text": "A\u00e4ron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alexander Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. 2016. WaveNet: A generative model for raw audio. arXiv:1609.03499.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "How accessible is the lexicon in Motherese?", "authors": [ { "first": "Nan", "middle": [], "last": "Bernstein Ratner", "suffix": "" }, { "first": "Becky", "middle": [], "last": "Rooney", "suffix": "" } ], "year": 2001, "venue": "Approaches to Bootstrapping: Phonological, lexical, syntactic and neurophysiological aspects of early language acquisition", "volume": "23", "issue": "", "pages": "71--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nan Bernstein Ratner and Becky Rooney. 2001. How accessible is the lexicon in Motherese? In J\u00fcrgen Weissenborn and Barbara H\u00f6hle, editors, Approaches to Bootstrapping: Phonological, lexical, syntactic and neurophysiological aspects of early language acquisition, volume 23 of Language Acquisition and Language Disorders, pages 71-78.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Sentence-BERT: Sentence embeddings using Siamese BERT-networks", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3982--3992", "other_ids": { "DOI": [ "10.18653/v1/D19-1410" ] }, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Beyond babytalk: Re-evaluating the nature and content of speech input to preverbal infants", "authors": [ { "first": "Melanie", "middle": [], "last": "Soderstrom", "suffix": "" } ], "year": 2007, "venue": "Developmental Review", "volume": "27", "issue": "4", "pages": "501--532", "other_ids": {}, "num": null, "urls": [], "raw_text": "Melanie Soderstrom. 2007. Beyond babytalk: Re-evaluating the nature and content of speech input to preverbal infants.
Developmental Review, 27(4):501-532.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Talking to children matters: Early language experience strengthens processing and builds vocabulary", "authors": [ { "first": "Adriana", "middle": [], "last": "Weisleder", "suffix": "" }, { "first": "Anne", "middle": [], "last": "Fernald", "suffix": "" } ], "year": 2013, "venue": "Psychological Science", "volume": "24", "issue": "11", "pages": "2143--2152", "other_ids": { "DOI": [ "10.1177/0956797613488145" ] }, "num": null, "urls": [], "raw_text": "Adriana Weisleder and Anne Fernald. 2013. Talking to children matters: Early language experience strengthens processing and builds vocabulary. Psychological Science, 24(11):2143-2152.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "Figure 1: Validation performance in early training on natural speech" }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "Figure 2: Validation performance in early training on synthetic speech" }, "TABREF2": { "content": "
", "html": null, "type_str": "table", "num": null, "text": "Test performance of models trained on natural speech" }, "TABREF3": { "content": "", "html": null, "type_str": "table", "num": null, "text": "Test performance of models trained on synthetic speech" } } } }