{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:13:19.378588Z" }, "title": "Explicit Tone Transcription Improves ASR Performance in Extremely Low-Resource Languages: A Case Study in Bribri", "authors": [ { "first": "Rolando", "middle": [], "last": "Coto-Solano", "suffix": "", "affiliation": {}, "email": "rolando.a.coto.solano@dartmouth.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Linguistic tone is transcribed for input into ASR systems in numerous ways. This paper shows a systematic test of several transcription styles, using as an example the Chibchan language Bribri, an extremely low-resource language from Costa Rica. The most successful models separate the tone from the vowel, so that the ASR algorithms learn tone patterns independently. These models showed improvements ranging from 4% to 25% in character error rate (CER), and between 3% and 23% in word error rate (WER). This is true for both traditional GMM/HMM and end-to-end CTC algorithms. This paper also presents the first attempt to train ASR models for Bribri. The best performing models had a CER of 33% and a WER of 50%. Despite the disadvantage of using hand-engineered representations, these models were trained on only 68 minutes of data, and therefore show the potential of ASR to generate further training materials and aid in the documentation and revitalization of the language.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Linguistic tone is transcribed for input into ASR systems in numerous ways. This paper shows a systematic test of several transcription styles, using as an example the Chibchan language Bribri, an extremely low-resource language from Costa Rica. The most successful models separate the tone from the vowel, so that the ASR algorithms learn tone patterns independently. These models showed improvements ranging from 4% to 25% in character error rate (CER), and between 3% and 23% in word error rate (WER). This is true for both traditional GMM/HMM and end-to-end CTC algorithms. This paper also presents the first attempt to train ASR models for Bribri. The best performing models had a CER of 33% and a WER of 50%. Despite the disadvantage of using hand-engineered representations, these models were trained on only 68 minutes of data, and therefore show the potential of ASR to generate further training materials and aid in the documentation and revitalization of the language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Transcribir el tono de forma expl\u00edcita mejora el rendimiento del reconocimiento de voz en idiomas extremadamente bajos en recursos: Un estudio de caso en bribri. Hay numerosas maneras de transcribir el tono ling\u00fc\u00edstico a la hora de proveer los datos de entrenamiento a los sistemas de reconocimiento de voz. Este art\u00edculo presenta un experimento sistem\u00e1tico de varias formas de transcripci\u00f3n usando como ejemplo la lengua chibcha bribri, una lengua de Costa Rica extremadamente baja en recursos. Los modelos m\u00e1s exitosos fueron aquellos en que el tono aparece separado de la vocal de tal forma que los algoritmos pudieran aprender los patrones tonales por separado. Estos modelos mostraron mejoras de entre 4% y 26% en el error de caracteres (CER), y de entre 3% y 25% en el error de palabras (WER). 
Esto se observ\u00f3 tanto en los algoritmos GMM/HMM como en los algoritmos CTC de secuenciaa-secuencia. Este art\u00edculo tambi\u00e9n presenta el primer intento de entrenar modelos de reconocimiento de voz en bribri. Los mejores modelos tuvieron un CER de 33% y un WER de 50%. A pesar de la desventaja de usar representaciones dise\u00f1adas a mano, estos modelos se entrenaron con solo 68 minutos de datos y muestran el potencial para generar m\u00e1s materiales de entrenamiento, as\u00ed como de ayudar con la documentaci\u00f3n y revitalizaci\u00f3n de la lengua.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Resumen", "sec_num": null }, { "text": "The documentation and revitalization of Indigenous languages relies on the transcription of speech recordings, which contain vital information about a community and its culture. However, the transcription of these recordings constitutes a major bottleneck in the process of making this information usable for researchers and practitioners. It typically takes up to 50 hours of an expert's time to transcribe each hour of audio in an Indigenous language (Shi et al., 2021) . Moreover, there are usually few community members who have the expertise to transcribe this data and who have the time to do so. Because of this, extending automated speech recognition (ASR) to these languages and incorporating it into their documentation and revitalization workflows would alleviate the workload of linguists and community members and help accelerate their efforts.", "cite_spans": [ { "start": 453, "end": 471, "text": "(Shi et al., 2021)", "ref_id": "BIBREF60" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Indigenous and other minority languages usually have few transcribed audio recordings, and so adapting data-hungry ASR algorithms to assist in their documentation is an active area of research (Besacier et al., 2014; Jimerson and Prud'hommeaux, 2018; Michaud et al., 2019; Foley et al., 2018; Gupta and Boulianne, 2020b,a; Zahrer et al., 2020; Thai et al., 2019; Li et al., 2020; Zevallos et al., 2019; Matsuura et al., 2020; Levow et al., 2021) . This paper will examine an element that might appear obvious at first, but one where the literature is \"inconclusive\" (Adams, 2018) , and which can have major consequences in performance: How should tones be transcribed when dealing with extremely low-resource languages? This will be examined by building ASR models for the language Bribri from Costa Rica. 
The results show that simple changes in the orthographic transcription, in the form of explicit tonal markings that are separate from the vowel information, can dramatically improve accuracy.", "cite_spans": [ { "start": 193, "end": 216, "text": "(Besacier et al., 2014;", "ref_id": "BIBREF4" }, { "start": 217, "end": 250, "text": "Jimerson and Prud'hommeaux, 2018;", "ref_id": "BIBREF34" }, { "start": 251, "end": 272, "text": "Michaud et al., 2019;", "ref_id": "BIBREF51" }, { "start": 273, "end": 292, "text": "Foley et al., 2018;", "ref_id": "BIBREF20" }, { "start": 293, "end": 322, "text": "Gupta and Boulianne, 2020b,a;", "ref_id": null }, { "start": 323, "end": 343, "text": "Zahrer et al., 2020;", "ref_id": "BIBREF69" }, { "start": 344, "end": 362, "text": "Thai et al., 2019;", "ref_id": "BIBREF62" }, { "start": 363, "end": 379, "text": "Li et al., 2020;", "ref_id": "BIBREF44" }, { "start": 380, "end": 402, "text": "Zevallos et al., 2019;", "ref_id": "BIBREF70" }, { "start": 403, "end": 425, "text": "Matsuura et al., 2020;", "ref_id": "BIBREF47" }, { "start": 426, "end": 445, "text": "Levow et al., 2021)", "ref_id": "BIBREF43" }, { "start": 566, "end": 579, "text": "(Adams, 2018)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A tonal language is a language where differences in pitch can change the meaning of a word, even if the consonants and vowels are the same (Yip, 2002) . The best-known example of a tonal language is Mandarin Chinese. In Mandarin, the syllable [ma] means \"mother\" if it is produced with a high pitch. The same syllable means \"horse\" when pronounced with a dipping-rising pitch, but if it is pronounced with a falling pitch, it means \"to scold\". Between 40% and 70% of the languages of the world are tonal (Yip, 2002; Maddieson, 2013) , including numerous Indigenous languages of the Americas. Because tone is expressed as pitch variations, and those variations can only occur during the pronunciation of consonants and vowels, tonal cues overlap with those of the consonants and vowels in the word. Therefore, it is useful to distinguish between segments -consonants and vowels -and the information that is suprasegmental, such as tone, which occurs co-temporally with segments (Lehiste and Lass, 1976) .", "cite_spans": [ { "start": 139, "end": 150, "text": "(Yip, 2002)", "ref_id": "BIBREF67" }, { "start": 504, "end": 515, "text": "(Yip, 2002;", "ref_id": "BIBREF67" }, { "start": 516, "end": 532, "text": "Maddieson, 2013)", "ref_id": "BIBREF45" }, { "start": 977, "end": 1001, "text": "(Lehiste and Lass, 1976)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Tonal languages and ASR", "sec_num": "1.1" }, { "text": "Precisely because of large tonal languages like Mandarin, there has been research into how tone can play a role in ASR. Many systems treat pitch (the main phonetic cue of tone) as a completely separate feature. In such systems, the traditional ASR algorithm learns the segments, and a separate machine learning module learns the pitch patterns and offers its inference of the tone (Kaur et al., 2020) . This has been used for languages like Mandarin (Niu et al., 2013; Shan et al., 2010) , Thai (Kertkeidkachorn et al., 2014) and Yoruba (O . d\u00e9lo . b\u00ed, 2008; Yusof et al., 2013) . On the other hand, there is research that suggests that, given that the tone and vowel information are co-temporal, these are best learned together. 
For example, an ASR system would be asked to learn a vowel and its tone as a single unit (e.g. a+highTone). Fus-ing the representation for vowel and tone, or embedded tone modeling (Lee et al., 2002) , has been shown to be effective for larger languages like Mandarin (Chang et al., 2000) , Vietnamese and Cantonese (Metze et al., 2013; Nguyen et al., 2018) , as well as smaller languages like Yolox\u00f3chitl Mixtec from Mexico (Shi et al., 2021) and Anyi from C\u00f4te d'Ivoire (Koffi, 2020) . Finally, in some tonal languages like Hausa, in which the orthography does not mark any tone, the tone is not included at all in ASR models (Gauthier et al., 2016) .", "cite_spans": [ { "start": 381, "end": 400, "text": "(Kaur et al., 2020)", "ref_id": "BIBREF36" }, { "start": 450, "end": 468, "text": "(Niu et al., 2013;", "ref_id": "BIBREF55" }, { "start": 469, "end": 487, "text": "Shan et al., 2010)", "ref_id": "BIBREF59" }, { "start": 490, "end": 525, "text": "Thai (Kertkeidkachorn et al., 2014)", "ref_id": null }, { "start": 530, "end": 558, "text": "Yoruba (O . d\u00e9lo . b\u00ed, 2008;", "ref_id": null }, { "start": 559, "end": 578, "text": "Yusof et al., 2013)", "ref_id": "BIBREF68" }, { "start": 911, "end": 929, "text": "(Lee et al., 2002)", "ref_id": "BIBREF41" }, { "start": 998, "end": 1018, "text": "(Chang et al., 2000)", "ref_id": "BIBREF7" }, { "start": 1046, "end": 1066, "text": "(Metze et al., 2013;", "ref_id": "BIBREF49" }, { "start": 1067, "end": 1087, "text": "Nguyen et al., 2018)", "ref_id": "BIBREF53" }, { "start": 1155, "end": 1173, "text": "(Shi et al., 2021)", "ref_id": "BIBREF60" }, { "start": 1202, "end": 1215, "text": "(Koffi, 2020)", "ref_id": null }, { "start": 1358, "end": 1381, "text": "(Gauthier et al., 2016)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Tonal languages and ASR", "sec_num": "1.1" }, { "text": "Representations where the tone is marked explicitly but is kept separate from the vowel (i.e. explicit tone recognition (Lee et al., 2002) ) are not often used for larger languages, but they are very common in low-resource ASR. This is often done using phonetic representations, where the output of the algorithm is in the form of the International Phonetic Alphabet (IPA), which is then converted to the language's orthographic convention. For languages like Na from China and Chatino from Mexico (\u0106avar et al., 2016; , the characters representing the tone are separated from the vowel. Wisniewski et al. (2020) argue that it is the transparency of the representation (either orthographic or phonetic) that helps ASR to learn these tonal representations, and this transparency includes having characters that the algorithm can use to generalize the phonetic cues of the tones separate from those of the vowels.", "cite_spans": [ { "start": 120, "end": 138, "text": "(Lee et al., 2002)", "ref_id": "BIBREF41" }, { "start": 491, "end": 518, "text": "Mexico (\u0106avar et al., 2016;", "ref_id": null }, { "start": 588, "end": 612, "text": "Wisniewski et al. (2020)", "ref_id": "BIBREF65" } ], "ref_spans": [], "eq_spans": [], "section": "Tonal languages and ASR", "sec_num": "1.1" }, { "text": "Given the review above, there appears to be more than one way to represent tone effectively as input for ASR. 
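As a minimal illustration of the first family of approaches, in which pitch is extracted as a separate feature stream alongside the segmental features, the following Python sketch (our own example using the librosa library; the function name and parameter values are assumptions, not taken from the systems cited above) appends an explicit F0 track to standard MFCC features so that a downstream model can learn tone patterns from it directly:

import librosa
import numpy as np

def features_with_pitch(wav_path, n_mfcc=13):
    # Load audio, compute segmental (MFCC) features, and add an explicit
    # pitch (F0) track as one extra feature dimension.
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)     # shape (n_mfcc, frames)
    f0, _, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)       # one F0 estimate per frame
    f0 = np.nan_to_num(f0, nan=0.0)                            # unvoiced frames get F0 = 0
    frames = min(mfcc.shape[1], f0.shape[0])                   # guard against off-by-one frame counts
    return np.vstack([mfcc[:, :frames], f0[None, :frames]])    # shape (n_mfcc + 1, frames)
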
In this paper several different methods will be tested using a language (and indeed, a language family) in which no ASR models have been trained before.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tonal languages and ASR", "sec_num": "1.1" }, { "text": "The Bribri language (Glottocode brib1243) is spoken by about 7000 people in Southern Costa Rica (INEC, 2011) . It belongs to the Chibchan language family, which includes languages such as Cab\u00e9car and Malecu from Costa Rica, Kuna and Naso from Panama, and Kogi from Colombia. Bribri is a vulnerable language (Moseley, 2010; S\u00e1nchez Avenda\u00f1o, 2013) . This means that there are still children who speak it with their families but there are few circumstances when it is written, and indeed there are very few books published in the language. Bribri has four tones: high, falling, rising, and low tone. The first three are marked in the orthography using diacritics (respectively: \u00e0, \u00e1, \u00e2), while the low tone is left unmarked: a. Bribri tone can create differences in meaning: the word al\u00e0 means 'child'; its first syllable is low and the second syllable is high. Contrast this with al\u00e1 'thunder', where the second syllable has a falling tone.", "cite_spans": [ { "start": 96, "end": 108, "text": "(INEC, 2011)", "ref_id": "BIBREF32" }, { "start": 307, "end": 322, "text": "(Moseley, 2010;", "ref_id": "BIBREF52" }, { "start": 323, "end": 346, "text": "S\u00e1nchez Avenda\u00f1o, 2013)", "ref_id": "BIBREF58" } ], "ref_spans": [], "eq_spans": [], "section": "Chibchan Languages and Bribri", "sec_num": "1.2" }, { "text": "Bribri has an additional suprasegmental feature: Nasality. Like in French, vowels in Bribri can be oral or nasal. Therefore, \u00f9 with an oral vowel means 'house', but \u00f9 with a nasal vowel, marked with a line underneath the vowel, 1 means 'pot'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chibchan Languages and Bribri", "sec_num": "1.2" }, { "text": "Bribri orthographies are relatively transparent due to their recent invention, the oldest of which is from the 1970s (Constenla et al., 2004; Jara Murillo and Garc\u00eda Segura, 2013; Margery, 2005) . This works to our advantage, in that there is almost no difference between an orthographic and a phonetic representation for the input of Bribri ASR.", "cite_spans": [ { "start": 117, "end": 141, "text": "(Constenla et al., 2004;", "ref_id": null }, { "start": 142, "end": 179, "text": "Jara Murillo and Garc\u00eda Segura, 2013;", "ref_id": "BIBREF33" }, { "start": 180, "end": 194, "text": "Margery, 2005)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Chibchan Languages and Bribri", "sec_num": "1.2" }, { "text": "There has been some work on Bribri NLP, including the creation of digital dictionaries (Krohn, 2020) and morphological analyzers used for documentation (Flores Sol\u00f3rzano, 2019 , 2017b . There have also been some experiments with untrained forced alignment (Coto-Solano and Flores Sol\u00f3rzano, 2016, 2017) , and with neural machine translation (Feldman and Coto-Solano, 2020) . However, there is a need to accelerate the documentation of Bribri and produce more written materials out of existing recordings, and here we face the bottleneck problem mentioned above. One of the main goals of this paper is to build a first ASR sys-1 There are two main orthographic systems for Bribri. In the Constenla et al. (2004) system, the nasal is marked with a line under the vowel. 
In the Jara Murillo and Garc\u00eda Segura (2013) system, the nasal is marked with a tilde over the vowel: u 'house'. tem for Bribri in order to alleviate the problems of transcription.", "cite_spans": [ { "start": 152, "end": 175, "text": "(Flores Sol\u00f3rzano, 2019", "ref_id": "BIBREF18" }, { "start": 176, "end": 183, "text": ", 2017b", "ref_id": "BIBREF19" }, { "start": 256, "end": 272, "text": "(Coto-Solano and", "ref_id": "BIBREF12" }, { "start": 273, "end": 302, "text": "Flores Sol\u00f3rzano, 2016, 2017)", "ref_id": null }, { "start": 341, "end": 372, "text": "(Feldman and Coto-Solano, 2020)", "ref_id": "BIBREF16" }, { "start": 687, "end": 710, "text": "Constenla et al. (2004)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Chibchan Languages and Bribri", "sec_num": "1.2" }, { "text": "The first step towards training an ASR model in Bribri was the selection of the training materials. The spontaneous speech corpus of Flores Sol\u00f3rzano (2017a) was used because of its public availability (it is available under a Creative Commons license) and because of its consistent transcription. This corpus contains 1571 utterances from 28 speakers (14 male and 14 female), for a total of 68 minutes of transcribed speech. These utterances contain a total of 13586 words, with 2221 unique words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transcription Methodology", "sec_num": "2" }, { "text": "The main question in this paper is: How can we easily reformat Bribri text into the best possible input for ASR? Let's take the word dik\u00ec 'underneath' as an example. This word has two syllables, the first one with a low tone and the second one with a high tone, indicated by a grave accent. In addition to the tone, the second syllable is also nasal, and this is marked with a line underneath the vowel. One possible representation of this word would be to interpret it as four different characters, as is shown in condition 1 of table 1. Here, the character for the last vowel would carry in it the information that it is the vowel /i/, that the vowel is nasal, and that the vowel is produced with a high tone. This condition will be called AllFeats, or \"all features together\", because each character in the ASR alphabet carries with it all the suprasegmental features of the vowel. In this transcription, the Bribri ASR alphabet would have 48 separate vowel symbols: A-HIGH, A-HIGH-NAS, A-LOW, A-LOW-NAS, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transcription Methodology", "sec_num": "2" }, { "text": "There are many other ways in which the word could be transcribed. For example, as shown in the second condition, NasSep, the nasality could be written as a separate character and the tone and vowel could be represented together. In this transcription, the final vowel would be made up of two separate alphabetic symbols: I-HIGH and NAS. This idea of separating features could be taken further, and both the tone and the nasality could be represented as separate characters. This is represented in the third condition, ToneNasSepWL.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transcription Methodology", "sec_num": "2" }, { "text": "Here, both the tones and the nasal feature follow the vowel as separate characters, and the final vowel of dik\u00ec 'underneath' would be expressed using three alphabetic symbols: I HIGH NAS. 
Notice that, in this condition, the low tone of the first syllable would be represented explicitly after the first vowel, I LOW, hence the condition includes the 'WL', \"with low [tone]\". However, this low tone is the most frequent tone in Bribri, and as a matter of fact it has no explicit diacritic in the Bribri writing system. Because of this, another option for the transcription could be to keep marking the tones and nasals separately from the vowels, but to only represent the three salient tones (high, falling, rising) and leave the low tone as a default, unwritten option in the transcription. This is shown in condition 4, ToneNasSep. There are some combinations where the nasal marking stays with the vowel, but the tone is separate. In condition 5, ToneSepWL, the tones are indicated separately but the nasality is written jointly with the vowel. The final vowel of dik\u00ec 'underneath' would then be represented using two symbols: I-NAS HIGH. This means that there would be twelve vowel symbols 2 in the Bribri ASR alphabet (e.g. A, A-NAS, E, E-NAS, etc.), and separate indicators for the four tones: HIGH, FALL, RISE, LOW. But, given that the low tone is again the most frequent, we could assume it as a default tone and leave the LOW marking out. This is done in condition 6, ToneSep. In ToneSep, the second vowel has a high tone, and so it gets a separate HIGH tone marker. The first vowel, on the other hand, has a low tone, and therefore gets no marking.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transcription Methodology", "sec_num": "2" }, { "text": "In order to test the relative performance of these conditions, two different ASR systems were used. First, models were trained on the Bribri data using a traditional Gaussian Mixture Model based Hidden Markov Model (GMM/HMM) algorithm, implemented in the Kaldi ASR program (Povey et al., 2011). Given the paucity of data, this is likely the best option for training. However, end-to-end systems are also available, and while they are known not to perform well with small datasets (Goodfellow et al., 2016; Glasmachers, 2017), they were still tested to see if the differences in transcription caused any variation in performance. A Connectionist Temporal Classification (CTC) loss algorithm (Graves et al., 2006) with bidirectional recurrent neural networks (RNNs) was used, implemented in the DeepSpeech program (Hannun et al., 2014).", "cite_spans": [ { "start": 264, "end": 284, "text": "(Povey et al., 2011)", "ref_id": "BIBREF57" }, { "start": 472, "end": 497, "text": "(Goodfellow et al., 2016;", "ref_id": "BIBREF24" }, { "start": 498, "end": 515, "text": "Glasmachers, 2017", "ref_id": "BIBREF23" }, { "start": 683, "end": 704, "text": "(Graves et al., 2006)", "ref_id": "BIBREF25" }, { "start": 805, "end": 826, "text": "(Hannun et al., 2014)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Transcription Methodology", "sec_num": "2" }, { "text": "Kaldi was used to train models for each of the transcription conditions described above. Two parameters were varied in the experiment: The number of phones in the acoustic model (monophone or triphone), and the size of the n-grams in a KenLM based language model (unigrams, bigrams and trigrams) (Heafield, 2011). All other hyperparameters were identical to those in the default Kaldi installation. Thirty models were trained for each of the six transcription conditions, using the six parameter combinations (phones x ngrams), for a total of 1080 models. 3
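Before turning to the results, as a concrete illustration of the six transcription conditions, the following Python sketch encodes the word dik\u00ec 'underneath' under each style (the encode function and its pre-segmented input format are ours for illustration; they are not the preprocessing code used in the experiments):

SALIENT = {'HIGH', 'FALL', 'RISE'}  # the low tone is the unmarked default

def encode(segments, condition):
    # segments: consonants as plain strings, vowels as (vowel, tone, nasal) tuples.
    out = []
    for seg in segments:
        if isinstance(seg, str):           # consonant: copy as-is
            out.append(seg)
            continue
        vowel, tone, nasal = seg
        if condition == 'AllFeats':        # vowel, tone and nasality fused into one symbol
            out.append(vowel + '-' + tone + ('-NAS' if nasal else ''))
        elif condition == 'NasSep':        # nasality split off, tone fused with the vowel
            out.append(vowel + '-' + tone)
            out += ['NAS'] if nasal else []
        elif condition == 'ToneNasSepWL':  # tone and nasality split off, low tone written
            out += [vowel, tone] + (['NAS'] if nasal else [])
        elif condition == 'ToneNasSep':    # tone and nasality split off, low tone omitted
            out += [vowel] + ([tone] if tone in SALIENT else []) + (['NAS'] if nasal else [])
        elif condition == 'ToneSepWL':     # nasality fused with the vowel, all four tones split off
            out += [vowel + ('-NAS' if nasal else ''), tone]
        elif condition == 'ToneSep':       # nasality fused, only the three salient tones split off
            out += [vowel + ('-NAS' if nasal else '')] + ([tone] if tone in SALIENT else [])
    return out

diki = ['D', ('I', 'LOW', False), 'K', ('I', 'HIGH', True)]   # d + i (low) + k + i (high, nasal)
# encode(diki, 'AllFeats')     -> ['D', 'I-LOW', 'K', 'I-HIGH-NAS']
# encode(diki, 'ToneNasSepWL') -> ['D', 'I', 'LOW', 'K', 'I', 'HIGH', 'NAS']
# encode(diki, 'ToneSep')      -> ['D', 'I', 'K', 'I-NAS', 'HIGH']
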
To train these models, utterances were randomly shuffled for every model and then split so that 90% of the utterances were used for training (1571 utterances) and 10% were used for validation (174 utterances). Each of the models had two measures of error: the median character error rate (CER) and the median word error rate (WER), calculated over the input transcription for each condition. The results reported below correspond to the median of the 30 medians in each condition. Figure 1 shows the summary of the training results. The condition with the best performance is ToneSep, where the tone symbol is kept separate (HIGH, FALL, RISE), the low tone is left out as a default, and the nasal feature remains connected to the vowel symbol (i.e., A versus A-NAS). Table 2 shows the summary of results for three conditions: ToneSep and AllFeats, which had the best performance, and ToneNasSepWL, which had the worst performance. The best performing of all conditions is ToneSep trained with triphones and a trigram language model. This combination of factors produces models with a median of 33% CER and 50% WER. Very close is AllFeats with triphones and trigrams, with 35% CER and 51% WER. These two perform substantially better than ToneNasSepWL, with CER 42% and WER 62% using the same parameters. This means that the ToneSep transcription is associated with an improvement of 9% in CER and 12% in WER. The biggest improvements between conditions are seen with the monophone+trigram models, where ToneSep has a 19% lower CER and a 23% lower WER than ToneNasSepWL.", "cite_spans": [ { "start": 292, "end": 308, "text": "(Heafield, 2011)", "ref_id": "BIBREF30" }, { "start": 553, "end": 554, "text": "3", "ref_id": null } ], "ref_spans": [ { "start": 1035, "end": 1043, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1321, "end": 1328, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Traditional ASR Results", "sec_num": "3" }, { "text": "ToneSep is not the condition with the fewest vowel symbols, but it is the one with the best performance. This could be due to two reasons. First, what ToneSep appears to be doing is changing the behavior of the triphone window. Kaldi's acoustic model has states with three symbols in them. In a writing system that only has graphemes for segments, the triphone window would, indeed, look at the consonant or vowel in question and at its preceding and following segments. With ToneSep, the tone symbols are surrounded by the vowel the tone belongs to and the following consonant or vowel (or the nasal symbol). This means that, in practice, when the triphone window looks at the tone, it is looking at two actual phones (the vowel with its tonal cues, plus the following consonant/vowel), or even one actual phone (the vowel with its tonal and nasal cues). There are well-known effects of tones on their preceding and following segments (Tang, 2008; DiCanio, 2012; Hanson, 2009), so this reduced window might be helping the computer generalize the relatively stable tone patterns of Bribri and their effect on the surrounding segments. The training chops the duration of the vowel into two segments; the first chunk is used to identify the vowel itself, and the second chunk is used to identify the tonal trajectory. 4 A second reason for the advantage of ToneSep might be the phonetics of the low tone itself. It is not only the most frequent tone in Bribri, but it is also the least stable phonetically. 
The low tone can actually appear as low or mid, depending on its surrounding tones (Coto-Solano, 2015). What Kaldi might be doing is simply learning the more stable patterns of the other tones and labeling all other pitch patterns as \"low\".", "cite_spans": [ { "start": 934, "end": 946, "text": "(Tang, 2008;", "ref_id": "BIBREF61" }, { "start": 947, "end": 961, "text": "DiCanio, 2012;", "ref_id": "BIBREF14" }, { "start": 962, "end": 975, "text": "Hanson, 2009)", "ref_id": "BIBREF29" }, { "start": 1315, "end": 1316, "text": "4", "ref_id": null }, { "start": 1584, "end": 1603, "text": "(Coto-Solano, 2015)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Traditional ASR Results", "sec_num": "3" }, { "text": "The reason why ToneNasSepWL is the worst performing transcription is unclear. It might be the case that the addition of the low tone creates an explosion in the number of HMM states, given that the low tone is the most frequent one. Another reason might be the separation of the nasal feature. It is possible that the nasal vowels of Bribri are different enough from their oral equivalents that trying to decouple the vowels from their nasality makes generalization more difficult. As can be seen in figure 1, the NasSep condition also performs poorly. This pattern matches results in languages like Portuguese (Meinedo et al., 2003) and Hindi (Jyothi and Hasegawa-Johnson, 2015), where the best results are obtained by keeping the nasal feature bound to the vowel representations. 4 No experiment was conducted to test the effect of placing the tone indicator before the vowel (e.g. d LOW i k HIGH i NAS for dik\u00ec 'underneath'). In theory, the performance would be worse given that, in the early milliseconds of a vowel, tones can be phonetically co-articulated with their preceding tone and these two cues would blend together (Xu, 1997; Nguy\u1ebdn and Tr\u00e0n, 2012; DiCanio, 2014). This effect, called carryover, causes greater deformations in pitch than the effect of anticipating the following tone, or anticipatory assimilation (Gandour et al., 1993; Coto-Solano, 2017, 93-99). Therefore, the second part of the vowel would provide a clearer tonal cue. Table 3 below shows examples of the transcriptions generated by Kaldi for the validation utterances. In this particular example, the transcription from ToneSep is only off by one space (it doesn't separate the words e' ta 'so'). The transcription from AllFeats is also fairly good in terms of CER, but it is missing the pronoun be' 'you'. Finally, the ToneNasSepWL transcription misses several words. 
For example, it transcribed the word ts\u00edtsir 'young, small' as the phonetically similar ch\u00ecchi 'dog', and the adverb wake' 'right, anyways' as wa 'with'. [Table 3: Kaldi transcriptions of a validation utterance under three conditions. Utterance meaning: 'So you were young then, right?'; target utterance: e' ta be' b\u00e1k ia ts\u00edtsir wake'. ToneSep: e'ta be' b\u00e1k ia ts\u00edtsir wake' (CER: 3%); AllFeats: e'ta b\u00e1k ia ts\u00edtsir wake' (CER: 16%); ToneNasSepWL: e' ta wake' ch\u00ecchi wa (CER: 61%).]", "cite_spans": [ { "start": 611, "end": 633, "text": "(Meinedo et al., 2003)", "ref_id": "BIBREF48" }, { "start": 644, "end": 679, "text": "(Jyothi and Hasegawa-Johnson, 2015)", "ref_id": "BIBREF35" }, { "start": 744, "end": 745, "text": "4", "ref_id": null }, { "start": 1090, "end": 1100, "text": "(Xu, 1997;", "ref_id": "BIBREF66" }, { "start": 1101, "end": 1123, "text": "Nguy\u1ebdn and Tr\u00e0n, 2012;", "ref_id": "BIBREF54" }, { "start": 1124, "end": 1138, "text": "DiCanio, 2014)", "ref_id": "BIBREF13" }, { "start": 1290, "end": 1312, "text": "(Gandour et al., 1993;", "ref_id": "BIBREF21" }, { "start": 1313, "end": 1338, "text": "Coto-Solano, 2017, 93-99)", "ref_id": null } ], "ref_spans": [ { "start": 1457, "end": 1464, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Traditional ASR Results", "sec_num": "3" }, { "text": "End-to-end algorithms need massive amounts of data to train properly (Goodfellow et al., 2016; Glasmachers, 2017), so they are not the most appropriate way to train the small datasets characteristic of extremely low-resource languages. However, it would be useful to test whether the differences detected in the traditional ASR training are also visible in end-to-end training. A CTC loss algorithm with bidirectional RNNs was used, specifically that implemented in DeepSpeech. Two types of end-to-end learning were studied: First, models were trained using only the available Bribri data. This style of training will be called Just Bribri. Second, the Bribri data was incorporated into transfer learning models (Wang and Zheng, 2015; Kunze et al., 2017; Wang et al., 2020). DeepSpeech has existing English language models, 5 trained with 6-layer RNNs. The final two layers were removed and two new layers were grafted onto the RNN. The first four layers would, in theory, use their English model to encode the phonetic information, and the final two layers would receive that information and produce Bribri text as output. Removing two layers was found to be the optimal point of transfer learning, which matches previous results in the literature (Meyer, 2019; Hjortnaes et al., 2020). This training style will be called Transfer. Both the Just Bribri and Transfer models were trained for 20 epochs, and all other hyperparameters were the same as in the default installation of DeepSpeech. The six transcription conditions were used to train models in both training styles. As before, thirty models were trained for each condition. The utterances were randomly shuffled before preparing each model, and then 80% of the utterances were used in the training set (1397 utterances), 10% of the utterances were used for validation (174 utterances), and the final 10% were used for testing. After the training was complete, the median CER and WER were extracted for each model. The median CERs for the thirty models in each condition are shown in figure 2. 6 In the CTC training, the tables have completely turned: ToneSep and AllFeats are the worst performing conditions, and ToneNasSepWL has the best performance. 
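The error figures reported in this paper are medians of per-utterance error rates, taken again as a median over the 30 models trained per condition; the following sketch shows that bookkeeping (our own illustrative code using a plain edit distance, not the scoring scripts of Kaldi or DeepSpeech):

import statistics

def error_rate(ref, hyp):
    # Edit distance between two symbol sequences (characters for CER,
    # word tokens for WER), normalized by the reference length.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i][j] = min(d[i-1][j] + 1, d[i][j-1] + 1,
                          d[i-1][j-1] + (ref[i-1] != hyp[j-1]))
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def median_of_medians(runs, split=str):
    # runs: one list of (reference, hypothesis) pairs per trained model.
    # split=str keeps characters (CER); split=str.split uses word tokens (WER).
    per_run = [statistics.median(error_rate(split(r), split(h)) for r, h in run)
               for run in runs]
    return statistics.median(per_run)
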
Table 4 shows the median of the 30 medians for each transcription condition. The ToneNasSepWL models trained with Just Bribri have a median of 70% CER, whereas the AllFeats models have a median of 95%, a full 25% worse. As a matter of fact, both WL conditions now have the best performance. This pattern is also visible in the Transfer models: The ToneNasSepWL transcription has a CER of 86%, 7% better than the AllFeats transcription. The median WER is not shown because, for all conditions, the median of the thirty medians was WER=1.", "cite_spans": [ { "start": 69, "end": 94, "text": "(Goodfellow et al., 2016;", "ref_id": "BIBREF24" }, { "start": 95, "end": 113, "text": "Glasmachers, 2017)", "ref_id": "BIBREF23" }, { "start": 712, "end": 734, "text": "(Wang and Zheng, 2015;", "ref_id": "BIBREF64" }, { "start": 735, "end": 754, "text": "Kunze et al., 2017;", "ref_id": "BIBREF40" }, { "start": 755, "end": 773, "text": "Wang et al., 2020)", "ref_id": "BIBREF63" }, { "start": 1481, "end": 1494, "text": "(Meyer, 2019;", "ref_id": "BIBREF50" }, { "start": 1495, "end": 1518, "text": "Hjortnaes et al., 2020)", "ref_id": "BIBREF31" }, { "start": 2290, "end": 2291, "text": "6", "ref_id": null } ], "ref_spans": [ { "start": 2449, "end": 2456, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "End-to-End Results", "sec_num": "4" }, { "text": "There might be several reasons why the situation has reversed in the CTC models. First, providing an explicit symbol for the low tone might force DeepSpeech to look for more words in the transcription. As can be seen in table 5, the ToneNasSepWL transcription uses the character 4 for the explicit indication of the low tone, which is then eliminated in post-processing to produce a human-readable form. The explicit symbol for the low tone appears to force the CTC algorithm to keep looking for tones, and therefore words, whereas, in the other conditions, the CTC algorithm gives up on the search sooner. A second reason why WL performs better is that it provides a clear indication of where a syllable ends, and therefore makes the path through the CTC trellis simpler to navigate. Without an explicit low tone, any vowel could be followed by tones, vowels or consonants. On the other hand, when all tones have explicit marking, vowels can only be followed by a tone, which potentially simplifies the path to finding the word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "End-to-End Results", "sec_num": "4" }, { "text": "A third reason for this improvement might have to do with the size of the alphabet: The WL conditions have relatively few symbols for the vowels (12 symbols for ToneNasSepWL versus 48 for AllFeats), which would result in a smaller output layer for the RNNs. [Table 5: Example of DeepSpeech transcriptions for three of the experimental conditions. Utterance meaning: 'So you were young then, right?'; target utterance: e' ta be' b\u00e1k ia ts\u00edtsir wake'. ToneNasSepWL: DeepSpeech output e4' tax4 i4e4' i4, human-readable output e' ta ie' i (CER: 65%); ToneSep: e' (CER: 91%); AllFeats: i (CER: 93%).] Notice that, as with the triphones in Kaldi, the RNNs might be splitting the vowel into separate chunks. It would then proceed to identify the type of vowel from the first chunk, the tone from the second, and the nasality from the final part. It would also benefit from the bidirectionality of the neural networks, finding tonal cues in the surrounding segments without the disadvantages of GMM/HMM systems. 
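The post-processing step mentioned above, which maps the separated-symbol output back to ordinary orthography, can be sketched as follows (our own illustration: only the use of the character 4 for the explicit low tone comes from the description above; every other assignment in the example is an assumption):

def to_orthography(raw, aux_map):
    # aux_map sends each stand-alone tone/nasality character of the ASR alphabet
    # either to a combining diacritic or to the empty string (deletion).
    return ''.join(aux_map.get(ch, ch) for ch in raw)

# Example with assumed assignments: the low-tone character '4' is deleted (the low
# tone is unmarked in the orthography), and a hypothetical nasality character 'x'
# is folded back in as a combining mark below the preceding vowel.
# to_orthography(\"e4' tax4 i4e4' i4\", {'4': '', 'x': '\u0330'})
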
Finally, it should be noted that the Transfer models did not provide an improvement in performance. This is somewhat surprising; this might indicate that the Bribri dataset is too small to benefit from the transfer, or that the knowledge of English phones does not overlap sufficiently with the Bribri sound system to produce a boost. Even then, the Transfer models also showed effects due to the different transcription conditions, and they also benefited from separating the tone and nasal features from the vowel. These effects will have to be confirmed in the future with other end-to-end techniques, such as Listen, Attend and Spell algorithms (Chan et al., 2016) and wav2vec pretraining (Baevski et al., 2020).", "cite_spans": [ { "start": 1632, "end": 1651, "text": "(Chan et al., 2016)", "ref_id": "BIBREF6" }, { "start": 1676, "end": 1698, "text": "(Baevski et al., 2020)", "ref_id": null } ], "ref_spans": [ { "start": 425, "end": 432, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "End-to-End Results", "sec_num": "4" }, { "text": "While hand-engineered representations are suboptimal for high-resource languages, they can still be helpful in low-resource environments, where they can help set up a virtuous cycle of creating imperfect but rapid transcriptions, which can then be improved to create more training materials, improve ASR algorithms, and start helping documentation and revitalization projects right away.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "The results above show that performing relatively easy transformations in the input (e.g. not marking the most common tone, separating the tonal markings from the vowel) can lead to major improvements in performance. They also show that NLP practitioners and linguists can fruitfully combine their knowledge to understand the different features involved in the writing system of a language. Additionally, they provide evidence that the benefits of phonetic transcription can also be gained using semi-orthographic representations. The following recommendations provide a short summary of the results: (i) Separate the tones from the vowels. This will help ASR systems learn their regularities. (ii) Experiment with other features, such as nasality; if they modify the formants of the vowel, they should probably be grouped with the vowel.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "Finally, this work is the first attempt at training speech recognition for a Chibchan language. As shown in table 3 and Appendix A, it is feasible to transcribe these languages automatically, and these methods will be refined in the future to incorporate ASR into the documentation pipelines for this language family. [Table 6, Appendix A: Additional examples of Kaldi transcriptions for three of the experimental conditions, trained with triphone-trigram models; the numbers represent the character error rate (CER) between the transcription and the target sentence. Row fragments: be' mi'ke sul\u00e8 wa i w\u00e9bl\u00f6k daw\u00e1ska e' ta wa e' mi'ke sul\u00e8 wa w\u00e9bl\u00f6 daw\u00e1ska e' ta ma mi'ke sul\u00e8 wa w\u00e9bl\u00f6 daw\u00e1ska ta mi'ke sul\u00e8 wa w\u00e9r\u00f6.] 
The fourth example includes code-switching into Spanish.", "cite_spans": [], "ref_spans": [ { "start": 452, "end": 459, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "There are five vowels that can be both oral and nasal: /a, e, i, o, u/. There are two vowels, /I, U/, written '\u00eb' and '\u00f6', which can never be nasal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The models were trained using an Intel i7-10750H CPU, and each took approximately 5 minutes to train, for a total of 90 hours of processing. The electricity came from the ICE electric grid in Costa Rica, which uses 98% renewable energy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "A short experiment was run with the Mandarin Deep-Speech models as the base for transfer training, given that both languages are tonal. However, these models had worse performance than with transfer from the English model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The models were trained using the HPC infrastructure at Dartmouth College in New Hampshire. Each model used 16 CPUs and took approximately 65 minutes to train, for an approximate total of 78 hours of processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The author would like to thank Dr. Sof\u00eda Flores for her work on the Bribri corpus, the personnel of Research Computing Department at Dartmouth College for their support with processing resources, and Dr. Samantha Wray and three anonymous reviewers for their work in reviewing and offering helpful suggestions to improve this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "Evaluating Phonemic Transcription of Low-resource Tonal languages for Language Documentation", "authors": [ { "first": "Oliver", "middle": [], "last": "Adams", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Hilaria", "middle": [], "last": "Cruz", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Michaud", "suffix": "" } ], "year": 2018, "venue": "LREC 2018 (Language Resources and Evaluation Conference)", "volume": "", "issue": "", "pages": "3356--3365", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oliver Adams, Trevor Cohn, Graham Neubig, Hi- laria Cruz, Steven Bird, and Alexis Michaud. 2018. Evaluating Phonemic Transcription of Low-resource Tonal languages for Language Documentation. 
In LREC 2018 (Language Resources and Evaluation Conference), pages 3356-3365.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Massively Multilingual Adversarial Speech Recognition", "authors": [ { "first": "Oliver", "middle": [], "last": "Adams", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Wiesner", "suffix": "" }, { "first": "Shinji", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1904.02210" ] }, "num": null, "urls": [], "raw_text": "Oliver Adams, Matthew Wiesner, Shinji Watanabe, and David Yarowsky. 2019. Massively Multilingual Adversarial Speech Recognition. arXiv preprint arXiv:1904.02210.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations", "authors": [ { "first": "Alexei", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "Henry", "middle": [], "last": "Zhou", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2006.11477" ] }, "num": null, "urls": [], "raw_text": "Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A frame- work for self-supervised learning of speech represen- tations. arXiv preprint arXiv:2006.11477.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Automatic Speech Recognition for Under-resourced Languages: A Survey", "authors": [ { "first": "Laurent", "middle": [], "last": "Besacier", "suffix": "" }, { "first": "Etienne", "middle": [], "last": "Barnard", "suffix": "" }, { "first": "Alexey", "middle": [], "last": "Karpov", "suffix": "" }, { "first": "Tanja", "middle": [], "last": "Schultz", "suffix": "" } ], "year": 2014, "venue": "", "volume": "56", "issue": "", "pages": "85--100", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laurent Besacier, Etienne Barnard, Alexey Karpov, and Tanja Schultz. 2014. Automatic Speech Recog- nition for Under-resourced Languages: A Survey. Speech communication, 56:85-100.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Endangered Language Documentation: Bootstrapping a Chatino speech corpus, Forced Aligner, ASR", "authors": [ { "first": "Damir\u0107avar", "middle": [], "last": "Malgorzata\u0107avar", "suffix": "" }, { "first": "Hilaria", "middle": [], "last": "Cruz", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "4004--4011", "other_ids": {}, "num": null, "urls": [], "raw_text": "Malgorzata\u0106avar, Damir\u0106avar, and Hilaria Cruz. 2016. Endangered Language Documentation: Boot- strapping a Chatino speech corpus, Forced Aligner, ASR. 
In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 4004-4011.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition", "authors": [ { "first": "William", "middle": [], "last": "Chan", "suffix": "" }, { "first": "Navdeep", "middle": [], "last": "Jaitly", "suffix": "" }, { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" } ], "year": 2016, "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "4960--4964", "other_ids": {}, "num": null, "urls": [], "raw_text": "William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. 2016. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In 2016 IEEE International Confer- ence on Acoustics, Speech and Signal Processing (ICASSP), pages 4960-4964. IEEE.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Large Vocabulary Mandarin Speech Recognition with Different Approaches in Modeling Tones", "authors": [ { "first": "Eric", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Jianlai", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Shuo", "middle": [], "last": "Di", "suffix": "" }, { "first": "Chao", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Kai-Fu", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2000, "venue": "Sixth International Conference on Spoken Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Chang, Jianlai Zhou, Shuo Di, Chao Huang, and Kai-Fu Lee. 2000. Large Vocabulary Mandarin Speech Recognition with Different Approaches in Modeling Tones. In Sixth International Conference on Spoken Language Processing.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The Phonetics, Phonology and Phonotactics of the Bribri Language", "authors": [ { "first": "Rolando", "middle": [], "last": "Coto", "suffix": "" }, { "first": "-", "middle": [], "last": "Solano", "suffix": "" } ], "year": 2015, "venue": "2nd International Conference on Mesoamerican Linguistics", "volume": "25", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rolando Coto-Solano. 2015. The Phonetics, Phonol- ogy and Phonotactics of the Bribri Language. In 2nd International Conference on Mesoamerican Linguis- tics, volume 25. Los Angeles: California State Uni- versity.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Alineaci\u00f3n forzada sin entrenamiento para la anotaci\u00f3n autom\u00e1tica de corpus orales de las lenguas ind\u00edgenas de Costa Rica", "authors": [ { "first": "Rolando", "middle": [], "last": "Coto", "suffix": "" }, { "first": "-", "middle": [], "last": "Solano", "suffix": "" }, { "first": "Sof\u00eda", "middle": [], "last": "Flores Sol\u00f3rzano", "suffix": "" } ], "year": 2016, "venue": "K\u00e1nina", "volume": "40", "issue": "4", "pages": "175--199", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rolando Coto-Solano and Sof\u00eda Flores Sol\u00f3rzano. 2016. Alineaci\u00f3n forzada sin entrenamiento para la anotaci\u00f3n autom\u00e1tica de corpus orales de las lenguas ind\u00edgenas de Costa Rica. 
K\u00e1nina, 40(4):175-199.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Comparison of Two Forced Alignment Systems for Aligning Bribri Speech", "authors": [ { "first": "Rolando", "middle": [], "last": "Coto", "suffix": "" }, { "first": "-", "middle": [], "last": "Solano", "suffix": "" }, { "first": "Sof\u00eda", "middle": [], "last": "Flores Sol\u00f3rzano", "suffix": "" } ], "year": 2017, "venue": "CLEI Electron. J", "volume": "20", "issue": "1", "pages": "2--3", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rolando Coto-Solano and Sof\u00eda Flores Sol\u00f3rzano. 2017. Comparison of Two Forced Alignment Sys- tems for Aligning Bribri Speech. CLEI Electron. J., 20(1):2-1.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Tonal Reduction and Literacy in Me'phaa V\u00e1th\u00e1\u00e1", "authors": [ { "first": "Rolando Alberto Coto-", "middle": [], "last": "Solano", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rolando Alberto Coto-Solano. 2017. Tonal Reduction and Literacy in Me'phaa V\u00e1th\u00e1\u00e1. Ph.D. thesis, Uni- versity of Arizona.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Triqui Tonal Coarticulation and Contrast Preservation in Tonal Phonology", "authors": [ { "first": "Christian", "middle": [], "last": "Dicanio", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Workshop on the Sound Systems of Mexico and Central America", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian DiCanio. 2014. Triqui Tonal Coarticulation and Contrast Preservation in Tonal Phonology. In Proceedings of the Workshop on the Sound Systems of Mexico and Central America, New Haven, CT: Department of Linguistics, Yale University.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Coarticulation between Tone and Glottal Consonants in Itunyoso Trique", "authors": [ { "first": "T", "middle": [], "last": "Christian", "suffix": "" }, { "first": "", "middle": [], "last": "Dicanio", "suffix": "" } ], "year": 2012, "venue": "Journal of Phonetics", "volume": "40", "issue": "1", "pages": "162--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian T DiCanio. 2012. Coarticulation between Tone and Glottal Consonants in Itunyoso Trique. Journal of Phonetics, 40(1):162-176.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Recognition of Tones in Yor\u00f9b\u00e1 Speech: Experiments with Artificial Neural Networks", "authors": [ { "first": "O", "middle": [], "last": "D\u00e9t\u00fanj\u00ed \u00c0j\u00e0d\u00ed", "suffix": "" }, { "first": "O", "middle": [ "D\u00e9lo" ], "last": "B\u00ed", "suffix": "" } ], "year": 2008, "venue": "Speech, Audio, Image and Biomedical Signal Processing using Neural Networks", "volume": "", "issue": "", "pages": "23--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "O . d\u00e9t\u00fanj\u00ed \u00c0j\u00e0d\u00ed O . d\u00e9lo . b\u00ed. 2008. Recognition of Tones in Yor\u00f9b\u00e1 Speech: Experiments with Artificial Neural Networks. In Speech, Audio, Image and Biomedical Signal Processing using Neural Networks, pages 23- 47. 
Springer.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Neural Machine Translation Models with Back-Translation for the Extremely Low-Resource Indigenous Language Bribri", "authors": [ { "first": "Isaac", "middle": [], "last": "Feldman", "suffix": "" }, { "first": "Rolando", "middle": [], "last": "Coto-Solano", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "3965--3976", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isaac Feldman and Rolando Coto-Solano. 2020. Neural Machine Translation Models with Back- Translation for the Extremely Low-Resource Indige- nous Language Bribri. In Proceedings of the 28th International Conference on Computational Linguis- tics, pages 3965-3976.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Corpus oral pandialectal de la lengua bribri", "authors": [ { "first": "", "middle": [], "last": "Sof\u00eda Flores Sol\u00f3rzano", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sof\u00eda Flores Sol\u00f3rzano. 2017a. Corpus oral pandialec- tal de la lengua bribri. http://bribri.net.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "La modelizaci\u00f3n de la morfolog\u00eda verbal bribri -Modeling the Verbal Morphology of Bribri", "authors": [ { "first": "", "middle": [], "last": "Sof\u00eda Flores Sol\u00f3rzano", "suffix": "" } ], "year": 2019, "venue": "Revista de Procesamiento del Lenguaje Natural", "volume": "62", "issue": "", "pages": "85--92", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sof\u00eda Flores Sol\u00f3rzano. 2019. La modelizaci\u00f3n de la morfolog\u00eda verbal bribri -Modeling the Verbal Mor- phology of Bribri. Revista de Procesamiento del Lenguaje Natural, 62:85-92.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Un primer corpus pandialectal oral de la lengua bribri y su anotaci\u00f3n morfol\u00f3gica con base en el modelo de estados finitos", "authors": [ { "first": "Sof\u00eda Margarita Flores", "middle": [], "last": "Sol\u00f3rzano", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sof\u00eda Margarita Flores Sol\u00f3rzano. 2017b. Un primer corpus pandialectal oral de la lengua bribri y su an- otaci\u00f3n morfol\u00f3gica con base en el modelo de esta- dos finitos. Ph.D. thesis, Universidad Aut\u00f3noma de Madrid.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Building Speech Recognition Systems for Language Documentation: The Co-EDL Endangered Language Pipeline and Inference System (ELPIS)", "authors": [ { "first": "Ben", "middle": [], "last": "Foley", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Arnold", "suffix": "" }, { "first": "Rolando", "middle": [], "last": "Coto-Solano", "suffix": "" }, { "first": "Gautier", "middle": [], "last": "Durantin", "suffix": "" }, { "first": "T", "middle": [ "Mark" ], "last": "Ellison", "suffix": "" }, { "first": "Daan", "middle": [], "last": "Van Esch", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Heath", "suffix": "" }, { "first": "Frantisek", "middle": [], "last": "Kratochvil", "suffix": "" }, { "first": "Zara", "middle": [], "last": "Maxwell-Smith", "suffix": "" }, { "first": "David", "middle": [], "last": "Nash", "suffix": "" } ], "year": 2018, "venue": "Proc. The 6th Intl. 
Workshop on Spoken Language Technologies for Under-Resourced Languages", "volume": "", "issue": "", "pages": "205--209", "other_ids": { "DOI": [ "10.21437/SLTU.2018-43" ] }, "num": null, "urls": [], "raw_text": "Ben Foley, Josh Arnold, Rolando Coto-Solano, Gau- tier Durantin, T. Mark Ellison, Daan van Esch, Scott Heath, Frantisek Kratochvil, Zara Maxwell- Smith, David Nash, Ola Olsson, Mark Richards, Nay San, Hywel Stoakes, Nick Thieberger, and Janet Wiles. 2018. Building Speech Recognition Systems for Language Documentation: The Co- EDL Endangered Language Pipeline and Inference System (ELPIS). In Proc. The 6th Intl. Work- shop on Spoken Language Technologies for Under- Resourced Languages, pages 205-209.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Anticipatory tonal coarticulation in thai noun compounds after unilateral brain damage", "authors": [ { "first": "Jack", "middle": [], "last": "Gandour", "suffix": "" }, { "first": "Suvit", "middle": [], "last": "Ponglorpisit", "suffix": "" }, { "first": "Sumalee", "middle": [], "last": "Dechongkit", "suffix": "" }, { "first": "Fuangfa", "middle": [], "last": "Khunadorn", "suffix": "" }, { "first": "Prasert", "middle": [], "last": "Boongird", "suffix": "" }, { "first": "Siripong", "middle": [], "last": "Potisuk", "suffix": "" } ], "year": 1993, "venue": "Brain and language", "volume": "45", "issue": "1", "pages": "1--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jack Gandour, Suvit Ponglorpisit, Sumalee De- chongkit, Fuangfa Khunadorn, Prasert Boongird, and Siripong Potisuk. 1993. Anticipatory tonal coar- ticulation in thai noun compounds after unilateral brain damage. Brain and language, 45(1):1-20.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Automatic Speech Recognition for African Languages with Vowel Length Contrast", "authors": [ { "first": "Elodie", "middle": [], "last": "Gauthier", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Besacier", "suffix": "" }, { "first": "Sylvie", "middle": [], "last": "Voisin", "suffix": "" } ], "year": 2016, "venue": "Procedia Computer Science", "volume": "81", "issue": "", "pages": "136--143", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elodie Gauthier, Laurent Besacier, and Sylvie Voisin. 2016. Automatic Speech Recognition for African Languages with Vowel Length Contrast. Procedia Computer Science, 81:136-143.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Limits of End-to-end Learning", "authors": [ { "first": "Tobias", "middle": [], "last": "Glasmachers", "suffix": "" } ], "year": 2017, "venue": "Asian Conference on Machine Learning", "volume": "", "issue": "", "pages": "17--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tobias Glasmachers. 2017. Limits of End-to-end Learning. In Asian Conference on Machine Learn- ing, pages 17-32. PMLR.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Deep Learning", "authors": [ { "first": "Ian", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press. http://www. 
deeplearningbook.org.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks", "authors": [ { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" }, { "first": "Santiago", "middle": [], "last": "Fern\u00e1ndez", "suffix": "" }, { "first": "Faustino", "middle": [], "last": "Gomez", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 23rd international conference on Machine learning", "volume": "", "issue": "", "pages": "369--376", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Graves, Santiago Fern\u00e1ndez, Faustino Gomez, and J\u00fcrgen Schmidhuber. 2006. Connectionist Tem- poral Classification: Labelling Unsegmented Se- quence Data with Recurrent Neural Networks. In Proceedings of the 23rd international conference on Machine learning, pages 369-376.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Automatic transcription challenges for Inuktitut, a lowresource polysynthetic language", "authors": [ { "first": "Vishwa", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Gilles", "middle": [], "last": "Boulianne", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "2521--2527", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vishwa Gupta and Gilles Boulianne. 2020a. Auto- matic transcription challenges for Inuktitut, a low- resource polysynthetic language. In Proceedings of the 12th Language Resources and Evaluation Con- ference, pages 2521-2527.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Speech Transcription Challenges for Resource Constrained Indigenous Language Cree", "authors": [ { "first": "Vishwa", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Gilles", "middle": [], "last": "Boulianne", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)", "volume": "", "issue": "", "pages": "362--367", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vishwa Gupta and Gilles Boulianne. 2020b. Speech Transcription Challenges for Resource Constrained Indigenous Language Cree. 
In Proceedings of the 1st Joint Workshop on Spoken Language Technolo- gies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL), pages 362-367.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Deep Speech: Scaling up End-to-End Speech Recognition", "authors": [ { "first": "Awni", "middle": [], "last": "Hannun", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Case", "suffix": "" }, { "first": "Jared", "middle": [], "last": "Casper", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Catanzaro", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Diamos", "suffix": "" }, { "first": "Erich", "middle": [], "last": "Elsen", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Prenger", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Satheesh", "suffix": "" }, { "first": "Shubho", "middle": [], "last": "Sengupta", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Coates", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.5567" ] }, "num": null, "urls": [], "raw_text": "Awni Hannun, Carl Case, Jared Casper, Bryan Catan- zaro, Greg Diamos, Erich Elsen, Ryan Prenger, San- jeev Satheesh, Shubho Sengupta, Adam Coates, et al. 2014. Deep Speech: Scaling up End-to-End Speech Recognition. arXiv preprint arXiv:1412.5567.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Effects of Obstruent Consonants on Fundamental Frequency at Vowel Onset in English", "authors": [ { "first": "M", "middle": [], "last": "Helen", "suffix": "" }, { "first": "", "middle": [], "last": "Hanson", "suffix": "" } ], "year": 2009, "venue": "The Journal of the Acoustical Society of America", "volume": "125", "issue": "1", "pages": "425--441", "other_ids": {}, "num": null, "urls": [], "raw_text": "Helen M Hanson. 2009. Effects of Obstruent Conso- nants on Fundamental Frequency at Vowel Onset in English. The Journal of the Acoustical Society of America, 125(1):425-441.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "KenLM: Faster and smaller language model queries", "authors": [ { "first": "Kenneth", "middle": [], "last": "Heafield", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "187--197", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187-197.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Towards a speech recognizer for Komi, an endangered and low-resource Uralic language", "authors": [ { "first": "Nils", "middle": [], "last": "Hjortnaes", "suffix": "" }, { "first": "Niko", "middle": [], "last": "Partanen", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Rie\u00dfler", "suffix": "" }, { "first": "Francis", "middle": [ "M" ], "last": "Tyers", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Sixth International Workshop on Computational Linguistics of Uralic Languages", "volume": "", "issue": "", "pages": "31--37", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nils Hjortnaes, Niko Partanen, Michael Rie\u00dfler, and Francis M Tyers. 2020. Towards a speech recognizer for Komi, an endangered and low-resource Uralic language. 
In Proceedings of the Sixth International Workshop on Computational Linguistics of Uralic Languages, pages 31-37.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Poblaci\u00f3n total en territorios ind\u00edgenas por autoidentificaci\u00f3n a la etnia ind\u00edgena y habla de alguna lengua ind\u00edgena, seg\u00fan pueblo y territorio ind\u00edgena", "authors": [ { "first": "", "middle": [], "last": "INEC", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "INEC. 2011. Poblaci\u00f3n total en territorios ind\u00edgenas por autoidentificaci\u00f3n a la etnia ind\u00edgena y habla de alguna lengua ind\u00edgena, seg\u00fan pueblo y territorio ind\u00edgena. In Instituto Nacional de Estad\u00edstica y Censos, editor, Censo 2011.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Se' tt\u00f3 bribri ie Hablemos en bribri", "authors": [ { "first": "Carla Victoria", "middle": [], "last": "Jara Murillo", "suffix": "" }, { "first": "Al\u00ed", "middle": [], "last": "Garc\u00eda Segura", "suffix": "" } ], "year": 2013, "venue": "EDigital", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carla Victoria Jara Murillo and Al\u00ed Garc\u00eda Segura. 2013. Se' tt\u00f3 bribri ie Hablemos en bribri. EDigital.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "ASR for Documenting Acutely Under-resourced Indigenous Languages", "authors": [ { "first": "Robbie", "middle": [], "last": "Jimerson", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Prud'hommeaux", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robbie Jimerson and Emily Prud'hommeaux. 2018. ASR for Documenting Acutely Under-resourced Indigenous Languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Improved Hindi broadcast ASR by adapting the language model and pronunciation model using a priori syntactic and morphophonemic knowledge", "authors": [ { "first": "Preethi", "middle": [], "last": "Jyothi", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Hasegawa-Johnson", "suffix": "" } ], "year": 2015, "venue": "Sixteenth Annual Conference of the International Speech Communication Association", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Preethi Jyothi and Mark Hasegawa-Johnson. 2015. Improved Hindi broadcast ASR by adapting the language model and pronunciation model using a priori syntactic and morphophonemic knowledge.
In Sixteenth Annual Conference of the International Speech Communication Association.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Automatic Speech Recognition System for Tonal Languages: State-of-the-Art Survey", "authors": [ { "first": "Jaspreet", "middle": [], "last": "Kaur", "suffix": "" }, { "first": "Amitoj", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Virender", "middle": [], "last": "Kadyan", "suffix": "" } ], "year": 2020, "venue": "Archives of Computational Methods in Engineering", "volume": "", "issue": "", "pages": "1--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jaspreet Kaur, Amitoj Singh, and Virender Kadyan. 2020. Automatic Speech Recognition System for Tonal Languages: State-of-the-Art Survey. Archives of Computational Methods in Engineering, pages 1-30.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Using tone information in Thai spelling speech recognition", "authors": [ { "first": "Natthawut", "middle": [], "last": "Kertkeidkachorn", "suffix": "" }, { "first": "Proadpran", "middle": [], "last": "Punyabukkana", "suffix": "" }, { "first": "Atiwong", "middle": [], "last": "Suchato", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 28th Pacific Asia Conference on Language, Information and Computing", "volume": "", "issue": "", "pages": "178--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Natthawut Kertkeidkachorn, Proadpran Punyabukkana, and Atiwong Suchato. 2014. Using tone information in Thai spelling speech recognition. In Proceedings of the 28th Pacific Asia Conference on Language, Information and Computing, pages 178-184.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "A Tutorial on Acoustic Phonetic Feature Extraction for Automatic Speech Recognition (ASR) and Text-to-Speech (TTS) Applications in African Languages", "authors": [ { "first": "Ettien", "middle": [], "last": "Koffi", "suffix": "" } ], "year": 2020, "venue": "Linguistic Portfolios", "volume": "9", "issue": "1", "pages": "11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ettien Koffi. 2020. A Tutorial on Acoustic Phonetic Feature Extraction for Automatic Speech Recognition (ASR) and Text-to-Speech (TTS) Applications in African Languages. Linguistic Portfolios, 9(1):11.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Diccionario digital biling\u00fce bribri", "authors": [ { "first": "Haakon", "middle": [ "S" ], "last": "Krohn", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haakon S. Krohn. 2020. Diccionario digital biling\u00fce bribri. http://www.haakonkrohn.
com/bribri.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Transfer Learning for Speech Recognition on a Budget", "authors": [ { "first": "Julius", "middle": [], "last": "Kunze", "suffix": "" }, { "first": "Louis", "middle": [], "last": "Kirsch", "suffix": "" }, { "first": "Ilia", "middle": [], "last": "Kurenkov", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Krug", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Johannsmeier", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Stober", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1706.00290" ] }, "num": null, "urls": [], "raw_text": "Julius Kunze, Louis Kirsch, Ilia Kurenkov, Andreas Krug, Jens Johannsmeier, and Sebastian Stober. 2017. Transfer Learning for Speech Recognition on a Budget. arXiv preprint arXiv:1706.00290.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Using tone information in Cantonese continuous speech recognition", "authors": [ { "first": "Tan", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Wai", "middle": [], "last": "Lau", "suffix": "" }, { "first": "Yiu", "middle": [ "Wing" ], "last": "Wong", "suffix": "" }, { "first": "", "middle": [], "last": "Ching", "suffix": "" } ], "year": 2002, "venue": "ACM Transactions on Asian Language Information Processing (TALIP)", "volume": "1", "issue": "1", "pages": "83--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tan Lee, Wai Lau, Yiu Wing Wong, and PC Ching. 2002. Using tone information in Cantonese continu- ous speech recognition. ACM Transactions on Asian Language Information Processing (TALIP), 1(1):83- 102.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Suprasegmental Features of Speech. Contemporary issues in experimental phonetics", "authors": [ { "first": "Ilse", "middle": [], "last": "Lehiste", "suffix": "" }, { "first": "J", "middle": [], "last": "Norman", "suffix": "" }, { "first": "", "middle": [], "last": "Lass", "suffix": "" } ], "year": 1976, "venue": "", "volume": "225", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilse Lehiste and Norman J Lass. 1976. Suprasegmental Features of Speech. Contemporary issues in experi- mental phonetics, 225:239.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Developing a Shared Task for Speech Processing on Endangered Languages", "authors": [ { "first": "Gina-Anne", "middle": [], "last": "Levow", "suffix": "" }, { "first": "Emily", "middle": [ "P" ], "last": "Ahn", "suffix": "" }, { "first": "Emily", "middle": [ "M" ], "last": "Bender", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Workshop on Computational Methods for Endangered Languages", "volume": "1", "issue": "", "pages": "96--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gina-Anne Levow, Emily P Ahn, and Emily M Ben- der. 2021. Developing a Shared Task for Speech Processing on Endangered Languages. 
In Proceed- ings of the Workshop on Computational Methods for Endangered Languages, volume 1, pages 96-106.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Universal phone recognition with a multilingual allophone system", "authors": [ { "first": "Xinjian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Siddharth", "middle": [], "last": "Dalmia", "suffix": "" }, { "first": "Juncheng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Littell", "suffix": "" }, { "first": "Jiali", "middle": [], "last": "Yao", "suffix": "" }, { "first": ";", "middle": [], "last": "David R Mortensen", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" } ], "year": 2020, "venue": "ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "8249--8253", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinjian Li, Siddharth Dalmia, Juncheng Li, Matthew Lee, Patrick Littell, Jiali Yao, Antonios Anasta- sopoulos, David R Mortensen, Graham Neubig, Alan W Black, et al. 2020. Universal phone recognition with a multilingual allophone system. In ICASSP 2020-2020 IEEE International Confer- ence on Acoustics, Speech and Signal Processing (ICASSP), pages 8249-8253. IEEE.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "The World Atlas of Language Structures Online", "authors": [ { "first": "Ian", "middle": [], "last": "Maddieson", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Maddieson. 2013. Tone. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Lan- guage Structures Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Diccionario Fraseol\u00f3gico Bribri-Espa\u00f1ol Espa\u00f1ol-Bribri", "authors": [ { "first": "Enrique", "middle": [ "Margery" ], "last": "", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Enrique Margery. 2005. Diccionario Fraseol\u00f3gico Bribri-Espa\u00f1ol Espa\u00f1ol-Bribri, second edition. Edi- torial de la Universidad de Costa Rica.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Speech corpus of Ainu folklore and end-to-end speech recognition for Ainu language", "authors": [ { "first": "Kohei", "middle": [], "last": "Matsuura", "suffix": "" }, { "first": "Sei", "middle": [], "last": "Ueno", "suffix": "" }, { "first": "Masato", "middle": [], "last": "Mimura", "suffix": "" }, { "first": "Shinsuke", "middle": [], "last": "Sakai", "suffix": "" }, { "first": "Tatsuya", "middle": [], "last": "Kawahara", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.06675" ] }, "num": null, "urls": [], "raw_text": "Kohei Matsuura, Sei Ueno, Masato Mimura, Shin- suke Sakai, and Tatsuya Kawahara. 2020. Speech corpus of Ainu folklore and end-to-end speech recognition for Ainu language. arXiv preprint arXiv:2002.06675.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "AUDIMUS. 
media: a Broadcast News speech recognition system for the European Portuguese language", "authors": [ { "first": "Hugo", "middle": [], "last": "Meinedo", "suffix": "" }, { "first": "Diamantino", "middle": [], "last": "Caseiro", "suffix": "" }, { "first": "Joao", "middle": [], "last": "Neto", "suffix": "" }, { "first": "Isabel", "middle": [], "last": "Trancoso", "suffix": "" } ], "year": 2003, "venue": "International Workshop on Computational Processing of the Portuguese Language", "volume": "", "issue": "", "pages": "9--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hugo Meinedo, Diamantino Caseiro, Joao Neto, and Isabel Trancoso. 2003. AUDIMUS. media: a Broadcast News speech recognition system for the European Portuguese language. In International Workshop on Computational Processing of the Por- tuguese Language, pages 9-17. Springer.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Models of tone for tonal and non-tonal languages", "authors": [ { "first": "Florian", "middle": [], "last": "Metze", "suffix": "" }, { "first": "A", "middle": [ "W" ], "last": "Zaid", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Sheikh", "suffix": "" }, { "first": "Jonas", "middle": [], "last": "Waibel", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gehring", "suffix": "" }, { "first": "Quoc", "middle": [], "last": "Kilgour", "suffix": "" }, { "first": "", "middle": [], "last": "Bao Nguyen", "suffix": "" } ], "year": 2013, "venue": "IEEE Workshop on Automatic Speech Recognition and Understanding", "volume": "", "issue": "", "pages": "261--266", "other_ids": {}, "num": null, "urls": [], "raw_text": "Florian Metze, Zaid AW Sheikh, Alex Waibel, Jonas Gehring, Kevin Kilgour, Quoc Bao Nguyen, et al. 2013. Models of tone for tonal and non-tonal lan- guages. In 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, pages 261- 266. IEEE.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Multi-task and transfer learning in low-resource speech recognition", "authors": [ { "first": "Josh", "middle": [], "last": "Meyer", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Josh Meyer. 2019. Multi-task and transfer learning in low-resource speech recognition. Ph.D. thesis, The University of Arizona.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Phonetic lessons from automatic phonemic transcription: preliminary reflections on Na (Sino-Tibetan) and Tsuut'ina (Dene) data", "authors": [ { "first": "Alexis", "middle": [], "last": "Michaud", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Adams", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Cox", "suffix": "" }, { "first": "S\u00e9verine", "middle": [], "last": "Guillaume", "suffix": "" } ], "year": 2019, "venue": "ICPhS XIX (19th International Congress of Phonetic Sciences)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Michaud, Oliver Adams, Christopher Cox, and S\u00e9verine Guillaume. 2019. Phonetic lessons from automatic phonemic transcription: preliminary re- flections on Na (Sino-Tibetan) and Tsuut'ina (Dene) data. 
In ICPhS XIX (19th International Congress of Phonetic Sciences).", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Atlas of the World's Languages in Danger", "authors": [ { "first": "Christopher", "middle": [], "last": "Moseley", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher Moseley. 2010. Atlas of the World's Languages in Danger. Unesco.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Development of a Vietnamese Large Vocabulary Continuous Speech Recognition System under Noisy Conditions", "authors": [ { "first": "Quoc Bao", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Van Tuan", "middle": [], "last": "Mai", "suffix": "" }, { "first": "Quang Trung", "middle": [], "last": "Le", "suffix": "" }, { "first": "Ba Quyen", "middle": [], "last": "Dam", "suffix": "" }, { "first": "Van Hai", "middle": [], "last": "Do", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Ninth International Symposium on Information and Communication Technology", "volume": "", "issue": "", "pages": "222--226", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quoc Bao Nguyen, Van Tuan Mai, Quang Trung Le, Ba Quyen Dam, and Van Hai Do. 2018. Development of a Vietnamese Large Vocabulary Continuous Speech Recognition System under Noisy Conditions. In Proceedings of the Ninth International Symposium on Information and Communication Technology, pages 222-226.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Tonal Coarticulation on Particles in Vietnamese Language", "authors": [ { "first": "Th\u1ecb Lan", "middle": [], "last": "Nguy\u1ec5n", "suffix": "" }, { "first": "\u0110\u1ed7 \u0110\u1ea1t", "middle": [], "last": "Tr\u1ea7n", "suffix": "" } ], "year": 2012, "venue": "International Conference on Asian Language Processing", "volume": "", "issue": "", "pages": "221--224", "other_ids": {}, "num": null, "urls": [], "raw_text": "Th\u1ecb Lan Nguy\u1ec5n and \u0110\u1ed7 \u0110\u1ea1t Tr\u1ea7n. 2012. Tonal Coarticulation on Particles in Vietnamese Language. In International Conference on Asian Language Processing, pages 221-224.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Context-dependent deep neural networks for commercial Mandarin speech recognition applications", "authors": [ { "first": "Jianwei", "middle": [], "last": "Niu", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Na", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2013, "venue": "Asia-Pacific Signal and Information Processing Association Annual Summit and Conference", "volume": "", "issue": "", "pages": "1--5", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jianwei Niu, Lei Xie, Lei Jia, and Na Hu. 2013. Context-dependent deep neural networks for commercial Mandarin speech recognition applications. In 2013 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, pages 1-5.
IEEE.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Speech Recognition for Endangered and Extinct Samoyedic languages", "authors": [ { "first": "Niko", "middle": [], "last": "Partanen", "suffix": "" }, { "first": "Mika", "middle": [], "last": "H\u00e4m\u00e4l\u00e4inen", "suffix": "" }, { "first": "Tiina", "middle": [], "last": "Klooster", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2012.05331" ] }, "num": null, "urls": [], "raw_text": "Niko Partanen, Mika H\u00e4m\u00e4l\u00e4inen, and Tiina Klooster. 2020. Speech Recognition for Endangered and Extinct Samoyedic languages. arXiv preprint arXiv:2012.05331.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "The Kaldi Speech Recognition Toolkit", "authors": [ { "first": "Daniel", "middle": [], "last": "Povey", "suffix": "" }, { "first": "Arnab", "middle": [], "last": "Ghoshal", "suffix": "" }, { "first": "Gilles", "middle": [], "last": "Boulianne", "suffix": "" }, { "first": "Lukas", "middle": [], "last": "Burget", "suffix": "" }, { "first": "Ondrej", "middle": [], "last": "Glembek", "suffix": "" }, { "first": "Nagendra", "middle": [], "last": "Goel", "suffix": "" }, { "first": "Mirko", "middle": [], "last": "Hannemann", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Motlicek", "suffix": "" }, { "first": "Yanmin", "middle": [], "last": "Qian", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Schwarz", "suffix": "" } ], "year": 2011, "venue": "IEEE 2011 workshop on automatic speech recognition and understanding", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al. 2011. The Kaldi Speech Recogni- tion Toolkit. In IEEE 2011 workshop on automatic speech recognition and understanding, CONF. IEEE Signal Processing Society.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Lenguas en peligro en Costa Rica: vitalidad, documentaci\u00f3n y descripci\u00f3n", "authors": [ { "first": "Carlos", "middle": [], "last": "S\u00e1nchez", "suffix": "" }, { "first": "Avenda\u00f1o", "middle": [], "last": "", "suffix": "" } ], "year": 2013, "venue": "Revista K\u00e1\u00f1ina", "volume": "37", "issue": "1", "pages": "219--250", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlos S\u00e1nchez Avenda\u00f1o. 2013. Lenguas en peligro en Costa Rica: vitalidad, documentaci\u00f3n y descrip- ci\u00f3n. Revista K\u00e1\u00f1ina, 37(1):219-250.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Search by Voice in Mandarin Chinese", "authors": [ { "first": "Jiulong", "middle": [], "last": "Shan", "suffix": "" }, { "first": "Genqing", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Zhihong", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Xiliu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Jansche", "suffix": "" }, { "first": "Pedro", "middle": [ "J" ], "last": "Moreno", "suffix": "" } ], "year": 2010, "venue": "Eleventh Annual Conference of the International Speech Communication Association", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiulong Shan, Genqing Wu, Zhihong Hu, Xiliu Tang, Martin Jansche, and Pedro J Moreno. 2010. Search by Voice in Mandarin Chinese. 
In Eleventh Annual Conference of the International Speech Communica- tion Association.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Leveraging End-to-End ASR for Endangered Language Documentation: An Empirical Study on Yolox\u00f3chitl Mixtec", "authors": [ { "first": "Jiatong", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Amith", "suffix": "" }, { "first": "Rey", "middle": [], "last": "Castillo Garc\u00eda", "suffix": "" }, { "first": "Esteban", "middle": [ "Guadalupe" ], "last": "Sierra", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Duh", "suffix": "" }, { "first": "Shinji", "middle": [], "last": "Watanabe", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2101.10877" ] }, "num": null, "urls": [], "raw_text": "Jiatong Shi, Jonathan Amith, Rey Castillo Garc\u00eda, Es- teban Guadalupe Sierra, Kevin Duh, and Shinji Watanabe. 2021. Leveraging End-to-End ASR for Endangered Language Documentation: An Empir- ical Study on Yolox\u00f3chitl Mixtec. arXiv preprint arXiv:2101.10877.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "The Phonology and Phonetics of Consonant-Tone Interaction", "authors": [ { "first": "Katrina", "middle": [ "Elizabeth" ], "last": "Tang", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katrina Elizabeth Tang. 2008. The Phonology and Pho- netics of Consonant-Tone Interaction. Ph.D. thesis.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "Synthetic data augmentation for improving low-resource ASR", "authors": [ { "first": "Bao", "middle": [], "last": "Thai", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Jimerson", "suffix": "" }, { "first": "Dominic", "middle": [], "last": "Arcoraci", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Prud'hommeaux", "suffix": "" }, { "first": "Raymond", "middle": [], "last": "Ptucha", "suffix": "" } ], "year": 2019, "venue": "2019 IEEE Western New York Image and Signal Processing Workshop (WNYISPW)", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bao Thai, Robert Jimerson, Dominic Arcoraci, Emily Prud'hommeaux, and Raymond Ptucha. 2019. Syn- thetic data augmentation for improving low-resource ASR. In 2019 IEEE Western New York Image and Signal Processing Workshop (WNYISPW), pages 1- 9. IEEE.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "Improving Cross-Lingual Transfer Learning for Endto-End Speech Recognition with Speech Translation", "authors": [ { "first": "Changhan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Juan", "middle": [], "last": "Pino", "suffix": "" }, { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2006.05474" ] }, "num": null, "urls": [], "raw_text": "Changhan Wang, Juan Pino, and Jiatao Gu. 2020. Im- proving Cross-Lingual Transfer Learning for End- to-End Speech Recognition with Speech Translation. 
arXiv preprint arXiv:2006.05474.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "Transfer learning for speech and language processing", "authors": [ { "first": "Dong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Thomas Fang", "middle": [], "last": "Zheng", "suffix": "" } ], "year": 2015, "venue": "2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (AP-SIPA)", "volume": "", "issue": "", "pages": "1225--1237", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dong Wang and Thomas Fang Zheng. 2015. Transfer learning for speech and language processing. In 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (AP-SIPA), pages 1225-1237. IEEE.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "Phonemic transcription of low-resource languages: To what extent can preprocessing be automated?", "authors": [ { "first": "Guillaume", "middle": [], "last": "Wisniewski", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Michaud", "suffix": "" }, { "first": "S\u00e9verine", "middle": [], "last": "Guillaume", "suffix": "" } ], "year": 2020, "venue": "1st Joint SLTU (Spoken Language Technologies for Under-resourced languages) and CCURL (Collaboration and Computing for Under-Resourced Languages) Workshop", "volume": "", "issue": "", "pages": "306--315", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Wisniewski, Alexis Michaud, and S\u00e9verine Guillaume. 2020. Phonemic transcription of low-resource languages: To what extent can preprocessing be automated? In 1st Joint SLTU (Spoken Language Technologies for Under-resourced languages) and CCURL (Collaboration and Computing for Under-Resourced Languages) Workshop, pages 306-315. European Language Resources Association (ELRA).", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "Contextual Tonal Variations in Mandarin", "authors": [ { "first": "Yi", "middle": [], "last": "Xu", "suffix": "" } ], "year": 1997, "venue": "Journal of phonetics", "volume": "25", "issue": "1", "pages": "61--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Xu. 1997. Contextual Tonal Variations in Mandarin. Journal of phonetics, 25(1):61-83.", "links": null }, "BIBREF67": { "ref_id": "b67", "title": "Tone", "authors": [ { "first": "Moira", "middle": [], "last": "Yip", "suffix": "" } ], "year": 2002, "venue": "Cambridge Textbooks in Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moira Yip. 2002. Tone. Cambridge Textbooks in Linguistics. Cambridge University Press.", "links": null }, "BIBREF68": { "ref_id": "b68", "title": "A Review of Yor\u00f9b\u00e1 Automatic Speech Recognition", "authors": [ { "first": "Shahrul Azmi", "middle": [], "last": "Mohd Yusof", "suffix": "" }, { "first": "Abdulwahab Funsho", "middle": [], "last": "Atanda", "suffix": "" }, { "first": "M", "middle": [], "last": "Hariharan", "suffix": "" } ], "year": 2013, "venue": "2013 IEEE 3rd International Conference on System Engineering and Technology", "volume": "", "issue": "", "pages": "242--247", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shahrul Azmi Mohd Yusof, Abdulwahab Funsho Atanda, and M Hariharan. 2013. A Review of Yor\u00f9b\u00e1 Automatic Speech Recognition. In 2013 IEEE 3rd International Conference on System Engineering and Technology, pages 242-247.
IEEE.", "links": null }, "BIBREF69": { "ref_id": "b69", "title": "Towards building an automatic transcription system for language documentation: Experiences from Muyu", "authors": [ { "first": "Alexander", "middle": [], "last": "Zahrer", "suffix": "" }, { "first": "Andrej", "middle": [], "last": "Zgank", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Schuppler", "suffix": "" } ], "year": 2020, "venue": "Proceedings of The 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "2893--2900", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Zahrer, Andrej Zgank, and Barbara Schup- pler. 2020. Towards building an automatic transcrip- tion system for language documentation: Experi- ences from Muyu. In Proceedings of The 12th Lan- guage Resources and Evaluation Conference, pages 2893-2900.", "links": null }, "BIBREF70": { "ref_id": "b70", "title": "Automatic Speech Recognition of Quechua Language Using HMM Toolkit", "authors": [ { "first": "Rodolfo", "middle": [], "last": "Zevallos", "suffix": "" }, { "first": "Johanna", "middle": [], "last": "Cordova", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Camacho", "suffix": "" } ], "year": 2019, "venue": "Annual International Symposium on Information Management and Big Data", "volume": "", "issue": "", "pages": "61--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rodolfo Zevallos, Johanna Cordova, and Luis Ca- macho. 2019. Automatic Speech Recognition of Quechua Language Using HMM Toolkit. In An- nual International Symposium on Information Man- agement and Big Data, pages 61-68. Springer.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Medians for character error rate (CER) and word error rate (WER) for Kaldi training, using different phone (monophone, triphone) and language models (unigrams, bigrams, trigrams).", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "Medians for character error rate (CER) for DeepSpeech models.", "num": null, "uris": null, "type_str": "figure" }, "TABREF1": { "content": "", "text": "", "num": null, "html": null, "type_str": "table" }, "TABREF3": { "content": "
", "text": "", "num": null, "html": null, "type_str": "table" }, "TABREF4": { "content": "
", "text": "Example of Kaldi transcriptions for three of the experimental conditions, trained with triphone-trigram models. More examples are shown in Appendix A.", "num": null, "html": null, "type_str": "table" }, "TABREF6": { "content": "
", "text": "Median character error rate (CER) for models trained with CTC (DeepSpeech). Max\u2206 indicates the difference between the worst and the best models.", "num": null, "html": null, "type_str": "table" } } } }