|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:33:29.729464Z" |
|
}, |
|
"title": "Speech Technology for Everyone: Automatic Speech Recognition for Non-Native English with Transfer Learning", |
|
"authors": [ |
|
{ |
|
"first": "Toshiko", |
|
"middle": [], |
|
"last": "Shibano", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "The University of British Columbia", |
|
"location": {} |
|
}, |
|
"email": "tshibano@student.ubc.ca" |
|
}, |
|
{ |
|
"first": "Xinyi", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "The University of British Columbia", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Mia", |
|
"middle": [ |
|
"Taige" |
|
], |
|
"last": "Li", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "The University of British Columbia", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Haejin", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "The University of British Columbia", |
|
"location": {} |
|
}, |
|
"email": "haejin2909@gmail.com" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Sullivan", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "The University of British Columbia", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Muhammad", |
|
"middle": [], |
|
"last": "Abdul-Mageed", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "The University of British Columbia", |
|
"location": {} |
|
}, |
|
"email": "muhammad.mageed@ubc.ca" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "To address the performance gap of English ASR models on L2 English speakers, we evaluate fine-tuning of pretrained wav2vec 2.0 models (Baevski et al., 2020; Xu et al., 2021) on L2-ARCTIC, a non-native English speech corpus (Zhao et al., 2018) under different training settings. We compare (a) models trained with a combination of diverse accents to ones trained with only specific accents and (b) results from different single-accent models. Our experiments demonstrate the promise of developing ASR models for non-native English speakers, even with small amounts of L2 training data and even without a language model. Our models also excel in the zero-shot setting where we train on multiple L2 datasets and test on a blind L2 test set.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "To address the performance gap of English ASR models on L2 English speakers, we evaluate fine-tuning of pretrained wav2vec 2.0 models (Baevski et al., 2020; Xu et al., 2021) on L2-ARCTIC, a non-native English speech corpus (Zhao et al., 2018) under different training settings. We compare (a) models trained with a combination of diverse accents to ones trained with only specific accents and (b) results from different single-accent models. Our experiments demonstrate the promise of developing ASR models for non-native English speakers, even with small amounts of L2 training data and even without a language model. Our models also excel in the zero-shot setting where we train on multiple L2 datasets and test on a blind L2 test set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Although non-native (L2) English speakers outnumber native (L1) English speakers (Crystal, 2003) , major challenges contribute to a gap between performance of ASR systems on L2 speech, mainly due to the influence of L1 pronunciation on the learned language, and the lack of annotated L2 speech data (Radzikowski et al., 2021; Viglino et al., 2019) . To meet these challenges, previous studies have exhibited two distinct approaches. The first is to make L2 speech representations more closely match those of L1 speech (Radzikowski et al., 2021) . The second approach leverages L2 speech data to improve model robustness. Due to L2 data scarcity, and hence the challenge of training L2 models from scratch, this second approach necessitates employment of transfer learning or domain adaptation (Shi et al., 2021; Sun et al., 2018) . * All authors contributed equally. State-of-the-art ASR models based on unsupervised/self-supervised pre-training such as wav2vec and wav2vec 2.0 (Baevski et al., 2020) 1 offer a tantalizing starting point for applying the second approach we list above, especially due to their strong performance on ASR even without a language model. However, challenges remain in identifying how best to apply models such as wav2vec 2.0 in L2 fine-tuning scenarios. For this reason, our objective in the current work is to investigate a rich set of conditions under which we can fine-tune ASR models for optimal L2 performance. More concretely, we attempt to achieve the following: pre-trained L1 English ASR models to L2 English;", |
|
"cite_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 96, |
|
"text": "(Crystal, 2003)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 299, |
|
"end": 325, |
|
"text": "(Radzikowski et al., 2021;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 326, |
|
"end": 347, |
|
"text": "Viglino et al., 2019)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 518, |
|
"end": 544, |
|
"text": "(Radzikowski et al., 2021)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 793, |
|
"end": 811, |
|
"text": "(Shi et al., 2021;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 812, |
|
"end": 829, |
|
"text": "Sun et al., 2018)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 978, |
|
"end": 1000, |
|
"text": "(Baevski et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "2. Explore impact of non-native (L2) accents on performance of these fine-tuned ASR models, comparing multi-accent training to singleaccent training; and 3. Quantify the impact of L2 fine-tuning on model performance for L1 English speech recognition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Although external language models are often used in improving ASR performance (Nakatani, 2019; , models trained with great quantities of data can potentially internalize this linguistic information (Graves and Jaitly, 2014) . In particular, some of the wav2vec 2.0 models perform nearly as well with and without a language model on difficult speech such as LibriSpeech Test-Other (Xu et al., 2021) . We thus use this robust pre-trained model as our starting point, and carry out our work without use of an external language model to see if this performance is retained through the fine-tuning process.", |
|
"cite_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 94, |
|
"text": "(Nakatani, 2019;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 198, |
|
"end": 223, |
|
"text": "(Graves and Jaitly, 2014)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 380, |
|
"end": 397, |
|
"text": "(Xu et al., 2021)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The rest of the paper is organized as follows: Section 2 is an overview of related works. We describe our data in Section 3. Section 4 is about our experiments and results. We conclude in Section 5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Because of the difficulty in linguistically annotating corpora for Hidden Markov Model (HMM)-based ASR (Graves and Jaitly, 2014) , researchers have broadly embraced End-to-End (E2E) deep learning architectures either based on Connectionist Temporal Classification (CTC) (Graves et al., 2006; Jaitly, 2014), Attention (Chorowski et al., 2015; Chan et al., 2016; Gulati et al., 2020) , or hybrids of the two (Watanabe et al., 2017; . Recent efforts inspired by work such as BERT (Devlin et al., 2019) have improved on these purely supervised learning baselines through self-supervised pre-training Baevski et al., , 2020 and self-training (Xu et al., 2021) . These self-supervised wav2vec models represent one line of research in speech representation. Other works include models similar to wav2vec that also use a contrastive loss (Oord et al., 2018) , models using an autoregressive loss function (Ling et al., 2020; Chung et al., 2019) , as well as models using a masked language model closer to the original BERT (Liu et al., 2020a) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 128, |
|
"text": "(Graves and Jaitly, 2014)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 270, |
|
"end": 291, |
|
"text": "(Graves et al., 2006;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 292, |
|
"end": 341, |
|
"text": "Jaitly, 2014), Attention (Chorowski et al., 2015;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 342, |
|
"end": 360, |
|
"text": "Chan et al., 2016;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 361, |
|
"end": 381, |
|
"text": "Gulati et al., 2020)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 406, |
|
"end": 429, |
|
"text": "(Watanabe et al., 2017;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 477, |
|
"end": 498, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 596, |
|
"end": 618, |
|
"text": "Baevski et al., , 2020", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 637, |
|
"end": 654, |
|
"text": "(Xu et al., 2021)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 830, |
|
"end": 849, |
|
"text": "(Oord et al., 2018)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 897, |
|
"end": 916, |
|
"text": "(Ling et al., 2020;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 917, |
|
"end": 936, |
|
"text": "Chung et al., 2019)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1015, |
|
"end": 1034, |
|
"text": "(Liu et al., 2020a)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "With these efforts, ASR technologies for native languages have evolved significantly. However, we still observe problems in many applications. In particular, several researchers have emphasized how performance of ASR models drops when the input speech is from non-native speakers whose native languages are different from the models' target languages (Radzikowski et al., 2021; Livescu and Glass, 2000; Wang et al., 2003; Ping, 2008) . For systems developed for English ASR, this can be a real issue. The reason, as observed earlier, is that large populations of English language speakers are non-native (Crystal, 2003) . In line with this argument, Ping (2008), for example, pointed out the necessity to improve speech recognition technology for L2 speakers given that many people speak more than one language for economic and social reasons, especially considering human migration is becoming more common these days. It is hoped that continued efforts aiming at improving ASR for non-native speakers will eventually lead to improved results for many as voice recognition technology becomes increasingly pervasive in our daily lives (Ping, 2008 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 351, |
|
"end": 377, |
|
"text": "(Radzikowski et al., 2021;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 378, |
|
"end": 402, |
|
"text": "Livescu and Glass, 2000;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 403, |
|
"end": 421, |
|
"text": "Wang et al., 2003;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 422, |
|
"end": 433, |
|
"text": "Ping, 2008)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 604, |
|
"end": 619, |
|
"text": "(Crystal, 2003)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 1134, |
|
"end": 1145, |
|
"text": "(Ping, 2008", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "As we explained in Section 1, there are two distinct approaches to improve current ASR performance on L2 speech: 1) accent conversion as an extension to the active area of research of voice conversion; and 2) incorporation of L2 speech data, which is often limited in quantity and quality, during the model training process.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The first approach takes inspiration from voice conversion, but instead of focusing on modifying the pitch, it modifies the pronunciation to reduce accents. Additionally, voice conversion models aim to generate results that are speaker-dependent, while accent conversion models deal with generalizing accents from a group of speakers, hence being speaker-independent. With this approach, the resulting model can be used as a pre-processing step to remove accents in the data prior to feeding these data into an ASR model. Bearman et al. (2017) adopt this approach but focus on L1 English accents, while Radzikowski et al. (2021) work on L2 English accents with speakers' L1 being Japanese. Liu et al. (2020b) took a step further and turned Hindi-accented English to native American English without utilizing native utterances.", |
|
"cite_spans": [ |
|
{ |
|
"start": 522, |
|
"end": 543, |
|
"text": "Bearman et al. (2017)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 603, |
|
"end": 628, |
|
"text": "Radzikowski et al. (2021)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The second approach often employs techniques such as domain adversarial training and transfer learning in order to utilize as much available ac-cented speech data as possible. Domain adversarial training (DAT) is a popular approach as it encourages models to learn accent-invariant features (Sun et al., 2018; Hou et al., 2019; Hu et al., 2021) . Transfer learning is another popular approach in L2 speech recognition, as it possibly allows a model to gain knowledge from both the base task and the new task, even when the new task has limited data (Matassoni et al., 2018; Das et al., 2021; Shi et al., 2021) . In the Accented English Speech Recognition Challenge 2020 (AESRC2020), many teams utilize transfer learning to tackle the L2 accent recognition task (Shi et al., 2021) . In a recent work, Das et al. (2021) combine both DAT and transfer learning to achieve robust accented speech recognition performance. We now introduce our data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 291, |
|
"end": 309, |
|
"text": "(Sun et al., 2018;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 310, |
|
"end": 327, |
|
"text": "Hou et al., 2019;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 328, |
|
"end": 344, |
|
"text": "Hu et al., 2021)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 549, |
|
"end": 573, |
|
"text": "(Matassoni et al., 2018;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 574, |
|
"end": 591, |
|
"text": "Das et al., 2021;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 592, |
|
"end": 609, |
|
"text": "Shi et al., 2021)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 761, |
|
"end": 779, |
|
"text": "(Shi et al., 2021)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 800, |
|
"end": 817, |
|
"text": "Das et al. (2021)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We choose L2-ARCTIC, a non-native English speech corpus (Zhao et al., 2018) , for L2 finetuning. The recordings are from 24 non-native speakers of English with a total of six different L1s, and each of the L1s consists of two female speakers and two male speakers. The L1s we use for our experiments are Arabic (AR), Hindi (HI), Korean (KO), Mandarin (ZH), Spanish (ES), and Vietnamese (VI). Because L2-ARCTIC is based on the original L1 English corpus, CMU ARCTIC (Kominek et al., 2003 ) (henceforth L1-ARCTIC, for simplicity), we can easily evaluate performance from fine-tuning on same-domain L1 data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 75, |
|
"text": "(Zhao et al., 2018)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 465, |
|
"end": 486, |
|
"text": "(Kominek et al., 2003", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus Information", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Each speaker in L2-ARCTIC contributed approximately one hour of phonetically-balanced read speech based on the L1-ARCTIC prompts, which consist of carefully selected sentences (1, 132 sentence prompts) from Project Gutenberg (Kominek et al., 2003) . We note this, as the pretrained wav2vec 2.0 model we use was first pre-trained on LibriSpeech 2 (Panayotov et al., 2015) and then self-trained on Libri-Light 3 (Kahn et al., 2020) . Both corpora rely on audiobooks from the Lib-riVox project, 4 much of which comes from Project Gutenberg. 5 This minimizes discrepancies between domains of the text.", |
|
"cite_spans": [ |
|
{ |
|
"start": 225, |
|
"end": 247, |
|
"text": "(Kominek et al., 2003)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 346, |
|
"end": 370, |
|
"text": "(Panayotov et al., 2015)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 410, |
|
"end": 429, |
|
"text": "(Kahn et al., 2020)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus Information", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We also evaluate our fine-tuned models on 1) LibriSpeech to compare the fine-tuning with the original performance of self-trained wav2vec 2.0 Large (LV-60) model (Xu et al., 2021) , which we will refer to as Wav2Vec 2.0-ST. In addition, we evaluate on 2) L1-ARCTIC, identical to our L2-ARCTIC corpus but spoken by four native US English speakers, allowing us to identify any degradation in performance on L1 speech. Each of L1-ARCTIC speakers' datasets contain approximately the same number of utterances (n =\u223c 1, 132 * 4) as each of L2-ARCTIC speakers' datasets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 162, |
|
"end": 179, |
|
"text": "(Xu et al., 2021)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus Information", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For the purpose of our experiments, we define native (L1) accents as those represented in the Lib-riSpeech and L1-ARCTIC, and non-native (L2) accents as those represented in L2-ARCTIC.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus Information", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For both L2-ARCTIC and L1-ARCTIC, we split the data into three distinct Train, Dev, and Test sets with an 80:10:10 ratio. Importantly, we ensure there is no overlap between utterances. For L2-ARCTIC, we split the data across the following settings (see Fig. 1 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 253, |
|
"end": 259, |
|
"text": "Fig. 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data Splits", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Split-1 (speaker-dependent, multi-accent split): All speakers from all accents in the Train set are also included in the Dev and Test sets; however, no utterances are shared between Train, Dev, and Test.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Splits", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Split-2 (speaker-independent cross-validation splits with multiple accents): A speaker from each accent 6 is removed from the Train and Dev sets, but other speakers with the same accent remain in the Train and Dev sets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Splits", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Split-3 (speaker-independent zero-shot splits with multiple accents): All speakers from one of the accents are entirely removed from the Train and Dev sets. The removed speakers are included in Test.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Splits", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Split-4 (all-speaker, single-accent split): Speakers are broken down by accents (six accents in total) and all speakers in a given accent are split into the Train, Dev, and Test sets (3 data splits x 6 accents).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Splits", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Split-5 (speaker-independent cross-validation splits with single accent): One speaker in each", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Splits", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Accent dependency Speaker dependency Dependent Independent Dependent Independent Multi-accent Model-1 (Split 1) x x Model-2 (Split 2) x x Model-3 (Split 3) x x Single-accent Model-4 (Split 4) x x x x Model-5 (Split 5)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Splits", |
|
"sec_num": "3.2" |
|
}, |
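To make the split definitions above concrete, the following is a minimal sketch in plain Python; the helper names, speaker labels, and dictionary layout are hypothetical stand-ins for the L2-ARCTIC metadata rather than the authors' released tooling. It shows the utterance-disjoint 80:10:10 split and a Split-3 style leave-one-accent-out partition.

```python
import random

def split_utterances(utt_ids, seed=0, ratios=(0.8, 0.1, 0.1)):
    """Shuffle one speaker's utterance IDs and cut them 80:10:10 into
    Train/Dev/Test so that no utterance is shared between the sets."""
    rng = random.Random(seed)
    ids = sorted(utt_ids)
    rng.shuffle(ids)
    n_train = int(ratios[0] * len(ids))
    n_dev = int(ratios[1] * len(ids))
    return ids[:n_train], ids[n_train:n_train + n_dev], ids[n_train + n_dev:]

def leave_one_accent_out(speakers_by_accent, held_out):
    """Split-3 style zero-shot partition: all speakers of the held-out accent
    are reserved for Test; the remaining accents feed Train and Dev."""
    train_dev = {a: s for a, s in speakers_by_accent.items() if a != held_out}
    test = {held_out: speakers_by_accent[held_out]}
    return train_dev, test

# Hypothetical speaker labels (two female, two male per L1, as in L2-ARCTIC).
speakers = {"HI": ["HI_f1", "HI_f2", "HI_m1", "HI_m2"],
            "VI": ["VI_f1", "VI_f2", "VI_m1", "VI_m2"]}
print(leave_one_accent_out(speakers, held_out="VI"))

train, dev, test = split_utterances([f"utt_{i:04d}" for i in range(1, 1133)])
print(len(train), len(dev), len(test))  # roughly 905 / 113 / 114 utterances
```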
|
|
{ |
|
"text": "For all our wav2vec 2.0 models, we use Fairseq 7 fine-tuning default settings as a reference and convert the hyper-parameters to align with Huggingface's implementation. We train each model with three random seeds and take average over three WERs, one each from the three seeds.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
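As one illustration of this conversion, the sketch below configures CTC fine-tuning with the HuggingFace Transformers API using one point from the hyper-parameter grid given in Section 4.1; the output directory, warm-up fraction, epoch count, and the commented Trainer wiring are hypothetical placeholders, not the authors' exact training script.

```python
from transformers import TrainingArguments, Wav2Vec2ForCTC, Wav2Vec2Processor

MODEL_ID = "facebook/wav2vec2-large-960h-lv60-self"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)

# One configuration from the hyper-parameter grid described in Section 4.1.
model = Wav2Vec2ForCTC.from_pretrained(
    MODEL_ID,
    mask_feature_prob=0.25,   # searched over {0.25, 0.5}
    mask_feature_length=15,   # searched over {15, 30}
    mask_time_prob=0.5,       # searched over {0.5, 0.75}
)
model.freeze_feature_extractor()  # assumption: keep the convolutional feature extractor frozen, mirroring Fairseq defaults

args = TrainingArguments(
    output_dir="wav2vec2-l2-arctic",  # hypothetical output directory
    per_device_train_batch_size=16,
    learning_rate=3e-5,               # one of the candidate stage learning rates listed in Section 4.1
    warmup_ratio=0.1,                 # hypothetical warm-up fraction standing in for the tri-state schedule
    num_train_epochs=20,              # hypothetical; selected on the Dev set in practice
    evaluation_strategy="epoch",
    seed=1,                           # repeated with three seeds; the three WERs are averaged
)

# Training itself would wire this model and args into a Trainer together with the
# L2-ARCTIC Train/Dev splits, a CTC padding collator, and a WER metric, e.g.:
#   Trainer(model=model, args=args, train_dataset=..., eval_dataset=...,
#           data_collator=..., compute_metrics=...).train()
```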
|
{ |
|
"text": "For our model development, we use the wav2vec 2.0 architecture (Baevski et al., 2020) which is composed of a multi-layer convolutional neural network feature extractor and a Transformer context network. It takes in raw audio and converts it into representations of the input sequence. The encoder consists of multiple blocks of temporal convolution followed by a layer normalization and a GELU activation function. The relative positional embedding in the Transformer is accomplished by a convolutional layer. Fine-tuning of pre-trained wav2vec 2.0 is performed with CTC and the transcriptions of the audio segments. For each model, we identify the optimal hyper-parameters on the respective Dev set. We choose hyper-parameters as follows: For mask feature prob, we pick from {0.25, 0.5}, for mask feature length, we choose from {15, 30}, for mask time prob we use {0.5, 0.75}, and a batch size of 16. To mimic the tristate learning rate schedule (Baevski et al., 2020) , we set different learning rates for different stages:", |
|
"cite_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 85, |
|
"text": "(Baevski et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 947, |
|
"end": 969, |
|
"text": "(Baevski et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Architecture, Fine-tuning, Baselines, and Evaluation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "warm-up (1e-5, 3e-5), constant stage (1e-5, 3e-5), and decay (1e-5, 3e-5, 5e-6). The decay stage is followed by another constant stage (1e-5, 2e-6, 5e-6) to simulate the Fairseq's fine-tuning configuration. We evaluate all our models in terms of word error rate (WER). All our results are the average of three runs, and we use the following baselines:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Architecture, Fine-tuning, Baselines, and Evaluation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 Baseline-I: Wav2Vec 2.0-ST (Xu et al., 2021) , 8 a self-trained version of wav2vec 2.0 (Baevski et al., 2020) exploiting a Transformer large architecture and pre-training on 960 hours of speech data from LibriSpeech (Panayotov et al., 2015). The self-training is performed on 60K hours of Libri-Light (Kahn et al., 2020) . We believe this as an already strong baseline. We use the model released via HuggingFace. 9", |
|
"cite_spans": [ |
|
{ |
|
"start": 29, |
|
"end": 46, |
|
"text": "(Xu et al., 2021)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 89, |
|
"end": 111, |
|
"text": "(Baevski et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 303, |
|
"end": 322, |
|
"text": "(Kahn et al., 2020)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Architecture, Fine-tuning, Baselines, and Evaluation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 Baseline-II: This is Wav2Vec 2.0-ST, the same as Baseline-I, fine-tuned on L1-ARCTIC described earlier. The purpose of Baseline-II is to allow for measuring the tradeoff of L1 English ASR performance by finetuning the English pre-trained model on L2 accents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Architecture, Fine-tuning, Baselines, and Evaluation", |
|
"sec_num": "4.1" |
|
}, |
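To make the language-model-free evaluation concrete, a minimal sketch follows that loads the released Baseline-I checkpoint (footnote 9), performs greedy CTC decoding, and scores the hypothesis with the jiwer WER implementation; the audio file and reference transcript are hypothetical placeholders, and the three-seed protocol described above would simply repeat this scoring per seed and average.

```python
import torch
import torchaudio
from jiwer import wer
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

MODEL_ID = "facebook/wav2vec2-large-960h-lv60-self"  # released Wav2Vec 2.0-ST checkpoint
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID).eval()

# Hypothetical utterance; resample to the 16 kHz rate wav2vec 2.0 expects and mix to mono.
waveform, sample_rate = torchaudio.load("sample.wav")
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000).mean(dim=0)

inputs = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits  # (batch, frames, vocab)

# Greedy (argmax) CTC decoding -- no external language model, as in this paper.
hypothesis = processor.batch_decode(torch.argmax(logits, dim=-1))[0]

reference = "author of the danger trail philip steels etc"  # reference transcript for the hypothetical utterance
print(hypothesis)
print("WER:", wer([reference], [hypothesis.lower()]))
```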
|
{ |
|
"text": "With our multi-accent models, we examine performance using multiple accents during training. We introduce each of our models here, and present the results acquired with each. We provide a summary of our different data splits and models across accent and speaker dependency categories in Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 287, |
|
"end": 294, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multi-Accent Models", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Model-1 (speaker-and accent-dependent): The model is fine-tuned with Split-1 data to identify any speaker-dependent training impact, as well as an upper limit on performance. In addition to evaluating on L2-ARCTIC Test, we evaluate on L1-ARCTIC Test and LibriSpeech in order to observe any changes in model performance on L1 English. As Table 2 shows, our Model-1 achieves best performance on both Dev and Test of L2-ARCTIC as compared to our two baselines. On Test, our Model-1 acquires 25.66% improvement over our Baseline-I wav2vec 2.0 system on L2-ARCTIC (9.27 WER for our model vs. 12.47 WER for Baseline-I). This gain is not surprising and simply means that a model with access to L2 data for fine-tuning will improve over models fine-tuned Figure 3 : HI-specific Model-4 evaluated on individual accents. As we evaluate model accuracy by error rate, the bars extending downwards represent the performance gain by fine-tuning. HI-specific fine-tuning benefits HI but hinders performance on all the other accents. Figure 4 : Individual Model-4s evaluated on the HI accent. All the bars except HI extend upwards, meaning that all the other single-accent models hinder performance on the HI accent.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 337, |
|
"end": 344, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 747, |
|
"end": 755, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1018, |
|
"end": 1026, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multi-Accent Models", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "with L1 data (Baseline-II, which is fine-tuned on L1-ARCTIC) or not-fine-tuned at all (Baseline-I). Nor is performance on L1-ARCTIC surprising: a model fine-tuned with native data (Baseline-II) outperforms one fine-tuned with accented data (our Model-1), both of which outperform a model without fine-tuning (Baseline-I). These results, however, show that in absence of L1 data, L2 data can be valuable for improving ASR model performance even on L1. For LibriSpeech, Baseline-I, which is trained on LibriSpeech data, outperforms the two fine-tuned models (our Model-1 and Baseline-II). The reason is that these two latter models are fine-tuned on a domain that is different from LibriSpeech. That is, fine-tuning models on out-of-domain data will, and as we see here does, result in deterioration of performance on indomain data. We also note that our Model-1's performance on LibriSpeech is worse than that of Baseline-II on both the 'Clean' (LS Clean , native speech under quite recording environments), and 'Other' (LS Other , both noisy environment and accented recordings), Dev and Test splits. This may be because LibriSpeech is mostly comprised of L1 data and the greater variability on our L2-ARCTIC Train set (24 non-native speakers in our Model-1 vs. 4 native speakers in Baseline-II).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-Accent Models", |
|
"sec_num": "4.2" |
|
}, |
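For clarity, the 25.66% figure reported above for Model-1 is the relative WER reduction with respect to Baseline-I on the L2-ARCTIC Test set:

```latex
\frac{\mathrm{WER}_{\text{Baseline-I}} - \mathrm{WER}_{\text{Model-1}}}{\mathrm{WER}_{\text{Baseline-I}}}
  = \frac{12.47 - 9.27}{12.47} \approx 0.2566 \quad (\text{i.e., } 25.66\%)
```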
|
{ |
|
"text": "Model-2 (speaker-independent, accentdependent): While Model-1 mimics a situation where we have some training data from speakers that we serve (i.e., test on), this is rarely a realistic scenario. We instead switch to a speakerindependent (but still accent-dependent) setting, Split-2. We carry out four-fold cross-validation with the 24 speakers in the data, every time using 18 speakers (three speakers per accent) in Train 10 and six speakers in Test (one per accent). We report the average of the four folds/runs, along with standard deviation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-Accent Models", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "As Table 3 shows, Model-2 performance is consistent with Model-1. Our Model-2 outperforms the two baselines on both Dev and Test, reaching 9.96 WER on Test compared to 12.47 for Baseline-I and 15.96 for Baseline-II. These results demonstrate that fine-tuning with multiple accents improves the accented ASR system without access to test speaker data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multi-Accent Models", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Model-3 (speaker-and accent-independent): To evaluate performance on unseen accents, we adopt a zero-shot strategy by removing one accent at a time from both Train and Dev sets and evaluating on the Test set of the removed accent, Split-3. To evaluate model performance on each accent, we conduct six runs in total with one accent removed at a time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-Accent Models", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "As Table 4 shows, fine-tuning on accented speech benefits unseen accents and speakers (Model-3 setting). All the multi-accent, zeroshot models outperform Baseline-I and Baseline-II, which means each of the six accents benefit from other accents through this process of transfer Table 6 : Model-4 performance in the zero-shot setting. Bold fonts represent the accent whose WER drops the most in the zero-shot setting. For example, compared with Baseline-I, the VI-specific fine-tuning not only improves performance on VI (i.e., a drop in WER), but also improves on ZH despite ZH being the unseen accent. One notable pattern is that HI-specific fine-tuning only benefits HI-accented speech recognition while all the other fine-tuning hinder performance on the HI accent.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 278, |
|
"end": 285, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multi-Accent Models", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "learning. Our results also show that, in absence of in-accent data, some unseen accents are easier for the model than others. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-Accent Models", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We evaluate the accent-dependent performance by fine-tuning our models on a single type of L1specific accent at a time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Accent-Specific Models", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Model-4 (speaker-dependent, accentdependent): The model is fine-tuned with Split-4 data to identify any accent-dependent training impact on downstream performance, as well as an upper bound on performance when the model is optimized for a single accent. In addition to evaluating on L2-ARCTIC Test, we test the model on L1-ARCTIC Test and LibriSpeech as a means to identify any degradation on L1 English data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Accent-Specific Models", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "As Table 5 shows, while the multi-accent model (Model-1) outperforms Baseline-I for all six accents, all of the accent-specific models (Model-4 setting) outperform Model-1 on the Test L2 setting despite the small amount of data (roughly five hours) used for fine-tuning each of the versions of Model-4. On average, Model-4 setting is two points WER better than Model-1. In addition, Model-4 type models (each of which is fine-tuned on one non-native accent) perform reasonably well ). An interesting result is the apparent difficulty difference between different accents (HI and KO easiest, V I hardest), regardless of model types. We provide sample outputs from Model-4 in Table 8 . As shown in Table 6 , we also perform accentwise zero-shot evaluation. Results of this set of experiments reveal an interesting pattern: while fine-tuning on a single accent generally benefits at least one other accent, fine-tuning on the Hindi accent only benefits Hindi (the same accent) and hinders performance on all the other accents. Model-5 (speaker-independent and accentdependent): This setup simulates a more realistic scenario where we target a single accent, without access to all speakers during development time. Thus, we use Split-5 data which mimics a speakerindependent setting. We cross-validate each L1 subset with one of the four speakers per fold. The hyper-parameters we use are those identified for Model-4. To evaluate the performance on each speaker, we conduct 24 folds in total with one speaker removed at a time, and report the average and standard deviation of the four folds per each accent.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 5", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 674, |
|
"end": 681, |
|
"text": "Table 8", |
|
"ref_id": "TABREF11" |
|
}, |
|
{ |
|
"start": 696, |
|
"end": 703, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Accent-Specific Models", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "As Table 7 shows, speaker-dependent variability is small for Test all (SD = [0.11 \u2212 0.38]) but large for Test zeroshot-speaker (SD = [1.12 \u2212 4.87]). These results suggest that individual speaker's differences may play an important role in how much performance gain can be obtained by fine-tuning. 11", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 7", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Accent-Specific Models", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "11 For those speakers whose TOEFL scores are known (Zhao", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Accent-Specific Models", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We demonstrated potential of developing accentindependent and accent-dependent models that improve non-native speech recognition simply by finetuning the pre-trained wav2vec 2.0 model on a small amount of labeled data. Both the multi-and single-accent models improve performance on L2 English speakers. However, each accent benefits differently: results of the multi-accent, zero-shot experiments suggest that transfer learning on accent is possible and single-accent models improve the most for the target L2 accents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "As to future work, while we chose a language model-free setting to focus specifically on wav2vec 2.0's acoustic capacity, comparison with language model decoding would be a useful direction to explore as a way to gauge any further potential improvements a language model can bring. In addition, finding the optimal combination of accented speech datasets when there is no available dataset for a target accent (Model-3) may constitute another interesting direction. Finally, although we have offered a number of sample transcriptions from one of our models, a thorough error analysis on each experiment would help advance the research into improving ASR models for non-native English speakers. Since L2 English speakers have specific accent characteristics influenced by their native languages, an error analysis focused on each language as well as on groups or families of languages will likely aid effective model development. Future directions could also investigate different strategies for developing ASR systems for challenging languages such as Vietnamese. et al., 2018), a strong negative correlation was observed between speaker-specific WERs of Baseline-I and speaker's TOEFL scores, r(8) \u2248 \u2212.77, p <.01.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Although sometimes referred to as 'unsupervised', these models employ a self-supervised objective.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.openslr.org/12/ 3 https://github.com/facebookresearch/libri-light 4 https://librivox.org 5 http://www.gutenberg.org", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use the term 'accent' here to loosely refer to variation in speakers with L1 other than English.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/pytorch/fairseq", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/pytorch/fairseq/ tree/master/examples/wav2vec#wav2vec2.0 9 https://huggingface.co/facebook/ wav2vec2-large-960h-lv60-self", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use 10% of the utterances from these 18 speakers for development (Dev).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "vq-wav2vec: Self-supervised learning of discrete speech representations", |
|
"authors": [ |
|
{ |
|
"first": "Alexei", |
|
"middle": [], |
|
"last": "Baevski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steffen", |
|
"middle": [], |
|
"last": "Schneider", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1910.05453" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexei Baevski, Steffen Schneider, and Michael Auli. 2019. vq-wav2vec: Self-supervised learning of discrete speech representations. arXiv preprint arXiv:1910.05453.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations", |
|
"authors": [ |
|
{ |
|
"first": "Alexei", |
|
"middle": [], |
|
"last": "Baevski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Henry", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2006.11477" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A frame- work for self-supervised learning of speech represen- tations. arXiv preprint arXiv:2006.11477.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Accent conversion using artificial neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Amy", |
|
"middle": [], |
|
"last": "Bearman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kelsey", |
|
"middle": [], |
|
"last": "Josund", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gawan", |
|
"middle": [], |
|
"last": "Fiore", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amy Bearman, Kelsey Josund, and Gawan Fiore. 2017. Accent conversion using artificial neural networks. Technical report, Stanford University, Tech. Rep, Tech. Rep.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Chan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Navdeep", |
|
"middle": [], |
|
"last": "Jaitly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4960--4964", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. 2016. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In 2016 IEEE International Confer- ence on Acoustics, Speech and Signal Processing (ICASSP), pages 4960-4964. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Attention-based models for speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Jan K Chorowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dmitriy", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Serdyuk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "577--585", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. 2015. Attention-based models for speech recogni- tion. In Advances in neural information processing systems, pages 577-585.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "An unsupervised autoregressive model for speech representation learning", |
|
"authors": [ |
|
{ |
|
"first": "Yu-An", |
|
"middle": [], |
|
"last": "Chung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Ning", |
|
"middle": [], |
|
"last": "Hsu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Glass", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1904.03240" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yu-An Chung, Wei-Ning Hsu, Hao Tang, and James Glass. 2019. An unsupervised autoregressive model for speech representation learning. arXiv preprint arXiv:1904.03240.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "English as a global language", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Crystal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Crystal. 2003. English as a global language. Ernst Klett Sprachen.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Best of both worlds: Robust accented speech recognition with adversarial transfer learning", |
|
"authors": [ |
|
{ |
|
"first": "Nilaksh", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sravan", |
|
"middle": [], |
|
"last": "Bodapati", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Monica", |
|
"middle": [], |
|
"last": "Sunkara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sundararajan", |
|
"middle": [], |
|
"last": "Srinivasan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Duen Horng", |
|
"middle": [], |
|
"last": "Chau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2103.05834" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nilaksh Das, Sravan Bodapati, Monica Sunkara, Sun- dararajan Srinivasan, and Duen Horng Chau. 2021. Best of both worlds: Robust accented speech recog- nition with adversarial transfer learning. arXiv preprint arXiv:2103.05834.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Graves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Santiago", |
|
"middle": [], |
|
"last": "Fern\u00e1ndez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Faustino", |
|
"middle": [], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 23rd international conference on Machine learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "369--376", |
|
"other_ids": { |
|
"DOI": [ |
|
"https://dl.acm.org/doi/pdf/10.1145/1143844.1143891?casa_token=GhCuBi2Ga2MAAAAA:t30hDL-ndW_PtrXBTyV09hTjaemBOH6K7b0DDOs8q5-gslBe5XmHuIM5A7MQL5YirkcKSFYuFaOF" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Graves, Santiago Fern\u00e1ndez, Faustino Gomez, and J\u00fcrgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented se- quence data with recurrent neural networks. In Pro- ceedings of the 23rd international conference on Ma- chine learning, pages 369-376.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Towards endto-end speech recognition with recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Graves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Navdeep", |
|
"middle": [], |
|
"last": "Jaitly", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "International conference on machine learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1764--1772", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Graves and Navdeep Jaitly. 2014. Towards end- to-end speech recognition with recurrent neural net- works. In International conference on machine learning, pages 1764-1772.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Conformer: Convolution-augmented transformer for speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "Anmol", |
|
"middle": [], |
|
"last": "Gulati", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chung-Cheng", |
|
"middle": [], |
|
"last": "Chiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiahui", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shibo", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhengdong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonghui", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2005.08100" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, et al. 2020. Conformer: Convolution-augmented trans- former for speech recognition. arXiv preprint arXiv:2005.08100.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Domain adversarial training for improving keyword spotting performance of esl speech", |
|
"authors": [ |
|
{ |
|
"first": "Jingyong", |
|
"middle": [], |
|
"last": "Hou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pengcheng", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sining", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenping", |
|
"middle": [], |
|
"last": "Soong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8122--8126", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jingyong Hou, Pengcheng Guo, Sining Sun, Frank K Soong, Wenping Hu, and Lei Xie. 2019. Domain adversarial training for improving keyword spot- ting performance of esl speech. In ICASSP 2019- 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8122-8126. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Redat: Accent-invariant representation for end-to-end asr by domain adversarial training with relabeling", |
|
"authors": [ |
|
{ |
|
"first": "Hu", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuesong", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zeynab", |
|
"middle": [], |
|
"last": "Raeesy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinxi", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gokce", |
|
"middle": [], |
|
"last": "Keskin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Harish", |
|
"middle": [], |
|
"last": "Arsikere", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ariya", |
|
"middle": [], |
|
"last": "Rastrow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Stolcke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roland", |
|
"middle": [], |
|
"last": "Maas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6408--6412", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hu Hu, Xuesong Yang, Zeynab Raeesy, Jinxi Guo, Gokce Keskin, Harish Arsikere, Ariya Rastrow, Andreas Stolcke, and Roland Maas. 2021. Re- dat: Accent-invariant representation for end-to-end asr by domain adversarial training with relabeling. In ICASSP 2021-2021 IEEE International Confer- ence on Acoustics, Speech and Signal Processing (ICASSP), pages 6408-6412. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Libri-light: A benchmark for asr with limited or no supervision", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Kahn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgane", |
|
"middle": [], |
|
"last": "Rivi\u00e8re", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Weiyi", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evgeny", |
|
"middle": [], |
|
"last": "Kharitonov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qiantong", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre-Emmanuel", |
|
"middle": [], |
|
"last": "Mazar\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Karadayi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vitaliy", |
|
"middle": [], |
|
"last": "Liptchinsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ronan", |
|
"middle": [], |
|
"last": "Collobert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Fuegen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7669--7673", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Kahn, Morgane Rivi\u00e8re, Weiyi Zheng, Evgeny Kharitonov, Qiantong Xu, Pierre-Emmanuel Mazar\u00e9, Julien Karadayi, Vitaliy Liptchinsky, Ronan Collobert, Christian Fuegen, et al. 2020. Libri-light: A benchmark for asr with limited or no supervision. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7669-7673. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Cmu arctic databases for speech synthesis", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Kominek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Black", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ver", |
|
"middle": [], |
|
"last": "Ver", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Kominek, Alan W Black, and Ver Ver. 2003. Cmu arctic databases for speech synthesis.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Deep contextualized acoustic representations for semi-supervised speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "Shaoshi", |
|
"middle": [], |
|
"last": "Ling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuzong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Salazar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katrin", |
|
"middle": [], |
|
"last": "Kirchhoff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6429--6433", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shaoshi Ling, Yuzong Liu, Julian Salazar, and Katrin Kirchhoff. 2020. Deep contextualized acoustic rep- resentations for semi-supervised speech recognition. In ICASSP 2020-2020 IEEE International Confer- ence on Acoustics, Speech and Signal Processing (ICASSP), pages 6429-6433. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Mockingjay: Unsupervised speech representation learning with deep bidirectional transformer encoders", |
|
"authors": [ |
|
{ |
|
"first": "Andy", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shu-Wen", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Po-Han", |
|
"middle": [], |
|
"last": "Chi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Po-Chun", |
|
"middle": [], |
|
"last": "Hsu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hung-Yi", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6419--6423", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andy T Liu, Shu-wen Yang, Po-Han Chi, Po-chun Hsu, and Hung-yi Lee. 2020a. Mockingjay: Unsuper- vised speech representation learning with deep bidi- rectional transformer encoders. In ICASSP 2020- 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6419-6423. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "End-to-end accent conversion without using native utterances", |
|
"authors": [ |
|
{ |
|
"first": "Songxiang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Disong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuewen", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lifa", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xixin", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shiyin", |
|
"middle": [], |
|
"last": "Kang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyong", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xunying", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dong", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6289--6293", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Songxiang Liu, Disong Wang, Yuewen Cao, Lifa Sun, Xixin Wu, Shiyin Kang, Zhiyong Wu, Xunying Liu, Dan Su, Dong Yu, et al. 2020b. End-to-end accent conversion without using native utterances. In ICASSP 2020-2020 IEEE International Confer- ence on Acoustics, Speech and Signal Processing (ICASSP), pages 6289-6293. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Lexical modeling of non-native speech for automatic speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "Karen", |
|
"middle": [], |
|
"last": "Livescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Glass", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "2000 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No. 00CH37100)", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "1683--1686", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karen Livescu and James Glass. 2000. Lexical model- ing of non-native speech for automatic speech recog- nition. In 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceed- ings (Cat. No. 00CH37100), volume 3, pages 1683- 1686. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Non-native children speech recognition through transfer learning", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Matassoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Gretter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniele", |
|
"middle": [], |
|
"last": "Falavigna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diego", |
|
"middle": [], |
|
"last": "Giuliani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6229--6233", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Matassoni, Roberto Gretter, Daniele Falavi- gna, and Diego Giuliani. 2018. Non-native chil- dren speech recognition through transfer learning. In 2018 IEEE International Conference on Acous- tics, Speech and Signal Processing (ICASSP), pages 6229-6233. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Improving transformerbased end-to-end speech recognition with connectionist temporal classification and language model integration", |
|
"authors": [ |
|
{ |
|
"first": "Tomohiro", |
|
"middle": [], |
|
"last": "Nakatani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proc. Interspeech", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomohiro Nakatani. 2019. Improving transformer- based end-to-end speech recognition with connec- tionist temporal classification and language model integration. In Proc. Interspeech 2019.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Representation learning with contrastive predictive coding", |
|
"authors": [ |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Van Den Oord", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yazhe", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1807.03748" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive pre- dictive coding. arXiv preprint arXiv:1807.03748.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Librispeech: an asr corpus based on public domain audio books", |
|
"authors": [ |
|
{ |
|
"first": "Vassil", |
|
"middle": [], |
|
"last": "Panayotov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guoguo", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Povey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanjeev", |
|
"middle": [], |
|
"last": "Khudanpur", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "2015 IEEE international conference on acoustics, speech and signal processing (ICASSP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5206--5210", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: an asr cor- pus based on public domain audio books. In 2015 IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 5206-5210. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Automatic speech recognition for non-native speakers", |
|
"authors": [ |
|
{ |
|
"first": "Ping", |
|
"middle": [], |
|
"last": "Tan Tien", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tan Tien Ping. 2008. Automatic speech recognition for non-native speakers. Ph.D. thesis, Universit\u00e9 Joseph-Fourier-Grenoble I.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Accent modification for speech recognition of non-native speakers using neural style transfer", |
|
"authors": [ |
|
{ |
|
"first": "Kacper", |
|
"middle": [], |
|
"last": "Radzikowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Le", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Osamu", |
|
"middle": [], |
|
"last": "Yoshie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Nowak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "EURASIP Journal on Audio, Speech, and Music Processing", |
|
"volume": "2021", |
|
"issue": "1", |
|
"pages": "1--10", |
|
"other_ids": { |
|
"DOI": [ |
|
"https://asmp-eurasipjournals.springeropen.com/track/pdf/10.1186/s13636-021-00199-3.pdf" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kacper Radzikowski, Le Wang, Osamu Yoshie, and Robert Nowak. 2021. Accent modification for speech recognition of non-native speakers using neu- ral style transfer. EURASIP Journal on Audio, Speech, and Music Processing, 2021(1):1-10.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "wav2vec: Unsupervised pre-training for speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "Steffen", |
|
"middle": [], |
|
"last": "Schneider", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexei", |
|
"middle": [], |
|
"last": "Baevski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ronan", |
|
"middle": [], |
|
"last": "Collobert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1904.05862" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steffen Schneider, Alexei Baevski, Ronan Collobert, and Michael Auli. 2019. wav2vec: Unsupervised pre-training for speech recognition. arXiv preprint arXiv:1904.05862.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "The accented english speech recognition challenge 2020: open datasets, tracks, baselines, results and methods", |
|
"authors": [ |
|
{ |
|
"first": "Xian", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fan", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yizhou", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuhao", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qiangze", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daliang", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yanmin", |
|
"middle": [], |
|
"last": "Qian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6918--6922", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xian Shi, Fan Yu, Yizhou Lu, Yuhao Liang, Qiangze Feng, Daliang Wang, Yanmin Qian, and Lei Xie. 2021. The accented english speech recognition chal- lenge 2020: open datasets, tracks, baselines, results and methods. In ICASSP 2021-2021 IEEE Interna- tional Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6918-6922. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Domain adversarial training for accented speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "Sining", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ching-Feng", |
|
"middle": [], |
|
"last": "Yeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mei-Yuh", |
|
"middle": [], |
|
"last": "Hwang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mari", |
|
"middle": [], |
|
"last": "Ostendorf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4854--4858", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sining Sun, Ching-Feng Yeh, Mei-Yuh Hwang, Mari Ostendorf, and Lei Xie. 2018. Domain adversar- ial training for accented speech recognition. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4854-4858. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "End-to-end accented speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "Thibault", |
|
"middle": [], |
|
"last": "Viglino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Petr", |
|
"middle": [], |
|
"last": "Motlicek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Milos", |
|
"middle": [], |
|
"last": "Cernak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Interspeech", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2140--2144", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thibault Viglino, Petr Motlicek, and Milos Cernak. 2019. End-to-end accented speech recognition. In Interspeech, pages 2140-2144.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Transformer-based acoustic modeling for hybrid speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "Yongqiang", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abdelrahman", |
|
"middle": [], |
|
"last": "Mohamed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Due", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chunxi", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Xiao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jay", |
|
"middle": [], |
|
"last": "Mahadeokar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongzhao", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andros", |
|
"middle": [], |
|
"last": "Tjandra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaohui", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6874--6878", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yongqiang Wang, Abdelrahman Mohamed, Due Le, Chunxi Liu, Alex Xiao, Jay Mahadeokar, Hongzhao Huang, Andros Tjandra, Xiaohui Zhang, Frank Zhang, et al. 2020. Transformer-based acous- tic modeling for hybrid speech recognition. In ICASSP 2020-2020 IEEE International Confer- ence on Acoustics, Speech and Signal Processing (ICASSP), pages 6874-6878. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Comparison of acoustic model adaptation techniques on non-native speech", |
|
"authors": [ |
|
{ |
|
"first": "Zhirong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tanja", |
|
"middle": [], |
|
"last": "Schultz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Waibel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "IEEE International Conference on Acoustics, Speech, and Signal Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhirong Wang, Tanja Schultz, and Alex Waibel. 2003. Comparison of acoustic model adaptation tech- niques on non-native speech. In 2003 IEEE In- ternational Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings.(ICASSP'03)., volume 1, pages I-I. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Hybrid ctc/attention architecture for end-to-end speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "Shinji", |
|
"middle": [], |
|
"last": "Watanabe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Takaaki", |
|
"middle": [], |
|
"last": "Hori", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suyoun", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Hershey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomoki", |
|
"middle": [], |
|
"last": "Hayashi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "IEEE Journal of Selected Topics in Signal Processing", |
|
"volume": "11", |
|
"issue": "8", |
|
"pages": "1240--1253", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shinji Watanabe, Takaaki Hori, Suyoun Kim, John R Hershey, and Tomoki Hayashi. 2017. Hybrid ctc/attention architecture for end-to-end speech recognition. IEEE Journal of Selected Topics in Sig- nal Processing, 11(8):1240-1253.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Independent language modeling architecture for end-to-end asr", |
|
"authors": [ |
|
{ |
|
"first": "Haihua", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yerbolat", |
|
"middle": [], |
|
"last": "Khassanov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiping", |
|
"middle": [], |
|
"last": "Zeng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eng Siong", |
|
"middle": [], |
|
"last": "Chng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chongjia", |
|
"middle": [], |
|
"last": "Ni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bin", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haizhou", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7059--7063", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Haihua Xu, Yerbolat Khassanov, Zhiping Zeng, Eng Siong Chng, Chongjia Ni, Bin Ma, Haizhou Li, et al. 2020. Independent language modeling architecture for end-to-end asr. In ICASSP 2020- 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7059-7063. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Selftraining and pre-training are complementary for speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "Qiantong", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexei", |
|
"middle": [], |
|
"last": "Baevski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tatiana", |
|
"middle": [], |
|
"last": "Likhomanenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paden", |
|
"middle": [], |
|
"last": "Tomasello", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ronan", |
|
"middle": [], |
|
"last": "Collobert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gabriel", |
|
"middle": [], |
|
"last": "Synnaeve", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3030--3034", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qiantong Xu, Alexei Baevski, Tatiana Likhomanenko, Paden Tomasello, Alexis Conneau, Ronan Collobert, Gabriel Synnaeve, and Michael Auli. 2021. Self- training and pre-training are complementary for speech recognition. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3030-3034. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "L2-arctic: A non-native english speech corpus. Perception Sensing Instrumentation Lab", |
|
"authors": [ |
|
{ |
|
"first": "Guanlong", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sinem", |
|
"middle": [], |
|
"last": "Sonsaat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alif", |
|
"middle": [ |
|
"O" |
|
], |
|
"last": "Silpachai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivana", |
|
"middle": [], |
|
"last": "Lucic", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evgeny", |
|
"middle": [], |
|
"last": "Chukharev-Hudilainen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Levis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ricardo", |
|
"middle": [], |
|
"last": "Gutierrez-Osuna", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guanlong Zhao, Sinem Sonsaat, Alif O Silpachai, Ivana Lucic, Evgeny Chukharev-Hudilainen, John Levis, and Ricardo Gutierrez-Osuna. 2018. L2- arctic: A non-native english speech corpus. Percep- tion Sensing Instrumentation Lab.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "The various data splits we use in our experiments. Shade represents a different run of our training, with the gradient blocks in Split 4 being present in all runs. For cross validation splits, we show a single fold as an example, where number indicates the participants included." |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Figure 3 and Figure 4 illustrate this observation." |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "Summary of data splits, fine-tuning, and evaluation setups. accent is removed from the Train and Dev sets, but the other speakers with the same accent remain in the Train and Dev sets.", |
|
"content": "<table><tr><td>As there</td></tr><tr><td>are four speakers per accent, four splits are</td></tr><tr><td>created for each accent, which are further split</td></tr><tr><td>into the Train, Dev, and Test sets (3 data splits</td></tr><tr><td>x 6 accents x 4 speakers).</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "Model-1 performance in word error rate (WER) (lower is better) on non-native accents (L2-ARCTIC) and native accents (L1-ARCTIC, LS dev and LS test ). Baseline-I and Baseline-II are reported on the same Dev and Test sets of each corpus for comparison.", |
|
"content": "<table><tr><td/><td>Dev L2</td><td/><td>Test L2</td><td/></tr><tr><td>Model</td><td>Mean</td><td>SD</td><td>Mean</td><td>SD</td></tr><tr><td>Baseline-I</td><td colspan=\"2\">13.47 0.23</td><td colspan=\"2\">12.47 0.84</td></tr><tr><td>Baseline-II</td><td colspan=\"2\">17.29 0.41</td><td colspan=\"2\">15.96 1.58</td></tr><tr><td>Model-2</td><td colspan=\"2\">9.57 0.19</td><td colspan=\"2\">9.96 0.64</td></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"content": "<table/>" |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "Model-3 setting, where a different accent is removed each run. Test all refers to Test of all 24 speakers, and Test zeroshot refers to Test of those four speakers who have L1 removed accent. Baseline-I acquires 12.47 on Test all , while Baseline-II acquires 15.95 on the same test set (i.e., Test all ).", |
|
"content": "<table><tr><td/><td>Baseline-I</td><td>Baseline-II</td><td>Model-1</td><td/><td colspan=\"2\">Model-4</td><td/></tr><tr><td>L1</td><td>Test L2</td><td>Test L2</td><td>Test L2</td><td colspan=\"4\">Test L2 Test L1 LS Clean LS Other</td></tr><tr><td>VI</td><td>23.30</td><td>28.81</td><td>15.14</td><td>12.12</td><td>2.02</td><td>3.08</td><td>6.96</td></tr><tr><td>ZH</td><td>14.85</td><td>19.32</td><td>11.49</td><td>8.95</td><td>1.82</td><td>2.84</td><td>6.22</td></tr><tr><td>AR</td><td>10.95</td><td>14.82</td><td>8.90</td><td>6.92</td><td>1.55</td><td>2.66</td><td>6.24</td></tr><tr><td>ES</td><td>10.48</td><td>13.48</td><td>8.92</td><td>6.68</td><td>1.56</td><td>2.53</td><td>6.11</td></tr><tr><td>KO</td><td>8.18</td><td>10.22</td><td>6.60</td><td>4.99</td><td>1.71</td><td>2.51</td><td>5.63</td></tr><tr><td>HI</td><td>6.93</td><td>8.93</td><td>5.51</td><td>4.99</td><td>1.52</td><td>2.36</td><td>6.05</td></tr><tr><td>Mean</td><td>12.45</td><td>15.93</td><td>9.43</td><td>7.44</td><td>1.70</td><td>2.66</td><td>6.20</td></tr><tr><td>SD</td><td>5.97</td><td>7.30</td><td>3.49</td><td>2.72</td><td>0.20</td><td>0.26</td><td>0.43</td></tr></table>" |
|
}, |
|
"TABREF6": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"content": "<table/>" |
|
}, |
|
"TABREF9": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"content": "<table><tr><td>: Model-5 performance on L2 accent. Test all</td></tr><tr><td>contains utterances by all speakers within each L1</td></tr><tr><td>whereas Test zeroshot-speaker contains utterances by a sin-</td></tr><tr><td>gle speaker that is absent in the training phase. Mean</td></tr><tr><td>refers to the average WER over four folds for each L1,</td></tr><tr><td>and SD refers to the standard deviation.</td></tr></table>" |
|
}, |
|
"TABREF10": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "Ref at lake linderman i had one canoe very good peterborough canoe VI at LAY LINDEMAN i had one canoe very good PETERBORROUG CANOES A lake LNDER MAN i had one canoe very good BIET OF ROCK canoe ZH at lake LINGERMAN i had ONCE canoe very good PETERBROUGH canoe at lake LINERMAN i had one canoe very good PETERE BROUGHTA canoe AR at lake LUNDERBOGH i had one canoe very good BITTERBOROUGH canoe at lake LUNDERMAN i had one canoe very good BETTER BORT canoe", |
|
"content": "<table><tr><td>Model</td><td>Model output</td></tr><tr><td>ES</td><td>at lake linderman i had one canoe a very good PETERBOURN canoe at lake linderman i had ONCE canoe very good PIERREBOROUGH canoe</td></tr><tr><td>KO</td><td>at lake linderman i had one canoe very good peterborough canoe at lake LINDEMAN i had ONCE canoe very good PITTEBRAUG canoe</td></tr><tr><td>HI</td><td>at lake LINDEMAN i had one canoe very good PETERBURGH canoe at lake linderman i had one canoe A very good PEACHERBROROU canoe</td></tr></table>" |
|
}, |
|
"TABREF11": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "Examples of transcription output of selected utterances from the Test set of Model-4 among all six L1s without a language model. Capitalized words indicate errors. We show samples from two speakers per accent.", |
|
"content": "<table><tr><td>on L1 data (Test L1 , LS Clean , and LS Other ). Fur-</td></tr><tr><td>ther, large accent-specific variability is observed</td></tr><tr><td>across different model types on Test L2 (SD =</td></tr><tr><td>[2.72 \u2212 7.30]), compared with native counterparts</td></tr><tr><td>such as Test L1</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |