{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:33:24.956209Z" }, "title": "Indic Languages Automatic Speech Recognition using Meta-Learning Approach", "authors": [ { "first": "Anugunj", "middle": [], "last": "Naman", "suffix": "", "affiliation": { "laboratory": "", "institution": "IIIT Guwahati", "location": { "country": "India" } }, "email": "anugunj.naman@iiitg.ac.in" }, { "first": "Kumari", "middle": [], "last": "Deepshikha", "suffix": "", "affiliation": {}, "email": "kumari.deepshikha@lowes.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recently Conformer-based models have shown promising leads to Automatic Speech Recognition (ASR), outperforming transformer-based networks while metalearning has been extremely useful in modeling deep learning networks with a scarcity of abundant data. In this work, we use Conformers to model both global and local dependencies of an audio sequence in a very parameter-efficient way and meta-learn the initialization parameters from several languages during training to attain fast adaptation on the unseen target languages, using model-agnostic meta-learning algorithm (MAML). We analyse and evaluate the proposed approach for seven different Indic languages. Preliminary results showed that the proposed method, MAML-ASR, comes significantly closer to state-of-the-art monolingual Automatic Speech Recognition for all seven different Indic languages in terms of character error rate.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Recently Conformer-based models have shown promising leads to Automatic Speech Recognition (ASR), outperforming transformer-based networks while metalearning has been extremely useful in modeling deep learning networks with a scarcity of abundant data. 
In this work, we use Conformers to model both the global and local dependencies of an audio sequence in a very parameter-efficient way, and we meta-learn the initialization parameters from several languages during training to attain fast adaptation to unseen target languages, using the model-agnostic meta-learning (MAML) algorithm. We analyse and evaluate the proposed approach on seven different Indic languages. Preliminary results show that the proposed method, MAML-ASR, comes close to state-of-the-art monolingual Automatic Speech Recognition on all seven Indic languages in terms of character error rate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "\"Ok Google. Hi Alexa. Hey Siri.\" Such phrases have accompanied an enormous boom of smart speakers in recent years, revealing a trend towards ubiquitous and ambient computing (AI) for better daily lives. As the communication bridge between humans and machines, multilingual ASR is of central importance. India is a country with an enormous number of languages, and catering to them is difficult without large labelled training corpora. Pretraining on other source languages for initialization and then fine-tuning on the target language is the main approach in such low-resource settings, also referred to as multilingual transfer-learning pretraining (Multi-ASR) (Vu et al., 2014) (Tong et al., 2017) . Multi-ASR models are designed to learn, using an encoder, language-independent representations that build a better acoustic model from many source languages. Figure 1: The MAML algorithm learns a good parameter initializer \u03b8 by training across various meta-tasks such that it can adapt quickly to new tasks. The ability of language-independent features to improve ASR performance over monolingual training has been demonstrated in many recent works (Dalmia et al., 2018) (Tong et al., 2018) . 
However, their performance has been lacklustre compared to models trained directly on the target language, i.e., trained on a single language only.", "cite_spans": [ { "start": 670, "end": 687, "text": "(Vu et al., 2014)", "ref_id": "BIBREF18" }, { "start": 688, "end": 707, "text": "(Tong et al., 2017)", "ref_id": "BIBREF15" }, { "start": 1167, "end": 1188, "text": "(Dalmia et al., 2018)", "ref_id": "BIBREF2" }, { "start": 1189, "end": 1208, "text": "(Tong et al., 2018)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 852, "end": 860, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we build on the concept of multilingual pretraining with meta-learning. Meta-learning, or learning-to-learn, has recently received considerable interest within the machine learning community. The goal of meta-learning is to achieve fast adaptation to unseen data, which aligns with our low-resource setting across different Indic languages. We use the model-agnostic meta-learning (MAML) algorithm (Finn et al., 2017) in this work. As its name suggests, and as seen in Figure 1, MAML can be applied to any neural network architecture, since it only modifies the optimization process to follow a meta-learning training scheme. It introduces no additional modules such as adversarial training, nor does it require phoneme-level annotation as hierarchical approaches do (Hsu et al., 2019) .", "cite_spans": [ { "start": 420, "end": 439, "text": "(Finn et al., 2017)", "ref_id": "BIBREF3" }, { "start": 779, "end": 797, "text": "(Hsu et al., 2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In recent times, the Transformer architecture based on self-attention (Zhang et al., 2020) (Vaswani et al., 2017) has seen widespread adoption for modeling sequences due to its ability to capture long-distance interactions and its high training efficiency. 
Alternatively, convolutions have also been successful for speech recognition (Abdel-Hamid et al., 2014), capturing local context progressively through local receptive fields, layer by layer. However, models with convolutions or self-attention each have their own limitations. While Transformers are good at modeling long-range global context, they are less capable of extracting fine-grained local feature patterns. Convolutional networks, on the other hand, exploit local information and serve as the common computational block in vision. They learn shared position-based kernels over a local window, which maintains translation equivariance and can capture features like edges and shapes. One limitation of local connectivity is that many layers or parameters are needed to capture global information. To tackle this issue, the contemporary work ContextNet adopts the squeeze-and-excitation module (Hu et al., 2018) in each residual block to capture longer context. However, the model is still limited in capturing dynamic global context because it only applies global averaging over the entire sequence.", "cite_spans": [ { "start": 70, "end": 79, "text": "(Zhang et al., 2020)", "ref_id": null }, { "start": 86, "end": 108, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF17" }, { "start": 330, "end": 350, "text": "(Abdel-Hamid et al., 2014)", "ref_id": "BIBREF0" }, { "start": 1150, "end": 1167, "text": "(Hu et al., 2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, combining convolution and self-attention has shown significant improvement in automatic speech recognition models, as they can learn both position-wise local features and use content-based global interactions. We have used Conformers (Gulati et al., 2020) in this work. 
A Conformer block sandwiches self-attention and convolution between a pair of feed-forward modules, achieving the best of both worlds: self-attention learns global interactions, whilst the convolutions efficiently capture relative-offset-based local correlations.", "cite_spans": [ { "start": 240, "end": 261, "text": "(Gulati et al., 2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We evaluated the effectiveness of the proposed model on several Indic languages. Our experiments show that our model comes close to monolingual models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we present the architecture of our Conformer-based speech recognition model and the proposed meta-learning method for fast adaptation to the multilingual speech recognition task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Method", "sec_num": "2" }, { "text": "As shown in Figure 2, we build our model using Conformers to learn to predict graphemes from the speech input. Our model extracts learnable features from audio inputs using a feature extractor module to generate input embeddings. The encoder then processes these input embeddings using Conformer blocks. 
Mathematically, this means that for input x_i to Conformer block i, the output z_i of the block is:", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Conformer Speech Recognition Model", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "x~_i = x_i + 1/2 FFN(x_i); x'_i = x~_i + MHSA(x~_i); x''_i = x'_i + Conv(x'_i); z_i = Layernorm(x''_i + 1/2 FFN(x''_i))", "eq_num": "(1)" } ], "section": "Conformer Speech Recognition Model", "sec_num": "2.1" }, { "text": "where FFN refers to the Feedforward module, MHSA refers to the Multi-Head Self-Attention module, and Conv refers to the Convolution module, as described in (Gulati et al., 2020) . The decoder then receives the encoder outputs from the Conformer blocks and applies multi-head attention to its input to compute the output logits. To generate output probabilities, the logits are then passed through a softmax function. We also apply a mask in the attention layer to prevent any information flow from future tokens. 
We then train our model on next-step prediction conditioned on the previous characters, maximizing the log probability shown below:", "cite_spans": [ { "start": 178, "end": 199, "text": "(Gulati et al., 2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Conformer Speech Recognition Model", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "max_\u03b8 \u03a3_i log P(y_i | z, y_{<i}) [Figure 2 residue: Predictions (Grapheme), Softmax, Linear, Conformer Encoder, Transformer Decoder, Positional Encoding, Feature Extractor, Character Embeddings, Inputs, Outputs (shifted right)]", "num": null, "type_str": "table", "html": null }, "TABREF2": { "text": "Statistics of Indic Language Speech Data.", "content": "
Language# Samples
Assamese (as)36,000
Bengali (be)232,537
Hindi (hi)80,000
Marathi (ma)44,500
Nepali (ne)157,905
Sinhala (sh)185,293
Tamil (ta)62,000
Total798,235
", "num": null, "type_str": "table", "html": null }, "TABREF3": { "text": "Average Character Error Rate (% CER) comparison with single training.", "content": "
LanguagesMAMLSingle Training
-10%-shot 25%-shot 50%-shot 75%-shotall-shot-
Assamese61.2950.8741.8025.4413.44 (+1.86)11.58
Bengali57.4847.6038.1926.4710.77 (+2.04)8.73
Hindi55.4943.8135.7823.4310.19 (+2.92)7.27
Marathi56.7845.3036.6823.5610.04 (+2.91)7.13
Nepali57.3347.3335.2722.8510.32 (+3.46)6.86
Sinhala54.3645.2235.1524.3611.69 (+4.36)7.33
Tamil60.3848.7039.8927.4119.74 (+4.21)15.53
", "num": null, "type_str": "table", "html": null }, "TABREF4": { "text": "", "content": "
Mean Human Evaluation score (0-5) for Indic Languages
LanguageMAMLSingle Training
-Mean Correct Mean Fluency Mean Correct Mean Fluency
Assamese (as)4.14.04.44.5
Bengali (be)4.04.04.44.4
Hindi (hi)4.24.14.44.5
Marathi (ma)4.04.14.54.5
Nepali (ne)4.14.04.64.5
Sinhala (sh)4.14.14.54.4
Tamil (ta)3.94.14.24.2
", "num": null, "type_str": "table", "html": null }, "TABREF5": { "text": "LANGUAGE: HINDI ORIGINAL: \u092e\u0941 \u091d\u0947 \u0907\u0938\u0938\u0947 \u0915\u094b\u0908 \u092b\u0915 \u0928\u0939 \u0902 \u092a\u095c\u0924\u093e \u0915 \u0930\u0928 \u0915\u0939\u093e\u0902 \u092c\u0928\u0947 \u0939 \u092f \u0915 \u091f\u0947 \u091f \u092e\u0948 \u091a \u092e \u0930\u0928 \u0924\u094b \u0930\u0928 \u0939\u094b\u0924\u0947 \u0939 SINGLE: \u092e\u0941 \u091d\u0947 \u0907\u0938\u0947 \u0915\u094b\u0908 \u092b\u0915 \u0928\u0939 \u0902 \u092a\u095c\u0924 \u0915 \u0930\u0928 \u0915\u0939\u093e \u092c\u0928\u0947 \u0939 \u092f\u094b \u0915 \u091f\u0947 \u091f \u092e\u0948 \u091a\u094b \u092e \u0930\u0928 \u0924\u094b \u0930\u0928 \u0939\u094b\u0924\u0947 \u0939\u0948 MAML: \u092e\u0941 \u091d\u0947 \u0907\u0938\u0947 \u0915\u094b\u0908 \u092b\u0915 \u0928\u0939 \u0902 \u092a\u095c\u0924 \u0915 \u0930\u0928 \u0915\u0939 \u092c\u0928\u0947 \u0939 \u092f\u094b \u0915 \u091f\u0947 \u091f \u092e\u0948 \u091a\u094b \u092e\u0947 \u0930\u0928 \u0924 \u0930\u0928 \u0939\u094b\u0924\u0947 \u0939 ORIGINAL: \u092c\u091c\u091f \u0924\u0948 \u092f\u093e\u0930 \u0915\u0930\u0928\u0947 \u092e \u0905\u0939\u092e \u092d\u0942 \u092e\u0915\u093e \u0939\u094b\u0924\u0940 \u0939\u0948 \u0906\u0907\u090f \u091c\u093e\u0928\u0924\u0947 \u0939 \u092c\u091c\u091f \u0924\u0948 \u092f\u093e\u0930 \u0915\u0930\u0928\u0947 \u0935\u093e\u0932 \u091f \u092e \u0915\u0947 \u092c\u093e\u0930\u0947 \u092e SINGLE: \u092c\u091c\u091f \u0924\u0948 \u092f\u093e\u0930 \u0915\u0930\u0928\u0947 \u092e \u0905\u092e \u092d\u0942 \u092e\u0915\u093e \u0939\u094b\u0924\u0940 \u0939\u0948 \u0906\u090f \u091c\u093e\u0928\u0924\u0947 \u0939 \u092c\u091c\u091f \u0924\u0948 \u092f\u093e\u0930 \u0915\u0930\u0928\u0947 \u0935\u093e\u0932 \u091f \u092e \u0915\u0947 \u092c\u093e\u0930\u0947 \u092e\u0947 MAML: \u092c\u091c\u091f \u0924\u092f\u093e\u0930 \u0915\u0930\u0928\u0947 \u092e \u0905\u092e \u092d\u0942 
\u092e\u0915\u093e \u0939\u094b\u0924\u0940 \u0939 \u0906\u090f \u091c\u093e\u0928\u0924\u0947 \u092c\u091c\u091f \u0924\u0948 \u092f\u093e\u0930 \u0915\u0930\u0928\u0947 \u0935\u093e\u0932 \u091f \u092e \u0915\u0947 \u092c\u093e\u0930\u0947 \u092e", "content": "
LANGUAGE: BENGALI
ORIGINAL: \u098f \u09ad\u09be\u09b0\u09c7\u09a4 \u0985\u09a8\u09c1 \u09bf \u09a4 \u09b8\u09ac \u0995\u09be\u09c7\u09b2\u09b0 \u09ac\u09c3 \u09b9 \u09ae \u09bf\u09a8\u09ac \u09be\u099a\u09a8\u0964
SINGLE: \u098f \u09ad\u09be\u09b0\u09c7\u09a4 \u0985\u09a8\u09c1 \u09bf \u09a4 \u09b8\u09ac \u0995\u09be\u09c7\u09b2\u09b0 \u09ac\u09c3 \u09ae \u09bf\u09a8\u09ac \u099a\u09a8
MAML: \u098f \u09ad\u09be\u09b0\u09c7\u09a4 \u0985\u09a8\u09c1 \u09a4 \u09b8\u09ac \u0995\u09be\u09c7\u09b2 \u09ac\u09c3 \u09b9 \u09ae \u09bf\u09a8\u09ac \u099a\u09a8
LANGUAGE: TAMIL
ORIGINAL: \u0b87 \u0ba4\u0bbf\u0baf\u0bbe\u0bb5\u0b87 \u0bb5\u0bc8\u0bb0 \u0ba8\u0b9f \u0ba4 \u0bae\u0bbf\u0b95 \u0bc6\u0baa \u0baf \u0bc7\u0ba4 \u0ba4\u0b87 \u0bb5\u0bbe.
SINGLE: \u0b87 \u0ba4\u0bbf\u0baf\u0bbe\u0bb5\u0b87 \u0bb5\u0bb0 \u0ba8\u0b9f \u0ba4 \u0bae\u0bbf\u0b95 \u0bc6\u0baa \u0baf \u0bc7\u0ba4 \u0ba4\u0b87 \u0bb5\u0bbe.
MAML: \u0b87 \u0ba4\u0bbf\u0baf\u0bbe\u0bb5\u0b87 \u0bb5\u0bb0 \u0ba8\u0b9f\u0ba4 \u0bae\u0bbf\u0b95 \u0bc6\u0baa \u0baf \u0bc7\u0ba4\u0bb0\u0ba4\u0b87 \u0bb5\u0bbe \u0bae.
", "num": null, "type_str": "table", "html": null } } } }