{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:13:22.983750Z" }, "title": "On the Difficulty of Segmenting Words with Attention", "authors": [ { "first": "Ramon", "middle": [], "last": "Sanabria", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Edinburgh", "location": {} }, "email": "r.sanabria@ed.ac.uk" }, { "first": "Hao", "middle": [], "last": "Tang", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Edinburgh", "location": {} }, "email": "hao.tang@ed.ac.uk" }, { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Edinburgh", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Word segmentation, the problem of finding word boundaries in speech, is of interest for a range of tasks. Previous papers have suggested that for sequence-to-sequence models trained on tasks such as speech translation or speech recognition, attention can be used to locate and segment the words. We show, however, that even on monolingual data this approach is brittle. In our experiments with different input types, data sizes, and segmentation algorithms, only models trained to predict phones from words succeed in the task. Models trained to predict words from either phones or speech (i.e., the opposite direction needed to generalize to new data), yield much worse results, suggesting that attention-based segmentation is only useful in limited scenarios. 1", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Word segmentation, the problem of finding word boundaries in speech, is of interest for a range of tasks. Previous papers have suggested that for sequence-to-sequence models trained on tasks such as speech translation or speech recognition, attention can be used to locate and segment the words. We show, however, that even on monolingual data this approach is brittle. In our experiments with different input types, data sizes, and segmentation algorithms, only models trained to predict phones from words succeed in the task. Models trained to predict words from either phones or speech (i.e., the opposite direction needed to generalize to new data), yield much worse results, suggesting that attention-based segmentation is only useful in limited scenarios. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Word segmentation is the task of finding word boundaries in speech. The task has a wide range of applications, including documenting underresourced languages (Dunbar et al., 2017) and bootstrapping speech recognizers (Juang and Rabiner, 1990) . It is often the first step to a variety of unsupervised speech tasks Baevski et al., 2021) and to the NLP pipeline for languages with no whitespace between words.", "cite_spans": [ { "start": 158, "end": 179, "text": "(Dunbar et al., 2017)", "ref_id": "BIBREF8" }, { "start": 217, "end": 242, "text": "(Juang and Rabiner, 1990)", "ref_id": "BIBREF12" }, { "start": 314, "end": 335, "text": "Baevski et al., 2021)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While unsupervised speech segmentation has been studied (Kamper et al., 2017; R\u00e4s\u00e4nen et al., 2015) , in many cases a parallel data source may be available, such as transcriptions, translations, or images. 
Previous researchers have suggested that it is possible to extract word segments from the attention map created by training an end-toend sequence-to-sequence model on such parallel data (Palaskar and Metze, 2018; Boito et al., 2017 Boito et al., , 2019 Boito et al., , 2020 Godard et al., 2018) . However, 1 code available in the following link https://github.com/ramonsanabria/insights_2021", "cite_spans": [ { "start": 56, "end": 77, "text": "(Kamper et al., 2017;", "ref_id": "BIBREF13" }, { "start": 78, "end": 99, "text": "R\u00e4s\u00e4nen et al., 2015)", "ref_id": "BIBREF18" }, { "start": 392, "end": 418, "text": "(Palaskar and Metze, 2018;", "ref_id": "BIBREF16" }, { "start": 419, "end": 437, "text": "Boito et al., 2017", "ref_id": "BIBREF1" }, { "start": 438, "end": 458, "text": "Boito et al., , 2019", "ref_id": "BIBREF2" }, { "start": 459, "end": 479, "text": "Boito et al., , 2020", "ref_id": "BIBREF3" }, { "start": 480, "end": 500, "text": "Godard et al., 2018)", "ref_id": "BIBREF9" }, { "start": 512, "end": 513, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Palaskar and Metze's evaluation was non-standard 2 and the others (which we refer to collectively henceforth as BOITOEA) used models with text translations on the input side-decoding to phones or phone-like units-which means the trained models cannot be applied to segment novel (untranslated) sequences. In addition, the interpretation of attention as alignments in other areas of NLP has been questioned (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019) .", "cite_spans": [ { "start": 406, "end": 430, "text": "(Jain and Wallace, 2019;", "ref_id": "BIBREF11" }, { "start": 431, "end": 458, "text": "Wiegreffe and Pinter, 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Prompted by the prior work, we set out to study word segmentation with attention in sequence-tosequence models, aiming to better understand when and how attention can be interpreted as alignments and in what settings it can be used to segment words. Our main experiments follow BOITOEA in performing and evaluating word segmentation on the same data used to train the sequence-tosequence model. However, instead of training translation models as in BOITOEA, we train models to perform speech recognition: a well-studied task with a simpler (monotonic) alignment structure. This setting is similar to forced alignment, where both the speech and the transcription are given, and the goal is to discover the hidden alignments. Aside from this potential use case, this setting is useful for analysis because it abstracts away from the need to generalize to a novel test set. If the attention is not able to provide acceptable word boundaries in this setting, then the approach is unlikely to succeed in other, more difficult, settings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We perform experiments on both the lowresource Mboshi dataset from BOITOEA and a much larger English dataset, MuST-C (Di Gangi et al., 2019) . We study models trained in both directions with a variety of input-output types (e.g., phones, speech frames) and different postprocessing strategies to extract alignments from the atten-tion weights. 
We find that although the particular configuration used by BOITOEA works well, in most configurations the word segments extracted from the attention map are poor, even when using a larger dataset or model size. In particular, we did not get good results from any of the configurations that can be applied to a novel test set or to speech frames (rather than phone-like units). We conclude that even in the simple monotonic case studied here, attention typically does not provide clear word-level alignments.", "cite_spans": [ { "start": 110, "end": 140, "text": "MuST-C (Di Gangi et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We consider an n-sample dataset S", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Setting", "sec_num": "2" }, { "text": "= {(x 1 , y 1 , z 1 ), . . . , (x n , y n , z n )}, each of which is a triplet (x, y, z) \u2208 X \u00d7 Y \u00d7 Z.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Setting", "sec_num": "2" }, { "text": "As an example, X is the set of sequences of speech frames, Y is the set of sequences of words, and Z is the set of sequences of segments, where a segment is a triplet (s, t, w) that indicates the start time s, the end time t, and the word w.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Setting", "sec_num": "2" }, { "text": "The goal is to learn a function f :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Setting", "sec_num": "2" }, { "text": "X \u00d7 Y \u2192 Z given only S| xy = {(x 1 , y 1 ), . . . , (x n , y n )},", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Setting", "sec_num": "2" }, { "text": "i.e., discovering the alignments without observing them. Formally, we aim to find f that minimizes", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Setting", "sec_num": "2" }, { "text": "n i=1 (f (x i , y i ), z i ) using only S|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Setting", "sec_num": "2" }, { "text": "xy , for some loss function that evaluates the quality of the segmentation. Note that the evaluation of f is on the set S. We could evaluate f on a test set, but, in general, generalization is not involved in this setting. 3 The setting is general, subsuming many tasks. When X is the set of speech utterances and Y is the empty set, this is the usual unsupervised word segmentation. The set Y can be images or translations, grounding words from other modalities (Harwath et al., 2018) . In this work, we focus on Y being transcriptions, i.e., we have a forced alignment task.", "cite_spans": [ { "start": 223, "end": 224, "text": "3", "ref_id": null }, { "start": 463, "end": 485, "text": "(Harwath et al., 2018)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Problem Setting", "sec_num": "2" }, { "text": "Our pipeline, due to BOITOEA, consists of two steps: generating an attention map from a sequenceto-sequence model, followed by postprocessing to convert the map into an alignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Segmentation with Attention", "sec_num": "3" }, { "text": "Below is a review of sequence-to-sequence models. Readers should refer to, for example, Luong et al. (2015) for a detailed exposition. 
Given a speech utterance x = x 1 x 2 \u2022 \u2022 \u2022 x T or simply x 1:T and its transcription y = y 1:K , a sequence-to-sequence model learns a function of X \u2192 Y. An encoder Enc and a decoder Dec take x and y as input and produce their respective hidden vectors h 1:T = Enc(x 1:T ) and q 2:", "cite_spans": [ { "start": 88, "end": 107, "text": "Luong et al. (2015)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Sequence-to-Sequence Models", "sec_num": "3.1" }, { "text": "K = Dec(y 1:K\u22121 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence-to-Sequence Models", "sec_num": "3.1" }, { "text": "The attention map", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence-to-Sequence Models", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b1 t,k = exp W a h t q k T i=1 exp W a h i q k", "eq_num": "(1)" } ], "section": "Sequence-to-Sequence Models", "sec_num": "3.1" }, { "text": "is computed with a weight matrix W a . Examples are illustrated in Figure 1 . In the early stages of the project, we experimented with dot product attention and the results were similar. Finally, the probability of the label is computed as", "cite_spans": [], "ref_spans": [ { "start": 67, "end": 75, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Sequence-to-Sequence Models", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(y k |y 1:k\u22121 , x 1:T ) = softmax W c k q k", "eq_num": "(2)" } ], "section": "Sequence-to-Sequence Models", "sec_num": "3.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence-to-Sequence Models", "sec_num": "3.1" }, { "text": "c k = T t=1 \u03b1 t,k h t .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence-to-Sequence Models", "sec_num": "3.1" }, { "text": "The model is trained to maximize the probability", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence-to-Sequence Models", "sec_num": "3.1" }, { "text": "p(y 1:K |x 1:T ) = p(y 1 |x 1:T ) K k=2 p(y k |y 1:k\u22121 , x 1:T ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence-to-Sequence Models", "sec_num": "3.1" }, { "text": "We emphasize that, in our setting, x and y are always given, also known as teacher forcing (Lamb et al., 2016) , and we are interested in the attention map \u03b1, not how well the model maps x to y. Note that the assignments to x and y can be swapped since both are given. For example, we can align word transcriptions to phonetic transcriptions, or vice versa. However, the choice of directionality has two implications. First, for each output y k , some parts of x will have high attention weights, whereas for parts of x, there may be no y k with high weights. This asymmetry affects the choice of postprocessing method, as described below. Second, typically only one direction will be feasible if we want to apply the trained model to new (unannotated) data. 
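To make Eqs. (1) and (2) concrete, the following is a minimal NumPy sketch of the attention map and context vectors under teacher forcing; the encoder states, decoder states, and the weight matrix W_a are taken as given, and the function names are ours rather than part of any toolkit.

```python
import numpy as np

def attention_map(H, Q, W_a):
    """Eq. (1): alpha[t, k] = exp((W_a h_t) . q_k) / sum_i exp((W_a h_i) . q_k).

    H: (T, d) encoder states h_{1:T}; Q: (K, d) decoder states q_{1:K};
    W_a: (d, d) attention weight matrix.  Returns alpha with shape (T, K).
    """
    scores = (H @ W_a.T) @ Q.T                       # scores[t, k] = (W_a h_t) . q_k
    scores -= scores.max(axis=0, keepdims=True)      # for numerical stability
    alpha = np.exp(scores)
    return alpha / alpha.sum(axis=0, keepdims=True)  # normalise over input positions t

def context_vectors(alpha, H):
    """c_k = sum_t alpha[t, k] h_t, the context vector used in Eq. (2)."""
    return alpha.T @ H                               # shape (K, d)
```

Note that each column of alpha is normalised over the input positions: every output position attends somewhere, but an input position may receive little attention from any output, which is the asymmetry discussed above.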
While our experiments here focus on segmenting the training set, we would ideally like to find a method that can also work on new data.", "cite_spans": [ { "start": 91, "end": 110, "text": "(Lamb et al., 2016)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Sequence-to-Sequence Models", "sec_num": "3.1" }, { "text": "We explore three types of postprocessing to obtain the alignments from the attention map \u03b1. The first of these was introduced by Boito et al. 2017; the others are novel.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Postprocessing", "sec_num": "3.2" }, { "text": "Hard Assignment This approach aligns each output symbol y k to the input that has the highest attention weight, i.e., to x t k , where t k = arg max t \u03b1 t,k . Due to the attention map asymmetry noted above, this approach is only applied when the transcribed words are on the input side; otherwise some phones (or speech frames) may not be aligned to any word. This method hypothesizes a word boundary between two output symbols if they are aligned to different words; otherwise they are considered part of the same word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Postprocessing", "sec_num": "3.2" }, { "text": "BOITOEA mainly use this method to train translation models (French input words; Mboshi output phones), but Boito et al. (2020) also present monolingual results (Mboshi input words; Mboshi output phones), which we compare to below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Postprocessing", "sec_num": "3.2" }, { "text": "Thresholding When the attention weight is higher than a threshold \u03c4 onset , then we hypothesize a start of a word segment. When the attention weight is lower than a threshold \u03c4 offset , then we hypothesize an end of a word segment. Thresholds are set by exhaustive search using F-score on the development set as the search metric. Thresholding can generate multiple segments for a given output, which is not desirable for our setting. However, in an automatic speech translation setup, such behavior can be helpful in some language pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Postprocessing", "sec_num": "3.2" }, { "text": "Segmental Assignment Since we know that word segments are contiguous chunks of speech, this constraint should be baked into the postprocessing. In particular, we find a sequence (s 1 , t 1 ), . . . , (s K , t K ) such that the sum of attention weights that each segment covers, i.e., t k t=s k \u03b1 t,k , is maximized, while respecting the connectedness constraint, i.e., s k+1 = t k + 1. This can be achieved by finding the maximum weighted path in a graph with edges as word segments and weights of the edges as the attention weights a segment covers. See a detailed description in Appendix A.1 and (Tang et al., 2017) .", "cite_spans": [ { "start": 598, "end": 617, "text": "(Tang et al., 2017)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Postprocessing", "sec_num": "3.2" }, { "text": "Most of our experiments are conducted on the Mboshi dataset. It contains 4616 short read-speech utterances for training (3 seconds/6 words on average; 4.5h in total), with a vocabulary of 6638 words, and 514 utterances for development. Mboshi is a Bantu Language with no orthography, and the speech is transcribed at the word level using a phonetic orthography designed by linguists. We regard the basic units in the transcriptions as phones. 
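As a concrete reference for the postprocessing of Section 3.2, below is a minimal NumPy sketch of hard assignment and of segmental assignment, the latter written as a simple dynamic programme equivalent to the maximum weighted path of Appendix A.1. Here alpha holds the T input positions on the first axis and the K outputs (one per word for segmental assignment) on the second; the function names are ours. Thresholding is omitted, since it only compares each weight against the two thresholds.

```python
import numpy as np

def hard_boundaries(alpha):
    """Hard assignment: align each output position k to arg max_t alpha[t, k]
    and hypothesise a word boundary wherever the aligned input word changes."""
    aligned = alpha.argmax(axis=0)
    return [k for k in range(1, alpha.shape[1]) if aligned[k] != aligned[k - 1]]

def segmental_assignment(alpha):
    """Split the T input positions into K contiguous segments (one per output)
    so that the total attention mass covered is maximised, enforcing
    s_{k+1} = t_k + 1.  best[t, k] is the best score for covering inputs 1..t
    with the first k outputs.  Returns (start, end) pairs, 1-indexed."""
    T, K = alpha.shape
    cum = np.vstack([np.zeros((1, K)), np.cumsum(alpha, axis=0)])  # prefix sums
    best = np.full((T + 1, K + 1), -np.inf)
    back = np.zeros((T + 1, K + 1), dtype=int)
    best[0, 0] = 0.0
    for k in range(1, K + 1):
        for t in range(k, T + 1):
            # output k covers inputs s+1 .. t for some s in [k-1, t-1]
            cand = best[k - 1:t, k - 1] + (cum[t, k - 1] - cum[k - 1:t, k - 1])
            i = int(np.argmax(cand))
            best[t, k] = cand[i]
            back[t, k] = i + (k - 1)
    segments, t = [], T
    for k in range(K, 0, -1):                                      # backtrace
        s = back[t, k]
        segments.append((s + 1, t))
        t = s
    return segments[::-1]
```
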
4 We do not use the French translations of the utterances.", "cite_spans": [ { "start": 443, "end": 444, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "For experiments with speech, we use Kaldi to extract speech features, with a 25 ms window shifted by 10 ms. Each acoustic frame consists of 40dimensional log mel features and 3-dimensional pitch features. The acoustic feature vectors are used directly, we do no clustering or acoustic unit discovery.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We use a 1-layer bidirectional LSTM encoder and a 1-layer unidirectional LSTM decoder, 5 with 0.5 dropout on the encoder and a 256-dimensional hidden layer. 6 Further hyperparameter details are in Appendix A.2.", "cite_spans": [ { "start": 157, "end": 158, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "As noted above, our main questions do not require testing generalization, so except where otherwise noted (to compare to previous work), we evaluate all models on the training set. Results on the development set, which do not change the story, are reported in Appendix A.3. We report precision, recall, and F-score of the hypothesized word boundaries. When transcribing phones to words, a 4 We use the term phone rather than phoneme both here and with reference to the MUST-C data set (below), to avoid making any commitments about the underlying cognitive/linguistic form, which the term phoneme implies. For the Mboshi data especially, we are not sure if these commitments hold. However, like phonemic transcriptions, the transcriptions we work with assume a single pronunciation for each word type (effectively, dictionary lookup of pronunciations).", "cite_spans": [ { "start": 389, "end": 390, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "5 Boito et al. (2020) showed that LSTMs worked better than Transformer or CNN models with their framework. 6 The number of layers, dropout, and dimensions were tuned on the development set. The values we found best are the same ones Boito et al. (2020) reported, except their hidden layer size is 64. It is unclear why dropout helps in this setting, since without generalization, there is no concern of overfitting, but we did find a small benefit. Hard 95.5 85.7 90.4 -10.2 w \u2192 p \u2020 * Hard 92.9 92.1 92.5 word boundary must be hypothesized at the exact place to be counted as correct. In later experiments, particularly for speech, we follow the The Zero Resource Speech Challenge (Dunbar et al., 2017) evaluation and use a 30ms tolerance window, i.e., the hypothesized boundary is counted as correct if it falls withing 30 ms of the correct boundary. Similar to BOITOEA, we use force alignments extracted with a Kaldi (Povey et al., 2011 ) GMM-HMM model as ground truth word boundaries. 
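For reference, a minimal sketch of this boundary scoring with a tolerance window is given below. Boundary times are in seconds, matching is greedy and one-to-one, and the helper is our own illustration rather than the official challenge scorer.

```python
def boundary_scores(hyp, ref, tol=0.030):
    """Boundary precision, recall, and F-score.  `hyp` and `ref` are sorted
    lists of boundary times in seconds; a hypothesised boundary is correct if
    it falls within `tol` of an unused reference boundary (tol=0.0 recovers
    the exact-match scoring used for phone-level inputs)."""
    used, matched = set(), 0
    for h in hyp:
        for i, r in enumerate(ref):
            if i not in used and abs(h - r) <= tol:
                used.add(i)
                matched += 1
                break
    precision = matched / len(hyp) if hyp else 0.0
    recall = matched / len(ref) if ref else 0.0
    f_score = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f_score
```
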
We also report the amount of over-segmentation, de-", "cite_spans": [ { "start": 107, "end": 108, "text": "6", "ref_id": null }, { "start": 681, "end": 702, "text": "(Dunbar et al., 2017)", "ref_id": "BIBREF8" }, { "start": 919, "end": 938, "text": "(Povey et al., 2011", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "fined as (N h \u2212N ref )/N ref .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "If the quantity is positive, the model hypothesizes too many boundaries; if the quantity is negative, the model hypothesizes too few boundaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We begin by confirming the positive results from previous work, following BOITOEA in training a model to predict phones given words. Results for both Hard Assignment (Hard) and Segmental Assignment (Seg) are shown in Table 1 . We transpose the attention matrix to run Seg so that a segment (word) can consist of multiple phones. As expected, both methods work well, with the more principled Seg performing slightly betterthough it has a slight advantage, since (due to teacher forcing) it always generates the right number of word segments (yielding OS = 0). However, these methods can only be applied to the annotated data, which is a significant weakness.", "cite_spans": [], "ref_spans": [ { "start": 217, "end": 224, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Predicting Phones From Words", "sec_num": "4.1" }, { "text": "Next, we consider the more typical direction of decoding, predicting words given phones. We do not use hard assignment in this setting for the reasons described in Section 3.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Words as Targets", "sec_num": "4.2" }, { "text": "Results (Table 2) show that, when using phones, flipping the model direction from w \u2192 p to p \u2192 w makes the results much worse. The Thresholding method is especially bad, so we focus on Seg for Table 2 : Word boundary scores on Mboshi for models with words as targets, using phones, phone frames, or acoustic feature frames as input (p \u2192 w, f \u2192 w, and a \u2192 w, respectively), with Thresholding or Segmental assignment. The first row is copied from Table 1. Unsupervised baselines (acoustic input only) are also shown: R15 (R\u00e4s\u00e4nen et al., 2015) ; K17 (Kamper et al., 2017 the rest of the paper. The deterioration by simply flipping the model is unsatisfying, because the setting is simple enough that the model should be able to achieve near-perfect results by acting like a lexicon, mapping canonical pronunciations to words. 
This decoding direction allows us to build models that take acoustic features as input and produce words.", "cite_spans": [ { "start": 519, "end": 541, "text": "(R\u00e4s\u00e4nen et al., 2015)", "ref_id": "BIBREF18" }, { "start": 548, "end": 568, "text": "(Kamper et al., 2017", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 8, "end": 17, "text": "(Table 2)", "ref_id": null }, { "start": 193, "end": 200, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Words as Targets", "sec_num": "4.2" }, { "text": "However, once we replace phone transcriptions as input with acoustic features, the result (denoted a \u2192 w in Table 2 ) is drastically worse-even underperforming one of the unsupervised baselines (note the change on attention structure between Figure 1b and Figure 1a) .", "cite_spans": [], "ref_spans": [ { "start": 108, "end": 115, "text": "Table 2", "ref_id": null }, { "start": 242, "end": 252, "text": "Figure 1b", "ref_id": "FIGREF0" }, { "start": 257, "end": 267, "text": "Figure 1a)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Words as Targets", "sec_num": "4.2" }, { "text": "To understand what causes the dramatic drop in performance, we explore an intermediate input representation where we replace each acoustic frame with its phone label. This input format, which we refer to as phone frames, has the same length as the acoustic input sequence and reflects the duration of each phone, while abstracting away from acoustic variability. Results of this experiment (f \u2192 w in Table 2) show that most of the performance gap is recovered. This suggests that the model learns little about the phonetic variability in this experiment. This prompts us to work on a larger dataset, with the hope that the model is able to capture the phonetic variability given more data.", "cite_spans": [], "ref_spans": [ { "start": 400, "end": 408, "text": "Table 2)", "ref_id": null } ], "eq_spans": [], "section": "Words as Targets", "sec_num": "4.2" }, { "text": "We investigate if by exposing a model to more speech data it would learn to normalize phonetic variance and close the gap between f \u2192 w and a \u2192 w. For this experiment we use English data (speech, lexicon phone sequences, and word transcriptions) from the MuST-C dataset (Di Gangi Table 3 : Word boundary scores on MuST-C using Segmental assignment with a variety of models. The first three models have 1 hidden layer, while Lg has 5. Unsupervised baselines are also shown: R15 (R\u00e4s\u00e4nen et al., 2015) ; K17 (Kamper et al., 2017 ., 2019) . MuST-C provides translations to other languages; we don't use these here but we do limit our data to the 145k English utterances (257h of speech) for which translations are available in all the languages 7 . Utterances have an average length of 6.5 seconds/18 words. We use the same speech feature extraction configuration as in the Mboshi experiments. Because we are using a larger dataset, we also try a deeper (5-layer) model for the speech input. Results are shown in Table 3 . The results on p \u2192 w are lower than for Mboshi, suggesting that the longer utterances in MuST-C make the task more challenging. The speech in MuST-C is probably also harder than in Mboshi (TED talks vs. read speech); nevertheless, the performance on a \u2192 w is better on MuST-C than Mboshi, closer to the MuST-C f \u2192 w results. This suggests that adding more data does allow the model to learn more about acoustic variability. 
However, given the large size of this data set, all the results are underwhelming, and the results with speech still do not beat the unsupervised models.", "cite_spans": [ { "start": 477, "end": 499, "text": "(R\u00e4s\u00e4nen et al., 2015)", "ref_id": "BIBREF18" }, { "start": 506, "end": 526, "text": "(Kamper et al., 2017", "ref_id": "BIBREF13" }, { "start": 527, "end": 535, "text": "., 2019)", "ref_id": null } ], "ref_spans": [ { "start": 280, "end": 287, "text": "Table 3", "ref_id": null }, { "start": 1010, "end": 1017, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Scaling to Larger Data", "sec_num": "4.3" }, { "text": "Previous researchers had suggested a connection between attention weights and word alignments in both speech recognition and speech translation. However, we have experimented with several attention-based segmentation methods and demonstrated that these only succeed in the scenario where words are used as the input to the modela scenario with limited application. Performance drops considerably for models with phones as input, and is no better than unsupervised segmentation for models using speech as input, even when the amount of training data is increased by two orders and the scheduler applies a decay factor of 0.5 after two consecutive epochs where loss does not decrease. All models are trained until cross-entropy loss on training reaches 0. The implementation of each model has around 3M and 19M of learnable parameters for the 1 and 5 layers encoder model, respectively. They are trained with one Nvidia GEFORCE GTX 1080 Ti. To reduce computation on Segmental Assignment, we set the maximum duration of a word to 4 seconds (400 frames for speech or phone frames representation) for Mboshi and 10 seconds for MuST-C. We set them by analyzing their performance on the development set. Regarding the unsupervised speech baseline models, we use the unigram public implementation of Kamper et al. (2017) 9 with a minimum word segment duration of 250 ms. Because its performance is linked to the syllable segmentation method, we select the best configuration by fine-tuning R\u00e4s\u00e4nen et al. (2015)'s 10 hyperparameter values on the development set.", "cite_spans": [ { "start": 1292, "end": 1312, "text": "Kamper et al. (2017)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Finally, we consider a more traditional scenario where the model is exposed to unseen data. For this setting our model does not have access to transcriptions and therefore we do not use Teacher-Forcing. We evaluate the development set from Mboshi (used in Section 4.1), which has 1147 token types (where only 710 are observed during 9 link to Kamper et al., 2017 . In terms of absolute tokens, it has 2993, and only 516 out-of-vocabulary. We experiment with the MuST-C by using 1162 utterances from MuST-C's superset unseen during training. In this case, the set has 3273 token types. Surprisingly, for p and f \u2192 w, attention still produces meaningful segments although the model has not seen or early stopped with a development set. In that case, we observe a degradation in performance but not dramatic. The small number of unobserved absolute tokens do not have a critical effect on the segmentation performance of the model. 
Finally, the small difference in performance between the train and development sets on a \u2192 w shows an already present weak segmentation signal not correlated with word in speech models.", "cite_spans": [ { "start": 343, "end": 362, "text": "Kamper et al., 2017", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "A.3 Results without Transcriptions", "sec_num": null }, { "text": "For their ASR model, they reported mean frame error relative to a forced alignment. Positive and negative errors cancel, so a small mean error does not imply correct boundaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We do report generalization results in the Appendix, for completeness, though these do not change our main story.(a) speech (b) phones", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Find the list of files in Kaldi format in the following link https://homepages.inf.ed.ac.uk/s1945848/must_c_insights.zip. of magnitude. Although in principle the transcriptions provide an additional source of information, using this to help segment words from speech will likely require a completely different approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/lium-lst/nmtpytorch", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Given an attention map, the goal of segmental assignment is to find a segmentation that maximizes the amount of attention weights each word covers while making sure that the word segments are connected. To achieve this, we turn this into a problem of finding a maximum weighted path on a graph, where the edges of the graph are segments, the weights on the edges correspond to the amount attention weights covered, and the graph encodes the connectedness constraints.Suppose the attention map is of dimension T \u00d7 K. Recall that T is the number of input tokens (such as speech frames) and K is the number of output tokens (such as words). We first create a vertex set V = {(t, k) : for t = 0, . . . , T and k = 0, . . . , K}, a grid marking every element in the attention map. An edge is a pair of vertices (t 1 , k 1 ) and (t 2 , k 2 ) while satisfying t 1 < t 2 and k 2 = k 1 + 1. That edge represents a segment of the k 2 -th output token that aligns to t 1 to t 2 on the input side. This can be realized by defining the incoming edges in((t 2 , k 2 )) = (t 1 , k 2 \u2212 1), (t 2 , k 2 ) :We assign the sum of attention weights from t 1 to t 2 , i.e., t 2 t=t 1 +1 \u03b1 t,k 2 to the edgeOnce the graph is constructed, we find the maximum weighted path that starts at (0, 0) and ends at (T, K). An example is shown in Figure 2 . Note that due to the imposed constraints and in turn due to how the graph is constructed, segmental assignment only considers monotonic alignments.", "cite_spans": [], "ref_spans": [ { "start": 1313, "end": 1321, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "A.1 Segmental Assignment", "sec_num": null }, { "text": "We use the sequence-to-sequence implementation of nmtpytorch (Caglayan et al., 2017) 8 . The model comprises one-layer encoder and one-layer decoder with 0.5 dropout, except in the experiment of Section 4.3 where we use a five-layer encoder in the Large (Lg) model. We set a size of 256 to all hidden dimensions (i.e., source and target embedding, encoder, and decoder). 
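For convenience, these architectural choices can be summarised as the following schematic configuration; this is our own summary with assumed key names, not the actual nmtpytorch configuration file.

```python
# Schematic summary of the hyperparameters reported above (assumed names;
# the Large (Lg) model of Section 4.3 uses a 5-layer encoder instead).
MODEL_CONFIG = {
    "encoder": {"type": "bidirectional LSTM", "layers": 1, "dropout": 0.5},
    "decoder": {"type": "unidirectional LSTM", "layers": 1},
    "hidden_dim": 256,   # also used for the source and target embeddings
    "attention": "bilinear scoring with a weight matrix W_a, as in Eq. (1)",
}
```
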
We use the Adam optimizer,", "cite_spans": [ { "start": 61, "end": 86, "text": "(Caglayan et al., 2017) 8", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A.2 Hyperparameters and Implementation", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Unsupervised speech recognition", "authors": [ { "first": "Alexei", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "Wei-Ning", "middle": [], "last": "Hsu", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2105.11084" ] }, "num": null, "urls": [], "raw_text": "Alexei Baevski, Wei-Ning Hsu, Alexis Conneau, and Michael Auli. 2021. Unsupervised speech recogni- tion. arXiv preprint arXiv:2105.11084.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Unwritten languages demand attention too! Word discovery with encoder-decoder models", "authors": [ { "first": "Alexandre", "middle": [], "last": "Marcely Zanon Boito", "suffix": "" }, { "first": "", "middle": [], "last": "B\u00e9rard", "suffix": "" } ], "year": 2017, "venue": "Automatic Speech Recognition and Understanding Workshop (ASRU)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcely Zanon Boito, Alexandre B\u00e9rard, Aline Villav- icencio, and Laurent Besacier. 2017. Unwritten languages demand attention too! Word discovery with encoder-decoder models. In Automatic Speech Recognition and Understanding Workshop (ASRU).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Empirical evaluation of sequenceto-sequence models for word discovery in lowresource settings", "authors": [ { "first": "Aline", "middle": [], "last": "Marcely Zanon Boito", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Villavicencio", "suffix": "" }, { "first": "", "middle": [], "last": "Besacier", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcely Zanon Boito, Aline Villavicencio, and Laurent Besacier. 2019. Empirical evaluation of sequence- to-sequence models for word discovery in low- resource settings. In Interspeech.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Investigating alignment interpretability for low-resource nmt", "authors": [ { "first": "Aline", "middle": [], "last": "Marcely Zanon Boito", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Villavicencio", "suffix": "" }, { "first": "", "middle": [], "last": "Besacier", "suffix": "" } ], "year": 2020, "venue": "Machine Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcely Zanon Boito, Aline Villavicencio, and Lau- rent Besacier. 2020. Investigating alignment inter- pretability for low-resource nmt. In Machine Trans- lation. Springer.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "NMTPY: a flexible toolkit for advanced neural machine translation systems", "authors": [ { "first": "Ozan", "middle": [], "last": "Caglayan", "suffix": "" }, { "first": "Mercedes", "middle": [], "last": "Garc\u00eda-Mart\u00ednez", "suffix": "" }, { "first": "Adrien", "middle": [], "last": "Bardet", "suffix": "" } ], "year": 2017, "venue": "Prague Bull. Math. 
Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ozan Caglayan, Mercedes Garc\u00eda-Mart\u00ednez, Adrien Bardet, Walid Aransa, Fethi Bougares, and Lo\u00efc Bar- rault. 2017. NMTPY: a flexible toolkit for advanced neural machine translation systems. Prague Bull. Math. Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Speech2vec: A sequence-to-sequence framework for learning word embeddings from speech", "authors": [ { "first": "Yu-An", "middle": [], "last": "Chung", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu-An Chung and James Glass. 2018. Speech2vec: A sequence-to-sequence framework for learning word embeddings from speech. In Interspeech.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Unsupervised cross-modal alignment of speech and text embedding spaces", "authors": [ { "first": "Yu-An", "middle": [], "last": "Chung", "suffix": "" }, { "first": "Wei-Hung", "middle": [], "last": "Weng", "suffix": "" }, { "first": "Schrasing", "middle": [], "last": "Tong", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2018, "venue": "Advances in Neural Information Processing Systems (NIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu-An Chung, Wei-Hung Weng, Schrasing Tong, and James Glass. 2018. Unsupervised cross-modal alignment of speech and text embedding spaces. In Advances in Neural Information Processing Systems (NIPS).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "MuST-C: a multilingual speech translation corpus", "authors": [ { "first": "Di", "middle": [], "last": "Mattia", "suffix": "" }, { "first": "Roldano", "middle": [], "last": "Gangi", "suffix": "" }, { "first": "Luisa", "middle": [], "last": "Cattoni", "suffix": "" }, { "first": "Matteo", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Negri", "suffix": "" }, { "first": "", "middle": [], "last": "Turchi", "suffix": "" } ], "year": 2019, "venue": "North American Chapter of the Association for Computational Linguistics (NAACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mattia A Di Gangi, Roldano Cattoni, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2019. MuST-C: a multilingual speech translation corpus. 
In North American Chapter of the Association for Computa- tional Linguistics (NAACL).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The zero resource speech challenge 2017", "authors": [ { "first": "Ewan", "middle": [], "last": "Dunbar", "suffix": "" }, { "first": "Xuan", "middle": [ "Nga" ], "last": "Cao", "suffix": "" }, { "first": "Juan", "middle": [], "last": "Benjumea", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Karadayi", "suffix": "" }, { "first": "Mathieu", "middle": [], "last": "Bernard", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Besacier", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Anguera", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Dupoux", "suffix": "" } ], "year": 2017, "venue": "Automatic Speech Recognition and Understanding Workshop (ASRU)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ewan Dunbar, Xuan Nga Cao, Juan Benjumea, Julien Karadayi, Mathieu Bernard, Laurent Besacier, Xavier Anguera, and Emmanuel Dupoux. 2017. The zero resource speech challenge 2017. In Automatic Speech Recognition and Understanding Workshop (ASRU).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Unsupervised word segmentation from speech with attention", "authors": [ { "first": "Pierre", "middle": [], "last": "Godard", "suffix": "" }, { "first": "Marcely", "middle": [], "last": "Zanon-Boito", "suffix": "" }, { "first": "Lucas", "middle": [], "last": "Ondel", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Berard", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Yvon", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pierre Godard, Marcely Zanon-Boito, Lucas Ondel, Alexandre Berard, Fran\u00e7ois Yvon, Aline Villavicen- cio, and Laurent Besacier. 2018. Unsupervised word segmentation from speech with attention. In Inter- speech.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Jointly discovering visual objects and spoken words from raw sensory input", "authors": [ { "first": "David", "middle": [], "last": "Harwath", "suffix": "" }, { "first": "Adria", "middle": [], "last": "Recasens", "suffix": "" }, { "first": "D\u00eddac", "middle": [], "last": "Sur\u00eds", "suffix": "" }, { "first": "Galen", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Torralba", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2018, "venue": "European Conference on Computer Vision (ECCV)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Harwath, Adria Recasens, D\u00eddac Sur\u00eds, Galen Chuang, Antonio Torralba, and James Glass. 2018. Jointly discovering visual objects and spoken words from raw sensory input. In European Conference on Computer Vision (ECCV).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Attention is not explanation", "authors": [ { "first": "Sarthak", "middle": [], "last": "Jain", "suffix": "" }, { "first": "C", "middle": [], "last": "Byron", "suffix": "" }, { "first": "", "middle": [], "last": "Wallace", "suffix": "" } ], "year": 2019, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sarthak Jain and Byron C Wallace. 2019. Attention is not explanation. 
In Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The segmental K-means algorithm for estimating parameters of hidden markov models", "authors": [ { "first": "B-H", "middle": [], "last": "Juang", "suffix": "" }, { "first": "", "middle": [], "last": "Lawrence R Rabiner", "suffix": "" } ], "year": 1990, "venue": "Transactions on Acoustics, Speech, and Signal Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B-H Juang and Lawrence R Rabiner. 1990. The seg- mental K-means algorithm for estimating parame- ters of hidden markov models. Transactions on Acoustics, Speech, and Signal Processing.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A segmental framework for fullyunsupervised large-vocabulary speech recognition", "authors": [ { "first": "Herman", "middle": [], "last": "Kamper", "suffix": "" }, { "first": "Aren", "middle": [], "last": "Jansen", "suffix": "" }, { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" } ], "year": 2017, "venue": "Computer Speech & Language", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Herman Kamper, Aren Jansen, and Sharon Gold- water. 2017. A segmental framework for fully- unsupervised large-vocabulary speech recognition. Computer Speech & Language.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Professor forcing: A new algorithm for training recurrent networks", "authors": [ { "first": "Alex M", "middle": [], "last": "Lamb", "suffix": "" }, { "first": "Anirudh", "middle": [], "last": "Goyal Alias Parth", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Saizheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "C", "middle": [], "last": "Aaron", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Courville", "suffix": "" }, { "first": "", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2016, "venue": "Advances in Neural Information Processing Systems (NIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex M Lamb, Anirudh Goyal Alias Parth Goyal, Ying Zhang, Saizheng Zhang, Aaron C Courville, and Yoshua Bengio. 2016. Professor forcing: A new algorithm for training recurrent networks. In Ad- vances in Neural Information Processing Systems (NIPS).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Effective approaches to attention-based neural machine translation", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neu- ral machine translation. 
In Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Acoustic-toword recognition with sequence-to-sequence models", "authors": [ { "first": "Shruti", "middle": [], "last": "Palaskar", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Metze", "suffix": "" } ], "year": 2018, "venue": "Spoken Language Technology Workshop (SLT)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shruti Palaskar and Florian Metze. 2018. Acoustic-to- word recognition with sequence-to-sequence mod- els. In Spoken Language Technology Workshop (SLT).", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The kaldi speech recognition toolkit", "authors": [ { "first": "Daniel", "middle": [], "last": "Povey", "suffix": "" }, { "first": "Arnab", "middle": [], "last": "Ghoshal", "suffix": "" }, { "first": "Gilles", "middle": [], "last": "Boulianne", "suffix": "" }, { "first": "Lukas", "middle": [], "last": "Burget", "suffix": "" }, { "first": "Ondrej", "middle": [], "last": "Glembek", "suffix": "" }, { "first": "Nagendra", "middle": [], "last": "Goel", "suffix": "" }, { "first": "Mirko", "middle": [], "last": "Hannemann", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Motlicek", "suffix": "" }, { "first": "Yanmin", "middle": [], "last": "Qian", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Schwarz", "suffix": "" } ], "year": 2011, "venue": "Workshop on Automatic Speech Recognition and Understanding", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely. 2011. The kaldi speech recognition toolkit. In Workshop on Automatic Speech Recognition and Understanding. IEEE.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Unsupervised Word discovery from speech using automatic segmentation into syllable-like units", "authors": [ { "first": "Okko", "middle": [], "last": "R\u00e4s\u00e4nen", "suffix": "" }, { "first": "Gabriel", "middle": [], "last": "Doyle", "suffix": "" }, { "first": "Michael C", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Okko R\u00e4s\u00e4nen, Gabriel Doyle, and Michael C Frank. 2015. Unsupervised Word discovery from speech using automatic segmentation into syllable-like units. 
In Interspeech.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "End-to-end neural segmental models for speech recognition", "authors": [ { "first": "Hao", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Lingpeng", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Livescu", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "A", "middle": [], "last": "Noah", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Smith", "suffix": "" }, { "first": "", "middle": [], "last": "Renals", "suffix": "" } ], "year": 2017, "venue": "Journal of Selected Topics in Signal Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Tang, Liang Lu, Lingpeng Kong, Kevin Gimpel, Karen Livescu, Chris Dyer, Noah A Smith, and Steve Renals. 2017. End-to-end neural segmental models for speech recognition. Journal of Selected Topics in Signal Processing.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Attention is not not explanation", "authors": [ { "first": "Sarah", "middle": [], "last": "Wiegreffe", "suffix": "" }, { "first": "Yuval", "middle": [], "last": "Pinter", "suffix": "" } ], "year": 2019, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Empirical Methods in Natu- ral Language Processing (EMNLP).", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "Example attention maps for sequences using (a) speech or (b) phones as input. Note that for space reasons, the maps are shown transposed: each row shows the attention for a single output timestep." }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "An example segmental assignment with 12 input tokens and 4 output tokens. Each box represents an element in the attention map. The darker the shade, the higher the attention weight. Edges of the maximum scoring path and the corresponding vertices are shown in blue. The shade being crossed by an edge is the amount of attention weights covered by the edge. The goal is of segmental assignment is to find the maximum weighted path from the bottom-left corner to the top-right corner." }, "TABREF0": { "type_str": "table", "content": "
P  R  F  OS
w \u2192 p  Hard  92.3  83.2  87.5  -9.8
w \u2192 p  Seg   93.5  93.5  93.5   0.0
w \u2192 p \u2020    Hard  95.5  85.7  90.4  -10.2
w \u2192 p \u2020 *  Hard  92.9  92.1  92.5
", "text": "Word boundary scores on Mboshi for models predicting phones from words (w \u2192 p), as in Boito et al. (2020), using Hard or Segmental assignment. Results from Boito et al. (2020), averaging five attention maps.", "num": null, "html": null }, "TABREF3": { "type_str": "table", "content": "", "text": "Github implementation 10 link to Rasanen et al., 2015 Github implementation", "num": null, "html": null }, "TABREF4": { "type_str": "table", "content": "
DS  Model  P  R  F  OS (%)
Mb  p \u2192 w  51.9  54.2  53.0   4.4
Mb  f \u2192 w  48.5  47.3  47.9  -2.3
Mb  a \u2192 w  16.2  14.7  15.4  -8.9
Mb  a \u2192 w (Lg)  14.1  14.9  14.5   6.0
MC  p \u2192 w  43.8  44.0  43.9   0.5
MC  f \u2192 w  30.1  30.1  30.1   0.2
MC  a \u2192 w (Lg)  17.9  20.7  19.2  15.8
", "text": "Results from 1 layer, and 5 layers (Lg) models for word segmentation of the development (unseen) set of the Mboshi and MuST-C dataset.", "num": null, "html": null } } } }