doi: stringlengths (0–570)
pub_date: stringclasses (355 values)
sections: listlengths (1–245)
abstract: stringlengths (0–5.25k)
title: stringlengths (0–228)
figures: listlengths (0–130)
authors: stringlengths (0–11.9k)
references: listlengths (0–835)
formulas: listlengths (0–679)
10.1006/csla.2000.0138
2023-05-16
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b37", "b15", "b21", "b19", "b13", "b38", "b12", "b17", "b37", "b0", "b36", "b25", "b8", "b17", "b36" ], "table_ref": [], "text": "A spectrum of studies recently arose in Natural Language Processing (NLP), which incorporates intermediate supervision signals into the model by simply converting the intermediate signals into textual sequences and prepending or appending these sequences to the output sequence. It benefits tasks such as math word problems (Wei et al., 2022), commonsense reasoning (Liu et al., 2022), programs execution (Nye et al., 2022), summarisation (Narayan et al., 2021), etc. This trend further There she is. That she is. That is she. triggered the collection of a new dataset with intermediate results (Lewkowycz et al., 2022) and corresponding theoretical analysis (Wies et al., 2022). Intermediate supervision signals show consistent benefits to these various sequence generation tasks and Neural Machine Translation (NMT) is a basic and typical sequence generation task in the NLP community. However, it remains an open question whether and how intermediate signals can be defined and leveraged for NMT. Meanwhile, previous studies (Koehn and Knowles, 2017;Müller et al., 2020) found that NMT suffers from poor domain robustness, i.e. the generalisation ability to unseen domains. Such an ability not only has theoretical meaning, but also has practical value since: 1) the target domain(s) may be unknown when a system is built; 2) some language pairs may only have training data for limited domains. Since the recent study (Wei et al., 2022) Different from math problem-solving tasks, machine translation tasks do not have explicit intermediate results to serve as the intermediate signals. A recent work (Voita et al., 2021b) found that NMT acquires the three core SMT competencies, target-side language modelling, lexical translation and reordering in order during the course of the training. Inspired by this work, we borrow tech-niques in SMT to produce intermediate sequences as the intermediate signals for NMT. Specifically, we first obtain the word alignments for the parallel corpus and use it to produce the word-for-word translations (lex) and the aligned word-for-word translations (ali) to resemble the lexical translation and reordering competencies in SMT. As shown in Figure 1, the intermediate sequences resemble structurally approaching the target from the source progressively, which shares a similar spirit of how humans do translation or reasoning about translation step by step, thus named Progressive Translation.\nOur intuition is that these intermediate sequences inject an inductive bias about a domain-agnostic principle of the transformation between two languages, i.e. word-for-word mapping, then reordering, and finally refinement. Such a bias limits the learning flexibility of the model but prevents the model from building up some spurious correlations (Arjovsky et al., 2019) which harm out-ofdomain performance.\nHowever, previous works have shown that NMT is prone to overly relying on the target history (Wang and Sennrich, 2020;Voita et al., 2021a), which is partially correlated with exposure bias (Ranzato et al., 2016) (a mismatch between training and inference), especially under domainshift. Simply prepending these introduced intermediate sequences to the target would introduce spurious causal relationships from the intermediate sequences to the target. 
As a result, these intermediate sequences would potentially mislead the model about the prediction of the target, due to erroneous intermediate sequences during inference. To alleviate this spurious causal relationship, we introduce the full-permutation multi-task learning framework, where the target and intermediate sequences are fully permuted. The Minimum Bayes Risk (Goel and Byrne, 2000) decoding algorithm is used to select a consensus translation from all permutations to further improve the performance.\nWe first test our proposed framework on IWSLT'14 German→English and find that the proposed intermediate sequence can improve the domain robustness of NMT. The permutation multi-task learning is important for the intermediate sequence which is prone to erroneous during inference. To examine the generality of our methods, we conduct experiments on another two domain-robustness datasets in NMT, OPUS German→English and a low resource German→Romansh scenario. Our methods show consistent out-of-domain improvement over these two datasets.\nMoreover, previous works (Müller et al., 2020;Wang and Sennrich, 2020) found that hallucinated translations are more pronounced in out-of-domain setting. Such translations are fluent but completely unrelated to the input, and they may cause more serious problems in practical use due to their misleading nature. Therefore, we manually evaluate the proportion of hallucinations. Results show that our methods substantially reduce the amount of hallucinations in out-of-domain translation. Finally, since the corpus size in the main experiments is relatively small, we investigate the effectiveness of our methods when scaling up the corpus sizes. Results show that our methods are especially effective under the low-resource scenarios." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b19", "b20", "b27" ], "table_ref": [], "text": "Intermediate Supervision Signals. Some existing works in the broader NLP community try to incorporate intermediate sequences into the model. We take two typical examples of them to better distinguish our work from other works. Narayan et al. (2021) (Ng et al., 2020) or auxiliary tasks where the target history is less informative (Sánchez-Cartagena et al., 2021) named MTL-DA framework. The main difference between our PT framework and the MTL-DA framework is that the MTL-DA framework treats each target-side sequence as an independent task conditioned on the source, whereas PT also encourages the model to learn the transformational relations between any pair of target-side sequences, which may help the model to generalise better across domains." }, { "figure_ref": [], "heading": "Multi-task Learning:", "publication_ref": [], "table_ref": [], "text": "<123> is a control token indicating the order of three sequences. 1: lex; 2: ali; 3: tgt, then <123> is for the task where the target is in order of lex, ali and tgt. <lex>, <ali>, <tgt> is the special tokens prepended to lex, ali, tgt separately.\nSource: Das ist sie. Target: There she is." }, { "figure_ref": [], "heading": "Old training pair:", "publication_ref": [ "b10", "b3", "b5", "b40" ], "table_ref": [], "text": "New training pairs: Target: <lex> That is she. <ali> That she is. <tgt> There she is.\nTarget: <tgt> There she is. <ali> That she is. <lex> That is she.\nSource: <123> Das ist sie.\nSource: <321> Das ist sie. Statistical Machine Translation in NMT. 
The intermediate sequences of PT are produced using the word alignments and reordering components in Statistical Machine Translation (SMT). There are works on improving NMT with SMT features and techniques (He et al., 2016;Chen et al., 2016;Du and Way, 2017;Zhao et al., 2018). However, these works either modify the architecture of the neural network or require more than one model to produce the translation (e.g. a rule-based pre-ordering model and an NMT model, etc.). To the best of our knowledge, we are the first to incorporate features from SMT into NMT by converting the features into textual sequences and prepending these to the target without requiring extra models or modifying the neural architecture.
3 Approach" }, { "figure_ref": [], "heading": "Intermediate Sequences", "publication_ref": [ "b7", "b22", "b11", "b30" ], "table_ref": [], "text": "The traditional SMT decomposes the translation task into distinct components where some features could potentially be the intermediate supervision signals. More recently, Voita et al. (2021b) found that NMT acquires the three core SMT competencies, i.e. target-side language modelling, lexical translation and reordering, in order during the course of training. Inspired by this work, we produce word-for-word translations and aligned word-for-word translations as the intermediate sequences to resemble the lexical translation and reordering components separately, using the word alignments component in SMT. As shown in the Data Augmentation part of Figure 2, for each source-target parallel sequence in the training corpus, we augment its target sequence with two extra intermediate sequences, lex and ali. The two intermediate sequences are prepended to the target to form an augmented target.
lex: The source sequence is word-for-word translated based on a bilingual lexicon obtained from the parallel training corpus. Tokens that are not in the lexicon are copied into lex.
ali: lex is reordered so that the word alignments from the target to lex are monotonic. The word alignments used here are target-to-source alignments because they are equivalent to the target-to-lex alignments, since lex is word-for-word mapped from the source. The words in the target which are assigned to "NULL" are omitted during reordering.
lex, ali and the target (tgt) are each prefixed with a special token for extracting the corresponding sequence from the predicted output. The one-to-many (both source-to-target and target-to-source) word alignments are obtained with mgiza++ (Gao and Vogel, 2008;Och and Ney, 2003) 1 , an SMT word alignment tool, on the in-domain training corpus, following the default parameters provided in train-model.perl by Moses (Koehn et al., 2007) 2 . The one-to-one word alignments are built by computing the intersection between the one-to-many word alignments in both directions. The bilingual lexicon is obtained by associating each source word with the target word it is most frequently aligned with in the one-to-one word alignments.
The learning of word alignments and the transformations of lex and ali are at the word level. The BPE (Sennrich et al., 2016) word segmentation is trained on src-tgt parallel data as normal and applied to both source-target parallel sequences and intermediate sequences (the target-language vocabulary is applied to split the words in the intermediate sequences).
We expect that the introduced intermediate sequences would benefit the domain robustness of NMT, because the proposed intermediate sequences serve as a supervision signal that provides the model with an explicit path for learning the transformational relations from source to target. Such signals inject an inductive bias about one kind of domain-agnostic principle of the transformation between two languages, i.e. word-for-word mapping, then reordering, finally refinement. This injected bias limits the learning flexibility of the neural model but prevents the model from building up some spurious correlations which harm out-of-domain performance." }, { "figure_ref": [ "fig_4" ], "heading": "Spurious Causality Relationship", "publication_ref": [], "table_ref": [], "text": "To introduce these intermediate sequences as intermediate supervision signals to the model, we prepend them to the output sequence in training. However, simply prepending these produced intermediate sequences to the target would potentially introduce spurious causality relationships from pre-sequence to post-sequence. 
For example, prepending lex, ali to the target would introduce the causal relationships of lex → ali → tgt. These are spurious causality relationships because, during inference, the model is highly unlikely to get the gold-standard pre-sequences (lex or ali) as in training, especially under domain-shift where the performance is relatively poor. Therefore, the model should learn that the source (input) is the only reliable information for any target-side sequences. Note that such a spurious causality relationship in principle results from a mismatch between training and inference in the standard training-inference paradigm of NMT, which is termed exposure bias by the community.
Intuitively, if the model could predict the target-side sequences in any order, then the causality relationship between target-side sequences should be reduced. Therefore, we propose to fully permute the target-side sequences, i.e. the intermediate sequences (lex or ali) and the target sequence (tgt). Figure 2 illustrates the training data after permutation when we prepend both lex and ali to the target. The source is prefixed with a control token for each permutation, i.e. 1: lex; 2: ali; 3: tgt, then <123> is the control token for the permutation where the target is in the order of lex, ali and tgt.
As shown in Figure 3, with the permutation, we create counterfactual data which disentangles the causal relations of lex → ali → tgt and enhances the causal relations from the source to each of these three sequences. Therefore, the full-permutation multi-task training better balances the model's reliance on the source and the target history, at least on the pre-sequence(s)." }, { "figure_ref": [], "heading": "Minimum Bayes Risk Decoding", "publication_ref": [ "b6", "b6", "b31" ], "table_ref": [], "text": "From our preliminary experiments, we found that various test sets prefer different generation orders of the permutation. For example, the order lex-ali-tgt performs best on some test sets whereas tgt-ali-lex performs best on some other test sets. Therefore, we suspect that the translation quality would be further improved if we could dynamically select the best candidate translation from all permutations. Inspired by Eikema and Aziz (2021), we use Minimum Bayes Risk (MBR) decoding to select a consensus translation from all permutations.
MBR aims to find a translation that maximises expected utility (or minimises expected risk) over the posterior distribution. In practice, the posterior distribution is approximated by drawing a pool of samples S = (s_1, ..., s_n) of size n from the model:
y = \operatorname*{argmax}_{s_i \in S} \frac{1}{n} \sum_{s_j = 1}^{n} u(s_i, s_j) \qquad (1)
where u is the utility function that computes the similarity between two sequences. In our experiment, the samples S are the translations from all permutations.
Following Eikema and Aziz (2021), we use BEER (Stanojević and Sima'an, 2014) as the utility function, and the released toolkit 3 for MBR decoding." 
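As a concrete illustration of the approach above, the following minimal Python sketch builds the fully permuted training pairs with control tokens (Section 3.2, Figure 2) and then selects a consensus translation from candidate outputs as in Equation 1. It is a sketch under stated assumptions rather than the authors' released code: the unigram-F1 utility is a toy stand-in for BEER, and all function names are hypothetical.

```python
from itertools import permutations
from collections import Counter

SEQ_TAGS = {"lex": "<lex>", "ali": "<ali>", "tgt": "<tgt>"}
SEQ_IDS = {"lex": "1", "ali": "2", "tgt": "3"}

def make_permuted_pairs(src, lex, ali, tgt):
    """Build the six permuted training pairs of Figure 2.

    Each ordering of the target-side sequences gets a control token
    (e.g. <123> for lex-ali-tgt) prefixed to the source, and each
    target-side sequence is prefixed with its own special token.
    """
    seqs = {"lex": lex, "ali": ali, "tgt": tgt}
    pairs = []
    for order in permutations(["lex", "ali", "tgt"]):
        ctrl = "<" + "".join(SEQ_IDS[name] for name in order) + ">"
        new_src = f"{ctrl} {src}"
        new_tgt = " ".join(f"{SEQ_TAGS[name]} {seqs[name]}" for name in order)
        pairs.append((new_src, new_tgt))
    return pairs

def overlap_utility(hyp, ref):
    """Toy stand-in for BEER: unigram F1 between two token sequences."""
    h, r = Counter(hyp.split()), Counter(ref.split())
    match = sum((h & r).values())
    if match == 0:
        return 0.0
    prec, rec = match / sum(h.values()), match / sum(r.values())
    return 2 * prec * rec / (prec + rec)

def mbr_select(candidates, utility=overlap_utility):
    """Pick the consensus translation (Eq. 1): the candidate with the
    highest average utility against all candidates."""
    def expected_utility(c):
        return sum(utility(c, other) for other in candidates) / len(candidates)
    return max(candidates, key=expected_utility)

# Example: the <tgt> outputs decoded under the six control tokens would be
# collected into `candidates` and reduced to a single consensus translation.
candidates = ["There she is .", "There she is .", "That she is .",
              "There she is .", "That is she .", "There she is ."]
print(mbr_select(candidates))  # -> "There she is ."
```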
}, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [ "b27", "b20", "b2", "b27", "b14", "b17", "b17", "b20", "b28", "b30" ], "table_ref": [], "text": "We work on three datasets involving two language pairs, which were used in previous works on the domain robustness in NMT (Sánchez-Cartagena et al., 2021;Ng et al., 2020).\nIWSLT'14 DE→EN IWSLT'14 (Cettolo et al., 2014) German→English (DE→EN) is a commonly used small-scale dataset in NMT, which consists of 180 000 sentence pairs in the TED talk domain. Following Sánchez-Cartagena et al. (2021), the validation and in-domain (ID) testing sets are tst2013 and tst2014 separately; and out-of-domain (OOD) test sets consist of IT, law and medical domains from OPUS (Lison and Tiedemann, 2016) collected by Müller et al. (2020) 4 .\nOPUS DE→EN & Allegra DE→RM are two benchmarks of domain-robustness NMT released by Müller et al. (2020). OPUS comprises five domains: medical, IT, law, koran and subtitles. Following Ng et al. (2020), we use medical as ID for training (which consists of 600 000 parallel sentences) and validation and the rest of four domains as OOD test sets. Allegra (Scherrer and Cartoni, 2012) German→Romansh (DE→RM) has 100 000 sentence pairs in law domain. The test OOD domain is blogs, using data from Convivenza.\nWe tokenise and truecase all datasets with Moses 3 https://github.com/Roxot/mbr-nmt 4 https://github.com/ZurichNLP/ domain-robustness and use shared BPE with 10 000 (on IWSLT'14) and 32 000 (on OPUS and Allegra) for word segmentation (Sennrich et al., 2016)." }, { "figure_ref": [], "heading": "Models and Evaluation", "publication_ref": [ "b29", "b32", "b36", "b20", "b23", "b24" ], "table_ref": [], "text": "All experiments are done with the Nematus toolkit (Sennrich et al., 2017) based on the Transformer architecture (Vaswani et al., 2017) 5 . The baseline is trained on the training corpus without using intermediate sequences. We follow Wang and Sennrich (2020) to set hyperparameters (see Appendix) on three datasets. For our framework, we scale up the token batch size proportional to the length of the target for a fair comparison, e.g. if the target-side sequence is three times longer than the original target, we scale up the batch size to three times as well. 6 . The performance of the original order (lex)-(ali)-tgt is used for validation and testing. We conduct early-stopping if the validation performance underperforms the best one over 10 times of validation in both the translation quality (BLEU) and the cross entropy loss.\nWe also compare to two recently proposed methods of domain robustness in NMT. SSMBA (Ng et al., 2020) generates synthetic training data by moving randomly on a data manifold with a pair of corruption and reconstruction functions. Re-verse+Mono+Replace (Sánchez-Cartagena et al., 2021) (RMP) introduces three auxiliary tasks where the target history is less informative.\nWe report cased, detokenised BLEU (Papineni et al., 2002) with SacreBLEU (Post, 2018) 7 . Each experiment is independently run for three times, and we report the average and standard deviation to account for optimiser instability." }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b36" ], "table_ref": [ "tab_1", "tab_2" ], "text": "We test our proposal mainly on IWSLT'14 DE→EN. Table 1 summarises the results. 1 is the baseline system which is trained on parallel corpus only without any data augmentation. 
The average OOD is computed by averaging results across all OOD test sets. Single lex benefits OOD whereas ali does not. Firstly, we simply prepend the produced intermediate sequence(s) (any one of them and both of them in the order of lex-ali) to the target sequence. Results show that single lex ( 2 ) significantly improves the OOD performance by 2.2 BLEU, at the cost of 0.9 BLEU decrease in in-domain performance. However, the introduction of ali deteriorates the performance on both in-domain (ID) and OOD test sets ( 3 and 4 ). We argue that this comes from the reason that the learning of generating ali is more difficult than generating lex (ali needs an extra reordering step and also the produced ali is noisy due to the word alignment errors). As a result, ali is more erroneous than lex during inference. Therefore, generation quality of the target deteriorates due to its causal dependency on ali. ali benefits OOD with the support of permutation multi-task learning. We try to alleviate the problem by introducing the permutation multi-task learning on top of 2 ∼ 4 . Results show that the permutation successfully alleviates the deterioration of introducing ali, bringing positive results for both ID and OOD ( 3 → 6 , 4 → 7 ). With the permutation, a single ali intermediate sequence ( 6) can improve OOD over the baseline by 2 BLEU and the combination of lex and ali ( 7 ) bring further improvement on OOD over single lex ( 2 ) or single ali ( 6 ) by 0.5 and 0.7 BLEU respectively. The permutation shows a negative effect on single lex ( 2 → 5 ). Because the lex is very easy to learn, few error would occur when predicting lex. Therefore, permutation is not effective and even has negative effects as it makes the neural model hard to focus on learning the task of lex-tgt, leading to inferior performance. MBR decoding brings further improvement. For the lex, ali, tgt with permutation, there are six permutations in total. We dynamically select a consensus translation over each input data by performing MBR decoding over translation from all permu-tations. Results show MBR ( 7 → 8 ) could further improve the OOD and ID performances by 0.4 and 0.6 BLEU respectively, and outperforms baseline OOD by 3.1 BLEU at the cost of 1.6 BLEU decrease in ID. Results on other datasets and comparison with existing methods. As 8 achieves the highest OOD performance and 2 achieves relatively high OOD and ID performance with simpler techniques, we name 8 as PT f ull and 2 as PT simple and evaluate these two methods on another two domainrobustness datasets (OPUS DE→EN and Allegra DE→RM). Table 2 lists the results.\nBaselines (Transformer) in cited works (RMP and SSMBA) are trained under inappropriate hyperparameters, e.g. on IWSLT'14, the cited works uses default hyperparameters for the WMT dataset (more than 10 times larger than IWSLT'14). To enable better comparison by other researchers, we train the Transformer with the appropriate hyperparameters provided by Wang and Sennrich (2020) to build strong baselines, which outperform those in the cited works. We re-implement the other two DA methods based on our baseline for comparison.\nResults show that both PT simple and PT f ull perform most effectively on IWSLT'14 OOD, surpassing the existing methods by 0.7-2.3 BLEU. On the other two new datasets, PT simple and PT f ull show consistent OOD improvement, outperforming our baseline (Transformer) by 1.1-1.6 BLEU and 1.1-1.2 BLEU on OPUS and DE→RM dataset respectively. 
The ID performance of PT simple and PT f ull on these two datasets is less affected than on IWSLT'14, at the cost of a 0.3-0.4 BLEU decrease on OPUS and even no decrease on the Allegra DE→RM.
PT f ull significantly outperforms PT simple OOD on OPUS DE→EN, and they show negligible ID differences. For Allegra DE→RM, PT simple and PT f ull show similar OOD and ID performance." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "The BLEU scores indicate that the proposed methods can improve domain robustness. In this section, we investigate the reduction of hallucinations achieved by our methods and their performance on larger datasets." }, { "figure_ref": [ "fig_5" ], "heading": "Hallucinations", "publication_ref": [ "b39", "b16", "b18", "b17", "b1" ], "table_ref": [ "tab_4", "tab_5" ], "text": "Hallucinations are more pronounced in out-of-domain translation, and their misleading nature makes them particularly problematic. Therefore, many works have been conducted on hallucinations, involving detection of hallucinations (Zhou et al., 2021;Guerreiro et al., 2022;Dale et al., 2022), exploration of the causes of hallucinations (Raunak et al., 2021;Yan et al., 2022), and finding solutions for hallucinations (Miao et al., 2021;Müller and Sennrich, 2021).
To test our methods for reducing the hallucinations under domain shift, we manually evaluate the proportion of hallucinations on IWSLT'14 and OPUS (DE→EN) OOD test sets. We follow the definition and evaluation by Müller et al. (2020), considering a translation as a hallucination if it is (partially) fluent and its content is not related to the source (inadequate). We report the proportion of such hallucinations in each system.
The manual evaluation is performed by two students who have completed an English-medium university program. We collect ∼3000 annotations for 10 configurations. We ask annotators to evaluate translations according to fluency and adequacy. For fluency, the annotator classifies a translation as fluent, partially fluent or not fluent; for adequacy, as adequate, partially adequate or inadequate. We report the kappa coefficient (K) (Carletta, 1996) for inter-annotator and intra-annotator agreement in Table 3, and assess statistical significance with Fisher's exact test (two-tailed).
Table 4 shows the results of human evaluation. All of the DA methods significantly decrease the proportion of hallucinations by 2%-6% on IWSLT'14 and by 9%-11% on OPUS, with the increase in BLEU. Note that the two metrics do not correlate perfectly: for example, PT f ull has a higher BLEU than PT simple , but PT simple has a similar or even lower proportion of hallucinations than PT f ull . This indicates that PT f ull improves translation quality in other aspects.
5.2 Tendency by scaling up the corpus size
Since the size of the training corpus in the previous experiments ranges from 0.1M to 0.6M (million) samples, which is a low-resource setting for NMT, here we investigate the performance of our methods when scaling up the corpus size. We use the subtitles domain from OPUS as the in-domain training data (because it has around 20M sentence pairs) and the rest of the four domains as the OOD test sets. We use the first 0.2M, 2M and 20M samples in the corpus as the training data separately. We follow the same data preprocessing as for OPUS (medical). The hyperparameters for training the model are the same as those for IWSLT'14 when the corpus size is 0.2M and those for OPUS (medical) when the corpus size is 2M. For the corpus size of 20M, we increase the token batch size to 16384 instead of 4096 and keep the rest of the hyperparameters the same as for the 2M corpus size. Similarly, each experiment is independently run three times and we report the average result.
Results are shown in Figure 4. As expected, increasing the corpus size (0.2M-20M) improves both ID and OOD performance for all systems. When the corpus size is small (0.2M), PT f ull (red line) shows a considerable improvement in OOD over the baseline (blue line) by 4.3 BLEU and even slightly benefits ID, surpassing the baseline by around 0.9 BLEU. However, scaling up the corpus size (0.2M-20M) narrows the gap of OOD improvement (4.3-0.9 BLEU) between the baseline and PT f ull , and widens the ID deterioration from +0.9 to -1.6 BLEU.
In general, PT simple (green line) follows a similar tendency as PT f ull , compared to the baseline. 
However, PT simple underperforms the baseline at the corpus size of 2M. By a close inspection, we found that the training of PT simple is relatively unstable. The standard deviations of PT simple for OOD are 1.38, 2.49 and 0.24 on 0.2M, 2M and 20M corpus size respectively, whereas the standard deviations of PT f ull are 0.47, 0.27 and 0.52 respectively. This indicates that the training of PT simple is less stable than PT f ull when the corpus size is 0.2M-2M. The better stability of PT f ull may come from its permutation multi-task learning mechanism.\nPT simple always underperforms PT f ull on OOD for any corpus size. PT simple shows slightly better ID performance than PT f ull when the corpus size is large (2M-20M) but underperforms PT f ull on ID performance in low resource setting where the corpus size is 0.2M." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "Our results show that our introduced intermediate signals effectively improve the OOD performance of NMT. Intermediate sequence lex can benefit OOD by simply prepending it to the target. ali is more likely to be erroneous during inference than lex, which results in degenerated target due to the spurious causal relationship. Our proposed permutation multi-task learning successfully alleviates the problem and manifests the effectiveness of ali.\nExperiments also confirm that the MBR algorithm can further improve the performance by dynamically selecting a consensus translation from all permutations. The human evaluation shows that the proposed methods substantially reduce the number of hallucinations of the out-of-domain translation.\nExperiments on the larger corpus sizes indicate that our methods are especially promising in the low-resource scenarios.\nOur work is the first attempt to complete the puzzle of the study of intermediate signals in NMT, and two new ideas may benefit this study in other areas: 1) thinking intermediate signals from the intermediate structures between the transformation from the input to the output; 2) the permutation multi-task learning, instead of only pre/appending intermediate sequences to the output sequence. The permutation multi-task learning + MBR decoding framework is also a potential solution for any multi-pass generation tasks (e.g. speech translation), which suffer from the error propagation problem. The problem is alleviated with the permutation which disentangles causal relations between intermediate and final results. Finally, our work provides a new perspective of data augmentation in NMT, i.e. augmenting data by introducing extra sequences instead of directly modifying the source or target." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "The way we use the intermediate sequences is to concatenate new sequences and the target sequence as the new target. As a result, the length of the target increases linearly with the number of intermediate sequences introduced, which increases the cost of inference. In the meantime, Minimum Bayes Risk decoding needs to do prediction multiple times under different control tasks, which further increases the computational cost. However, there are potential solutions to compromise between the computational cost and quality, e.g. learning a student model by distilling the domainrobust knowledge from Progressive Translation." 
}, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "The datasets used in the experiments are all wellknown machine translation datasets and publicity available. Data preprocessing does not involve any external textual resources. Intermediate sequences generated in our data augmentation method are new symbolic combinations of the tokens in the target language. However, the final output of the model is the tgt sequence which is the same as the target sequence in the original training set. Therefore, we would not expect the model trained with our data augmentation method would produce more harmful biases. Finally, we declare that any biases or offensive contexts generated from the model do not reflect the views or values of the authors." }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "A.1 Discussion of Intermediate Sequences lex and ali intermediate sequences may come from certain intermediate topological spaces between the transformation from the topological spaces of the source into the target languages. We empirically confirm that such intermediate sequences might look strange but are easier for the neural model to learn and predict, since they are structurally closer to the source. We use the standard Transformer model to learn to predict lex, ali and tgt (this is just the baseline) directly on IWSLT'14 dataset and report the results on both in-domain and outof-domain test sets. Note that the gold-standard sequences of lex and ali on the out-of-domain test sets are produced on the corresponding out-of-domain training sets.\nTable 5 shows that lex is easier to be predicted than ali, and ali is easier to be predicted than tgt by the NMT model, over both in-domain and out-ofdomain test sets. " }, { "figure_ref": [], "heading": " * ", "publication_ref": [], "table_ref": [], "text": "The work described in this paper is substantially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14200620)." } ]
Previous studies show that intermediate supervision signals benefit various Natural Language Processing tasks. However, it is not clear whether there exist intermediate signals that benefit Neural Machine Translation (NMT). Borrowing techniques from Statistical Machine Translation, we propose intermediate signals which are intermediate sequences from the "source-like" structure to the "target-like" structure. Such intermediate sequences introduce an inductive bias that reflects a domain-agnostic principle of translation, which reduces spurious correlations that are harmful to out-of-domain generalisation. Furthermore, we introduce a full-permutation multi-task learning to alleviate the spurious causal relations from intermediate sequences to the target, which results from exposure bias. The Minimum Bayes Risk decoding algorithm is used to pick the best candidate translation from all permutations to further improve the performance. Experiments show that the introduced intermediate signals can effectively improve the domain robustness of NMT and reduces the amount of hallucinations on outof-domain translation. Further analysis shows that our methods are especially promising in low-resource scenarios.
Progressive Translation: Improving Domain Robustness of Neural Machine Translation with Intermediate Sequences *
[ { "figure_caption": "Figure 1 :1Figure 1: An illustration of the transformation from a source sentence to the target translation and its analogy with vision. src: source; tgt: target; lex: word-by-word translation; ali: reorders lex monotonically based on word alignments.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "in intermediate supervision signals showed a benefit of such signals on out-of-domain generalisation, we expect intermediate signals may benefit domain robustness in NMT.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: An illustration of the proposed intermediate sequences and multi-task learning framework. src: source.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "signals. More recently,Voita et al. (2021b) found that NMT acquires the three core SMT competencies, i.e. target-side language modelling, lexical translation and reordering, in order during the course of training. Inspired by this work, we produce word-for-word translations and aligned wordfor-word translations as the intermediate sequences to resemble the lexical translation and reordering components separately using the word alignments component in SMT. As shown in Figure 2 Data Augmentation part, for each source-target parallel sequence in the training corpus, we augment their target sequences with two extra intermediate sequences, lex and ali. The two intermediate sequences are prepended to the target to form an augmented target. lex: The source sequence is word-for-word translated based on a bilingual lexicon obtained from the parallel training corpus. Tokens that are not in the lexicon are copied into lex.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Causal graphs for the source and three targetside sequences. Solid arrow denotes casual dependence and dashed arrow represents the statistical correlation between two variables. Left: relations if we simply prepend lex and ali to the target. Right: relations after full-permutation multi-task learning.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Average BLEU (↑) on in-domain and out-of-domain test sets for models trained on OPUS DE→EN (subtitles) with various sizes of the training corpus.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Average BLEU (↑) and standard deviation of ablation results on in-domain and out-of-domain test sets on IWSLT'14 DE→EN. permu: permutation.", "figure_data": "ID Augmentation In-DomainITLawMedicalaverage OOD1Transformer32.1±0.3814.7±0.21 10.1±0.38 17.0±0.2513.9±0.192lex+tgt31.2±0.5016.6±0.26 11.1±0.23 20.7±0.6616.1±0.303ali+tgt25.8±3.5714.4±2.544.5±6.00 17.9±1.3212.2±3.254lex+ali+tgt25.5±7.829.4±1.143.1±2.31 11.3±6.707.9±1.7152 + permu30.1±1.5515.5±0.507.2±5.48 19.0±1.0813.9±2.1863 + permu30.6±0.3016.9±1.00 10.8±0.40 19.9±0.6015.9±0.5374 + permu29.9±0.3218.2±0.89 10.8±0.10 20.7±0.4016.6±0.3787 + MBR30.5±0.2117.7±0.7211.8±0.121.6±0.4917.0±0.35", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Average BLEU (↑) and standard deviation on in-domain and out-of-domain test sets for models trained on IWSLT'14 DE→EN, OPUS DE→EN and Allegra DE→RM. 
PT simple : method 2 in Table1; PT f ull : method 8 in Table1; RMP: Reverse+Mono+Replace many works have been conducted on hallucinations, involving detection of hallucinations(Zhou et al., 2021;Guerreiro et al., 2022;Dale et al., 2022), exploration of the causes of hallucinations (Raunak", "figure_data": "IWSLT'14OPUSDE→RMaugmentation in-domain average OOD in-domain average OOD in-domain average OODResults reported by Sánchez-Cartagena et al. (2021):Transformer30.0±0.108.3±0.85----RMP31.4±0.3011.8±0.48----Results reported by Ng et al. (2020):Transformer--57.010.251.512.2SSMBA--54.910.752.014.7Our experiments:Transformer32.1±0.3813.9±0.1958.8±0.3811.0±0.2254.4±0.2519.2±0.23SSMBA31.9±0.1515.4±0.1058.4±0.2012.1±0.2154.7±0.2020.4±0.15RMP32.2±0.0614.7±0.1759.2±0.2512.6±0.4155.1±0.2121.5±0.23PT simple31.2±0.5016.1±0.3058.5±0.6412.1±0.1854.6±0.1220.3±0.31PT f ull30.5±0.2117.0±0.3558.4±0.1212.6±0.1054.4±0.2120.4±0.51", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "f ull has", "figure_data": "inter-annotatorintra-annotatorannotation P (A) P (E) K P (A) P (E) Kfluency0.520.31 0.30 0.840.39 0.73adequacy0.680.38 0.48 0.880.38 0.81", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Inter-annotator (N=300) and intra-annotator agreement (N=150) of manual evaluation. a higher BLEU than PT simple but PT simple has a similar or even lower proportion of hallucinations than PT f ull . This indicates that PT f ull improves translation quality in other aspects.", "figure_data": "% hallucinations (BLEU)Augmentation IWSLT'14OPUSTransformer11% (13.9) 39% (11.0)RMP9% (14.7) 30% (12.6)SSMBA6% (15.4) 28% (12.1)PT simple5% (16.1) 28% (12.1)PT f ull7% (17.0) 30% (12.6)", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "", "figure_data": ": Proportion of hallucinations (↓) and BLEU (↑)on out-of-domain test sets over IWSLT'14 and OPUS(DE→EN).5.2 Tendency by scaling up the corpus sizeSince the size of the training corpus in the previousexperiments ranges from 0.1M to 0.6M (million)samples, which is a low-resource setting for NMT,here we investigate the performance of our methodswhen scaling up the corpus size. We use subtitlesdomain from OPUS as the in-domain training data(because it has around 20M sentence pairs) andthe rest four domains as the OOD test sets. Weuse the first 0.2M, 2M and 20M samples in the", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Average BLEU (↑) and standard deviation on in-domain and out-of-domain test sets on IWSLT'14 DE→EN when the target is lex, ali or tgt separately.", "figure_data": "DomainlexalitgtID94.0±0.20 61.1 ±0.12 32.1±0.38OOD72.6±0.60 47.9 ±0.48 13.9±0.19", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Configurations of NMT systems over three datasets.", "figure_data": "", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" } ]
Chaojun Wang; Yang Liu; Wai Lam
[ { "authors": "Martin Arjovsky; Léon Bottou; Ishaan Gulrajani; David Lopez-Paz", "journal": "", "ref_id": "b0", "title": "Invariant risk minimization", "year": "2019" }, { "authors": "Jean Carletta", "journal": "Computational Linguistics", "ref_id": "b1", "title": "Assessing agreement on classification tasks: The kappa statistic", "year": "1996" }, { "authors": "Mauro Cettolo; Jan Niehues; Sebastian Stüker; Luisa Bentivogli; Marcello Federico", "journal": "", "ref_id": "b2", "title": "Report on the 11th IWSLT evaluation campaign", "year": "2014" }, { "authors": "Wenhu Chen; Evgeny Matusov; Shahram Khadivi; Jan-Thorsten Peter", "journal": "", "ref_id": "b3", "title": "Guided alignment training for topic-aware neural machine translation", "year": "2016" }, { "authors": "David Dale; Elena Voita; Loïc Barrault; Marta R ", "journal": "", "ref_id": "b4", "title": "Detecting and mitigating hallucinations in machine translation: Model internal workings alone do well, sentence similarity even better", "year": "2022" }, { "authors": "Jinhua Du; Andy Way", "journal": "Prague Bulletin of Mathematical Linguistics", "ref_id": "b5", "title": "Pre-reordering for neural machine translation: Helpful or harmful?", "year": "2017" }, { "authors": "Bryan Eikema; Wilker Aziz", "journal": "", "ref_id": "b6", "title": "Sampling-based minimum bayes risk decoding for neural machine translation", "year": "2021" }, { "authors": "Qin Gao; Stephan Vogel", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Parallel implementations of word alignment tool", "year": "2008" }, { "authors": "Vaibhava Goel; William J Byrne", "journal": "Computer Speech&Language", "ref_id": "b8", "title": "Minimum bayes-risk automatic speech recognition", "year": "2000" }, { "authors": "M Nuno; Elena Guerreiro; Voita; F T André; Martins", "journal": "", "ref_id": "b9", "title": "Looking for a needle in a haystack: A comprehensive study of hallucinations in neural machine translation", "year": "2022" }, { "authors": "Wei He; Zhongjun He; Hua Wu; Haifeng Wang", "journal": "", "ref_id": "b10", "title": "Improved neural machine translation with smt features", "year": "2016" }, { "authors": "Philipp Koehn; Hieu Hoang; Alexandra Birch; Chris Callison-Burch; Marcello Federico; Nicola Bertoldi; Brooke Cowan; Wade Shen; Christine Moran; Richard Zens; Chris Dyer; Ondřej Bojar; Alexandra Constantin; Evan Herbst", "journal": "", "ref_id": "b11", "title": "Moses: Open source toolkit for statistical machine translation", "year": "2007" }, { "authors": "Philipp Koehn; Rebecca Knowles", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Six challenges for neural machine translation", "year": "2017" }, { "authors": "Aitor Lewkowycz; Anders Andreassen; David Dohan; Ethan Dyer; Henryk Michalewski; Vinay Ramasesh; Ambrose Slone; Cem Anil; Imanol Schlag; Theo Gutman-Solo; Yuhuai Wu; Behnam Neyshabur; Guy Gur-Ari; Vedant Misra", "journal": "", "ref_id": "b13", "title": "Solving quantitative reasoning problems with language models", "year": "2022" }, { "authors": "Pierre Lison; Jörg Tiedemann", "journal": "European Language Resources Association (ELRA", "ref_id": "b14", "title": "OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles", "year": "2016" }, { "authors": "Jiacheng Liu; Alisa Liu; Ximing Lu; Sean Welleck; Peter West; Le Ronan; Yejin Bras; Hannaneh Choi; Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "Generated knowledge 
prompting for commonsense reasoning", "year": "2022" }, { "authors": "Mengqi Miao; Fandong Meng; Yijin Liu; Xiao-Hua Zhou; Jie Zhou", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Prevent the language model from being overconfident in neural machine translation", "year": "2021" }, { "authors": "Mathias Müller; Annette Rios; Rico Sennrich", "journal": "", "ref_id": "b17", "title": "Domain robustness in neural machine translation", "year": "2020" }, { "authors": "Mathias Müller; Rico Sennrich", "journal": "", "ref_id": "b18", "title": "Understanding the properties of minimum Bayes risk decoding in neural machine translation", "year": "2021" }, { "authors": "Shashi Narayan; Yao Zhao; Joshua Maynez; Gonçalo Simões; Vitaly Nikolaev; Ryan Mcdonald", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b19", "title": "Planning with learned entity prompts for abstractive summarization", "year": "2021" }, { "authors": "Nathan Ng; Kyunghyun Cho; Marzyeh Ghassemi", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "SSMBA: Self-supervised manifold based data augmentation for improving out-of-domain robustness", "year": "2020" }, { "authors": "Maxwell Nye; Anders Johan Andreassen; Guy Gur-Ari; Henryk Michalewski; Jacob Austin; David Bieber; David Dohan; Aitor Lewkowycz; Maarten Bosma; David Luan; Charles Sutton; Augustus Odena", "journal": "", "ref_id": "b21", "title": "Show your work: Scratchpads for intermediate computation with language models", "year": "2022" }, { "authors": "Josef Franz; Hermann Och; Ney", "journal": "Computational Linguistics", "ref_id": "b22", "title": "A systematic comparison of various statistical alignment models", "year": "2003" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Matt Post", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "A call for clarity in reporting BLEU scores", "year": "2018" }, { "authors": "Aurelio Marc; Sumit Ranzato; Michael Chopra; Wojciech Auli; Zaremba", "journal": "", "ref_id": "b25", "title": "Sequence level training with recurrent neural networks", "year": "2016-05-02" }, { "authors": "Arul Vikas Raunak; Marcin Menezes; Junczys-Dowmunt", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "The curious case of hallucinations in neural machine translation", "year": "2021" }, { "authors": "M Víctor; Miquel Sánchez-Cartagena; Juan Esplà-Gomis; Felipe Antonio Pérez-Ortiz; Sánchez-Martínez", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Rethinking data augmentation for low-resource neural machine translation: A multitask learning approach", "year": "2021" }, { "authors": "Yves Scherrer; Bruno Cartoni", "journal": "European Language Resources Association (ELRA", "ref_id": "b28", "title": "The trilingual ALLEGRA corpus: Presentation and possible use for lexicon induction", "year": "2012" }, { "authors": "Rico Sennrich; Orhan Firat; Kyunghyun Cho; Alexandra Birch; Barry Haddow; Julian Hitschler; Marcin Junczys-Dowmunt; Samuel Läubli; Antonio Valerio Miceli; Jozef Barone; Maria Mokry; Nȃdejde", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "Nematus: a toolkit for neural machine translation", "year": "2017" }, { 
"authors": "Rico Sennrich; Barry Haddow; Alexandra Birch", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Neural machine translation of rare words with subword units", "year": "2016" }, { "authors": "Miloš Stanojević; Khalil Sima'an", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Fitting sentence level translation evaluation with many dense features", "year": "2014" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Ł Ukasz Kaiser; Illia Polosukhin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b32", "title": "Attention is all you need", "year": "2017" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b33", "title": "", "year": "" }, { "authors": "Elena Voita; Rico Sennrich; Ivan Titov", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Analyzing the source and target contributions to predictions in neural machine translation", "year": "2021" }, { "authors": "Elena Voita; Rico Sennrich; Ivan Titov", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Language modeling, lexical translation, reordering: The training process of NMT through the lens of classical SMT", "year": "2021" }, { "authors": "Chaojun Wang; Rico Sennrich", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "On exposure bias, hallucination and domain shift in neural machine translation", "year": "2020" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Brian Ichter; Fei Xia; Ed Chi; Quoc Le; Denny Zhou", "journal": "", "ref_id": "b37", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Noam Wies; Yoav Levine; Amnon Shashua", "journal": "", "ref_id": "b38", "title": "Sub-task decomposition enables learning in sequence to sequence tasks", "year": "2022" }, { "authors": "Jianhao Yan; Fandong Meng; Jie Zhou", "journal": "", "ref_id": "b39", "title": "Probing causes of hallucinations in neural machine translations", "year": "2022" }, { "authors": "Yang Zhao; Jiajun Zhang; Chengqing Zong", "journal": "European Language Resources Association (ELRA", "ref_id": "b40", "title": "Exploiting pre-ordering for neural machine translation", "year": "2018" }, { "authors": "Chunting Zhou; Graham Neubig; Jiatao Gu; Mona Diab; Francisco Guzmán; Luke Zettlemoyer; Marjan Ghazvininejad", "journal": "Association for Computational Linguistics", "ref_id": "b41", "title": "Detecting hallucinated content in conditional neural sequence generation", "year": "2021" } ]
[ { "formula_coordinates": [ 4, 349.72, 703.29, 174.69, 34.41 ], "formula_id": "formula_0", "formula_text": "y = argmax s i ∈S 1 n n s j =1 u (s i , s j ) (1)" } ]
10.33011/lilt.v16i.1417
2023-05-16
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b20", "b7", "b2", "b9", "b17", "b3", "b15", "b4", "b6", "b1", "b16", "b20", "b9", "b7" ], "table_ref": [], "text": "In this research, we aim to uncover the inherent generalization properties, i.e., inductive bias, of RNNs with respect to how frequently RNNs switch the outputs through time steps in the sequence classification task, which we call output sequence frequency (Figure 1).\nIn supervised learning settings, a model is trained with finite input-output examples {(x 0 , y 0 ), . . . , (x n , y n )} and then tested with unseen input-output pairs. The models that achieve high accuracy on test data are often said to \"generalize well\". However, the important point is that func-Figure 1: An example showing a train dataset and two candidate generalization patterns, each showing a different output sequence frequency. Here, \"aababba\" is the input sequence, and there are four binary train labels 0, 1, 1, 0 each corresponding to the prefix of length 2, 3, 5, 6.\ntion f that satisfies f (x i ) = y i cannot be uniquely determined by finite train examples. This entails that if a model generalizes well to a certain function f , then the model hardly generalizes to another function f that has different outputs for the same unseen inputs, i.e., f (x test ) = f (x test ) but is consistent with the same train examples; f (x i ) = y i . Therefore, it is crucial to understand what kind of functions a model inherently prefers to learn, which is referred to as inductive bias (White and Cotterell, 2021;Kharitonov and Chaabouni, 2020;Delétang et al., 2022;Lovering et al., 2020).\nOur target is Recurrent Neural Network (RNN): a well-known deep learning architecture. A key feature of RNN is that it processes the input incrementally and predicts the output at each time step, producing a sequence of outputs. This is different from other deep learning architectures, e.g., Feed Forward Network (FFN), Convolutional Neural Network (CNN), and Transformers (Vaswani et al., 2017). Due to the incremental processing feature of RNNs, the inputs can be of variable length; RNNs have been used for various tasks in natural language processing, such as sentence classification and text generation. It has also been used as a subcomponent of more complex architectures (Dyer et al., 2016) and to simulate human sequential processing (Steinert-Threlkeld and Szymanik, 2019). Variants of RNN architectures have been proposed so far. The most basic one is the Elman RNN (Elman, 1990). Later, more complex architectures, such as LSTM (Hochreiter and Schmidhuber, 1997) and GRU (Cho et al., 2014), have been proposed to improve modeling long-term dependencies.\nAlthough deep learning models, including RNNs, are said to be high-performance models, they are essentially black boxes, and it is not clear what inductive bias they may have. In this research, in order to analyze the inductive bias of RNNs, we propose to calculate the output sequence frequency by regarding the outputs of RNNs as discrete-time signals and applying frequency domain analysis. Specifically, we apply discrete Fourier transform (DFT) to the output signals and compute the dominant frequencies to grasp the overall output patterns.\nInductive bias is not straightforward to analyze since it can be affected by various factors such as the task, dataset, and training method; theoretical analysis has been limited to simple architecture such as FFN (Rahaman et al., 2019;Valle-Perez et al., 2019). 
Therefore, empirical studies have been conducted to clarify the inductive bias in various tasks and settings, such as language modeling (White and Cotterell, 2021), sequence classification (Lovering et al., 2020), and sequenceto-sequence (Kharitonov and Chaabouni, 2020). These works approached the problems by designing synthetic datasets and testing several generalization patterns. However, when examining the output sequence frequency, we cannot directly apply these previous methods since enumerating exponentially many output sequence patterns in longer sequences is computationally difficult. To this end, our method makes use of frequency domain analysis to directly calculate the output sequence frequencies and avoid enumerating the candidate generalization patterns.\nIn the experiment, we randomly generated 500 synthetic datasets and trained models on a few data points (Figure 1). As a result, we found:\n• LSTM and GRU have an inductive bias such that the output changes at lower frequencies compared to Elman RNN, which can easily learn higher frequency patterns, • The inductive bias of LSTM and GRU varies with the number of layers and the size of hidden layers." }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Inductive Bias Analysis", "publication_ref": [ "b20", "b7", "b7", "b14", "b7", "b7" ], "table_ref": [], "text": "Inductive bias analysis is usually performed by constructing synthetic datasets. This is because data from real tasks are complex and intertwined with various factors, making it difficult to determine what properties of the dataset affect the behavior of the model. For example, White and Cotterell (2021) targeted LSTM and Transformer and investigated whether easy-to-learn languages differ depending on their typological features in language modeling. White and Cotterell (2021) used Context Free Grammar (CFG) to construct parallel synthetic language corpora with controlled typological features. They trained models on each language and computed their perplexities to find that LSTM performs well regardless of word order while the transformer is affected. Another more synthetic example is Kharitonov and Chaabouni (2020). Kharitonov and Chaabouni (2020) targeted LSTM, CNN, and Transformer. They designed four synthetic tasks in the sequence-to-sequence framework and trained models on very small datasets (containing 1~4 data points). To examine the inductive biases of the models, they prepared a pair of candidate generalization patterns, such as COUNT and MEMORIZATION, for each task and compared the models' preference over the candidate patterns by calculating the Minimum Description Length (Rissanen, 1978). Using extremely small train datasets makes it possible to restrict the information models can obtain during training and analyze the models' inherent inductive bias in a more controlled setup.\nIn this research, we take a similar approach as (Kharitonov and Chaabouni, 2020), restricting the train data to extremely small numbers. However, we cannot directly apply the methods of (Kharitonov and Chaabouni, 2020) because the approach of comparing with candidate generalization patterns can be impractical in our case. 
Specifically, when examining the output sequence frequency, it is necessary to feed the models with longer sequences in order to analyze a wide range of frequencies from low to high; there are exponentially many patterns with the same number of output changes in longer sequences, which makes it difficult to exhaustively enumerate the candidate generalization patterns. Therefore, instead of preparing candidate generalization patterns, we directly calculate the output sequence frequency for each model by regarding the outputs of the model as discrete-time signals and applying frequency domain analysis." }, { "figure_ref": [], "heading": "Frequency Domain Analysis", "publication_ref": [], "table_ref": [], "text": "Discrete Fourier Transform (DFT) is a fundamental analysis technique in digital signal processing. Intuitively, DFT decomposes a signal into a sum of finite sine waves of different frequencies, allowing one to analyze what frequency components the original signal consists of. The DFT for a length-N discrete-time signal f [0], . . . , f [N -1] is defined by the following equation:
F[k] = \sum_{n=0}^{N-1} f[n] \exp\left( -\sqrt{-1} \, \frac{2\pi}{N} kn \right). \qquad (1)
When f [n] is a real-valued signal, it is sufficient to consider only k ∈ {1, . . . , N/2}." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Task", "publication_ref": [], "table_ref": [], "text": "To analyze the output sequence frequency, i.e., how frequently the output changes through time steps, we focus on a simple case of the binary sequence classification task: the inputs are the prefixes of a binary sequence s ∈ {a, b} * . Specifically, given a binary sequence s ∈ {a, b} * , the input space I and the output space O are defined as follows:
I = {s 0:i | i = 0, . . . , |s| -1}, (2)
O = {(1 -p, p) | p ∈ [0, 1]}, (3)
where O is a set of categorical distributions over the binary labels {0, 1}, and p denotes the probability of predicting label 1.
Without loss of generality, we can consider only the model's output probability of predicting label 1 for the sequence s 0:i , which we denote by M(s 0:i ). In this way, we can regard the model's output sequence M(s 0:0 ), . . . , M(s 0:|s|-1 ) as a discrete-time signal taking values in [0, 1]." }, { "figure_ref": [ "fig_1" ], "heading": "Train Dataset", "publication_ref": [], "table_ref": [], "text": "Figure 2 shows an intuitive illustration of our dataset construction. Given a sequence s, we randomly generate the binary labels y 0:|s|-1 , where each y i is the label assigned to the prefix s 0:i . When two successive labels y i and y i+1 differ, we say there is a label change (e.g., y 9 and y 10 in Figure 2). 2 We then make a train dataset D by taking the instances where the labels change: {(s 0:i , y i ), (s 0:i+1 , y i+1 ) | y i ≠ y i+1 }. For example, in Figure 2, the train data D contains {(aa, 0), (aab, 1), (aababba, 1), (aababbaa, 0), . . .}. Note that the original labels y 0:|s|-1 can be uniquely recovered from D simply by interpolating or extending the labels for other prefixes.
The procedure is formalized as follows:
1. Sample a sequence s ∈ {0, 1} N , where N is the length of the sequence,
2. Sample the number of label changes m ∈ {1, . . . , M }, where M is the maximum number of label changes,
3. Sample the labels y 0:|s|-1 so that all the m label changes do not overlap3 , i.e. ∀i, j. i < j ∧ y i ≠ y i+1 ∧ y j ≠ y j+1 ⇒ i + 1 < j,
4. 
Create a dataset as D = {(s_0:i, y_i), (s_0:i+1, y_i+1) | y_i ≠ y_i+1}.\nBy training models on random input sequences s, we expect the model predictions to represent the inherent generalization property of the model." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "For the analysis, we apply two evaluation metrics." }, { "figure_ref": [], "heading": "Test Cross-entropy Loss", "publication_ref": [], "table_ref": [], "text": "First, we compare the model's output sequence M(s_0:0), . . . , M(s_0:|s|−1) with the original labels y_0:|s|−1 by calculating the test cross-entropy loss L_CE. Intuitively, a near-zero L_CE indicates that the model generalizes to simply interpolate or extend the training labels, since we constructed the train datasets so that the original labels can be recovered by interpolation, as described in Section 3.2. The loss is formalized as:\nL_CE = −(1/|T|) Σ_{i∈T} ( y_i ln(M(s_0:i)) + (1 − y_i) ln(1 − M(s_0:i)) ), (4)\nwhere T = {i | (s_0:i, _) ∉ D} is the set of test data indices." }, { "figure_ref": [], "heading": "Dominant Frequency", "publication_ref": [], "table_ref": [], "text": "In case L_CE is high, we consider the model's output sequence M(s_0:0), . . . , M(s_0:|s|−1) as a discrete-time signal and apply frequency domain analysis to look into the model's behavior. More specifically, we apply DFT to the output signal and obtain the dominant frequency ω_dom. The dominant frequency ω_dom is calculated by simply replacing f[n] in Equation 1 with M(s_0:n)." }, { "figure_ref": [], "heading": "Experiment Settings", "publication_ref": [ "b6", "b1", "b4" ], "table_ref": [], "text": "Here, we describe the basic settings of our experiment. We use well-known basic RNN architectures: LSTM (Hochreiter and Schmidhuber, 1997), GRU (Cho et al., 2014), and Elman RNN (Elman, 1990). For the decoding, we use a linear decoder without bias followed by a softmax function. We try 4 combinations of hyperparameters: (num_layers, hidden_size) ∈ {(1, 200), (2, 200), (3, 200), (2, 2000)}, where num_layers denotes the number of layers, and hidden_size denotes the size of hidden layers. For optimization, we train models to minimize the average cross-entropy loss by gradient descent using Adam (Kingma and Ba, 2015) with a learning rate of 1.0 × 10^−4 for 1000 epochs. Finally, we randomly generate 500 train datasets with N = 100, M = 5 and train 10 models with different random seeds for each dataset, architecture, and parameter setting. Note that this sparse setting (a 10:90 train-test data ratio at maximum) keeps the hypothesis space large and thus enables us to analyze the inductive bias of the models as described in Section 2.1.\nTraining all the models took around 30 hours using 8 NVIDIA A100 GPUs." }, { "figure_ref": [], "heading": "Findings", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Models Do Not Learn to Interpolate", "publication_ref": [], "table_ref": [], "text": "In order to see if the models generalize simply to interpolate the given labels, we calculate the median test cross-entropy loss of the multiple models trained for each dataset (Figure 3). The dotted vertical line shows the random baseline loss of −ln(1/2) ≈ 0.7. As can be seen in Figure 3, the median test cross-entropy loss is higher than the random baseline for most datasets for all of LSTM, GRU, and Elman RNN.
This indicates that, in most cases, none of the LSTM, GRU, or Elman RNN learns to interpolate in this extremely simple setup, where only the label-changing part is given as training data. We also observe a similar trend in other hyperparameter settings; The test cross-entropy losses for other settings are shown in Appendix A." }, { "figure_ref": [ "fig_4", "fig_4", "fig_4", "fig_5" ], "heading": "Architectural Difference", "publication_ref": [], "table_ref": [], "text": "Now that the test cross-entropy loss has revealed that the patterns learned by the models contain more output changes than the original pattern in the train data, the next step is to see if there are any architecture-specific trends in the output sequence patterns. We calculate the dominant frequency for each model and take the median over the models trained on the same dataset. Figure 4 shows the distribution of median dominant frequencies for LSTM, GRU, and Elman RNN with different hyperparameters. It is clear that, in all settings, LSTM and GRU tend to learn lower-frequency patterns, while the dominant frequencies of Elman RNN tend to be higher. Comparing LSTM and GRU, LSTM has slightly lower-frequency patterns for hidden_size = 200 (Figure 4 (a,b,c)), though the difference is not as clear for hidden_size = 2000 (Figure 4 (d)).\nAn example of sequential outputs of LSTM and Elman is shown in Figure 5. The top rows show the actual model outputs for a specific sequence, and the bottom rows show the DFT of model outputs. In this example, only 4 labels 0, 1, 1, 0 are given to the prefixes of length 60, 61, 84, 85. It is clear that both LSTM and Elman learn periodic patterns but do not learn to interpolate the given train labels. Besides, it is also notable that LSTMs indeed learn lower- frequency patterns compared to Elman RNNs." }, { "figure_ref": [], "heading": "Effect of Hyperparameters", "publication_ref": [], "table_ref": [], "text": "Here, we describe how hyperparameters affect the observed inductive biases." }, { "figure_ref": [ "fig_6" ], "heading": "Number of Layers", "publication_ref": [], "table_ref": [], "text": "Figure 6 shows the median dominant frequencies of num_layers = 1, 2, 3 for LSTM, GRU, and Elman RNN. As for LSTM, it can be seen that the proportion of patterns in the lower-frequency domain tends to increase as the number of layers increases. In other words, despite the increased complexity of the models, LSTMs tend to learn simpler patterns (in the sense that the output changes less). A similar trend is observed for GRU, although not as clear as for LSTM. On the other hand, Elman RNN does not show such apparent differences." }, { "figure_ref": [ "fig_7" ], "heading": "Hidden Layer Size", "publication_ref": [], "table_ref": [], "text": "Figure 7 shows the median dominant frequencies of hidden_size = 200, 2000 for LSTM, GRU, and Elman RNN. Although the trend is not so clear, for LSTM and GRU, the counts are slightly larger for ω dom = 0.5 ∼ 1.0 when hidden_size = 2000, while the counts are larger for ω dom = 0.0 ∼ 0.5 when hidden_size = 200. This is rather the opposite trend from that of num_layers. However, the above trend does not seem to appear in Elman RNN." 
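As a concrete reference for how the quantities analyzed above can be computed, the following NumPy sketch mirrors the dataset construction of Section 3.2 and the two metrics of Section 3.3. It is an illustrative reconstruction rather than the authors' code: the helper names, the even-position trick for keeping label changes non-overlapping, and the abstraction of a trained model as a callable M (prefix to probability of label 1) are our own assumptions.

```python
import numpy as np

def make_prefix_labels(seq_len, max_changes, rng):
    """Random prefix labels with m non-overlapping label changes (Section 3.2).
    Returns (labels, train_indices); the train indices are the label-changing prefixes."""
    m = rng.integers(1, max_changes + 1)
    # Drawing change positions from even indices is one simple way (our choice, not the
    # paper's) to guarantee that two changes never overlap, i.e. i + 1 < j for i < j.
    change_pos = np.sort(rng.choice(np.arange(0, seq_len - 1, 2), size=m, replace=False))
    labels, label, prev = np.zeros(seq_len, dtype=int), int(rng.integers(0, 2)), 0
    for c in change_pos:
        labels[prev:c + 1] = label
        label, prev = 1 - label, c + 1
    labels[prev:] = label
    train_idx = sorted({i for c in change_pos for i in (c, c + 1)})
    return labels, train_idx

def output_signal(M, s):
    """M(s_0:0), ..., M(s_0:|s|-1) as a discrete-time signal in [0, 1];
    M is an abstract callable mapping a prefix to the probability of label 1."""
    return np.array([M(s[:i + 1]) for i in range(len(s))])

def dominant_frequency(f):
    """omega_dom = 2*pi*k_max/N with k_max = argmax_{1<=k<=N/2} |F[k]|, cf. Eq. (1)."""
    N = len(f)
    F = np.fft.rfft(f)            # bins k = 0, ..., N//2, same sign convention as Eq. (1)
    k = np.arange(1, N // 2 + 1)  # skip the constant (k = 0) component
    return 2 * np.pi * k[np.argmax(np.abs(F[k]))] / N

def test_cross_entropy(f, labels, train_idx):
    """Eq. (4): cross-entropy between the output signal and the labels on test prefixes."""
    test_idx = [i for i in range(len(labels)) if i not in set(train_idx)]
    p = np.clip(f[test_idx], 1e-12, 1 - 1e-12)
    y = labels[test_idx]
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))
```

Under this setup, a model that simply extends the training labels would reach near-zero test cross-entropy, which is exactly what Figure 3 shows does not happen in most cases.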
}, { "figure_ref": [], "heading": "Discussion and Limitation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Expressive Capacity and Output", "publication_ref": [ "b10", "b19", "b10", "b19", "b10" ], "table_ref": [], "text": "Sequence Frequency\nOur results do not align with the expressive capacity of RNNs reported in previous work (Merrill et al., 2020;Weiss et al., 2018). Merrill et al. (2020); Weiss et al. (2018) formally showed that LSTM is strictly more expressive than GRU and Elman RNN. On the other hand, in our experiments, LSTM and GRU show a bias toward lower frequencies, while Elman RNN, which has the same expressive capacity as GRU, according to (Merrill et al., 2020), shows an opposite bias toward higher frequencies. Note that the expressive capacity and the inductive bias of a model are basically different concepts. This is because expressive capacity is the theoretical upper bound on the functions a model can represent with all possible combinations of its parameters, regardless of the training procedure. In contrast, inductive bias is the preference of functions that a model learns from finite train data, possibly depending on training settings. However, they are not entirely unrelated because a function that is impossible to learn in terms of expressive capacity will never be learned, which can emerge as inductive bias. We conjecture that the difference between the expressive capacity and the observed inductive bias is due to the simplicity of our experiment setting. This difference is not a negative result: It indicates that inductive bias in such a simple setting is effective in observing detailed differences that cannot be captured by expressive capacity." }, { "figure_ref": [ "fig_4", "fig_5" ], "heading": "Randomness of Outputs", "publication_ref": [ "b16" ], "table_ref": [], "text": "Previous study showed that FFNs hardly learn random functions since they are inherently biased toward simple structured functions (Valle-Perez et al., 2019). We can find a similar trend for RNNs in our experimental results. In other words, by regarding the outputs of RNNs as discrete-time signals, we can confirm that the signals are not random, i.e., white noises. If we assume that the output signals of the RNNs are random, the dominant frequency should be uniformly distributed from low to high-frequency regions. Therefore, the biased distribution in Figure 4 indicates that the outputs of the RNNs are not random signals. This is also clear from the example outputs in Figure 5, where the models show periodic patterns." }, { "figure_ref": [], "heading": "Practical Implication", "publication_ref": [ "b0", "b5" ], "table_ref": [], "text": "For LSTM and GRU, we observed different inductive biases between increasing the number of layers and hidden layer size. Previous study that investigated whether RNNs can learn parenthesis also reported that LSTM and GRU behaved differently when the number of layers and the hidden layer size were increased (Bernardy, 2018). Although the tasks are different, our findings align with the previous work. 
From a practical point of view, these findings suggest that it may be more effective to increase the number of layers than to increase the hidden layer size depending on the target task.\nBesides, the fact that LSTM and GRU, which are known to be \"more practical\" than Elman RNN, tend to learn lower frequency patterns may support the idea that output sequence frequency aligns with \"practical usefulness.\" Furthermore, a concept similar to output sequence frequency has been proposed as a complexity measure in sequence classification: sensitivity (Hahn et al., 2021). While output sequence frequency focuses on the change in output over string length, sensitivity focuses on the change in output when a string is partially replaced, keeping its length. It would be an interesting future direction to examine the validity of inductive biases in output sequence frequency as an indicator of complexity and practical usefulness." }, { "figure_ref": [], "heading": "Limitation", "publication_ref": [], "table_ref": [], "text": "There are some dissimilarities between our experimental setup and practical sequence classification tasks:\n• The task is limited to the binary classification of binary sequences, • Models are trained only on prefixes of a sequence, • The number of train data is extremely small. Therefore, in order to accurately estimate the impact of our findings on the actual task, it is necessary to expand from sequence to language in a multi-label setting with a larger vocabulary.\nDue to the computational complexity, we only tried 4 combinations of hyperparameters. However, it is still necessary to exhaustively try combinations of hyperparameters for a more detailed analysis." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This study focuses on inductive bias regarding the output sequence frequency of RNNs, i.e., how often RNNs tend to change the outputs through time steps. To this end, we constructed synthetic datasets and applied frequency domain analysis by regarding the model outputs as discrete-time signals.\nExperimental results showed that LSTM and GRU have inductive biases towards having low output sequence frequency, whereas Elman RNN tends to learn higher-frequency patterns. Such differences in inductive bias could not be captured by the expressive capacity of each architecture alone. This indicates that inductive bias analysis on synthetic datasets is an effective method for studying model behaviors.\nBy testing different hyperparameters, we found that the inductive biases of LSTM and GRU vary with the number of layers and the hidden layer size in different ways. This confirms that when increasing the total number of parameters in a model, it would be effective not only to increase the hidden layer size but also to try various hyperparameters, such as the number of layers.\nAlthough the experimental setting was limited to simple cases, we believe this research shed some light on the inherent generalization properties of RNNs and built the basis for architecture selection and design." }, { "figure_ref": [ "fig_9", "fig_9" ], "heading": "A Test Cross-entropy Loss", "publication_ref": [], "table_ref": [], "text": "Figure 8 shows the distributions of median test cross-entropies in all settings we tried. As we can see in Figure 8, the median test cross-entropy loss is higher than the random baseline for most datasets in all cases." 
}, { "figure_ref": [ "fig_10", "fig_11", "fig_13", "fig_12", "fig_9", "fig_10", "fig_11", "fig_13", "fig_12" ], "heading": "B Raw Cross-entropy Loss", "publication_ref": [], "table_ref": [], "text": "In Figure 9, Figure 10, Figure 12, and Figure 11, we show the scatter plot of the train/test crossentropies for LSTM, GRU, and Ellman RNN for all the settings. The horizontal dashed line separates the datasets by the number of label changes. Besides, the datasets are also sorted by the median test cross-entropy. The dotted vertical line shows the random baseline loss of -ln( 12 ) ≈ 0.7. In Figure 8, the number of datasets having near-zero test cross-entropy is relatively higher for LSTM and GRU. For example, from Figure 9 (a), Figure 10 (a), Figure 12 (a), and Figure 11 (a), we can see that the datasets with the near-zero test cross-entropy loss mostly have only 1 label change. This indicates that LSTM and GRU indeed sometimes learn to naively extend the given labels, but mostly in the extreme case where the datasets have only 1 label change. However, for Elman RNN, we cannot find such a trend." }, { "figure_ref": [ "fig_14", "fig_15", "fig_16", "fig_6", "fig_9", "fig_14", "fig_15", "fig_16", "fig_6" ], "heading": "C Raw Dominant Frequency", "publication_ref": [], "table_ref": [], "text": "In Figure 13, Figure 14, Figure 15, and Figure 16, we show the scatter plot of the dominant frequencies for LSTM, GRU, and Ellman RNN for all the settings. The horizontal dashed line separates the datasets by the number of label changes. Besides, the datasets are also sorted by the median dominant frequency.\nIn Figure 8 (b,c), the number of datasets having the lowest frequency pattern is relatively higher for LSTM and GRU. We can see that these lowest frequency patterns are mostly restricted to the datasets having only 1 label change (Figure 13 (a), Figure 14 (a), Figure 15 (a), and Figure 16 (a)). This should be consistent with the findings in Appendix B. When a model simply learns to extend the labels, its dominant frequency is expected to be near its lowest when there is only one label change in the training dataset since the output sequence contains only one output change in such a case. " } ]
A unique feature of Recurrent Neural Networks (RNNs) is that they process input sequences incrementally. In this research, we aim to uncover the inherent generalization properties, i.e., inductive bias, of RNNs with respect to how frequently RNNs switch the outputs through time steps in the sequence classification task, which we call output sequence frequency. Previous work analyzed inductive bias by training models on a few synthetic data points and comparing the model's generalization with candidate generalization patterns. However, when examining the output sequence frequency, previous methods cannot be directly applied since enumerating candidate patterns is computationally difficult for longer sequences. To this end, we propose to directly calculate the output sequence frequency for each model by regarding the outputs of the model as discrete-time signals and applying frequency domain analysis. Experimental results showed that Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) have an inductive bias towards lower-frequency patterns, while Elman RNN tends to learn patterns in which the output changes at high frequencies. We also found that the inductive bias of LSTM and GRU varies with the number of layers and the size of hidden layers.
Empirical Analysis of the Inductive Bias of Recurrent Neural Networks by Discrete Fourier Transform of Output Sequences
[ { "figure_caption": "1 Here, k = 1 corresponds to the lowest frequency component and k = N 2 to the highest. One useful measure for analyzing the property of the signal f [n] is the dominant frequency (Ng and Goldberger, 2007). In short, dominant frequency is the frequency component of maximum amplitude and is expected to represent the general periodic pattern of the original signal f [n]. The dominant frequency ω dom is defined by ω dom = 2π N k max , where k max = arg max{|F [k]|}.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Illustration of train dataset construction. The train dataset contains only the instances corresponding to the label changes.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The median test cross-entropy loss counts for LSTM, GRU, and Elman RNN with (num_layers, hidden_size) = (2, 200). The dotted vertical line shows the random baseline loss of -ln( 1 2 ).", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "(a) The results for num_layers = 1, hidden_size = 200. (b) The results for our base case num_layers = 2, hidden_size = 200. (c) The results for num_layers = 3, hidden_size = 200. (d) The results for num_layers = 2, hidden_size = 2000.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: The median dominant frequency counts for LSTM, GRU, and Elman RNN with different hyperparameters.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: An example of LSTM and Elman RNN with (num_layers, hidden_size) = (2, 200). The top rows show the actual model outputs for a specific sequence, and the bottom rows show the DFT of model outputs. In this example, 4 labels 0, 1, 1, 0 are assigned to the prefixes of length 60, 61, 84, 85. The Red and blue vertical lines correspond to the labels 0, 1, respectively. The results of 10 models with different random seeds are shown.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: The median dominant frequencies of num_layers = 1, 2, 3 for LSTM, GRU, and Elman RNN with hidden_size = 200.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: The median dominant frequencies of hidden_size = 200, 2000 for LSTM, GRU, and Elman RNN with num_layers = 2.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "(a) The results for num_layers = 1, hidden_size = 200. In this setting, the mean of the median train cross-entropy loss was at most 0.04, and the standard deviation was at most 0.09. (b) The results for our base case num_layers = 2, hidden_size = 200. In this setting, the mean of the median train cross-entropy loss was at most 0.01, and the standard deviation was at most 0.05. (c) The results for num_layers = 3, hidden_size = 200. In this setting, the mean of the median train cross-entropy loss was at most 0.004, and the standard deviation was at most 0.03. 
(d) The results for num_layers = 2, hidden_size = 2000.In this setting, the mean of the median train cross-entropy loss was at most 0.02, and the standard deviation was at most 0.09.", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: The median test cross-entropy loss counts for LSTM, GRU, and Elman RNN with different hyperparameters. The dotted vertical line shows the random baseline loss of -ln( 1 2 ) ≈ 0.7.", "figure_data": "", "figure_id": "fig_9", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: The Scatter plot of the train/test cross-entropies for LSTM, GRU, and Elman RNN with (num_layers, hidden_size) = (1, 200). The horizontal dashed line separates the datasets by the number of label changes. Besides, the datasets are also sorted by the median test cross-entropy. The dotted vertical line shows the random baseline loss of -ln(12 ) ≈ 0.7.", "figure_data": "", "figure_id": "fig_10", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: The Scatter plot of the train/test cross-entropies for LSTM, GRU, and Elman RNN with (num_layers, hidden_size) = (2, 200). The horizontal dashed line separates the datasets by the number of label changes. Besides, the datasets are also sorted by the median test cross-entropy. The dotted vertical line shows the random baseline loss of -ln(12 ) ≈ 0.7.", "figure_data": "", "figure_id": "fig_11", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: The Scatter plot of the train/test cross-entropies for LSTM, GRU, and Elman RNN with (num_layers, hidden_size) = (3, 200). The horizontal dashed line separates the datasets by the number of label changes. Besides, the datasets are also sorted by the median test cross-entropy. The dotted vertical line shows the random baseline loss of -ln(12 ) ≈ 0.7.", "figure_data": "", "figure_id": "fig_12", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: The Scatter plot of the train/test cross-entropies for LSTM, GRU, and Elman RNN with (num_layers, hidden_size) = (2, 2000). The horizontal dashed line separates the datasets by the number of label changes. Besides, the datasets are also sorted by the median test cross-entropy. The dotted vertical line shows the random baseline loss of -ln(12 ) ≈ 0.7.", "figure_data": "", "figure_id": "fig_13", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: The Scatter plot of the dominant frequencies for LSTM, GRU, and Elman RNN with (num_layers, hidden_size) = (1, 200). The horizontal dashed line separates the datasets by the number of label changes. Besides, the datasets are also sorted by the median dominant frequencies.", "figure_data": "", "figure_id": "fig_14", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure 14: The Scatter plot of the dominant frequencies for LSTM, GRU, and Elman RNN with (num_layers, hidden_size) = (2, 200). The horizontal dashed line separates the datasets by the number of label changes. Besides, the datasets are also sorted by the median dominant frequencies.", "figure_data": "", "figure_id": "fig_15", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 :15Figure 15: The Scatter plot of the dominant frequencies for LSTM, GRU, and Elman RNN with (num_layers, hidden_size) = (3, 200). 
The horizontal dashed line separates the datasets by the number of label changes. Besides, the datasets are also sorted by the median dominant frequencies.", "figure_data": "", "figure_id": "fig_16", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 16: The Scatter plot of the dominant frequencies for LSTM, GRU, and Elman RNN with (num_layers, hidden_size) = (2, 2000). The horizontal dashed line separates the datasets by the number of label changes. Besides, the datasets are also sorted by the median dominant frequencies.", "figure_data": "", "figure_id": "fig_17", "figure_label": "", "figure_type": "figure" } ]
Taiga Ishii; Ryo Ueda; Yusuke Miyao
[ { "authors": "Jean-Philippe Bernardy", "journal": "", "ref_id": "b0", "title": "Can Recurrent Neural Networks Learn Nested Recursion? Linguistic Issues in Language Technology", "year": "2018" }, { "authors": "Kyunghyun Cho; Bart Van Merriënboer; Caglar Gulcehre; Dzmitry Bahdanau; Fethi Bougares; Holger Schwenk; Yoshua Bengio", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation", "year": "2014" }, { "authors": "Grégoire Delétang; Anian Ruoss; Jordi Grau-Moya; Tim Genewein; Kevin Li; Elliot Wenliang; Marcus Catt; Shane Hutter; Pedro A Legg; Ortega", "journal": "", "ref_id": "b2", "title": "Neural networks and the chomsky hierarchy", "year": "2022" }, { "authors": "Chris Dyer; Adhiguna Kuncoro; Miguel Ballesteros; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Recurrent Neural Network Grammars", "year": "2016" }, { "authors": "Jeffrey L Elman", "journal": "Cognitive Science", "ref_id": "b4", "title": "Finding Structure in Time", "year": "1990" }, { "authors": "Michael Hahn; Dan Jurafsky; Richard Futrell", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b5", "title": "Sensitivity as a Complexity Measure for Sequence Classification Tasks", "year": "2021" }, { "authors": "Sepp Hochreiter; Jürgen Schmidhuber", "journal": "Neural Computation", "ref_id": "b6", "title": "Long Short-Term Memory", "year": "1997" }, { "authors": "Eugene Kharitonov; Rahma Chaabouni", "journal": "", "ref_id": "b7", "title": "What they do when in doubt: A study of inductive biases in seq2seq learners", "year": "2020" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b8", "title": "Adam: A Method for Stochastic Optimization", "year": "2015-05-07" }, { "authors": "Charles Lovering; Rohan Jha; Tal Linzen; Ellie Pavlick", "journal": "", "ref_id": "b9", "title": "Predicting Inductive Biases of Pre-Trained Models", "year": "2020" }, { "authors": "William Merrill; Gail Weiss; Yoav Goldberg; Roy Schwartz; Noah A Smith; Eran Yahav", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "A Formal Hierarchy of RNN Architectures", "year": "2020" }, { "authors": "Jason Ng; Jeffrey J Goldberger", "journal": "Journal of Cardiovascular Electrophysiology", "ref_id": "b11", "title": "Understanding and Interpreting Dominant Frequency Analysis of AF Electrograms", "year": "2007" }, { "authors": "Aristide Nasim Rahaman; Devansh Baratin; Felix Arpit; Min Draxler; Fred Lin; Yoshua Hamprecht; Aaron Bengio; Courville", "journal": "", "ref_id": "b12", "title": "On the Spectral Bias of Neural Networks", "year": "2019" }, { "authors": " Pmlr", "journal": "", "ref_id": "b13", "title": "", "year": "" }, { "authors": "J Rissanen", "journal": "Automatica", "ref_id": "b14", "title": "Modeling by shortest data description", "year": "1978" }, { "authors": "Shane Steinert; -Threlkeld ; Jakub Szymanik", "journal": "Semantics and Pragmatics", "ref_id": "b15", "title": "Learnability and semantic universals", "year": "2019" }, { "authors": "Guillermo Valle-Perez; Chico Q Camargo; Ard A Louis", "journal": "", "ref_id": "b16", "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", "year": "2019" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": 
"Advances in Neural Information Processing Systems", "ref_id": "b17", "title": "Attention is All you Need", "year": "2017" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b18", "title": "", "year": "" }, { "authors": "Gail Weiss; Yoav Goldberg; Eran Yahav", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "On the Practical Computational Power of Finite Precision RNNs for Language Recognition", "year": "2018" }, { "authors": "Jennifer C White; Ryan Cotterell", "journal": "", "ref_id": "b20", "title": "Examining the Inductive Bias of Neural Language Models with Artificial Languages", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 70.35, 350.7, 218.78, 69.38 ], "formula_id": "formula_0", "formula_text": "F [k] = N -1 n=0 f [n] exp - √ -1 2π N kn . (1) When f [n] is a real-value signal, it is sufficient to consider only k ∈ {1, . . . , N 2 }." }, { "formula_coordinates": [ 3, 110.8, 707.13, 178.33, 26.35 ], "formula_id": "formula_1", "formula_text": "I = {s 0:i | i = 0, . . . |s| -1}, (2) O = {(1 -p, p) | p ∈ [0, 1]},(3)" }, { "formula_coordinates": [ 4, 82.71, 454.43, 206.42, 46.34 ], "formula_id": "formula_2", "formula_text": "L CE = - 1 |T | i∈T (y i ln(M(s 0:i )) +(1-y i ) ln(1 -M(s 0:i ))),(4)" } ]
2023-05-16
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0", "fig_0" ], "heading": "", "publication_ref": [ "b2", "b5", "b2", "b6", "b12", "b13", "b14", "b8", "b9" ], "table_ref": [], "text": "from itself to guide its own model training, resulting in improved model accuracy.\nTo support SKD, existing works have explored various methods for extracting useful knowledge from a model itself, as shown in Fig. 1. In general, a neural network model can be divided into several blocks. Each block may contain one or multiple layers in the model. Based on this model architecture, a popular SKD approach named Multi-exit SKD [3]- [6] is to re-train the early layers (also known as shallow layers) of the model under the guidance of counterpart's outputs or the model's own final output, as shown in Fig. 1 (a). For example, Be Your Own Teacher (BYOT) [3] adds an Auxiliary Classifier (AC) to each block of the model. It uses the knowledge extracted from the final output of the model to train the ACs and update corresponding blocks. Multi-exit SKD helps to ensure that all blocks in the model fully learn the features of the training dataset. However, it introduces a high computational overhead for training the additional ACs. For instance, it takes over 5 hours to train BYOT on the CIFAR100 dataset using the ResNet-101 model, compared with about 3.48 hours for training the original model.\nExisting SKD methods in the literature with less computational cost use regularization methods that leverage information from history models (i.e., time-wise SKD (TW-SKD)) [7]- [13], as shown in Fig. 1 (b) and the predictions from the same class of input data (i.e., intra-class SKD (IC-SKD)) [14], [15] as shown in Fig. 1 (c). TW-SKD methods, such as self-Distillation from the Last mini-Batch (DLB) [9], leverage the idea that a \"poor\" teacher that has a low model accuracy may provide useful knowledge compared to a well-trained teacher [10] and use historical models as the \"poor\" teacher. However, the output of the historical model can only provide limited highly abstracted and inexplicable knowledge on account that model at different training stages learns different levels of features of the input data. IC-SKD aims to learn a more generalized model output probability distribution for each class of data by minimizing the distance between the model outputs of different data that belong to the same class. However, IC-SKD overlooks the similarity of inter-class model output probability distributions, which can result in limited model performance and overfitting.\nIn this paper, we aim to answer the following key question: How to design SKD to capture more complete features of input data with relatively low computation cost such that to promote model performance?\nWe answer the above question by developing a novel informative teacher and learning a consistent shape of the model outputs of all data regardless of their belonging classes. Note that, the informative teacher does not mean that the teacher has a high model accuracy. Specifically, preliminary experiments suggest that different layers in a neural network can extract different levels of features for the input data. Typically, shallower layers can capture more shape and edge information while deeper layers can learn more semantic information. This motivates us to construct a teacher by utilizing the feature extracted from the shallow layers to guide the training of the whole model. Therefore, we propose Distillation with Reverse Guidance (DRG). 
DRG employs an AC for a shallow layer and uses the output of the AC to facilitate the student, i.e., the whole model, in learning the shape and edge information from the shallow layer. Thus, the model can simultaneously capture both structural and detailed features of input data, leading to improved model performance. DRG overcomes the high computation cost of BYOT and is able to extract more informative information than TW-SKD.\nFurthermore, to learn a consistent shape of the model outputs for all data, we propose Distillation with Shape-wise Regularization (DSR) that aims to explore the shape of interclass similarity. Different from vanilla KD, where the student mimics the model output distribution of the teacher, and IC-SKD, which focuses on intra-class similarity, DSR learns a consistently ranked model output shape of all data. Our experimental results show that DSR enlarges the decision boundary among classes, contributing to increased model performance.\nOur contribution can be summarized as follows:\n• We design a lightweight SKD framework with multisource information fusion to improve model performance at a low computation cost. • We proposed the DRG method that constructs an informative teacher utilizing the output of a shallow layer to facilitate the model simultaneously learning the structural and detailed features of data. • We propose the DSR method to stimulate the model learning a consistent ranked output shape of all data regardless of their belonging classes. • We evaluate the performance of proposed DRG and DSR methods and their combination over a variety of datasets and models. Notably, our proposed methods outperform the baseline methods by an average of 2% and the stateof-the-art (SOTA) up to 1.15%.\n• We analyze the rationality behind DRG and DSR through experiments and show their superiority in capturing more complete features of data than baselines and enlarging the decision boundary. The remainder of this paper is organized as follows. Section II reviews the related works of KD and SKD. We present preliminaries for the SKD problem in Section III and propose our DRG and DSR methods in Section IV. Sections V and VI demonstrate the experimental results and ablation study, respectively. Section VII discusses the rationality behind DRG and DSR. Finally, Section VIII concludes our paper." }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [ "b16", "b17", "b18", "b3", "b20", "b22", "b23", "b24", "b26", "b1", "b2", "b4", "b6", "b7", "b8", "b13", "b14", "b10", "b9", "b27", "b28", "b9", "b10", "b11" ], "table_ref": [], "text": "Knowledge distillation. Vanilla KD employs a teacherstudent framework to facilitate the student learning from the model output logits of the teacher [1] [16]. A unique parameter in KD is the temperature in the softmax function over the teacher's model output logit, by tuning which, the student can benefit more from the teacher with improved model performance [17] [18]. An improved KD method is feature-based distillation, where the student learns the teacher's intermediate feature [19] [20] [4]. Works in the literature also have focused on the privacy issues of KD, such as data-free KD that preserves an inaccessible training dataset of the teacher for the student [21]- [23], private model compression [24], and undistillable model that prevents a student from learning from the model through KD [25]- [27].\nSelf-knowledge distillation. The first SKD work can date back to Born Again Neural Networks (BAN) [2]. 
BAN employs a serial-distillation mechanism, namely asking teachers to guide students with the same architecture which would later be asked to guide other sub-students. The average of all students' outputs are considered as the final outputs. BYOT et. al [3]- [5] developed a multi-exit architecture for a neural network. The final output of the network is utilized to update the shallow layers. However, BYOT exerts a high computation cost due to the training of ACs for each exit of the model.\nIn addition, works in the literature also achieve SKD well by designing a much more delicate regularization to improve model performance. There are three categories of regularization, i.e., TW-SKD, IC-SKD, and SKD with Label Smoothing. TW-SKD uses the model in the history as the teacher to regularize the current model. Specifically, Snapshot distillation (SS-KD) [7] randomly chooses a model from previous iterations. Progressive refinement knowledge distillation (PS-KD) [8] and DLB [9] regard the model in the last epoch as poor-teacher. For IC-SKD, the class-wise SKD (CS-KD) [14] uses two batched of data samples from the same class and minimizes the output discrepancy between the two batches. Data-Distortion Guided Self-Distillation (DDGSD) [15] exerts different pre-processing techniques on the same batch and minimizes their model output difference. Another way to improve the performance of SKD is labelsmoothing. The essence of many label-smoothing works lies in the utility of self-teaching, and they can be viewed as special cases of SKD. Label-Smoothing Regularization (LSR) [11] introduces a method where the ground truth distribution is combined with a uniform distribution to create a virtual teacher with random accuracy [10]. Delving Deep into Label Smoothing (OLS) [28] proposes a more reasonable smoothing technique that constructs labels based on the integrated output information from previous epochs. Inspired by the widespread Zipf's law, Efficient one pass self-distillation with ZipF's Label Smoothing (ZF-LS) [29] seeks to discover the conformality of ranked outputs and introduces a novel counterpart discrepancy loss, minimizing with Zipf's distribution based on self-knowledge. Motivated by ZF-LS, it is promising to achieve consistent model outputs of all data by using ranked outputs from the last iteration as softened targets, which can be seen as a specific form of label smoothing. For SKD with label-smoothing, Teacher-free knowledge distillation (TF-KD) [10], has discovered the entity of Label Smoothing Regularization (LSR) [11] to generate high-accuracy virtual teacher. Adversarial Learning and Implicit regularization for self-Knowledge Distillation (AI-KD) [12] integrates TF-KD and PS-KD and additionally employs a Generative Adversarial Network (GAN) to align distributions between sup-student and student.\nOur work differs from the above work by designing a lightweight SKD framework with multi-source information fusion. We consider the more informative information from shallow layers of the networks and explore a consistent shape of model output for all classes of data." }, { "figure_ref": [], "heading": "III. PRELIMINARIES", "publication_ref": [], "table_ref": [], "text": "In this section, we present the preliminaries including the multi-class classification problem, KD, and SKD." }, { "figure_ref": [], "heading": "A. 
Multi-class Classification", "publication_ref": [], "table_ref": [], "text": "Considering a supervised classification task on a training dataset D, each data sample in the dataset is represented by {x, y} ∈ D, where x indicates the input and y is the corresponding label. We assume there are K classes in total, such that y ∈ {1, . . . , K}. We train a neural network model h(θ, x) parameterized by θ to minimize the loss of a data sample on the model. A typical loss function for classification is the cross-entropy loss. Denote z := h(θ, x) as the output logit of the model. Applying the softmax function (with temperature τ = 1) to the model output, we can obtain the probability distribution p for the input data x:\np(z|x) = softmax(z, τ) = exp(z/τ) / Σ_{k=1}^{K} exp(z_k/τ), (1)\nwhere z_k indicates the k-th element of z. When it is clear from the context, we use p for short of p(z|x). The cross-entropy loss function is\nL_CE(p(z|x), y) = −(1/K) Σ_{k=1}^{K} y_k log p_k, (2)\nwhere p_k indicates the k-th element of p. The objective is to minimize the expected risk of the model on the whole dataset:\nmin_θ E_{{x,y}∈D} L_CE(p(z|x), y). (3)" }, { "figure_ref": [], "heading": "B. Knowledge Distillation", "publication_ref": [ "b0", "b0" ], "table_ref": [], "text": "In KD, there exists another teacher model to guide the training of the target model, i.e., the student. A high temperature τ > 1 is applied to soften the model output probability distribution to facilitate transferring more knowledge from the teacher to the student [1]. Denote the output probability distribution of the teacher with temperature τ for an input x by q(z′|x), where z′ is the output logit of the teacher. The Kullback-Leibler (KL) divergence is employed to measure the difference between the teacher's and the student's model output probability distributions (z′ and z):\nL_KL(q(z′|x), p(z|x)) = (1/K) Σ_{k=1}^{K} q_k log(q_k / p_k). (4)\nFinally, the overall loss function for vanilla KD is:\nL_KD(p, y, q) = L_CE(p(z|x), y) + τ² · L_KL(q(z′|x), p(z|x)). (5)\nThe coefficient τ² balances the cross-entropy and KL divergence losses when the temperature τ changes [1]." }, { "figure_ref": [], "heading": "C. Self-Knowledge Distillation", "publication_ref": [], "table_ref": [], "text": "Self-knowledge distillation applies KD to improve model performance by utilizing the prior knowledge extracted from the model itself, which is different from vanilla KD with a separate teacher model. To train the model h(θ, x), we first extract some information I(θ, x) from the model. I(θ, x) may change with time, layers, and input data, but is not related to any other model. SKD executes a self-knowledge transfer (ST) loss to minimize the discrepancy between the model and the extracted information:\nL_ST(h(θ, x), I(θ, x)) := ρ(h(θ, x), I(θ, x)), (6)\nwhere ρ is a metric function, which varies for different SKD methods. For example, ρ corresponds to an l2-norm in BYOT, the KL divergence in PS-KD, and the adversarial loss in AI-KD, etc. The ST loss function may take effect on different parts of the model h(θ, x). For example, the ST loss function updates the shallow layers of the model in BYOT and updates the whole model in TW-SKD and IC-SKD. Overall, the SKD loss function combines the original loss function using the hard labels and the ST loss function:\nL_SKD = L_CE(p(z|x), y) + ζ · L_ST(h(θ, x), I(θ, x)), (7)\nwhere ζ measures the importance of the ST loss, which may vary for different SKD methods.
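To make the preliminaries concrete, the following PyTorch-style sketch computes a vanilla KD objective in the spirit of Eqs. (1)-(5). It is an illustrative sketch, not a reference implementation: the function name, the batch-mean reduction (standing in for the 1/K averaging written in Eqs. (2) and (4)), and the default temperature are our own choices.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, tau=3.0):
    """Vanilla KD objective in the spirit of Eq. (5): hard-label cross-entropy plus a
    temperature-softened KL term scaled by tau**2."""
    ce = F.cross_entropy(student_logits, targets)          # L_CE(p, y)
    log_p = F.log_softmax(student_logits / tau, dim=1)     # softened student distribution p
    q = F.softmax(teacher_logits.detach() / tau, dim=1)    # softened teacher distribution q
    kl = F.kl_div(log_p, q, reduction="batchmean")         # KL(q || p), cf. Eq. (4)
    return ce + (tau ** 2) * kl
```

In the self-distillation setting of Eq. (7), teacher_logits would come from the model itself (for example a stored snapshot, an auxiliary exit, or another view of the same input) rather than from a separate network.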
IV. PROPOSED METHODS\nIn this section, we propose our DRG and DSR methods to achieve multi-source information fusion for SKD performance improvement." }, { "figure_ref": [ "fig_1" ], "heading": "A. Distillation with Reverse Guidance (DRG)", "publication_ref": [ "b9", "b29" ], "table_ref": [], "text": "Motivation: Different layers in a neural network extract different features of the input data. Typically, shallower layers can capture more shape and edge information, while deeper layers can learn more detailed semantic information. The shape and edge features of the input data vanish gradually as the layers become deeper, so the final model output tends to ignore edge information and the model overfits severely. By adding an AC to a shallow layer, we can construct a teacher model for the original model. The output of the AC usually underfits more than the whole model, as it has a smaller model architecture. Related works have revealed the effectiveness of a "poor" teacher for KD [10]. However, they have neglected the potential of shallow layers for guiding the training of the whole model. Thus, we propose to use the shallow layer to reversely guide the training of the whole model to achieve information fusion of both edge and detailed features of data.\nDRG design: The framework of DRG is demonstrated on the left-hand side of Fig. 2. We consider neural networks with sequential layers/blocks, such as ResNet [30]. DRG introduces an add-on structure, i.e., an AC, at the output of a shallow layer/block, constructing a "poor" teacher. Let w be the parameter of the AC. The teacher model can be represented by g(θ′, w, x), where θ′ ⊂ θ is the parameter of the earlier layers of the whole model before the layer connected to the AC. Denote the output logit and the corresponding output probability distribution of g(θ′, w, x) taking x as input by z′ := g(θ′, w, x) and q(z′|x) := softmax(z′, τ), respectively. We use the cross-entropy loss function to train the "poor" teacher model and the whole model simultaneously, using the following hard-label loss:\nL_HL = L_CE(q(z′|x), y) + L_CE(p(z|x), y). (8)\nTo achieve reverse guidance, the "poor" teacher guides the whole model training by minimizing the KL divergence:\nL_RG = τ² · L_KL(q(z′|x), p(z|x)). (9)\nOverall, the whole loss function of DRG is\nL_DRG = L_HL + α · L_RG, (10)\nwhere α is a coefficient balancing the two losses.\n[Algorithm 1 (DRG training), recovered excerpt: 7: compute L_RG using (9); 8: compute L_DRG using (10); 9: θ_{t+1} ← θ_t − γ · ∇L_DRG; 10: w_{t+1} ← w_t − γ · ∇L_DRG; 11: end for]\nAlgorithm 1 demonstrates the model training process of DRG, where γ denotes the learning rate and T indicates the total number of training iterations. θ_t and w_t represent the model parameters at iteration t. In each iteration t, a mini-batch of data B_t ⊂ D is randomly sampled to train the model. The mini-batch is simultaneously fed into the model (line 4) and into the teacher, which is constructed from the shallow layers of the model and the AC (line 6). Based on the outputs of the original model and the teacher, we calculate the DRG loss L_DRG (line 8) and update the model and auxiliary parameters (lines 9-10) according to SGD."
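A minimal sketch of how DRG could be wired up, assuming a backbone that can be split into a shallow stage (e.g., up to the second block, as in Section V) and a deep stage: the auxiliary classifier on the shallow stage acts as the "poor" teacher of Eqs. (8)-(10). The wrapper class, the pooling head of the AC, and the defaults α = 0.2 and τ = 1 (taken from Sections V-VI) are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DRGNet(nn.Module):
    """Backbone split into (shallow, deep) stages with an auxiliary classifier (AC)
    on the shallow stage, playing the "poor" teacher of Fig. 2 (left)."""

    def __init__(self, shallow, deep, feat_channels, num_classes):
        super().__init__()
        self.shallow, self.deep = shallow, deep   # deep(...) is assumed to end in the usual classifier head
        self.ac = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                nn.Linear(feat_channels, num_classes))

    def forward(self, x):
        feat = self.shallow(x)
        return self.ac(feat), self.deep(feat)     # teacher logits z', student logits z

def drg_loss(teacher_logits, student_logits, targets, alpha=0.2, tau=1.0):
    """L_DRG = L_HL + alpha * L_RG, Eqs. (8)-(10); alpha = 0.2 and tau = 1 follow Secs. V-VI."""
    hl = F.cross_entropy(teacher_logits, targets) + F.cross_entropy(student_logits, targets)
    rg = (tau ** 2) * F.kl_div(F.log_softmax(student_logits / tau, dim=1),
                               F.softmax(teacher_logits / tau, dim=1),
                               reduction="batchmean")
    # No stop-gradient on the teacher branch here: Algorithm 1 updates theta and w jointly
    # with the gradient of L_DRG.
    return hl + alpha * rg
```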
}, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "B. Distillation with Shape-wise Regularization (DSR)", "publication_ref": [ "b28" ], "table_ref": [], "text": "Motivation: Existing works have investigated the intra-class similarity of input data, such as CS-KD and DDGSD. However, to the best of our knowledge, no work has stressed the consistency of model outputs among different classes, i.e., inter-class similarity. To illustrate the necessity of exploring a consistent model output property across different classes of data, we evaluate the variance of ranked model outputs, as demonstrated on the left-hand side of Fig. 3. ResNet, CIFAR100, and TinyImageNet are abbreviated as "Res", "C100", and "Tin", respectively.\nRanking outputs [29] according to class probability eliminates class-specific disharmony and focuses more on the overall interaction between classes. We train various models on the CIFAR100 and TinyImageNet datasets until convergence and normalize the training time axis to the overall training progress. The variance is calculated by taking the average of the variances of each element in the ranked model outputs over all test data samples. We can observe that the variance of the ranked model output decreases along with the training process, which corresponds to increasing model accuracy. On the right-hand side of Fig. 3, we calculate the Pearson coefficients between model accuracy and the variance of the ranked model output for different datasets trained with various models. All results exhibit a strong negative correlation between model accuracy and ranked output variance. This implies that, along with model training, the model outputs for various classes have a consistent tendency after being ranked. This phenomenon motivates us to regularize the ranked model output shape of different input data to improve the performance of SKD." }, { "figure_ref": [ "fig_1" ], "heading": "DSR design:", "publication_ref": [], "table_ref": [], "text": "The framework of DSR is demonstrated on the right-hand side of Fig. 2. In each iteration t, we rank the elements of the model output in non-decreasing order and obtain ẑ_t = {ẑ_t,1, ẑ_t,2, · · · , ẑ_t,K}, such that ẑ_t,1 ≤ ẑ_t,2 ≤ . . . ≤ ẑ_t,K. DSR achieves the consistency of the ranked model output between different input data by leveraging the ranked model output of the last iteration, i.e., ẑ_{t-1}. We use the KL divergence to regularize the model using ẑ_{t-1}, defining L^t_SR as:\nL^t_SR = τ² · L_KL(p(ẑ_{t-1}|x), p(ẑ_t|x)). (11)\nOverall, DSR combines the vanilla classification loss and L^t_SR for SKD model training:\nL_DSR = L_CE(p(z|x), y) + β · L^t_SR, (12)\nwhere β measures the importance of L^t_SR compared to the original classification loss.\n[Algorithm 2 (DSR training), recovered excerpt: 3: randomly sample B_t from D; 4: z ← h(θ_t, B_t); 5: rank z in ascending order to obtain ẑ_t; 6: compute L_DSR using (12); 7: θ_{t+1} ← θ_t − γ · ∇L_DSR; 8: store ẑ_t for the next iteration; 9: end for]\nAlgorithm 2 shows the training process of DSR. Specifically, in each iteration t, a data batch B_t is randomly sampled to train the model (line 3). The outputs of the model, i.e., z, are then ranked in ascending order to obtain ẑ_t (line 5). The DSR loss is computed using the ranked model output from the last iteration, i.e., ẑ_{t-1} (line 6). After updating the model parameters with SGD (line 7), ẑ_t is recorded (line 8) and used in the next iteration.\nWe can combine our DRG and DSR methods for SKD using the following overall loss function:\nL = L_HL + α · L_RG + β · L^t_SR. (13)\nV. EXPERIMENTS\nWe conduct experiments for our proposed method over various datasets and models.
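Before turning to the experimental settings, the DSR update of Eqs. (11)-(12) and Algorithm 2 can be sketched as follows. The stateful wrapper, the shape guard for a final incomplete batch, and the defaults β = 1 and τ = 4 (matching Sections V-VI) are our own illustrative choices rather than the authors' implementation; the previous iteration's ranked outputs are treated as a constant target, since they are simply values stored from the last step.

```python
import torch
import torch.nn.functional as F

class DSRLoss:
    """DSR objective of Eqs. (11)-(12): the sorted logits of the previous mini-batch act as
    the softened target for the current sorted logits (Algorithm 2)."""

    def __init__(self, beta=1.0, tau=4.0):      # beta = 1 and tau = 4 follow Secs. V-VI
        self.beta, self.tau = beta, tau
        self.prev_ranked = None                 # stored ranked logits from iteration t-1

    def __call__(self, logits, targets):
        loss = F.cross_entropy(logits, targets)            # L_CE term of Eq. (12)
        ranked = torch.sort(logits, dim=1).values          # ranked logits (ascending, line 5)
        if self.prev_ranked is not None and self.prev_ranked.shape == ranked.shape:
            sr = F.kl_div(F.log_softmax(ranked / self.tau, dim=1),
                          F.softmax(self.prev_ranked / self.tau, dim=1),
                          reduction="batchmean")           # L_SR^t of Eq. (11)
            loss = loss + self.beta * (self.tau ** 2) * sr
        self.prev_ranked = ranked.detach()                 # store for iteration t+1 (line 8)
        return loss
```

For the combined objective of Eq. (13), the β-weighted term above is simply added on top of the DRG loss of Eq. (10).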
First, we introduce settings including datasets, models, baselines, etc. Then, we analyze the experimental results for different datasets. Our code is available at https://github.com/xucong-parsifal/LightSKD." }, { "figure_ref": [], "heading": "A. Settings", "publication_ref": [ "b30", "b31", "b29", "b32", "b33", "b2", "b13", "b7", "b8", "b28", "b9" ], "table_ref": [], "text": "Datasets. We employ five datasets for classification tasks, i.e., CIFAR100, TinyImageNet, Caltech101, Stanford Dogs and CUB200.\n• CIFAR100: CIFAR100 [31] is a classical 100-class classification dataset. It contains 50,000 images for training and 10,000 for test. The image size is 32x32 pixels. • TinyImageNet: TinyImageNet is a subset of ImageNet [32], with 100, 000 train data samples and 10,000 test samples. There are 200 classes in total. The size of an image is 32x32 pixels. • Caltech101: Caltech101 is a large tough-grained dataset for classification and object detection. There are 101 main classes and 1 background class in total. • Stanford Dogs / CUB200: Stanford Dogs and CUB200 are large fine-grained datasets that consist of 120 dog classes and 200 bird classes, respectively. In all experiments, training samples are processed with Ran-domCrop (32x32 for CIFAR100,TinyImageNet; 224x224 for others) and RandomHorizontalFlip to ensure that all images have a consistent size and to add randomness to the training process.\nModels. We employ five classical neural network models for the above datasets including ResNet18, ResNet50, ResNet101 [30], ResNeXt50 32x4d [33], and DenseNet121 [34]. The ResNet series is well-known for its innovative shortcut connections, which help to reduce overfitting. In contrast, the DenseNet architecture was the first to introduce Hyperparameters. We fixed the number of epochs to 200 and set the temperature τ using a grid search. We set hyperparameters α and β to 0.2 and 1, respectively, and employ a manual learning rate adjustment mechanism for our experiments. For CIFAR100, the initial learning rate was set to 0.1 and decreased to 0.2 of its previous value at 60, 120, and 160 epochs. For TinyImageNet, Stanford Dogs, CUB200, and Caltech101, the initial learning rate was set to 0.1 and decreased to 0.1 of its previous value at 100 and 150 epochs. We use a batch size of 128 for CIFAR100 and TinyImageNet, and 64 for the other datasets. The optimizer used was SGD with a momentum of 0.9 and weight decay of 5e-4. For DRG, we add an AC after the second block of the model to construct the \"poor\" teacher.\nBaselines. We combine our proposed method with the following methods:\n• Vanilla: training the original model without SKD;\n• BYOT [3]: adding an Auxiliary Classifier (AC) to each block of the model; • CS-KD [14]: an IC-SKD method that uses two batched of data samples from the same class and minimizes the output discrepancy between the two batches; • PS-KD [8]: a TW-SKD method that employs the in the last epoch as a teacher; • DLB [9]: a TW-SKD method that regards the model in the last iteration as a teacher, meanwhile employing different augmentation techniques for the same data batch. It differs from PS-KD in the supervision granularity and data preprocessing. • ZF-LS lb [29]: a label smoothing method that minimizes the cross entropy between the ranked model outputs and zipf's distribution; • TF-KD reg [10]: an SKD based on ameliorating LSR. In the face of complex tasks, our results show lower probabilities for the GT class and higher probabilities for other classes. 
This suggests that our methods extract more integrated information and are less overconfident and overfitting, resulting in a more careful and delicate decision-making process." }, { "figure_ref": [ "fig_4", "fig_5" ], "heading": "B. Experimental results", "publication_ref": [ "b34", "b8" ], "table_ref": [ "tab_1", "tab_2", "tab_3" ], "text": "1) Results on CIFAR100 and TinyImageNet: Our results are presented in Table I (for CIFAR100) and Table II (for TinyImageNet).\nCompared with baseline algorithms, we have the following observations:\n• Compared with vanilla single model training: our methods consistently outperform the vanilla single model training in top-1 accuracy, with a significant improvement ranging from 1.26% to 2.87%. • Compared with BYOT, CS-KD, PS-KD, ZF-LS, and TF-KD: Our methods generally achieve higher accuracy than these methods, with an average accuracy boost of 1.08%. Particularly for CIFAR100 over ResNet18 model, our methods exceed their maximum accuracy by 0.97%. • Compared with DLB: To the best of our knowledge, DLB is the current claimed SOTA. Our results show that our methods perform better than DLB. Especially, our methods surpass DLB on large-scale networks, such as the ResNet100 for CIFAR100. This is because DLB uses the same images with different transformations, which may lead to overfitting and diluting the regularization effects in larger networks. Our methods avoid this problem. Notably, the combination of our methods, i.e., DRG+DSR, is particularly effective and has achieved SOTA performance. Although DSR may not individually achieve SOTA, it has contributed significantly to the success of the combination (+0.51% on ResNet18, TinyImageNet; +0.63% on ResNet18 and TinyImageNet), surpassing its individual accuracy boost.\nTime and space costs. The time and space costs of different methods on CIFAR100 dataset with various models are shown in Fig. 4, where the time cost is evaluated by the consuming time of each iteration and the space cost is the storage space of the models. We can observe that BYOT takes about 0.064s per iteration on ResNet18 and spends much more when the model gets larger. Although DLB is faster than BYOT on small models, it incurs a vast time cost on ResNet101, which may result from re-sampling the training dataset to construct minibatches and frequently recording the images and outputs of the last iteration. Remarkably, our combined method DRG+DSR receives the least time and space cost. Specifically, the time cost of our DRG+DSR is about only 70 percent of that of others; the Space-cost of our DRG+DSR is also extraordinarily smaller than others (×0.67 ∼ ×0.83). Most importantly, we can achieve better performance than BYOT and DLB.\nRobustness. Our proposed methods are more robust over different neural network models than baselines. Specifically for CIFAR100, we achieve the best results among all methods, especially for large-scale models such as ResNet100, ResNeXt50 32×4d, and DenseNet-121, indicating the robustness of our methods across different models.\n2) Results on large-scale fine-grained datasets: We extend our experiments to include the large fine-grained datasets of Stanford Dogs and CUB200. Figure 5 shows the ranked model output probability of the top 30 classes for two data examples. The Green bars mark the ground-truth label. Our results indicate that vanilla training of a single model may give a wrong prediction as the predicted label with the highest probability is not consistent with the true label. 
In comparison, our methods generate model output probability with low variance, exerting higher probabilities for several classes outside the true label. This means our models could select a range of candidate classes and make decisions more carefully and delicately, rather than making an exact decision that neglects the relationships between different classes.\n3) Compatibility Analysis: To validate the effectiveness and compatibility of our methods over the existing methods, we plug DSR and DRG into Cutout [35] and PS-KD. Cutout is a popular data augmentation technique, which employs a mask to randomly eliminate part of the input image data. We set the number of masks and mask size to 1 and 16px respectively, which is consistent with [9]. Table III presents the performance of these methods before and after the integration of DRG, DSR, and their combination. The results demonstrate that the addition of DRG, DSR, or their combination significantly improves the accuracy of Cutout by 0.82% to 2.42%. Similarly, the integration of these methods with PS-KD results in an accuracy boost of 0.39% to 0.71% compared to vanilla PS-KD." }, { "figure_ref": [], "heading": "VI. ABLATION STUDY", "publication_ref": [], "table_ref": [], "text": "In this section, we conduct an ablation study of our proposed methods. We first explore the number of teachers and the position of selected blocks in DRG. Then we evaluate the effect of different hyperparameters including temperature and the coefficients in objective loss functions." }, { "figure_ref": [], "heading": "A. AC number and block position in DRG", "publication_ref": [], "table_ref": [], "text": "For DRG, we can choose one or a subset of blocks in the neural network model to add ACs to selected blocks in order to accelerate the model learning process while maintaining accuracy. Table IV displays the accuracy and time cost results of CIFAR100, over the ResNet18 model for different sets of selected blocks.\nWe have the following observation:\n• When adding AC to a single block in the deeper layer of the model, such as the third block (B #3) compared to the first and second blocks, DRG experiences a sharp decrease in test accuracy. That is because the outputs of deeper layers have a higher level of similarity with the final output, contributing less to the formulation fusion and possibly leading to overfitting. Therefore, only constructing one \"poor\" teacher is enough for our DRG, resulting in a lightweight SKD design." }, { "figure_ref": [], "heading": "B. Hyperparameters", "publication_ref": [], "table_ref": [], "text": "Temperature τ : We evaluate the performance of DRG and DSR under varying temperature on CIFAR100, ResNet18, as shown in Fig. 6. The results indicate that DRG and DSR achieve the highest accuracy when the temperature are set to 1 and 4, respectively.\nCoefficients α and β: We evaluate the performance of DRG and DSR for different coefficients α and β in ( 10) and ( 12) on CIFAR100, ResNet18. We vary α and β from 0.01 to 1 and from 0.1 to 3, respectively. The results in Fig. 7 show that the best accuracy is achieved when α and β are set to 0.2 and 1, respectively. This suggests that a moderate level of usage of both DRG and DSR provides optimal performance for SKD." }, { "figure_ref": [], "heading": "VII. DISCUSSION", "publication_ref": [ "b35" ], "table_ref": [], "text": "In this section, we discuss the rationality behind our proposed methods through experiments. First, we show the capacity of DRG in information fusion. 
Then, we analyze the double-effect of DSR in enlarging decision boundary and label smoothing. A. Informulation Fusion of DRG DRG achieves the information fusion of features extracted from different parts of a neural network model. To illustrate this, we employ GradCAM [36] to virtualize the features characterized by different parts of the model and our DRG method. GradCAM is a method for generating attention heatmaps to visualize the focusing position of a model for the input data. We present the GradCAM results of the output of AC after the shallow layer (i.e., the second block of ResNet18 in our experiments), the output of the whole model, and out DRG method in Fig. 8.\nThe results show that the classifier after the shallow layer mainly focuses on the edge and shape features of the input date, such as the legs of the table and the outline of the panda. In contrast, the whole model with more layers forgets edge features and extracts more determined information, such as the ears of the panda. By using the classifier after the shallow layer as the \"poor\" teacher of KD, DRG can capture both edge and detailed information of the input data, providing valuable insights into the information fusion of our DRG method." }, { "figure_ref": [], "heading": "B. Double-effect of DSR", "publication_ref": [], "table_ref": [], "text": "We can interpret the rationality behind DSR from the following two perspectives.\nFirst, DSR is capable of achieving the consensus of ranked model output probability, which enlarges the decision boundary among different classes. Fig. 9 demonstrates the virtualized decision boundary of DRG and DSR over (CIFAR100, ResNet18) using FIT-SNE [37] results2 . We randomly sample 50 classes to clearly show the FIT-SNE virtualization. We can observe that our DSR method exerts a more clear decision boundary than vanilla single model training and DRG.\nMoreover, DSR is equivalent to a label-smoothing method that progressively designs a label from a distribution rather than a predetermined shape. Specifically, the \"soft\" label used in DSR is the ranked label of another data sample, which is randomly sampled from the dataset. This contributes to a better generalization of DSR." }, { "figure_ref": [], "heading": "VIII. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a lightweight SKD framework with two methods, DRG and DSR, to promote multi-source information fusion and improve the performance of SKD. We construct only one auxiliary teacher in DRG and highlight the inter-class model output shape in DSR to achieve better test accuracy with a low time cost. Experimental results over enormous datasets and models show that DRG and DSR, and their combination, outperform the baselines with lower or competitive time costs and better robustness. In summary, our proposed methods demonstrate significant improvements in self-knowledge distillation through novel approaches to multisource information fusion." } ]
Knowledge Distillation (KD) is a powerful technique for transferring knowledge between neural network models, where a pre-trained teacher model is used to facilitate the training of the target student model. However, the availability of a suitable teacher model is not always guaranteed. To address this challenge, Self-Knowledge Distillation (SKD) attempts to construct a teacher model from itself. Existing SKD methods add Auxiliary Classifiers (AC) to intermediate layers of the model or use the history models and models with different input data within the same class. However, these methods are computationally expensive and only capture time-wise and classwise features of data. In this paper, we propose a lightweight SKD framework that utilizes multi-source information to construct a more informative teacher. Specifically, we introduce a Distillation with Reverse Guidance (DRG) method that considers different levels of information extracted by the model, including edge, shape, and detail of the input data, to construct a more informative teacher. Additionally, we design a Distillation with Shape-wise Regularization (DSR) method that ensures a consistent shape of ranked model output for all data. We validate the performance of the proposed DRG, DSR, and their combination through comprehensive experiments on various datasets and models. Our results demonstrate the superiority of the proposed methods over baselines (up to 2.87%) and state-of-the-art SKD methods (up to 1.15%), while being computationally efficient and robust. The code is available at https://github.com/xucong-parsifal/LightSKD.
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 1 Lightweight Self-Knowledge Distillation with Multi-source Information Fusion
[ { "figure_caption": "Fig. 1 :1Fig.1: Overview of existing SKD methods, i.e., multi-exit SKD, TW-SKD, and IC-SKD, and our methods, i.e., DRG and DSR.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :2Fig. 2: Illustrations of proposed methods. Left: DRG, where an AC is added to the output of a shallow layer to construct a \"poor\" teacher to guide the whole model training. Right: DSR, where model outputs are ranked to form a inter-class regularization.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 :3Fig.3: The variance of ranked outputs in one epoch along the training process (left) and Pearson's coefficient of variance and accuracy (right) for different datasets trained with various models. ResNet, CIFAR100, TinyImageNet are abbreviated as \"Res\", \"C100\", and \"Tin\" respectively.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Algorithm 22Distillation with Shape-wise Regularization. Input: D, γ, τ, β, T 1: Initialize θ ← θ 0 , z-1 ← 0; 2: for t ∈ {0, . . . , T -1} do 3:", "figure_data": "", "figure_id": "fig_3", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig.4: Time and space cost of different methods trained with various models on CIFAR100. ResNet is abbreviated as \"Res\". Blue, Green and Red points represent experiments of BYOT, DLB and our methods respectively.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 :5Fig. 5: Example experimental results on Stanford Dogs (top) and CUB200 (bottom). All bar figures show the ranked predictive probability of the top 30 classes, with ground-truth (GT) classes marked in Green. The baseline results for vanilla single model training are shown in the second column, while the other columns display results from DRG and DSR, and their combination.In the face of complex tasks, our results show lower probabilities for the GT class and higher probabilities for other classes. This suggests that our methods extract more integrated information and are less overconfident and overfitting, resulting in a more careful and delicate decision-making process.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "•Fig. 6 :Fig. 7 :67Fig. 6: Performance of DRG and DSR under varying temperature.", "figure_data": "", "figure_id": "fig_6", "figure_label": "67", "figure_type": "figure" }, { "figure_caption": "Fig. 8 :Fig. 9 :89Fig. 8: GradCAM heatmaps of different methods on Caltech101 over ResNet18. From left to right: input images, output of AC after shallow layer, output of model by BYOT, and the output of DSR (ours). As the heatmaps exemplify, instead of excessive care of one single feature, DRG merges the feature of both classifiers after the shallow layer and the whole model.", "figure_data": "", "figure_id": "fig_7", "figure_label": "89", "figure_type": "figure" }, { "figure_caption": "Algorithm 1 Distillation with Reverse Guidance (DRG). Input: D, γ, τ, α, T 1: Initialize θ ← θ 0 , w ← w 0 ;", "figure_data": "5:Compute loss L HL using (8);6:", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Top-1 test accuracy on CIFAR100. 
Values marked in Red, Blue are the best and the second best accuracy respectively.", "figure_data": "METHODSRESNET18RESNET50RESNET101RESNEXT50 32X4DDENSENET-121VANILLA77.29%77.07%78.52%78.87%78.70%BYOT78.25%79.63%80.71%80.18%79.63%CS-KD78.55%76.91%77.43%79.69%78.92%PS-KD78.67%79.02%79.41%80.38%79.52%DLB79.52%79.88%80.02%80.52%79.64%ZF-LS lb77.49%77.38%77.27%79.42%78.87%TF-KDreg78.33%78.30%79.19%79.27%79.38%DRG (OURS)79.07% (+1.78%) 79.87% (+2.80%) 80.86% (+2.34%)81.01% (+2.14%)79.99% (+1.29%)DSR (OURS)78.15% (+0.88%) 79.12% (+2.05%) 79.78% (+1.26%)79.01% (+0.14%)79.08% (+0.38%)DRG+DSR (OURS) 79.30% (+2.01%) 79.94% (+2.87%) 80.72% (+2.20%)80.91% (+2.04%)79.76% (+1.26%)", "figure_id": "tab_1", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "Top-1 test accuracy on TinyImageNet. Values marked in Red, Blue are the best and the second best accuracy respectively.", "figure_data": "METHODSRESNET18RESNET50RESNEXT50 32X4DVANILLA56.69%58.07%59.55%BYOT57.69%60.59%60.07%PS-KD57.05%60.70%60.87%DLB57.09%59.89%60.65%DRG (OURS)57.57% (+0.88%)60.41% (+2.34%)60.94% (+1.39%)DSR (OURS)56.75% (+0.06%)58.34% (+0.27%)60.34% (+0.79%)DRG+DSR (OURS)58.08% (+1.39%)61.04% (+2.97%)61.14% (+1.59%)fully-connected blocks as a means of improving feature reuseand facilitating information flow between layers.Environment and hardwares: Our implementations arebased on PyTorch, with Python version 3.8.5, Torch version1.13.0, and Torchvision version 0.14.0. All experiments wereconducted using an NVIDIA RTX 3090 with 24GB memory.", "figure_id": "tab_2", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "Results of different combinations of our methods and existing methods for CIFAR100 over ResNet18.", "figure_data": "METHODSACCURACYCUTOUT77.39%CUTOUT+DRG80.12%(+2.73%)CUTOUT+DSR78.21%(+0.82%)CUTOUT+DRG+DSR79.81%(+2.42%)PS-KD78.67%PS-KD+DRG79.18%(+0.51%)PS-KD+DSR79.38%(+0.71%)PS-KD+DRG+DSR79.06%(+0.39%)", "figure_id": "tab_3", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "Accuracy and time-cost of different block subsets in DRG for CIFAR100 over ResNet18.", "figure_data": "B #1 B #2 B #3ACCURACY %TIME-COST (S/ITER).78.93%(×0.9953)0.044(×0.99)79.30%0.04576.96%(×0.9705)0.046(X1.01)79.42%(×1.0015)0.052(×1.17)78.54%(×0.990)0.054(×1.21)79.32%(×1.0002)0.055(×1.22)", "figure_id": "tab_4", "figure_label": "IV", "figure_type": "table" } ]
Xucong Wang; Pengchao Han; Lei Guo
[ { "authors": "G Hinton; O Vinyals; J Dean", "journal": "", "ref_id": "b0", "title": "Distilling the knowledge in a neural network", "year": "2015" }, { "authors": "T Furlanello; Z Lipton; M Tschannen; L Itti; A Anandkumar", "journal": "PMLR", "ref_id": "b1", "title": "Born again neural networks", "year": "2018" }, { "authors": "L Zhang; J Song; A Gao; J Chen; C Bao; K Ma", "journal": "", "ref_id": "b2", "title": "Be your own teacher: Improve the performance of convolutional neural networks via self distillation", "year": "2019" }, { "authors": "M Ji; S Shin; S Hwang; G Park; I.-C Moon", "journal": "", "ref_id": "b3", "title": "Refine myself by teaching myself: Feature refinement via self-knowledge distillation", "year": "2021" }, { "authors": "S Li; M Lin; Y Wang; Y Wu; Y Tian; L Shao; R Ji", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b4", "title": "Distilling a powerful student model via online knowledge distillation", "year": "2022" }, { "authors": "M Phuong; C H Lampert", "journal": "", "ref_id": "b5", "title": "Distillation-based training for multi-exit architectures", "year": "2019" }, { "authors": "C Yang; L Xie; C Su; A L Yuille", "journal": "", "ref_id": "b6", "title": "Snapshot distillation: Teacherstudent optimization in one generation", "year": "2019" }, { "authors": "K Kim; B Ji; D Yoon; S Hwang", "journal": "", "ref_id": "b7", "title": "Self-knowledge distillation with progressive refinement of targets", "year": "2021" }, { "authors": "Y Shen; L Xu; Y Yang; Y Li; Y Guo", "journal": "", "ref_id": "b8", "title": "Self-distillation from the last mini-batch for consistency regularization", "year": "2022" }, { "authors": "L Yuan; F E Tay; G Li; T Wang; J Feng", "journal": "", "ref_id": "b9", "title": "Revisiting knowledge distillation via label smoothing regularization", "year": "2020" }, { "authors": "C Szegedy; V Vanhoucke; S Ioffe; J Shlens; Z Wojna", "journal": "", "ref_id": "b10", "title": "Rethinking the inception architecture for computer vision", "year": "2016" }, { "authors": "H Kim; S Suh; S Baek; D Kim; D Jeong; H Cho; J Kim", "journal": "", "ref_id": "b11", "title": "Aikd: Adversarial learning and implicit regularization for self-knowledge distillation", "year": "2022" }, { "authors": "I Goodfellow; J Pouget-Abadie; M Mirza; B Xu; D Warde-Farley; S Ozair; A Courville; Y Bengio", "journal": "Communications of the ACM", "ref_id": "b12", "title": "Generative adversarial networks", "year": "2020" }, { "authors": "S Yun; J Park; K Lee; J Shin", "journal": "", "ref_id": "b13", "title": "Regularizing class-wise predictions via self-knowledge distillation", "year": "2020" }, { "authors": "T.-B Xu; C.-L Liu", "journal": "", "ref_id": "b14", "title": "Data-distortion guided self-distillation for deep neural networks", "year": "2019" }, { "authors": "J Kim; S Park; N Kwak", "journal": "Advances in neural information processing systems", "ref_id": "b15", "title": "Paraphrasing complex network: Network compression via factor transfer", "year": "2018" }, { "authors": "Z Li; X Li; L Yang; B Zhao; R Song; L Luo; J Li; J Yang", "journal": "", "ref_id": "b16", "title": "Curriculum temperature for knowledge distillation", "year": "2022" }, { "authors": "X.-C Li; W.-S Fan; S Song; Y Li; B Li; Y Shao; D.-C Zhan", "journal": "", "ref_id": "b17", "title": "Asymmetric temperature scaling makes larger networks teach well again", "year": "2022" }, { "authors": "A Romero; N Ballas; S E Kahou; A Chassang; C Gatta; Y Bengio", "journal": "", "ref_id": "b18", 
"title": "Fitnets: Hints for thin deep nets", "year": "2014" }, { "authors": "B Heo; J Kim; S Yun; H Park; N Kwak; J Y Choi", "journal": "", "ref_id": "b19", "title": "A comprehensive overhaul of feature distillation", "year": "2019" }, { "authors": "H Chen; Y Wang; C Xu; Z Yang; C Liu; B Shi; C Xu; C Xu; Q Tian", "journal": "", "ref_id": "b20", "title": "Data-free learning of student networks", "year": "2019" }, { "authors": "K Binici; S Aggarwal; N T Pham; K Leman; T Mitra", "journal": "", "ref_id": "b21", "title": "Robust and resource-efficient data-free knowledge distillation by generative pseudo replay", "year": "2022" }, { "authors": "B Zhao; Q Cui; R Song; Y Qiu; J Liang", "journal": "", "ref_id": "b22", "title": "Decoupled knowledge distillation", "year": "2022" }, { "authors": "J Wang; W Bao; L Sun; X Zhu; B Cao; S Y Philip", "journal": "", "ref_id": "b23", "title": "Private model compression via knowledge distillation", "year": "2019" }, { "authors": "H Ma; T Chen; T.-K Hu; C You; X Xie; Z Wang", "journal": "", "ref_id": "b24", "title": "Undistillable: Making a nasty teacher that cannot teach students", "year": "2021" }, { "authors": "S Kundu; Q Sun; Y Fu; M Pedram; P Beerel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b25", "title": "Analyzing the confidentiality of undistillable teachers in knowledge distillation", "year": "2021" }, { "authors": "S Jandial; Y Khasbage; A Pal; V N Balasubramanian; B Krishnamurthy", "journal": "Springer", "ref_id": "b26", "title": "Distilling the undistillable: Learning from a nasty teacher", "year": "2022" }, { "authors": "C.-B Zhang; P.-T Jiang; Q Hou; Y Wei; Q Han; Z Li; M.-M Cheng", "journal": "IEEE Transactions on Image Processing", "ref_id": "b27", "title": "Delving deep into label smoothing", "year": "2021" }, { "authors": "J Liang; L Li; Z Bing; B Zhao; Y Tang; B Lin; H Fan", "journal": "Springer", "ref_id": "b28", "title": "Efficient one pass self-distillation with zipf's label smoothing", "year": "2022" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b29", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "A Krizhevsky; G Hinton", "journal": "", "ref_id": "b30", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "J Deng; W Dong; R Socher; L.-J Li; K Li; L Fei-Fei", "journal": "Ieee", "ref_id": "b31", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "S Xie; R Girshick; P Dollár; Z Tu; K He", "journal": "", "ref_id": "b32", "title": "Aggregated residual transformations for deep neural networks", "year": "2017" }, { "authors": "G Huang; Z Liu; L Van Der Maaten; K Q Weinberger", "journal": "", "ref_id": "b33", "title": "Densely connected convolutional networks", "year": "2017" }, { "authors": "T Devries; G W Taylor", "journal": "", "ref_id": "b34", "title": "Improved regularization of convolutional neural networks with cutout", "year": "2017" }, { "authors": "R R Selvaraju; M Cogswell; A Das; R Vedantam; D Parikh; D Batra", "journal": "", "ref_id": "b35", "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "year": "2017" }, { "authors": "G C Linderman; M Rachh; J G Hoskins; S Steinerberger; Y Kluger", "journal": "Nature methods", "ref_id": "b36", "title": "Fast interpolation-based t-sne for improved visualization of single-cell rna-seq data", "year": "2019" } ]
[ { "formula_coordinates": [ 3, 342.78, 338.45, 220.25, 26.56 ], "formula_id": "formula_0", "formula_text": "p (z|x) = softmax (z, τ ) = exp (z/τ ) K k=1 exp(z k /τ ) ,(1)" }, { "formula_coordinates": [ 3, 363.69, 407.91, 199.35, 30.55 ], "formula_id": "formula_1", "formula_text": "L CE (p (z|x) , y) = 1 K K k=1 y k log p k ,(2)" }, { "formula_coordinates": [ 3, 374.39, 474.44, 188.65, 14.66 ], "formula_id": "formula_2", "formula_text": "min θ E {x,y}∈D L CE (p (z|x) , y).(3)" }, { "formula_coordinates": [ 3, 348.81, 641.07, 214.23, 30.55 ], "formula_id": "formula_3", "formula_text": "L KL (q (z |x) , p (z|x)) = 1 K K k=1 q k log q k p k .(4)" }, { "formula_coordinates": [ 3, 344.13, 694.09, 218.91, 24.83 ], "formula_id": "formula_4", "formula_text": "L KD (p, y, q) =L CE (p (z|x) , y) + τ 2 • L KL (q (z |x) , p (z|x))(5)" }, { "formula_coordinates": [ 4, 66.11, 185.75, 233.91, 9.68 ], "formula_id": "formula_5", "formula_text": "L ST (h (θ, x) , I (θ, w)) := ρ (h (θ, w) , I (θ, x)) ,(6)" }, { "formula_coordinates": [ 4, 56.48, 316.67, 243.55, 9.68 ], "formula_id": "formula_6", "formula_text": "L SKD = L CE (p (z|x) , y) + ζ • L ST (h (θ, x) , I (θ, x)) (7)" }, { "formula_coordinates": [ 4, 313.75, 177.56, 129.76, 22.06 ], "formula_id": "formula_7", "formula_text": "θ t+1 ← -θ t -γ • ∇L DRG ; 10:" }, { "formula_coordinates": [ 4, 348.62, 366.57, 214.41, 9.65 ], "formula_id": "formula_8", "formula_text": "L HL = L CE (q(z |x), y) + L CE (p(z|x), y).(8)" }, { "formula_coordinates": [ 4, 368.18, 418.13, 194.85, 11.72 ], "formula_id": "formula_9", "formula_text": "L RG = τ 2 • L KL (q(z |x), p(z|x))(9)" }, { "formula_coordinates": [ 4, 385.4, 460.19, 173.49, 9.65 ], "formula_id": "formula_10", "formula_text": "L DRG = L HL + α • L RG , (10" }, { "formula_coordinates": [ 4, 558.89, 460.51, 4.15, 8.64 ], "formula_id": "formula_11", "formula_text": ")" }, { "formula_coordinates": [ 5, 49.56, 519.96, 250.46, 24.43 ], "formula_id": "formula_12", "formula_text": "t 1 , zt 1 , • • • zt K }, such that zt 1 ≤ zt 1 ≤ . . . ≤ zt K ." }, { "formula_coordinates": [ 5, 98.06, 599.29, 201.96, 12.69 ], "formula_id": "formula_13", "formula_text": "L t SR = τ 2 • L KL (p(z t-1 |x), p(z t |x)).(11)" }, { "formula_coordinates": [ 5, 101.65, 656.88, 198.37, 12.69 ], "formula_id": "formula_14", "formula_text": "L DSR = L CE (p(z|x), y) + β • L t SR ,(12)" }, { "formula_coordinates": [ 5, 317.73, 142.14, 6.2, 6.91 ], "formula_id": "formula_15", "formula_text": "6:" }, { "formula_coordinates": [ 5, 317.73, 150.9, 124.23, 22.06 ], "formula_id": "formula_16", "formula_text": "θ t+1 ← -θ t -γ • ∇L DSR ; 8:" }, { "formula_coordinates": [ 5, 372.97, 277.43, 190.06, 12.69 ], "formula_id": "formula_17", "formula_text": "L = L HL + α • L RG + β • L t SR .(13)" } ]
10.1145/1553374.1553380
2023-05-19
[{"figure_ref":[],"heading":"Introduction","publication_ref":["b2","b1","b32","b23","b41","b8","b35"(...TRUNCATED)
"Information extraction (IE) systems aim to automatically extract structured information, such as na(...TRUNCATED)
Easy-to-Hard Learning for Information Extraction *
[{"figure_caption":"Figure 1 :1Figure1: Overview of E2H consisting of three stages, i.e., the easy s(...TRUNCATED)
Chang Gao; Wenxuan Zhang; Wai Lam; Bing Lidong
[{"authors":"Yoshua Bengio; Jérôme Louradour; Ronan Collobert; Jason Weston","journal":"Associatio(...TRUNCATED)
[{"formula_coordinates":[2.0,306.14,589.38,212.89,15.4],"formula_id":"formula_0","formula_text":"{ ((...TRUNCATED)
2023-05-16
[{"figure_ref":["fig_0"],"heading":"Introduction","publication_ref":["b23","b16","b38","b16","b1","b(...TRUNCATED)
"3D LiDAR-based single object tracking (SOT) has gained increasing attention as it plays a crucial r(...TRUNCATED)
Correlation Pyramid Network for 3D Single Object Tracking
[{"figure_caption":"Figure 1 .1Figure 1. Visualization results of the four different categories. The(...TRUNCATED)
Mengmeng Wang; Teli Ma; Xingxing Zuo; Jiajun Lv; Yong Liu
[{"authors":"","journal":"SC3D","ref_id":"b0","title":"","year":""},{"authors":"Luca Bertinetto; Jac(...TRUNCATED)
[{"formula_coordinates":[3.0,50.11,668.33,236.25,33.56],"formula_id":"formula_0","formula_text":"B 1(...TRUNCATED)
2024-02-26
[{"figure_ref":["fig_0"],"heading":"Introduction","publication_ref":["b1","b21","b38","b48","b36","b(...TRUNCATED)
"Iterated belief revision requires information about the current beliefs. This information is repres(...TRUNCATED)
Representing states in iterated belief revision
[{"figure_caption":"Figure 1 :1Figure 1: Comparison of the four considered representations","figure_(...TRUNCATED)
Paolo Liberatore
[{"authors":"C Areces; V Becher","journal":"Springer Science & Business Media","ref_id":"b0","title"(...TRUNCATED)
[{"formula_coordinates":[4.0,242.4,141.4,125.4,124.34],"formula_id":"formula_0","formula_text":"❅ (...TRUNCATED)
10.18653/v1/2021.acl-long.224
2023-05-22
[{"figure_ref":["fig_0","fig_0"],"heading":"Introduction","publication_ref":["b9","b3","b4","b15","b(...TRUNCATED)
"We present a new task, speech dialogue translation mediating speakers of different languages. We co(...TRUNCATED)
Towards Speech Dialogue Translation Mediating Speakers of Different Languages
[{"figure_caption":"Figure 1 :1Figure1: The importance of considering context in SDT. \"甘い\" can(...TRUNCATED)
Shuichiro Shimizu; Chenhui Chu; Sheng Li; Sadao Kurohashi
[{"authors":"Luisa Bentivogli; Mauro Cettolo; Marco Gaido; Alina Karakanta; Alberto Martinelli; Matt(...TRUNCATED)
[{"formula_coordinates":[2.0,306.14,285.94,218.27,39.74],"formula_id":"formula_0","formula_text":"er(...TRUNCATED)
10.1016/j.inffus.2021.05.008
2023-07-08
[{"figure_ref":["fig_7","fig_7","fig_0"],"heading":"Introduction","publication_ref":["b19","b45","b2(...TRUNCATED)
"Generating synthetic data through generative models is gaining interest in the ML community and bey(...TRUNCATED)
Synthetic Data, Real Errors: How (Not) to Publish and Use Synthetic Data
[{"figure_caption":"Figure 2 .2Figure 2. Conclusions drawn from synthetic data do not always transfe(...TRUNCATED)
Boris Van Breugel; Zhaozhi Qian; Mihaela Van Der Schaar
[{"authors":"M Abdar; F Pourpanah; S Hussain; D Rezazadegan; L Liu; M Ghavamzadeh; P Fieguth; X Cao;(...TRUNCATED)
[{"formula_coordinates":[3.0,451.42,102.5,86.52,8.64],"formula_id":"formula_0","formula_text":"(i) ((...TRUNCATED)
2023-05-16
[{"figure_ref":[],"heading":"Introduction","publication_ref":["b12","b27","b8","b1","b19","b20","b4"(...TRUNCATED)
"The problem of model counting, also known as #SAT, is to compute the number of models or satisfying(...TRUNCATED)
Rounding Meets Approximate Model Counting
[{"figure_caption":"The number of repetitions depends on max(Pr[L], Pr[U ]). The current algorithmic(...TRUNCATED)
Jiong Yang; Kuldeep S Meel
[{"authors":"R Alur; R Bodik; G Juniwal; M M K Martin; M Raghothaman; S A Seshia; R Singh; A Solar-L(...TRUNCATED)
[{"formula_coordinates":[3.0,134.77,564.29,240.41,14.38],"formula_id":"formula_0","formula_text":"es(...TRUNCATED)
10.18653/v1/2020.acl-main.421
2023-05-16
[{"figure_ref":[],"heading":"Introduction","publication_ref":[],"table_ref":[],"text":"Product quest(...TRUNCATED)
"Product Question Answering (PQA) systems are key in e-commerce applications to provide responses to(...TRUNCATED)
xPQA: Cross-Lingual Product Question Answering across 12 Languages
[{"figure_caption":"Figure 2 :2Figure 2: Summary of experimented approaches. The ePQA_MT (and xPQA_M(...TRUNCATED)
Xiaoyu Shen; Akari Asai; Bill Byrne; Adrià De Gispert
[{"authors":"David Adelani; Jesujoba Alabi; Angela Fan; Julia Kreutzer; Xiaoyu Shen; Machel Reid; Da(...TRUNCATED)
[]

Citation Parsing Dataset

This dataset was generated with GPT-3.5. Each row shown in the preview above is a machine-parsed research paper, including its title, abstract, section text, figure captions, author list, references, and extracted formulas.
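A minimal usage sketch with the Hugging Face `datasets` library is shown below. The repository id is a placeholder, and the `train` split name and the `title`/`abstract` field names are assumptions based on the preview above rather than a documented schema; adjust them to the dataset's real layout.

```python
from datasets import load_dataset

# Placeholder repository id -- replace with the dataset's actual id on the Hub.
ds = load_dataset("your-username/citation-parsing", split="train")

print(ds.column_names)   # the real column names depend on the dataset's schema
example = ds[0]          # a single row as a Python dict

# Field names below are assumptions based on the preview shown above.
print(example["title"])
print(example["abstract"][:300])
```

Calling `load_dataset` without the `split` argument instead returns a `DatasetDict`, whose keys list the splits that are actually available.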
