|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T02:11:21.115511Z" |
|
}, |
|
"title": "Cross-Lingual Transfer with MAML on Trees", |
|
"authors": [ |
|
{ |
|
"first": "Jezabel", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Garcia", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Federica", |
|
"middle": [], |
|
"last": "Freddi", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Feng-Ting", |
|
"middle": [], |
|
"last": "Liao", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Jamie", |
|
"middle": [], |
|
"last": "Mcgowan", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Nieradzik", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Da-Shan", |
|
"middle": [], |
|
"last": "Shiu", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Ye", |
|
"middle": [], |
|
"last": "Tian", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Alberto", |
|
"middle": [], |
|
"last": "Bernacchia", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In meta-learning, the knowledge learned from previous tasks is transferred to new ones, but this transfer only works if tasks are related. Sharing information between unrelated tasks might hurt performance, and it is unclear how to transfer knowledge across tasks that have a hierarchical structure. Our research extends a meta-learning model, MAML, by exploiting hierarchical task relationships. Our algorithm, TreeMAML, adapts the model to each task with a few gradient steps, but the adaptation follows the hierarchical tree structure: in each step, gradients are pooled across tasks clusters and subsequent steps follow down the tree. We also implement a clustering algorithm that generates the tasks tree without previous knowledge of the task structure, allowing us to make use of implicit relationships between the tasks. We show that TreeMAML successfully trains natural language processing models for crosslingual Natural Language Inference by taking advantage of the language phylogenetic tree. This result is useful, since most languages in the world are under-resourced and the improvement on cross-lingual transfer allows the internationalization of NLP models.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In meta-learning, the knowledge learned from previous tasks is transferred to new ones, but this transfer only works if tasks are related. Sharing information between unrelated tasks might hurt performance, and it is unclear how to transfer knowledge across tasks that have a hierarchical structure. Our research extends a meta-learning model, MAML, by exploiting hierarchical task relationships. Our algorithm, TreeMAML, adapts the model to each task with a few gradient steps, but the adaptation follows the hierarchical tree structure: in each step, gradients are pooled across tasks clusters and subsequent steps follow down the tree. We also implement a clustering algorithm that generates the tasks tree without previous knowledge of the task structure, allowing us to make use of implicit relationships between the tasks. We show that TreeMAML successfully trains natural language processing models for crosslingual Natural Language Inference by taking advantage of the language phylogenetic tree. This result is useful, since most languages in the world are under-resourced and the improvement on cross-lingual transfer allows the internationalization of NLP models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Deep learning models require a large amount of data in order to perform well when trained from scratch. When data is scarce for a given task, we can transfer the knowledge gained in a source task to quickly learn a target task, if the two tasks are related. Multi-task learning studies how to learn multiple tasks simultaneously with a single model, by taking advantage of task relationships (Ruder, 2017; Zhang and Yang, 2018) . However, in Multitask learning models, a set of tasks is fixed in advance and they do not generalize to new tasks. Instead, Meta-learning is inspired by the human abil-ity to learn how to quickly learn new tasks by using the knowledge of previously learned ones.", |
|
"cite_spans": [ |
|
{ |
|
"start": 392, |
|
"end": 405, |
|
"text": "(Ruder, 2017;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 406, |
|
"end": 427, |
|
"text": "Zhang and Yang, 2018)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Meta-learning has been widely used in multiple domains, especially in recent years since the advent of Deep Learning (Hospedales et al., 2020) . A successful model for meta-learning, MAML (Finn et al., 2017) , does not diversify task relationships according to their similarity and it is unclear how to modify it for that purpose. Furthermore, there is still a lack of methods for sharing information across tasks that have a hierarchical structure, and the goal of our work is to fill this gap.", |
|
"cite_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 142, |
|
"text": "(Hospedales et al., 2020)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 188, |
|
"end": 207, |
|
"text": "(Finn et al., 2017)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The use of MAML-like algorithms in NLP has just recently been proved successful for Natural Language Inference (NLI) and Question Answering (QA) (Nooralahzadeh et al., 2020) . These results represent a practical meta-learning solution to the fundamental problem of applying NLP models to under-resourced languages where data annotation is scarce. This work, combined with the fact that languages can be organized hierarchically using their phylogenetic tree (Dunn et al., 2011) , motivated us to develop a hierarchical meta-learning algorithm, that we call TreeMAML.", |
|
"cite_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 173, |
|
"text": "(Nooralahzadeh et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 458, |
|
"end": 477, |
|
"text": "(Dunn et al., 2011)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we make the following contributions:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We propose a novel modification of MAML to account for a hierarchy of tasks. The algorithm uses the tree structure of data during adaptation, by pooling gradients across tasks at each adaptation step and subsequent steps follow down the tree (see Figure 1a ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 249, |
|
"end": 258, |
|
"text": "Figure 1a", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We modify the hierarchical clustering from Menon et al. (2019) to allow asymmetric tree structure. We apply this clustering algorithm to learn dynamic trees that exploit the similarity between tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 45, |
|
"end": 64, |
|
"text": "Menon et al. (2019)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We apply TreeMAML to few-shot NLI, using the XNLI dataset (Conneau et al., 2018) , obtaining accuracies higher than previous stateof-the-art.", |
|
"cite_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 82, |
|
"text": "(Conneau et al., 2018)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The problem of quantifying and exploiting task relationships has a long history in Multi-task learning and is usually approached by parameter sharing, see Ruder (2017) ; Zhang and Yang (2018) for reviews. However, Multi-task Learning is fundamentally different from Meta-learning as it does not consider the problem of generalizing to new tasks (Hospedales et al., 2020) . Recent work includes Zamir et al. (2018) , who studies a large number of computer vision tasks and quantifies the transfer between all pairs of tasks. Achille et al. (2019) proposes a novel measure of task representation by assigning an importance score to each model parameter in each task. The score is based on each task's loss function gradients with respect to each model parameter. This work suggests that gradients can be used as a measure of task similarity and we use this insight in our proposed algorithm. In Meta-learning, a few papers have been recently published on learning and using task relationships. The work of Yao et al. (2019) applies hierarchical clustering to task representations learned by an autoencoder and uses those clusters to adapt the parameters to each task. The model of Liu et al. (2019) maps the classes of each task into the edges of a graph, it meta-learns relationships between classes and how to allocate new classes by using a graph neural network with attention. However, these algorithms are not model-agnostic; they have a fixed backbone and loss function and are thus difficult to apply to new problems. Instead, we design our algorithm as a straightforward generalization of Model-agnostic meta-learning (MAML, Finn et al. (2017) ) and it can be applied to any loss function and backbone.", |
|
"cite_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 167, |
|
"text": "Ruder (2017)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 170, |
|
"end": 191, |
|
"text": "Zhang and Yang (2018)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 345, |
|
"end": 370, |
|
"text": "(Hospedales et al., 2020)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 394, |
|
"end": 413, |
|
"text": "Zamir et al. (2018)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 524, |
|
"end": 545, |
|
"text": "Achille et al. (2019)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1004, |
|
"end": 1021, |
|
"text": "Yao et al. (2019)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1179, |
|
"end": 1196, |
|
"text": "Liu et al. (2019)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1624, |
|
"end": 1649, |
|
"text": "(MAML, Finn et al. (2017)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A couple of studies looked into modifying MAML to account for task similarities. The work of Jerfel et al. (2019) finds a different initial condition for each cluster of tasks and applies the algorithm to the problem of continual learning. The work of Katoch et al. (2020) defines parameter updates for a task by aggregating gradients from other tasks according to their similarity. However, in contrast with our algorithm, both of these models are not hierarchical, tasks are clustered on one level only and cannot be represented by a tree structure.", |
|
"cite_spans": [ |
|
{ |
|
"start": 252, |
|
"end": 272, |
|
"text": "Katoch et al. (2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Recently, MAML has been applied to crosslingual meta-learning (Gu et al., 2018; Dou et al., 2019) . In particular, the implementation by Nooralahzadeh et al. (2020) , called XMAML, obtained good results on NLI and QA tasks. As in the previously mentioned computer vision studies, some of these NLP algorithms looked into the relationships among languages to select the support languages used in their meta-learning algorithm, but they do not use the hierarchical structure of the languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 79, |
|
"text": "(Gu et al., 2018;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 80, |
|
"end": 97, |
|
"text": "Dou et al., 2019)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 137, |
|
"end": 164, |
|
"text": "Nooralahzadeh et al. (2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We follow the notation of Hospedales et al. (2020) . We assume the existence of a distribution over tasks \u03c4 and, for each task, a distribution over data points D and a loss function L. The loss function of the meta-learning problem, L meta , is defined as an average across both distributions of tasks and data points:", |
|
"cite_spans": [ |
|
{ |
|
"start": 26, |
|
"end": 50, |
|
"text": "Hospedales et al. (2020)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The meta-learning problem", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L meta (\u03c9) = E \u03c4 E D|\u03c4 L \u03c4 (\u03b8 \u03c4 (\u03c9); D)", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "The meta-learning problem", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The goal of meta-learning is to minimize the loss function with respect to a vector of metaparameters \u03c9. The vector of parameters \u03b8 is taskspecific and depends on the meta-parameters \u03c9. Different meta-learning algorithms correspond to a different choice of \u03b8 \u03c4 (\u03c9). We describe below the choice of MAML that will also be followed by TreeMAML. During meta-training, the loss is evaluated on a sample of m tasks and n v validation data points for each task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The meta-learning problem", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "L meta (\u03c9) = 1 mn v m i=1 nv j=1 L \u03c4 i (\u03b8 \u03c4 i (\u03c9); D ij )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The meta-learning problem", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(2) For each task i, the parameters \u03b8 \u03c4 i are learned by a set of n t training data points, distinct from the validation data. During meta-testing, a new (target) task is given and the parameters \u03b8 are learned by a set of n r target data points. In this work, we also use a batch of training data points to adapt \u03b8 at test time. No training data is used to compute the model's final performance, which is computed on separate test data of the target task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The meta-learning problem", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "MAML aims at finding the optimal initial condition \u03c9 from which a suitable parameter set can be found, separately for each task, after K gradient steps (Finn et al., 2017) . For task i, we define the single gradient step with learning rate \u03b1 as", |
|
"cite_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 171, |
|
"text": "(Finn et al., 2017)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TreeMAML", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "U i (\u03c9) = \u03c9 \u2212 \u03b1 n t nt j=1 \u2207L(\u03c9; D ij )", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "TreeMAML", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Then, MAML with K gradient steps corresponds to K iterations of this step.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TreeMAML", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b8 \u03c4 i (\u03c9) = U i (U i (...U i (\u03c9))) (K times)", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "TreeMAML", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "This update is usually referred to as inner loop and is performed separately for each task, while optimization of the loss 2 is referred to as outer loop.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TreeMAML", |
|
"sec_num": "3.1" |
|
}, |
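{

"text": "To make the inner loop concrete, the following is a minimal PyTorch-style sketch of Eqs. (3)-(4); it assumes the model is expressed as a list of parameter tensors and a loss function loss_fn(params, batch), and the function names (inner_step, adapt) are illustrative rather than the paper's released code.\n\nimport torch\n\ndef inner_step(params, loss_fn, batch, alpha):\n    # One gradient step of Eq. (3): params <- params - alpha * (average gradient over the support batch).\n    loss = loss_fn(params, batch)\n    grads = torch.autograd.grad(loss, params, create_graph=True)  # keep the graph so the outer loop can differentiate through the step\n    return [p - alpha * g for p, g in zip(params, grads)]\n\ndef adapt(meta_params, loss_fn, support_batches, alpha, K):\n    # K iterations of the inner step (Eq. (4)), starting from the shared initialisation omega.\n    params = list(meta_params)\n    for k in range(K):\n        params = inner_step(params, loss_fn, support_batches[k % len(support_batches)], alpha)\n    return params",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "TreeMAML",

"sec_num": "3.1"

},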
|
{ |
|
"text": "We propose to modify MAML in order to account for a hierarchical structure of tasks. The idea is illustrated in Figure 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 120, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "TreeMAML", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "At each gradient step k, we assume that tasks are aggregated into C k clusters and the parameters for each task are updated according to the average gradient across tasks within the corresponding cluster (in Fig.1b , we use K = 3 steps and C 1 = 2, C 2 = 4, C 3 = 8). We denote by T c the set of tasks in cluster c. Then, the gradient update for the parameters of each task belonging to cluster c is equal to", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 208, |
|
"end": 214, |
|
"text": "Fig.1b", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "TreeMAML", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "U c (\u03c9) = \u03c9 \u2212 \u03b1 n t |T c | i\u2208Tc nt j=1 \u2207L(\u03c9; D (i) j ) (5)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TreeMAML", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Furthermore, we denote by c k i the cluster to which task i belongs at step k. Then, TreeMAML with k gradient steps corresponds to K iterations of this step.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TreeMAML", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b8 \u03c4 i (\u03c9) = U c K i (U c K\u22121 i (...U c 1 i (\u03c9)))", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "TreeMAML", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The intuition is the following: if each task has scarce data, gradient updates for single tasks are noisy and adding up gradients across similar tasks increases the signal. Note that we recover MAML if C k is equal to the total number of tasks m at all steps. On the other hand, if C k = 1, then the inner loop would take a step with a gradient averaged across all tasks. Because at one specific step the weight updates are equal for all tasks within a cluster, it is possible to define the steps of the inner loop update per cluster c instead of per task \u03b8 \u03c4 i . Given a cluster c and its parent cluster p c in the tree, the update at step k is given by", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TreeMAML", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u03b8 c,k = \u03b8 pc,k\u22121 \u2212 \u03b1 n t |T c | i\u2208Tc nt j=1 \u2207L(\u03b8 pc,k\u22121 ; D ij ) (7)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TreeMAML", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where \u03b8 c k is the parameter value for cluster c at step k. In terms of the notation used in expression 6, we have the equivalence \u03b8 \u03c4 i (\u03c9) = \u03b8 c i ,K , which depends on the initial condition \u03c9. The full procedure is described in Algorithm 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TreeMAML", |
|
"sec_num": "3.1" |
|
}, |
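{

"text": "A minimal sketch of the per-cluster update of Eq. (7), assuming the averaged per-task gradients g_{ik} are already available as lists of tensors; the bookkeeping names (theta_parent, members) are illustrative assumptions, not the authors' implementation.\n\nimport torch\n\ndef cluster_update(theta_parent, task_grads, members, alpha):\n    # Eq. (7): start from the parent cluster's parameters and take one step with the\n    # gradient averaged over the tasks in this cluster (members indexes into task_grads).\n    avg = [torch.stack([task_grads[i][j] for i in members]).mean(dim=0)\n           for j in range(len(theta_parent))]\n    return [p - alpha * g for p, g in zip(theta_parent, avg)]\n\n# usage at step k: theta[c] = cluster_update(theta_prev[parent[c]], grads_k, tasks_in_cluster[c], alpha)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "TreeMAML",

"sec_num": "3.1"

},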
|
{ |
|
"text": "We consider two versions of the algorithm, depending on how we obtain the tree structure similar to Srivastava and Salakhutdinov (2013):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TreeMAML", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Fixed TreeMAML. The tree is fixed by the knowledge of the tree structure of tasks when this structure is available. In that case, the values of C k are determined by such tree.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TreeMAML", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Learned TreeMAML. The tree is unknown a priori and is learned using a hierarchical clustering algorithm. In that case, the values of C k are determined at each step by the clustering algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TreeMAML", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In the latter case, we cluster tasks based on the gradients of each task loss, consistent with recent work Achille et al. (2019) . After each step k at cluster c i , the clustering algorithm takes as input the gradient vectors of the children tasks i", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 128, |
|
"text": "Achille et al. (2019)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TreeMAML", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "g ik = 1 n t nt j=1 \u2207L(\u03b8 c i ,k ; D ij )", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "TreeMAML", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "and these gradients are further allocated into clusters according to their similarity. The clustering algorithm is described in subsection 3.2. Similar to MAML, adaptation to a new task is performed by computing \u03b8 (i) (\u03c9) on a batch of data of the target task. In order to exploit task relationships, we first reconstruct the tree structure by using a batch of training data and then we introduce the new task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TreeMAML", |
|
"sec_num": "3.1" |
|
}, |
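{

"text": "As an illustration of how the gradient vectors of Eq. (8) can be fed to the clustering step, the sketch below flattens each task's averaged gradient into a single vector and compares tasks by cosine similarity; this is an assumed implementation detail and not code from the paper.\n\nimport torch\nimport torch.nn.functional as F\n\ndef task_gradient_feature(params, loss_fn, support_batch):\n    # g_{ik} of Eq. (8): average gradient over the task's support batch, flattened into one vector.\n    loss = loss_fn(params, support_batch)\n    grads = torch.autograd.grad(loss, params)\n    return torch.cat([g.reshape(-1) for g in grads])\n\ndef pairwise_similarity(features):\n    # Cosine similarity between task gradient vectors; rows and columns index tasks.\n    feats = F.normalize(torch.stack(features), dim=1)\n    return feats @ feats.t()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "TreeMAML",

"sec_num": "3.1"

},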
|
{ |
|
"text": "We employ a hierarchical clustering algorithm to cluster the gradients of our model parameters in the learned TreeMAML case. We specifically opt for an online clustering algorithm to maximise computational efficiency at test time and scalability. When a new task is evaluated, we reuse the tree structure generated for a training batch and add the Figure 1 : Illustration of the MAML(a) and TreeMAML(b) algorithms. Both algorithms are designed to quickly adapt to new tasks with a small number of training samples. MAML achieves this by introducing a gradient step in the direction of the single task. TreeMAML follows a similar approach, but it exploits the relationship between tasks by introducing the hierarchical aggregation of the gradients.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 348, |
|
"end": 356, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Clustering Algorithm", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Require: distribution over tasks p(\u03c4 ); distribution over data for each task p(D|\u03c4 ); Require: number of inner steps K; number of training tasks m; learning rates \u03b1, \u03b2; Require: number of clusters C k for each step k; loss function L \u03c4 (\u03c9, D) for each task randomly initialize \u03c9 while not done do sample batch of i = 1 : m tasks {\u03c4 i } \u223c p(\u03c4 ) for all tasks i = 1 : m initialize a single cluster c i = 1 initialize \u03b8 1,0 = \u03c9 for steps k = 1 : K do for tasks i = 1 : m do sample batch of j = 1 :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 1 TreeMAML", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "n v data points {D ij } \u223c p(D|\u03c4 i ) evaluate gradient g ik = 1 nt nt j=1 \u2207L \u03c4 i (\u03b8 c i ,k\u22121 ; D ij ) end for regroup tasks into C k clusters T c = {i : c i = c} according to similarity of {g ik } and parent clusters {p c } update \u03b8 c,k = \u03b8 pc,k\u22121 \u2212 \u03b1 |Tc| i\u2208Tc g ik for all clusters c = 1 : C k end for update \u03c9 \u2190 \u03c9 \u2212 \u03b2 1 mnv m i=1 nv j=1 \u2207 \u03c9 L \u03c4 i (\u03b8 c i ,K (\u03c9); D ij ) end while", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 1 TreeMAML", |
|
"sec_num": null |
|
}, |
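{

"text": "A minimal sketch of the outer (meta) update at the end of Algorithm 1, assuming the adapted parameters \u03b8_{c_i,K} were produced with create_graph=True so that the meta-gradient can flow back to \u03c9; the plain SGD step and all names are illustrative choices, not the paper's code.\n\nimport torch\n\ndef meta_update(meta_params, adapted_params_per_task, loss_fn, query_batches, beta):\n    # omega <- omega - beta * gradient of the average validation (query) loss of the adapted parameters.\n    meta_loss = sum(loss_fn(theta, batch) for theta, batch in zip(adapted_params_per_task, query_batches))\n    meta_loss = meta_loss / len(query_batches)\n    grads = torch.autograd.grad(meta_loss, meta_params)\n    with torch.no_grad():\n        for p, g in zip(meta_params, grads):\n            p -= beta * g",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Algorithm 1 TreeMAML",

"sec_num": null

},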
|
{ |
|
"text": "new task. This process saves us from computing a new task hierarchy from scratch for every new task. Moreover, with offline hierarchical clustering, all the data needs to be available to the clustering algorithm simultaneously, which becomes a problem when dealing with larger batch sizes. Therefore online clustering favours scalability.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 1 TreeMAML", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We follow the online top-down (OTD) approach set out by Menon et al. (2019) and adapt this to approximate non-binary tree structures. Our clustering algorithm is shown in Algorithm 2. Specifically, we introduce two modifications to the original OTD algorithm:", |
|
"cite_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 75, |
|
"text": "Menon et al. (2019)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 1 TreeMAML", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Maximum Tree Depth Parameter D: This is equivalent to the number of inner steps to take in the TreeMAML since the tree is a representation of the inner loop where each layer in the tree represents a single inner step.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 1 TreeMAML", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Non-binary Tree Approximation: We introduce a hyperparameter \u03be which represents how far the similarity of a new task needs to be to the average cluster similarity in order to be considered a child of that same cluster. This is not an absolute value of distance, but it is a multiplicative factor of the standard deviation of the intracluster similarities. Introducing this factor allows clusters at any level to have more than two children. Languages can be embraced in a forest of phylogenetic trees (Dunn et al., 2011) , for example, the Indo-European and Asian trees ( Figure 2 ). TreeMAML exploits this hierarchical structure to generalize the performance of models across languages, including under-resourced languages, useing all the available languages in the tree.", |
|
"cite_spans": [ |
|
{ |
|
"start": 503, |
|
"end": 522, |
|
"text": "(Dunn et al., 2011)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 574, |
|
"end": 582, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Algorithm 1 TreeMAML", |
|
"sec_num": null |
|
}, |
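{

"text": "A minimal sketch of the non-binary insertion rule with the \u03be threshold, using the average pairwise cosine similarity of gradient vectors as \u03c9 and approximating \u03c3_T by the spread of the new task's similarities to the existing children; all of this (NumPy, the helper names, the \u03c3 approximation) is an illustrative assumption rather than the authors' implementation.\n\nimport numpy as np\n\ndef cos(a, b):\n    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))\n\ndef avg_sim(vecs):\n    # omega(.) in Algorithm 2: average pairwise cosine similarity of a set of gradient vectors.\n    sims = [cos(vecs[i], vecs[j]) for i in range(len(vecs)) for j in range(i + 1, len(vecs))]\n    return float(np.mean(sims)) if sims else 1.0\n\ndef place_new_task(children, x, xi):\n    # children: gradient vectors already under this node; x: the new task's gradient vector.\n    # Returns one of 'descend', 'new_parent', 'sibling' following the non-binary OTD rule.\n    old = avg_sim(children)\n    new = avg_sim(children + [x])\n    sigma = np.std([cos(c, x) for c in children])\n    if new > old:\n        return 'descend'      # recurse into the child most similar to x (or attach here at maximum depth)\n    if new < old - xi * sigma:\n        return 'new_parent'   # the current node and x become children of a new cluster\n    return 'sibling'          # add x as an extra child, so clusters may have more than two children",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Algorithm 1 TreeMAML",

"sec_num": null

},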
|
{ |
|
"text": "We adapt a high-resource language model, Multi-BERT (Devlin et al., 2018) , to a NLI task. In particular, we consider the problem of Few-Shot NLI using the XNLI data set (Conneau et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 73, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 170, |
|
"end": 192, |
|
"text": "(Conneau et al., 2018)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 1 TreeMAML", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This dataset consists of a crowd-sourced collection of 5,000 test and 2,500 dev sentence-label pairs from the MultiNLI corpus. They are annotated with textual entailment and translated into 15 languages: English (en), French (fr), Spanish (es), German (de), Greek(el), Bulgarian (bg), Russian (ru), Turkish, Arabic, Vietnamese (vi), Thai (th), Chinese (zh), Hindi (hi), Swahili and Urdu (ur). Twelve of these languages are part of the same phylogenetic tree, and we focus our study on those languages (see Figure 2) . We separately set as target language each language of the tree and we used the eleven remaining languages as auxiliary languages for meta-training.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 506, |
|
"end": 515, |
|
"text": "Figure 2)", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Algorithm 1 TreeMAML", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Each sentence has also an associated topic, or genre, among a collection of 10 possible genres (Face-To-Face, Telephone, Government, 9/11, Letters, Oxford University Press (OUP), Slate, Verbatim, and Government, Fiction). We define each combination of a language and a genre as a task, and we consider the problem of few-shot metalearning using three shots for each task during metatraining. We add the new target task to the original distribution of tasks, we apply the TreeMAML algorithm and evaluate the model on the target language test set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 1 TreeMAML", |
|
"sec_num": null |
|
}, |
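{

"text": "A minimal sketch of how the (language, genre) combinations could be turned into 3-shot tasks; the example record fields and helper names are hypothetical, not an actual XNLI loader API.\n\nimport random\nfrom collections import defaultdict\n\ndef build_tasks(examples):\n    # examples: iterable of dicts with 'language', 'genre', 'premise', 'hypothesis', 'label'.\n    tasks = defaultdict(list)\n    for ex in examples:\n        tasks[(ex['language'], ex['genre'])].append(ex)\n    return tasks  # one task per (language, genre) combination\n\ndef sample_episode(task_examples, shots=3):\n    # Support/query split for one few-shot task.\n    batch = random.sample(task_examples, 2 * shots)\n    return batch[:shots], batch[shots:]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Algorithm 1 TreeMAML",

"sec_num": null

},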
|
{ |
|
"text": "We use the TreeMAML algorithm to fine-tune the top layer of Multi-BERT (layer 12), with four inner steps. We compare our results with MAML, using the same number of inner steps, with the baseline Multi-BERT, and with XMAML Nooralahzadeh et al. (2020) . An important difference of our approach is that, while XMAML uses only two auxiliary languages to fine tune Multi-BERT to a target language, we use all other languages as auxiliary languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 217, |
|
"end": 250, |
|
"text": "XMAML Nooralahzadeh et al. (2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 1 TreeMAML", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the case of fixed TreeMAML, we use the phylogenetic tree in Figure 2 . Fine-tuning of Multi-BERT for the target language benefits not only from proximal (auxiliary) languages, but also from all other languages in the tree that share roots with the target language. For example, if the target language is German, the fine-tuning in fixed TreeMAML would use for the first step of the gradient update all the remaining languages. In the second step, the auxiliary languages would be all the training languages of the Indo-European branch. The third and last steps uses only English.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 71, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Fixed TreeMAML", |
|
"sec_num": "4.1" |
|
}, |
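{

"text": "A minimal sketch of how a fixed tree can be turned into per-step clusters: each language is mapped to its path of ancestors, and step k pools the tasks that share the first k levels of the path. The dictionary below only spells out branches explicitly mentioned in the text; the remaining entries and branch names are placeholders, not the exact tree of Figure 2.\n\n# Hypothetical encoding of the simplified tree: language -> path of ancestors (root first).\nTREE_PATH = {\n    'de': ('all', 'indo-european', 'germanic'),\n    'en': ('all', 'indo-european', 'germanic'),\n    'fr': ('all', 'indo-european', 'romance'),\n    'es': ('all', 'indo-european', 'romance'),\n    'zh': ('all', 'asian', 'sinitic'),\n    # ... remaining languages follow the same pattern\n}\n\ndef clusters_at_step(languages, k):\n    # Step k of the inner loop pools gradients over languages sharing the first k tree levels.\n    groups = {}\n    for lang in languages:\n        key = TREE_PATH[lang][:k]\n        groups.setdefault(key, []).append(lang)\n    return list(groups.values())\n\n# clusters_at_step(['de', 'en', 'fr', 'es', 'zh'], 2) -> [['de', 'en', 'fr', 'es'], ['zh']]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Fixed TreeMAML",

"sec_num": "4.1"

},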
|
{ |
|
"text": "The accuracy of TreeMAML is consistently higher than the one of the Baseline or MAML and an average of \u223c 3% better than the one achieved by XMAML, see Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 151, |
|
"end": 158, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Fixed TreeMAML", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Note that we used a relatively simple version of the phylogenetic tree. A more detailed version could be used for testing under-resourced languages, or to emphasize the dependencies inside the tree. For example, a Bavarian testing data set would fall inside the German branch, or we could add depth to the tree by adding Germanic sub-branches, such as high-german, anglo-frisian and low-franconian.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Fixed TreeMAML", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "While fixed TreeMAML uses previous knowledge to construct the tree, learned TreeMAML allows learning the relation between languages and genres, potentially reflecting a priory unknown relationships in the XNLI corpus, but also potentially fitting some noise. Note that learned TreeMAML has one additional parameter, the maximum tree depth, as explained in 3.2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learned TreeMAML", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The relationships between languages and genres is learned at each step of gradient descent, for each batch of data. Therefore, the tree for one batch can be different from the tree for the next batch. This difference is due to the fact that the clustering algorithm only cares about the similarity of the gradients, and this similarity does not need to be always the same between two languages. It may depend on the particular words used in the sentences, or in the tasks genres. For example, for a particular batch, some sentences from the same genre in English and French could have closer gradients than other sentences in Germanic languages with a different genre.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learned TreeMAML", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The clustering process happens at both training and testing time, which means that learned TreeMAML may improve the accuracy by improv-ing the training, but also by producing the best hierarchy for the target task at testing time. This may be particularly useful for under-resourced languages where non-obvious dependencies between the task in the target languages and the tasks in other languages can be exploited to improve the test accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learned TreeMAML", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "As shown in Table 1 , fixed/learned TreeMAML outperforms other methods in almost all languages. These results show that using the languages hierarchical structure helps achieving better cross-lingual transfer and higher accuracy in the XNLI task.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 19, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Learned TreeMAML", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In the case of Greek (el), TreeMAML outperforms XMAML, but the baseline Multi-BERT obtains a slightly higher accuracy. This result could be due to the simplified structure of the tree that we use, which does not adequately reflect the actual distance in between languages from the Indo-European family. Besides Greek, Thai (th) is the only language for which TreeMAML does not get higher accuracy. This is mainly due to oversimplified tree used. We used a generic \"Asian\" language tree, but Chinese, Vietnamese and Thai belong to three separate language families.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learned TreeMAML", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Learned TreeMAML performs very similar to fixed TreeMAML in most experiments, achieving higher values for some of the languages. We believe that the difference depends on how well our clustering algorithm performs in each case. For some languages, learned TreeMAML is just learning the same tree structure that we use in fixed TreeMAML, and both algorithms produce almost (Nooralahzadeh et al., 2020) . The lower part of the table shows the performance when using all languages (except the target) as auxiliary languages. The difference in the amount of data used for training may account for a part of the difference in performance between XMAML and TreeMAML, and may also explain why our Baseline outperforms XMAML for some languages. The results are reported for each of our experiment by averaging the performance over three different runs. The standard deviation is for all our experiments below 1%. the same results. In other cases, the clustering algorithm assigns tasks to the wrong branch, making learned TreeMAML perform worse. For some other languages, learned TreeMAML performs better, possibly because it finds other useful relationships, for example tasks belonging to the same genre in different languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 372, |
|
"end": 400, |
|
"text": "(Nooralahzadeh et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learned TreeMAML", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "This paper presents a method to exploit the data hierarchy in the meta-learning framework, TreeMAML. This algorithm can use a priory knowledge of the data set (fixed TreeMAML), or learn the hierarchical structure using our modification of the OTD clustering algorithm (learned TreeMAML).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Since languages follow a hierarchical phylogenetic tree, we hypothesized that we could use TreeMAML to meta-train models for cross-lingual understanding. We applied TreeMAML to the cross-lingual XNLI problem and show an improvement in accuracy \u223c 3% with respect to the state of the art obtained by XMAML (Nooralahzadeh et al., 2020) (Table 1 ). The improvement with respect to XMAML suggests that using all available languages results in increased performance. Furthermore, the improvement with respect to MAML suggests that using the tree structure of those languages also improves performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 304, |
|
"end": 332, |
|
"text": "(Nooralahzadeh et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 333, |
|
"end": 341, |
|
"text": "(Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion and Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "How much each auxiliary language contributes to the target language's performance may depend on its relative position in the language tree. These results are especially encouraging for meta-training of cross-lingual understanding tasks for underresourced languages. Future work may include an improved algorithm that takes into account not only the position of a language in the tree, but also the distances between languages in a branch by, for example, introducing weighted averaging of the gradients.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In our NLI experiments, learned TreeMAML is in most cases as good or even better than fixed TreeMAML. One possible explanation is that clustering learns the tree for each batch of data at each gradient step, which allows it to pick up NLIrelevant similarities that are not described by a phylogenetic tree, as the genre of the task in the XNLI data set, structural similarities and lexical similarity, which can be the result of language contact, as for example lexical borrowings. This may help with cross-lingual understanding tasks for uncommon languages for which the exact position in the tree may be unclear or not enough data may be available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "As discussed in section 4.2, the lack of improvement in accuracy in the Greek language and the low performance in Thai can be rooted in the experimental design's naive assumptions about: which languages to include in the experiment, and the correctness of the language tree. The results in which TreeMAMl performs worse than the other algorithms speak in favour of the robustness of this algorithm to properly learn cross-lingual relationships and exploit them to perform natural language understanding tasks. Therefore, the use of learned TreeMAML could help with the internationalization of the NLP models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Conclusion", |
|
"sec_num": "5" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Task2Vec: Task Embedding for Meta-Learning", |
|
"authors": [ |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Achille", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Lam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rahul", |
|
"middle": [], |
|
"last": "Tewari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Avinash", |
|
"middle": [], |
|
"last": "Ravichandran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Subhransu", |
|
"middle": [], |
|
"last": "Maji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Charless", |
|
"middle": [], |
|
"last": "Fowlkes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefano", |
|
"middle": [], |
|
"last": "Soatto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pietro", |
|
"middle": [], |
|
"last": "Perona", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1902.03545.ArXiv:1902.03545" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alessandro Achille, Michael Lam, Rahul Tewari, Avinash Ravichandran, Subhransu Maji, Charless Fowlkes, Stefano Soatto, and Pietro Perona. 2019. Task2Vec: Task Embedding for Meta-Learning. arXiv:1902.03545. ArXiv: 1902.03545.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "XNLI: Evaluating Cross-lingual Sentence Representations", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruty", |
|
"middle": [], |
|
"last": "Rinott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adina", |
|
"middle": [], |
|
"last": "Williams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1809.05053" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating Cross-lingual Sentence Representations. arXiv:1809.05053.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. arXiv:1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "and Antonios Anastasopoulos. 2019. Investigating Meta-Learning Algorithms for Low-Resource Natural Language Understanding Tasks", |
|
"authors": [ |
|
{ |
|
"first": "Zi-Yi", |
|
"middle": [], |
|
"last": "Dou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keyi", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1908.10423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zi-Yi Dou, Keyi Yu, and Antonios Anastasopoulos. 2019. Investigating Meta-Learning Algorithms for Low-Resource Natural Language Understanding Tasks. arXiv:1908.10423.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Evolved structure of language shows lineage-specific trends in wordorder universals", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Dunn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simon", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Greenhill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Levinson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Russell", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Gray", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Nature", |
|
"volume": "473", |
|
"issue": "7345", |
|
"pages": "79--82", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1038/nature09923" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Dunn, Simon J. Greenhill, Stephen C. Levin- son, and Russell D. Gray. 2011. Evolved structure of language shows lineage-specific trends in word- order universals. Nature, 473(7345):79-82.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks", |
|
"authors": [ |
|
{ |
|
"first": "Chelsea", |
|
"middle": [], |
|
"last": "Finn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pieter", |
|
"middle": [], |
|
"last": "Abbeel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Levine", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1703.03400.ArXiv:1703.03400" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. arXiv:1703.03400. ArXiv: 1703.03400.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Meta-Learning for Low-Resource Neural Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Jiatao", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yun", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Victor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1808.08437" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiatao Gu, Yong Wang, Yun Chen, Kyunghyun Cho, and Victor O. K. Li. 2018. Meta-Learning for Low-Resource Neural Machine Translation. arXiv:1808.08437.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Meta-Learning in Neural Networks: A Survey", |
|
"authors": [ |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Hospedales", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antreas", |
|
"middle": [], |
|
"last": "Antoniou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Micaelli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amos", |
|
"middle": [], |
|
"last": "Storkey", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.05439.ArXiv:2004.05439" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timothy Hospedales, Antreas Antoniou, Paul Micaelli, and Amos Storkey. 2020. Meta-Learning in Neural Networks: A Survey. arXiv:2004.05439. ArXiv: 2004.05439.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Reconciling meta-learning and continual learning with online mixtures of tasks", |
|
"authors": [ |
|
{ |
|
"first": "Ghassen", |
|
"middle": [], |
|
"last": "Jerfel", |
|
"suffix": "" |
|
}, |
|
{

"first": "Thomas",

"middle": [

"L"

],

"last": "Griffiths",

"suffix": ""

},

{

"first": "Erin",

"middle": [],

"last": "Grant",

"suffix": ""

},

{

"first": "Katherine",

"middle": [],

"last": "Heller",

"suffix": ""

}
|
], |
|
"year": 2019, |
|
"venue": "NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ghassen Jerfel, Thomas L Griffiths, Erin Grant, and Katherine Heller. 2019. Reconciling meta-learning and continual learning with online mixtures of tasks. NIPS, page 12.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Pavan Turaga, and Andreas Spanias. 2020. Invenio: Discovering Hidden Relationships Between Tasks/Domains Using Structured Meta Learning", |
|
"authors": [ |
|
{ |
|
"first": "Sameeksha", |
|
"middle": [], |
|
"last": "Katoch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kowshik", |
|
"middle": [], |
|
"last": "Thopalli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jayaraman", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Thiagarajan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1911.10600.ArXiv:1911.10600" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sameeksha Katoch, Kowshik Thopalli, Jayaraman J. Thiagarajan, Pavan Turaga, and Andreas Spanias. 2020. Invenio: Discovering Hidden Relationships Between Tasks/Domains Using Structured Meta Learning. arXiv:1911.10600. ArXiv: 1911.10600.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Learning to Propagate Labels", |
|
"authors": [ |
|
{ |
|
"first": "Yanbin", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juho", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minseop", |
|
"middle": [], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saehoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eunho", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sung", |
|
"middle": [ |
|
"Ju" |
|
], |
|
"last": "Hwang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Transductive Propagation Network for Few-shot Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1805.10002.ArXiv:1805.10002" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yanbin Liu, Juho Lee, Minseop Park, Saehoon Kim, Eunho Yang, Sung Ju Hwang, and Yi Yang. 2019. Learning to Propagate Labels: Transduc- tive Propagation Network for Few-shot Learning. arXiv:1805.10002. ArXiv: 1805.10002.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Online Hierarchical Clustering Approximations", |
|
"authors": [ |
|
{ |
|
"first": "Aditya", |
|
"middle": [ |
|
"Krishna" |
|
], |
|
"last": "Menon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anand", |
|
"middle": [], |
|
"last": "Rajagopalan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Baris", |
|
"middle": [], |
|
"last": "Sumengen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gui", |
|
"middle": [], |
|
"last": "Citovsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qin", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanjiv", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1909.09667.ArXiv:1909.09667" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aditya Krishna Menon, Anand Rajagopalan, Baris Sumengen, Gui Citovsky, Qin Cao, and Sanjiv Ku- mar. 2019. Online Hierarchical Clustering Approxi- mations. arXiv:1909.09667. ArXiv: 1909.09667.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Giannis Bekoulis, Johannes Bjerva, and Isabelle Augenstein. 2020. Zero-Shot Cross-Lingual Transfer with Meta Learning", |
|
"authors": [ |
|
{ |
|
"first": "Farhad", |
|
"middle": [], |
|
"last": "Nooralahzadeh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2003.02739.ArXiv:2003.02739" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Farhad Nooralahzadeh, Giannis Bekoulis, Johannes Bjerva, and Isabelle Augenstein. 2020. Zero- Shot Cross-Lingual Transfer with Meta Learning. arXiv:2003.02739. ArXiv: 2003.02739.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "An Overview of Multi-Task Learning in Deep Neural Networks", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1706.05098.ArXiv:1706.05098" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Ruder. 2017. An Overview of Multi- Task Learning in Deep Neural Networks. arXiv:1706.05098. ArXiv: 1706.05098.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Discriminative Transfer Learning with Tree-based Priors", |
|
"authors": [ |
|
{ |
|
"first": "Nitish", |
|
"middle": [], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Russ R Salakhutdinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "26", |
|
"issue": "", |
|
"pages": "2094--2102", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitish Srivastava and Russ R Salakhutdinov. 2013. Dis- criminative Transfer Learning with Tree-based Pri- ors. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Ad- vances in Neural Information Processing Systems 26, pages 2094-2102. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Hierarchically Structured Meta-learning", |
|
"authors": [ |
|
{ |
|
"first": "Huaxiu", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ying", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junzhou", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhenhui", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1905.05301.ArXiv:1905.05301" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Huaxiu Yao, Ying Wei, Junzhou Huang, and Zhenhui Li. 2019. Hierarchically Structured Meta-learning. arXiv:1905.05301. ArXiv: 1905.05301.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Taskonomy: Disentangling Task Transfer Learning", |
|
"authors": [ |
|
{ |
|
"first": "Amir", |
|
"middle": [], |
|
"last": "Zamir", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Sax", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leonidas", |
|
"middle": [], |
|
"last": "Guibas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jitendra", |
|
"middle": [], |
|
"last": "Malik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Silvio", |
|
"middle": [], |
|
"last": "Savarese", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1804.08328.ArXiv:1804.08328" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amir Zamir, Alexander Sax, William Shen, Leonidas Guibas, Jitendra Malik, and Silvio Savarese. 2018. Taskonomy: Disentangling Task Transfer Learning. arXiv:1804.08328. ArXiv: 1804.08328.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "A Survey on Multi-Task Learning", |
|
"authors": [ |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qiang", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1707.08114.ArXiv:1707.08114" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yu Zhang and Qiang Yang. 2018. A Survey on Multi-Task Learning. arXiv:1707.08114. ArXiv: 1707.08114.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Online top down (OTD) -Non-binary Require: origin cluster node C with a given set of children A = {x 1 , x 2 , ..x N } Require: new task x; maximum depth allowed D; similarity metric, \u03c9() Require: standard deviation multiplicative hyperparameter \u03be; if |A| = 0 then new task becomes a new child A = {x} else if |A| = 1 then add new task to set of children A \u2190 A \u222a {x} else if \u03c9(A \u222a {x}) > \u03c9(A) then identify most similar child x * = arg min x i (\u03c9({x i , x})) if reached maximum depth C depth + 1 = D then add new task to set of children A \u2190 A \u222a {x} else recursively perform OTD to create new node C = OTD(x * , x) add new node to set of children A \u2190 (A \\ {x * }) \u222a C end if else if \u03c9(A \u222a {x}) < \u03c9(A) \u2212 \u03be\u03c3 T then current node and new task become children to new cluster A \u2190 {C, x} else add new task to set of children A \u2190 A \u222a {x} end if 4 Cross-Lingual NLI" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Simplified version of the phylogenetic language tree. The tree include 12 of the 15 languages of XNLI data set and have depth three (Three levels of hierarchy)" |
|
}, |
|
"TABREF0": { |
|
"text": "Nooralahzadeh et al., 2020) Multi-BERT (Baseline) 81.94 75.39 75.79 73.25 69.54 71.60 70.84 73.23 61.18 73.93 64.37 63.71 71.23 XMAML 82.71 75.97 76.51 74.07 70.66 72.77 72.12 73.87 62.5 74.85 65.75 64.59 72.20 all languages (ours) Multi-BERT (Baseline) 83.56 76.22 76.89 73.11 72.89 72.89 71.33 74.67 57.56 74.89 63.11 63.33 71.70 MAML 83.11 78.22 77.11 73.56 69.33 71.78 71.33 74.22 57.33 75.11 63.33 63.78 71.52 Fixed TreeMAML 84.67 79.78 78.22 76.89 72.00 74.22 73.33 74.44 59.56 79.11 66.00 66.89 73.76 Learned TreeMAML 84.22 77.33 79.78 78.00 71.56 73.78 74.00 74.89 59.78 76.44 65.11 65.56 73.37", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>en</td><td>fr</td><td>es</td><td>de</td><td>el</td><td>bg</td><td>ru</td><td>vi</td><td>th</td><td>zh</td><td>hi</td><td>ur</td><td>avg</td></tr><tr><td/><td/><td/><td colspan=\"2\">two languages (</td><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF1": { |
|
"text": "The top part of the table shows the results of training with two auxiliary languages", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |