{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:11:06.599633Z"
},
"title": "Trajectory-Based Meta-Learning for Out-Of-Vocabulary Word Embedding Learning",
"authors": [
{
"first": "Gordon",
"middle": [],
"last": "Buck",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {}
},
"email": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Word embedding learning methods require a large number of occurrences of a word to accurately learn its embedding. However, outof-vocabulary (OOV) words which do not appear in the training corpus emerge frequently in the smaller downstream data. Recent work formulated OOV embedding learning as a fewshot regression problem and demonstrated that meta-learning can improve results obtained. However, the algorithm used, model-agnostic meta-learning (MAML) is known to be unstable and perform worse when a large number of gradient steps are used for parameter updates. In this work, we propose the use of Leap, a meta-learning algorithm which leverages the entire trajectory of the learning process instead of just the beginning and the end points, and thus ameliorates these two issues. In our experiments on a benchmark OOV embedding learning dataset and in an extrinsic evaluation, Leap performs comparably or better than MAML. We go on to examine which contexts are most beneficial to learn an OOV embedding from, and propose that the choice of contexts may matter more than the meta-learning employed.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Word embedding learning methods require a large number of occurrences of a word to accurately learn its embedding. However, outof-vocabulary (OOV) words which do not appear in the training corpus emerge frequently in the smaller downstream data. Recent work formulated OOV embedding learning as a fewshot regression problem and demonstrated that meta-learning can improve results obtained. However, the algorithm used, model-agnostic meta-learning (MAML) is known to be unstable and perform worse when a large number of gradient steps are used for parameter updates. In this work, we propose the use of Leap, a meta-learning algorithm which leverages the entire trajectory of the learning process instead of just the beginning and the end points, and thus ameliorates these two issues. In our experiments on a benchmark OOV embedding learning dataset and in an extrinsic evaluation, Leap performs comparably or better than MAML. We go on to examine which contexts are most beneficial to learn an OOV embedding from, and propose that the choice of contexts may matter more than the meta-learning employed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Distributional methods for learning word embeddings require a sufficient number of occurrences of a word in the training corpus to accurately learn its embedding. Even though the embeddings can be trained on raw text implying that an embedding for every word is obtained, in practice outof-vocabulary (OOV) words do occur in the downstream applications embeddings are used, for example due to domain-specific terminology. Nevertheless, OOV words are often content words such as names which convey important information for downstream tasks; for example, drug names are key in the biomedical domain. However, the amount of downstream language data is typically much smaller than the corpus used for training word embeddings, thus methods that rely on distributional properties of words across large amounts of data perform poorly (Herbelot and Baroni, 2017) .",
"cite_spans": [
{
"start": 829,
"end": 856,
"text": "(Herbelot and Baroni, 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Researchers often assign OOV words to random embeddings or to an \"unknown\" embedding, however these solutions fail to capture the distributional properties of words. Zero-shot approaches (Pinter et al., 2017; Kim et al., 2016; Bojanowski et al., 2017) attempt to predict the embeddings for OOV words from their characters alone. These approaches rely on inferring the meaning of a word from its subword information, such as morphemes or WordPiece tokens used in BERT (Devlin et al., 2019) . While this works well for many words, it performs poorly for names and words where morphology is not informative.",
"cite_spans": [
{
"start": 187,
"end": 208,
"text": "(Pinter et al., 2017;",
"ref_id": "BIBREF23"
},
{
"start": 209,
"end": 226,
"text": "Kim et al., 2016;",
"ref_id": "BIBREF16"
},
{
"start": 227,
"end": 251,
"text": "Bojanowski et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 467,
"end": 488,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given that an OOV word occurs once, the chance of a second occurrence is much higher than the first (Church, 2000) . Hence while OOV words can be rare and not seen in training, it is reasonable to expect that a limited number of occurrences will be present in the data of a downstream application. Few-shot approaches (Garneau et al., 2018; Khodak et al., 2018; Hu et al., 2019) leveraged this to predict the embeddings for OOV words from just a few contexts, often in conjunction with their morphological information. Hu et al. (2019) proposed an attention-based architecture for OOV word embedding learning as a few-shot regression problem. The model is trained to predict the embedding of a word based on a few contexts and its character sequence. Such a model is trained by simulating OOV words in the training corpus, with their target embeddings provided by learning them on the same corpus. As OOV words must have their embeddings inferred from contexts outside the training corpus, the authors show that using an adaptation of the model-agnostic meta-learning (MAML) algorithm (Finn et al., 2017) to adapt the model's parameters to the target domain improves the quality of the learned OOV word embeddings.",
"cite_spans": [
{
"start": 100,
"end": 114,
"text": "(Church, 2000)",
"ref_id": "BIBREF3"
},
{
"start": 318,
"end": 340,
"text": "(Garneau et al., 2018;",
"ref_id": "BIBREF9"
},
{
"start": 341,
"end": 361,
"text": "Khodak et al., 2018;",
"ref_id": "BIBREF15"
},
{
"start": 362,
"end": 378,
"text": "Hu et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 519,
"end": 535,
"text": "Hu et al. (2019)",
"ref_id": "BIBREF13"
},
{
"start": 1085,
"end": 1104,
"text": "(Finn et al., 2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, MAML is known to be unstable due to the calculation of gradients requiring backpropagation through multiple instances of the model, as the learning process must be unrolled to calculate gradients with respect to the initial parameters (Antoniou et al., 2019) . In practice, the learning process is often truncated to a small number of gradient steps, but has been shown to have a short-horizon bias (Wu et al., 2018) , causing it to underperform.",
"cite_spans": [
{
"start": 244,
"end": 267,
"text": "(Antoniou et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 408,
"end": 425,
"text": "(Wu et al., 2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work we explore OOV word embedding learning using Leap (Flennerhag et al., 2019) , a meta-learning framework which takes into consideration the entire learning trajectory, not only the beginning and end points. Each task is associated with a loss surface over the model's parameters on which the learning process travels, and the aim is to minimize the expected length of this process across tasks. Leap also does not require backpropagation through the learning process, allowing it to adapt over a larger number of gradient steps and thus not suffering from the short-horizon bias that MAML is prone to.",
"cite_spans": [
{
"start": 63,
"end": 88,
"text": "(Flennerhag et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We conduct an intrinsic evaluation of MAML and Leap on the dataset of Lazaridou et al. (2017) which simulates OOV words by combining the contexts of two semantically similar words to form a 'chimera'. We find that Leap performs better than MAML at adapting model parameters to a new corpus. We also conduct an extrinsic evaluation on NER in the biomedical domain where the results are comparable to MAML, without improving in most cases on a random embedding baseline. Finally, we examine which contexts are more beneficial to learn an embedding from, and note that the contexts from which an embedding is learned matters more than the meta-learning method employed.",
"cite_spans": [
{
"start": 70,
"end": 93,
"text": "Lazaridou et al. (2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Meta-learning algorithms aim to capture knowledge across a variety of learning tasks such that fine-tuning a model on a specific task both avoids overfitting and leads to good performance. Approaches include learning a similarity function by which to cluster and classify data points (Vinyals et al., 2016; Snell et al., 2017) ; learning an update rule for neural network optimization (Ravi and Larochelle, 2017) ; and learning an initialization from which to fine-tune a model. We consider algorithms for the latter. Formally, we consider task T to consist of a dataset D T of labelled examples (x, y) and loss function L T (\u03b8) \u2192 R which maps a model's parameters \u03b8 to a real-valued loss. For batch gradient descent this loss is constant for a given set of parameters, however for stochastic gradient descent it depends on the sampled examples. During training, tasks T \u223c p(T ) are sampled from a distribution p(T ) over the tasks the model should be able to adapt to. The aim is then to train the model's parameters such that they capture features common to all tasks in p(T ), and thus are a promising initialization for any of these tasks. Below we describe the approaches taken by MAML and Leap.",
"cite_spans": [
{
"start": 284,
"end": 306,
"text": "(Vinyals et al., 2016;",
"ref_id": "BIBREF26"
},
{
"start": 307,
"end": 326,
"text": "Snell et al., 2017)",
"ref_id": "BIBREF25"
},
{
"start": 385,
"end": 412,
"text": "(Ravi and Larochelle, 2017)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Meta-Learning",
"sec_num": "2"
},
{
"text": "During training with MAML, for each task T the model's parameters are updated from an initialization \u03b8 to \u03b8 K T through K gradient steps according to L T . The final parameters \u03b8 K",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAML",
"sec_num": "2.1"
},
{
"text": "T are then used to calculate the final task losses, and the original parameters \u03b8 are updated to minimize their sum. This meta-objective is given below in equation 1, where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAML",
"sec_num": "2.1"
},
{
"text": "u T (\u03b8) = \u03b8 \u2212 \u03b1\u2207 \u03b8 L T (\u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAML",
"sec_num": "2.1"
},
{
"text": "is a single gradient step:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAML",
"sec_num": "2.1"
},
{
"text": "min \u03b8 T \u223cp(T ) L T (\u03b8 K T ) = T \u223cp(T ) L T (u K T (\u03b8)) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAML",
"sec_num": "2.1"
},
{
"text": "The final task losses L T (\u03b8 K T ) are usually computed using examples held out from training \u03b8 K T to simulate a testing loss. The meta-optimization is then performed by backpropagating with respect to the original parameters, \u03b8, rather than the trained parameters \u03b8 K",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAML",
"sec_num": "2.1"
},
{
"text": "T . This aims to optimize \u03b8 such that a small number of gradient steps, K, on a particular task produces a low testing loss. Algorithm 1 below gives the overview of the training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAML",
"sec_num": "2.1"
},
{
"text": "Algorithm 1: MAML Input: p(T ): a distribution over tasks Input: \u03b1, \u03b2: step size parameters 1 define u T (\u03b8) = \u03b8 \u2212 \u03b1\u2207 \u03b8 L T (\u03b8); 2 initialize model parameters \u03b8; 3 while not done do 4 sample batch B of tasks T \u223c p(T ); 5 forall T \u2208 B do 6 \u03b8 K T \u2190 u K T (\u03b8); 7 end 8 \u03b8 \u2190 \u03b8 \u2212 \u03b2\u2207 \u03b8 T \u2208B L T (\u03b8 K T ); 9 end",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAML",
"sec_num": "2.1"
},
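The following is a minimal sketch of the meta-update in Algorithm 1, written against a toy linear-regression task with a single tensor of parameters rather than HiCE; the task sampler, dimensions and step sizes are illustrative assumptions, not the paper's setup.

```python
import torch

def sample_task(n=32, dim=8):
    # Hypothetical task sampler: each task is a linear regression problem
    # with its own ground-truth weight vector.
    w_true = torch.randn(dim, 1)
    x = torch.randn(n, dim)
    return x, x @ w_true

def task_loss(theta, x, y):
    # L_T(theta): mean squared error of the linear model x @ theta.
    return ((x @ theta - y) ** 2).mean()

def maml_outer_step(theta, tasks, alpha=5e-4, beta=1e-5, K=4):
    # Unroll K inner gradient steps per task (u_T applied K times), then
    # backpropagate the summed final task losses to the initialization theta.
    meta_loss = 0.0
    for x, y in tasks:
        theta_k = theta
        for _ in range(K):
            grad = torch.autograd.grad(task_loss(theta_k, x, y), theta_k,
                                       create_graph=True)[0]
            theta_k = theta_k - alpha * grad  # inner update u_T(theta_k)
        # A held-out split of the task would normally be used for this loss.
        meta_loss = meta_loss + task_loss(theta_k, x, y)
    meta_grad = torch.autograd.grad(meta_loss, theta)[0]  # second-order gradient
    return (theta - beta * meta_grad).detach().requires_grad_()

theta = torch.randn(8, 1, requires_grad=True)
for _ in range(5):
    theta = maml_outer_step(theta, [sample_task() for _ in range(4)])
```

The `create_graph=True` flag is what makes the inner updates differentiable, so the outer gradient is taken with respect to the initialization rather than the adapted parameters.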
{
"text": "Algorithm 2: Leap Input: p(T ): a distribution over tasks Input: \u03b1, \u03b2: step size parameters 1 initialize model parameters \u03b8; 2 while not done do",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAML",
"sec_num": "2.1"
},
{
"text": "3 \u2207F \u2190 0; 4 sample batch B of tasks T \u223c p(T ); 5 forall T do 6 \u03b8 0 T \u2190 \u03b8; 7 \u03c8 0 T \u2190 \u03b8; 8 forall i \u2208 {0, ..., K \u2212 1} do 9 \u03b8 i+1 T \u2190 \u03b8 i T \u2212 \u03b1\u2207 \u03b8 i T L T (\u03b8 i T ); 10 \u03c8 i+1 T \u2190 \u03c8 i T \u2212 \u03b1\u2207 \u03c8 i T L T (\u03c8 i T ); 11 \u2207F \u2190 \u2207F + (L T (\u03b8 i T )\u2212L T (\u03c8 i+1 T ))\u2207 \u03b8 i T L T (\u03b8 i T )+(\u03b8 i T \u2212\u03c8 i+1 T ) \u03b3 i+1 T \u2212\u03b3 i T 2 ; 12 end end \u03b8 \u2190 \u03b8 \u2212 \u03b2 |B| \u2207F ; end Computing \u2207 \u03b8 T \u2208B L T (\u03b8 K T )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAML",
"sec_num": "2.1"
},
{
"text": "requires backpropagation through the learning process for each task T \u2208 B. Gradients are computed by backpropagation through K + 1 different instances of the model's parameters, which can cause both exploding and diminishing gradient problems, and becomes unstable for large K (Antoniou et al., 2019) . However, truncating the learning process with a small K results in a short-horizon bias (Wu et al., 2018) , where the learned parameters adapt poorly to tasks over a number of gradient steps larger than K. To extend to more steps, a first order approximation of MAML has been shown to achieve similar performance (Nichol et al., 2018; Finn et al., 2017) . However, this still considers only the initial and the final parameters and loss, and for larger K the intermediate steps become more significant.",
"cite_spans": [
{
"start": 277,
"end": 300,
"text": "(Antoniou et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 391,
"end": 408,
"text": "(Wu et al., 2018)",
"ref_id": "BIBREF27"
},
{
"start": 616,
"end": 637,
"text": "(Nichol et al., 2018;",
"ref_id": "BIBREF21"
},
{
"start": 638,
"end": 656,
"text": "Finn et al., 2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MAML",
"sec_num": "2.1"
},
{
"text": "In Leap the learning process is viewed as a path along a loss surface L, traversed by K gradient steps from initial to final parameters. The intuition behind Leap is that geometrical similarities between learning processes associated with different tasks can be exploited for transfer learning. In particular, Leap seeks to find an initialization that reduces the expected length of learning processes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Leap",
"sec_num": "2.2"
},
{
"text": "Following Flennerhag et al. (2019) , we consider the learning process to be a sequence of discrete",
"cite_spans": [
{
"start": 10,
"end": 34,
"text": "Flennerhag et al. (2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Leap",
"sec_num": "2.2"
},
{
"text": "points {\u03b3 i } K i=0 with \u03b3 i = (\u03b8 i , L(\u03b8 i ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Leap",
"sec_num": "2.2"
},
{
"text": "corresponding to K gradient updates, and as such we consider the learning process to be the shortest path passing through these points. The length of a learning process is then approximated as the cumulative chordal distance of the path from initial parameters \u03b8 = \u03b8 0 to final parameters \u03b8 K .The cumulative chordal distance approximates the length of the arc passing through the points {\u03b3 i } K i=0 , and is key to minimizing the length of the learning process rather than simply moving the initial parameters towards the final parameters:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Leap",
"sec_num": "2.2"
},
{
"text": "d(\u03b8; L) = K\u22121 i=0 \u03b3 i+1 \u2212 \u03b3 i 2 (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Leap",
"sec_num": "2.2"
},
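As a small illustration of equation (2), the sketch below computes the cumulative chordal distance of a recorded learning trajectory; representing the trajectory as a list of (parameters, loss) pairs is an assumption made only for this example.

```python
import torch

def chordal_length(trajectory):
    # trajectory: list of (theta_i, loss_i) pairs, one per gradient step,
    # i.e. the points gamma^i = (theta^i, L(theta^i)) of equation (2).
    total = torch.zeros(())
    for (th_a, l_a), (th_b, l_b) in zip(trajectory, trajectory[1:]):
        seg_sq = (th_b - th_a).pow(2).sum() + (l_b - l_a) ** 2
        total = total + seg_sq.sqrt()
    return total

# Example with a fabricated 3-step trajectory of 4-dimensional parameters.
traj = [(torch.randn(4), torch.rand(())) for _ in range(4)]
print(chordal_length(traj))
```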
{
"text": "Considering again a distribution of tasks p(T ), an initialization \u03b8 is associated with an expected learning process length E T \u223cp",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Leap",
"sec_num": "2.2"
},
{
"text": "(T ) [d(\u03b8; L T )].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Leap",
"sec_num": "2.2"
},
{
"text": "When dealing with complex non-convex loss surfaces, minimizing only the expected learning process length makes no guarantees about the final loss L(\u03b8 K ) and thus may inadvertently find final parameters \u03b8 K with lower performance. Note that MAML takes this into account directly in its objective by optimizing for the final loss on a separate heldout set. Leap instead enforces this as part of its meta-objective by requiring that the optimized initialization \u03b8 must not converge to a higher loss than a baseline initialization \u03c8 = \u03c8 0 for all tasks T . That is, L T (\u03b8 K ) \u2264 L T (\u03c8 K ) assuming convergence after K gradient steps. For this purpose, Leap defines an objective which optimizes the initialization for expected learning process length only along the task-specific learning processes originating from a baseline initialization. This ensures that the learning processes originating from the optimized initialization will have no greater final loss than their counterpart originating from the baseline initialization, given that both consist of K gradient steps. The objective is given in equation 3, where points {\u03b3 i T } K i=0 lie along the learning process for task T originating from the optimized initialization \u03b8, while points {\u03b3 i T } K i=0 lie along the respective learning process originating from the fixed baseline initialization \u03c8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Leap",
"sec_num": "2.2"
},
{
"text": "d(\u03b8; L T , \u03c8) = K\u22121 i=0 \u03b3 i+1 T \u2212 \u03b3 i T 2 min \u03b8F (\u03b8; \u03c8) = E T \u223cp(T ) [d(\u03b8; L T , \u03c8)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Leap",
"sec_num": "2.2"
},
{
"text": "(3) Gradient descent on the objectiveF with respect to \u03b8 pulls the parameters \u03b8 i towards \u03c8 i+1 . Parameters \u03b8 are initialized to be equal to \u03c8 such that each gradient descent update pulls \u03b8 forward along the learning processes originating from it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Leap",
"sec_num": "2.2"
},
{
"text": "Algorithm 2 describes the training with Leap. Tasks T \u223c p(T ) are sampled from the distribution p(T ) and the model's parameters are updated to \u03b8 K T through K gradient steps for each task T as in MAML (lines 1-10; the learning trajectory is expanded in lines 8, 9 and 10, as it is needed in Leap). The gradient \u2207F is incrementally computed at each point (\u03b8 i T , L T (\u03b8 i T )) during the task training (line 11). \u2207F is always evaluated at \u03b8 = \u03c8, with the update term on line 11 pulling \u03b8 i towards \u03c8 i+1 = \u03b8 i+1 . This is performed in order to take into account each point in the learning trajectory, as opposed to the start and end points in MAML. The initialization \u03b8 is then updated according to the accumulated gradient \u2207F (line 14), and the algorithm continues with \u03c8 set to the updated \u03b8. This is done implicitly when \u03b8 is updated, which ensures that any future \u03b8 improves the task loss over the one already obtained, instead of just over the initial random initialization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Leap",
"sec_num": "2.2"
},
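Below is a minimal sketch of one outer step of the pull-forward update in Algorithm 2, using the simplification described above (with \u03c8 implicitly set to the current \u03b8, \u03c8^{i+1} coincides with \u03b8^{i+1}, so a single trajectory per task suffices). The `loss_fn(theta, task)` interface, the single tensor of parameters and the default step sizes are illustrative assumptions rather than the HiCE setup.

```python
import torch

def leap_outer_step(theta, tasks, loss_fn, alpha=5e-4, beta=1e-4, K=64):
    # Accumulate the gradient of the expected path length (line 11 of
    # Algorithm 2) over K inner steps per task, then update theta (line 14).
    grad_F = torch.zeros_like(theta)
    for task in tasks:
        theta_i = theta.detach().clone().requires_grad_()
        loss_i = loss_fn(theta_i, task)
        for _ in range(K):
            (g,) = torch.autograd.grad(loss_i, theta_i)
            theta_next = (theta_i - alpha * g).detach().requires_grad_()
            loss_next = loss_fn(theta_next, task)
            # Chordal segment length ||gamma^{i+1} - gamma^i||_2 over the
            # joint (parameters, loss) space, treated as a constant.
            seg = torch.sqrt((theta_next - theta_i).pow(2).sum()
                             + (loss_next - loss_i) ** 2).detach()
            # Pull-forward increment: pulls theta^i towards theta^{i+1},
            # with the forward point frozen (no backprop through it).
            grad_F += ((loss_i - loss_next).detach() * g
                       + (theta_i - theta_next).detach()) / seg
            theta_i, loss_i = theta_next, loss_next
    return (theta - (beta / len(tasks)) * grad_F).detach().requires_grad_()
```

Because no gradient flows through the unrolled trajectory, the inner loop can use a large K (64 in the experiments below) without the memory and stability cost of MAML's backpropagation through the learning process.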
{
"text": "In this section we describe our application of Leap to learning embeddings for OOV words. This technique is generally applicable to any word embed-ding regression function H \u03b8 trainable by gradient descent, though in our experiments we use the HiCE architecture (Hu et al., 2019) .",
"cite_spans": [
{
"start": 262,
"end": 279,
"text": "(Hu et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Leap for OOV Embedding Learning",
"sec_num": "3"
},
{
"text": "The regression function H \u03b8 is trained to predict a word's embedding given K contexts, and possibly morphological information, with words and their contexts sampled from some large training corpus D T . Each word (either a training target or in a context) is represented as a pre-trained embedding in the input to H \u03b8 . We note that H \u03b8 can be trained only on words with sufficient occurrences in D T such that their pre-trained embeddings can be accurately learned. The trained H \u03b8 can then be used to infer the embeddings for OOV words, for which we do not have a pre-trained embedding. However, OOV words often form part of domain-specific vocabularies, the semantics of which are not captured by word embeddings trained on generic large corpora (Kameswara Sarma et al., 2018) . To counteract this, we adapt the trained parameters \u03b8 T of the word embedding regression function to the domain in which we infer the OOV embeddings.",
"cite_spans": [
{
"start": 749,
"end": 779,
"text": "(Kameswara Sarma et al., 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Leap for OOV Embedding Learning",
"sec_num": "3"
},
{
"text": "When adapting \u03b8 T we consider both D T , and the corpus on which we wish to infer OOV embeddings, D N . We wish to transfer the knowledge encoded in \u03b8 T to the task of predicting OOV embeddings for words in D N . One approach would be to simply fine-tune \u03b8 T on D N ; that is, sample words and their contexts from D N which have their pre-trained embeddings from D T known and train H \u03b8 as before on these words. This ignores that D N is much smaller than a corpus usually used for word embedding training, and direct training on it is likely to lead to overfitting to the corpus rather than adapting to its domain, which in turn hurts the quality of inferred OOV embeddings, an instance of catastrophic forgetting (French, 1999) . Instead of fine-tuning, meta-learning algorithms can be applied. Hu et al. (2019) use MAML, and we extend this work with the use of Leap.",
"cite_spans": [
{
"start": 715,
"end": 729,
"text": "(French, 1999)",
"ref_id": "BIBREF8"
},
{
"start": 797,
"end": 813,
"text": "Hu et al. (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Leap for OOV Embedding Learning",
"sec_num": "3"
},
{
"text": "We adapt \u03b8 T by applying Leap across two tasks; inferring OOV words in D T and D N . Optimizing the objective in equation 4 moves the adapted parameters \u03b8 to minimize the lengths of the two corresponding learning processes. The loss function L(\u2022|D) used throughout is the cosine distance between predicted and pre-trained embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Leap for OOV Embedding Learning",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "min \u03b8 C\u2208{T,N }d (\u03b8; L(\u2022|D C ), \u03b8 T )",
"eq_num": "(4)"
}
],
"section": "Leap for OOV Embedding Learning",
"sec_num": "3"
},
{
"text": "This pulls the model parameters \u03b8 along the learning processes for D T and D N originating from the pre-trained parameters \u03b8 T . The learning process for D T should already be short, as \u03b8 T is trained to convergence on D T . Optimization of the pull-forward objective must naturally pull the parameters away from this point of convergence to minimize the length of the learning process for D N . However, as each task is weighted equally, the model parameters cannot move towards the convergence point for D N if this results in a larger divergence from the convergence point for D T . It is thus necessary for the parameters to move into areas which encode knowledge sharing between D T and D N . This reduces overfitting on D N by ensuring \u03b8 does not move too far from the convergence point for D T . While each word's embedding is inferred from only a few contexts, the total number of words available for training in D N is large enough so \u03b8 can be adapted over a larger number of gradient steps. We posit that this, in conjunction with knowledge sharing that considers the entire learning trajectory rather than just the beginning and end points, results in higher quality OOV embeddings than those that can be obtained with MAML.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Leap for OOV Embedding Learning",
"sec_num": "3"
},
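As a sketch of how the two-task adaptation behind equation (4) can be set up, the snippet below uses the cosine-distance loss from the text, but stands in for HiCE with a linear map over averaged context embeddings and for the corpora D_T and D_N with random tensors; these stand-ins and all shapes are illustrative assumptions. The commented last line shows how the earlier Leap sketch would be applied across the two tasks.

```python
import torch
import torch.nn.functional as F

EMB_DIM, K_CONTEXTS = 50, 6

def cosine_distance_loss(pred, target):
    # L(.|D): cosine distance between predicted and pre-trained embeddings.
    return 1.0 - F.cosine_similarity(pred, target, dim=-1).mean()

def make_corpus(n_words=100):
    # Stand-in for D_T or D_N: for each simulated OOV word, K context
    # representations and the word's pre-trained target embedding.
    contexts = torch.randn(n_words, K_CONTEXTS, EMB_DIM)
    targets = torch.randn(n_words, EMB_DIM)
    return contexts, targets

def task_loss(theta, corpus):
    # Toy regression function standing in for HiCE: a linear map applied
    # to the mean context embedding.
    contexts, targets = corpus
    pred = contexts.mean(dim=1) @ theta
    return cosine_distance_loss(pred, targets)

D_T, D_N = make_corpus(), make_corpus()
theta_T = torch.randn(EMB_DIM, EMB_DIM, requires_grad=True)  # pre-trained init
# Reusing leap_outer_step from the earlier sketch across the two tasks:
# theta = leap_outer_step(theta_T, [D_T, D_N], task_loss, K=8)
```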
{
"text": "To obtain an intrinsic evaluation of the methods proposed, we require a dataset that simulates the natural occurrences of OOV words in a real-world setting, and defines a notion for evaluating how close an embedding is to representing an OOV word's meaning. Following Hu et al. (2019), we use the 'Chimera' dataset for evaluation, a popular benchmark dataset for OOV words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Evaluation",
"sec_num": "4.1"
},
{
"text": "The Chimera dataset (Lazaridou et al., 2017) is constructed specifically to simulate unseen words occurring naturally in text. Each unseen word is a chimera, which is a novel concept created by combining two related but distinct concepts; for example a gorilla and a bear. In total there are 33 chimeras, generated by first taking a base concept, called a pivot, and matching this with a compatible concept by traversing a list of terms ranked by similarity to the pivot. In the case of the 'gorilla/bear' chimera, the pivot is the gorilla and the compatible term is the bear. Each chimera is then associated with passages of 2, 4 and 6 sentences, with half containing the pivot and half containing the compatible concept. The occurrences of the pivot and the compatible term are replaced with a nonce word that represents the chimera; for example 'mohalk'.",
"cite_spans": [
{
"start": 20,
"end": 44,
"text": "(Lazaridou et al., 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Evaluation",
"sec_num": "4.1"
},
{
"text": "Each passage is then annotated by human subjects with similarity scores between the nonce word and six different words, called probes, specific to the chimera. These similarity scores are then averaged across human subjects, resulting in six scores for each passage indicating the similarity between the chimera and each probe word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Evaluation",
"sec_num": "4.1"
},
{
"text": "For each of the 2-shot, 4-shot and 6-shot cases the chimera's embedding is inferred given each sentence (shot) as a context and the pivot's character sequence. Following Lazaridou et al. 2017we measure the performance of the embeddings inferred by looking at their cosine similarity to the probe embeddings and calculating the Spearman correlation to the similarity judgements by the human subjects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Evaluation",
"sec_num": "4.1"
},
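A small sketch of this metric follows: the inferred chimera embedding is compared to the six probe embeddings by cosine similarity, and the resulting similarities are correlated (Spearman) with the averaged human judgements. The randomly generated arrays are placeholders for the real embeddings and scores.

```python
import numpy as np
from scipy.stats import spearmanr

def chimera_score(inferred_emb, probe_embs, human_scores):
    # inferred_emb: (d,); probe_embs: (6, d); human_scores: (6,)
    sims = probe_embs @ inferred_emb / (
        np.linalg.norm(probe_embs, axis=1) * np.linalg.norm(inferred_emb))
    rho, _pval = spearmanr(sims, human_scores)
    return rho

rng = np.random.default_rng(0)
print(chimera_score(rng.normal(size=50),
                    rng.normal(size=(6, 50)),
                    rng.normal(size=6)))
```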
{
"text": "Throughout all experiments HiCE is trained on WikiText-103 (Merity et al., 2017) with pre-trained embeddings provided by SkipGram (Mikolov et al., 2013) on the same corpus. Where MAML and Leap are used, word embeddings are adapted to the Chimera dataset. Both approaches were implemented using PyTorch (Paszke et al., 2017) and the code will become publicly available. To implement MAML we use higher (Grefenstette et al., 2020), a PyTorch library for higher order optimization such as backpropagation through gradient descent updates, as is required for MAML. This allows us to compute the second order gradients that are required for MAML, rather than using a first order approximation. The learning rates \u03b1 and \u03b2 for each of MAML and Leap were chosen based on each algorithm's stability during training; for MAML we used \u03b1 = 5 \u00d7 10 \u22124 , \u03b2 = 1 \u00d7 10 \u22125 and for Leap we used \u03b1 = 5 \u00d7 10 \u22124 , \u03b2 = 1 \u00d7 10 \u22124 . These learning rates are in line with the publicly available code of Hu et al. (2019) . The number of gradient steps used for adaptation with MAML was 4, while with Leap we increased this to 64, taking advantage of its ability to train tasks over longer horizons. 1 Table 1 gives the average Spearman correlations for HiCE, its combination with MAML as proposed by Hu et al. (2019) , and its combination with Leap as proposed in this work, for the 2-shot, 4-shot and 6-shot cases. We also include the results for related works taken from their corresponding papers. These include fasttext (Bojanowski et al., 2017) ; the additive method (a simple averaging of context word embeddings) (Lazaridou et al., 2017 carte (a linear transformation-based modification of the averaging method) (Khodak et al., 2018) ; and nonce2vec (a modification of the Word2Vec algorithm for few-shot learning) (Herbelot and Baroni, 2017) . We also give the scores obtained when using the pivot's pre-trained embedding as the chimera's embedding to indicate a ceiling, following Hu et al. (2019) . However, we would expect the OOV embedding to differ from the pivot's pre-trained embedding, since the semantics of a chimera are a combination of the pivot and compatible concept. HiCE+Leap achieves the best results across all k-shot settings in our experiments. The results for HiCE, HiCE+MAML and HiCE+Leap are all obtained by averaging the results over 10 different random seeds, and we give a 95% confidence interval for each. We take this approach to highlight the known instability of training MAML across random seeds (Antoniou et al., 2019) , even with no hyperparameter changes. Leap consistently performs better than MAML and with a lower variance; in all cases, the average Spearman's correlation for MAML lies outside of the confidence interval range given for Leap. We also see that due to MAML's instability it can actually lower the performance of pre-trained HiCE in the 2-shot case. Outside of HiCE-based methods, a la carte performs best in the 2-and 4-shot cases, and additive performs best in the 6-shot case. These scores similarly lie outside of confidence interval ranges for HiCE+Leap, which overall performs best in each case. Hu et al. (2019) reported 0.3781, 0.4053 and 0.4307 with HiCE+MAML in the in the 2-, 4-and 6-shot cases respectively, but did not report experiments with multiple random seeds. 
While were able to obtain similar results for HiCE+MAML in some of our experiments, they were outside the confidence intervals we obtained, illustrating the relative instability in training with MAML (Antoniou et al., 2019) . The results of Hu et al. (2019) are also lower than the highest results we obtained with HiCE+Leap (0.3896, 0.4116 and 0.4395 for 2-, 4-and 6-shot).",
"cite_spans": [
{
"start": 59,
"end": 80,
"text": "(Merity et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 130,
"end": 152,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF20"
},
{
"start": 302,
"end": 323,
"text": "(Paszke et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 976,
"end": 992,
"text": "Hu et al. (2019)",
"ref_id": "BIBREF13"
},
{
"start": 1272,
"end": 1288,
"text": "Hu et al. (2019)",
"ref_id": "BIBREF13"
},
{
"start": 1496,
"end": 1521,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 1592,
"end": 1615,
"text": "(Lazaridou et al., 2017",
"ref_id": "BIBREF18"
},
{
"start": 1691,
"end": 1712,
"text": "(Khodak et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 1794,
"end": 1821,
"text": "(Herbelot and Baroni, 2017)",
"ref_id": "BIBREF11"
},
{
"start": 1962,
"end": 1978,
"text": "Hu et al. (2019)",
"ref_id": "BIBREF13"
},
{
"start": 2507,
"end": 2530,
"text": "(Antoniou et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 3134,
"end": 3150,
"text": "Hu et al. (2019)",
"ref_id": "BIBREF13"
},
{
"start": 3511,
"end": 3534,
"text": "(Antoniou et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 3552,
"end": 3568,
"text": "Hu et al. (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 1173,
"end": 1180,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Intrinsic Evaluation",
"sec_num": "4.1"
},
{
"text": "To gauge the quality of the OOV embeddings for downstream tasks we evaluate their performance when applied to NER. For this purpose we use the JNLPBA 2004 Bio-Entity Recognition Task dataset (Collier and Kim, 2004) . We choose this dataset as the biomedical domain differs significantly from the domain of Wikipedia that HiCE is pre-trained on, and contains many OOV technical terms. Hu et al. (2019) use this dataset also but did not provide their datasplits; thus. while we were able to confirm their results, we cannot compare against them directly.",
"cite_spans": [
{
"start": 191,
"end": 214,
"text": "(Collier and Kim, 2004)",
"ref_id": "BIBREF4"
},
{
"start": 384,
"end": 400,
"text": "Hu et al. (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "4.2"
},
{
"text": "The JNLPBA dataset is constructed from 2000 abstracts for training and 404 abstracts for testing, each extracted from a bibliographic database of biomedical information and hand annotated with 36 classes corresponding to chemical classifications. These classifications are simplified into 5 classes for the purpose of the bio-entity recognition task; protein, DNA, RNA, cell-line and cell-type. In total there are 18546 training and 3856 test sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "4.2"
},
{
"text": "Following Hu et al. (2019) , HiCE is trained on the WikiText-103 corpus, and we adapt word embeddings to the biomedical domain by using the JNLPBA dataset as a corpus. Contrary to the Chimera dataset, we consider contexts at the abstract level rather than only the sentence level. We train different embeddings for OOV words for each of the 2-shot, 6-shot and 10-shot cases, considering only those OOV words with 2, 6 or 10 occurrences or more respectively. In total we infer embeddings 2-shot 6-shot 10-shot random 0.7226 0.7206 0.7209 HiCE 0.7116 0.7213 0.7232 +MAML 0.7135 0.7232 0.7269 +Leap 0.7141 0.7256 0.7282 \u2020 Table 2 : Micro-averaged F1 score for each of the 2shot, 6-shot and 10-shot settings. Bold indicates the best results and ( \u2020) indicates the result is better than random embeddings at a 0.05 significance level.",
"cite_spans": [
{
"start": 10,
"end": 26,
"text": "Hu et al. (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 619,
"end": 626,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "4.2"
},
{
"text": "for 4643 OOV distinct words (types) in the 2-shot case, 1310 in the 6-shot case and 702 in the 10-shot case. 2 The contexts used to infer the embedding of a word are chosen at random from the contexts that word appears. The inferred embeddings, alongside the embeddings for in-vocabulary words, are used as input to train the LSTM-CRF architecture of Lample et al. (2016) . For each of the k-shot cases a separate test set is created by subsampling, such that the respective test set contains only those sentences with an OOV word whose embeddings has been inferred. This ensures that the test sets focus on the quality of the inferred OOV embeddings. For the 2-shot case there are 2876 test sentences; 2451 for 6-shot; and 2134 for 10-shot. The results for each k-shot setting are given in Table 2 , reported in micro-averaged F1 score. The results obtained using random OOV embeddings as input are given as a baseline.",
"cite_spans": [
{
"start": 109,
"end": 110,
"text": "2",
"ref_id": null
},
{
"start": 351,
"end": 371,
"text": "Lample et al. (2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 791,
"end": 798,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "4.2"
},
{
"text": "Results are marginally improved with any of the proposed methods for the 6-shot and 10-shot cases, with HiCE+Leap producing the best results. We perform a paired t-test on each pair of results within a k-shot case. However, we do not find the differences between methods to be significant, with only HiCE+Leap in the 10-shot case performing significantly better than random embeddings, at a 0.05 significance level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "4.2"
},
{
"text": "We find that performance increases across all methods as we use more contexts. However, for the 2-shot case we find that results are lower than if random embeddings are used. With fewer contexts the quality of the learned OOV embeddings is naturally lower, and inaccuracies in the embeddings add noise which hurts the performance of the downstream NER tagger. This highlights the need for a sufficient number of occurrences to effectively learn word embeddings, even with models specifically designed to handle the lack of data, and that using inaccurate embeddings can lower the performance of the entire downstream system. Our findings corroborate contemporary research which suggests that random embeddings can perform comparably to pre-trained and contextual embeddings on benchmark tasks (Arora et al., 2020) .",
"cite_spans": [
{
"start": 793,
"end": 813,
"text": "(Arora et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "4.2"
},
{
"text": "Apart from comparing different meta-learning approaches for word embedding learning, we seek to find which contexts are invariably most informative to a word's meaning. For this purpose, we return to the Chimera dataset. Considering the 2shot case, we rank passages by the performance of the chimera embeddings inferred from them. That is, we infer the chimera embeddings, obtain the cosine similarities against the probe embeddings, and calculate the Spearman correlation against the human scores. We then calculate the pairwise Spearman correlation between these rankings across random seeds and methods experimented with in this paper, i.e. HiCE, HiCE with MAML, and HiCE with Leap. The average Spearman correlation is 0.89 \u00b1 0.0047 for a 95% confidence interval, indicating that the rankings of passages are largely similar across methods. Thus we conclude that the contexts which each method finds most useful to infer a chimera's meaning are largely invariable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informative Contexts",
"sec_num": "4.3"
},
{
"text": "Ordering passages by their cumulative rank across methods, we observe that passages which consistently perform poorly are composed of sentences with more ambiguity and fewer content words. The top section in table 3 gives the lowest and highest performing passages as examples. The lowest performing passage contains little in the way of content words indicating the meaning of the chimera 'refrigerator/closet' besides the presence of 'freezers', and the second sentence is highly ambiguous. Naturally, human annotators would also struggle to pinpoint the semantic meaning of this chimera based on the two sentences given. In contrast the highest performing passage very clearly relates to organic produce and cooking, providing far more hints as to the semantic meaning of the chimera 'broccoli/spinach'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informative Contexts",
"sec_num": "4.3"
},
{
"text": "We also look at which passages perform best for a given chimera. The bottom section in table 3 gives the lowest and highest performing passages for the 'drum/tuba' chimera. We observe the same trend within the chimera, with the lowest is there a better way to start a record or a song than with a thumpin intro they add considerably to the tone of the however when used on their low notes 0.91 Table 3 : Each section gives a higher and lower performing 2-shot passage with its average Spearman correlation across all methods and random seeds. The nonce word is replaced by ' '.",
"cite_spans": [],
"ref_spans": [
{
"start": 394,
"end": 401,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Informative Contexts",
"sec_num": "4.3"
},
{
"text": "performing passage consisting of more ambiguous sentences, while the highest performing passage contains many content words which hint at the semantics of the chimera, such as 'song'. These two passages differ greatly in their performance, with the lowest passage averaging a Spearman correlation of \u22120.63 while the highest passage achieves an average of 0.91. However, if we look across methods the difference in performance for each of these passages is no higher than 0.2. This suggests that one of the most important factors to inferring an OOV word's embedding is the choice of contexts, perhaps more so than the meta-learning employed. To quantify this further, we calculate the Spearman correlation between the proportion of content words in a passage and its score, which we find to be 0.12\u00b10.0083 for a 95% confidence interval. We do this by using a standard part-of-speech tagger (Honnibal and Montani, 2017) on each passage; taking all nouns, adjectives, verbs and adverbs to be content words. While the correlation is weak, it is significant at a 95% confidence interval and further suggests that context informativeness is a suitable future area of work.",
"cite_spans": [
{
"start": 890,
"end": 918,
"text": "(Honnibal and Montani, 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Informative Contexts",
"sec_num": "4.3"
},
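The sketch below shows one way such an analysis can be run, assuming spaCy's small English model as the part-of-speech tagger; the passages and per-passage scores are placeholders, and counting proper nouns among the nouns is an assumption.

```python
import spacy
from scipy.stats import spearmanr

# Requires: pip install spacy scipy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
CONTENT_POS = {"NOUN", "PROPN", "ADJ", "VERB", "ADV"}  # nouns, adjectives, verbs, adverbs

def content_word_proportion(passage):
    doc = nlp(passage)
    tokens = [t for t in doc if not t.is_punct and not t.is_space]
    return sum(t.pos_ in CONTENT_POS for t in tokens) / max(len(tokens), 1)

# Placeholder passages and per-passage Spearman scores.
passages = ["they walked past the enclosure and saw the animal eating leaves",
            "is there a better way to do it than this",
            "the fresh produce was chopped and cooked with garlic"]
scores = [0.5, -0.2, 0.7]
proportions = [content_word_proportion(p) for p in passages]
rho, _pval = spearmanr(proportions, scores)
print(rho)  # correlation between content-word proportion and passage score
```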
{
"text": "We investigated the use of meta-learning for fewshot learning of OOV word embeddings. We built on the work of Hu et al. (2019) , formulating OOV word embedding learning as a few-shot regression problem and training their proposed architecture HiCE to predict OOV embeddings given K contexts and morphological features. We proposed the use of Leap as a meta-learning algorithm to adapt HiCE to a new semantic domain and compared it to the popular MAML as used by Hu et al. (2019) . Experiments on a benchmark dataset show that Leap is more stable and achieves comparably higher performance than MAML in the context of OOV embedding learning. Further experimentation shows that there is little variation in which contexts perform well across both random seeds and meta-learning approaches, and a qualitative analysis indicates that performance is lower on ambiguous sentences with fewer content words. Our findings suggest a future avenue of work which focuses on the selection of contexts from which to learn an OOV embedding, such as prioritising contexts based on a notion of informativeness.",
"cite_spans": [
{
"start": 110,
"end": 126,
"text": "Hu et al. (2019)",
"ref_id": "BIBREF13"
},
{
"start": 462,
"end": 478,
"text": "Hu et al. (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The code used in our experiments is available here: http://github.com/Gordonbuck/ml-oov-we",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Hu et al. (2019) do not distinguish between different number of shots/contexts per word in their results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "How to train your MAML",
"authors": [
{
"first": "Antreas",
"middle": [],
"last": "Antoniou",
"suffix": ""
},
{
"first": "Harrison",
"middle": [],
"last": "Edwards",
"suffix": ""
},
{
"first": "Amos",
"middle": [],
"last": "Storkey",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antreas Antoniou, Harrison Edwards, and Amos Storkey. 2019. How to train your MAML. In Inter- national Conference on Learning Representations.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Contextual embeddings: When are they worth it?",
"authors": [
{
"first": "Simran",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Avner",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2650--2663",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simran Arora, Avner May, Jian Zhang, and Christopher R\u00e9. 2020. Contextual embeddings: When are they worth it? In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 2650-2663, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00051"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Empirical estimates of adaptation: The chance of two noriegas is closer to p/2 than p2",
"authors": [
{
"first": "W",
"middle": [],
"last": "Kenneth",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Church",
"suffix": ""
}
],
"year": 2000,
"venue": "The 18th International Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth W. Church. 2000. Empirical estimates of adaptation: The chance of two noriegas is closer to p/2 than p2. In COLING 2000 Volume 1: The 18th International Conference on Computational Linguis- tics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Introduction to the bio-entity recognition task at JNLPBA",
"authors": [
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
},
{
"first": "Jin-Dong",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP)",
"volume": "",
"issue": "",
"pages": "73--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nigel Collier and Jin-Dong Kim. 2004. Introduc- tion to the bio-entity recognition task at JNLPBA. In Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP), pages 73-78, Geneva, Switzerland. COLING.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Model-agnostic meta-learning for fast adaptation of deep networks",
"authors": [
{
"first": "Chelsea",
"middle": [],
"last": "Finn",
"suffix": ""
},
{
"first": "Pieter",
"middle": [],
"last": "Abbeel",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Levine",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "1126--1135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th Interna- tional Conference on Machine Learning -Volume 70, ICML'17, page 1126-1135. JMLR.org.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Transferring knowledge across learning processes",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Flennerhag",
"suffix": ""
},
{
"first": "Pablo",
"middle": [
"Garcia"
],
"last": "Moreno",
"suffix": ""
},
{
"first": "Neil",
"middle": [],
"last": "Lawrence",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Damianou",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Flennerhag, Pablo Garcia Moreno, Neil Lawrence, and Andreas Damianou. 2019. Transfer- ring knowledge across learning processes. In Inter- national Conference on Learning Representations.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Catastrophic forgetting in connectionist networks",
"authors": [
{
"first": "Robert",
"middle": [
"M"
],
"last": "French",
"suffix": ""
}
],
"year": 1999,
"venue": "Trends in Cognitive Sciences",
"volume": "3",
"issue": "4",
"pages": "128--135",
"other_ids": {
"DOI": [
"10.1016/S1364-6613(99)01294-2"
]
},
"num": null,
"urls": [],
"raw_text": "Robert M. French. 1999. Catastrophic forgetting in connectionist networks. Trends in Cognitive Sci- ences, 3(4):128 -135.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Predicting and interpreting embeddings for out of vocabulary words in downstream tasks",
"authors": [
{
"first": "Nicolas",
"middle": [],
"last": "Garneau",
"suffix": ""
},
{
"first": "Jean-Samuel",
"middle": [],
"last": "Leboeuf",
"suffix": ""
},
{
"first": "Luc",
"middle": [],
"last": "Lamontagne",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "331--333",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5439"
]
},
"num": null,
"urls": [],
"raw_text": "Nicolas Garneau, Jean-Samuel Leboeuf, and Luc Lam- ontagne. 2018. Predicting and interpreting embed- dings for out of vocabulary words in downstream tasks. In Proceedings of the 2018 EMNLP Work- shop BlackboxNLP: Analyzing and Interpreting Neu- ral Networks for NLP, pages 331-333, Brussels, Bel- gium. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Phu Mon Htut, Artem Molchanov, Franziska Meier, Douwe Kiela, Kyunghyun Cho, and Soumith Chintala. 2020. Generalized inner loop meta-learning",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Brandon",
"middle": [],
"last": "Amos",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
}
],
"year": null,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Grefenstette, Brandon Amos, Denis Yarats, Phu Mon Htut, Artem Molchanov, Franziska Meier, Douwe Kiela, Kyunghyun Cho, and Soumith Chin- tala. 2020. Generalized inner loop meta-learning. In International Conference on Learning Represen- tations.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "High-risk learning: acquiring new word vectors from tiny data",
"authors": [
{
"first": "Aur\u00e9lie",
"middle": [],
"last": "Herbelot",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "304--309",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1030"
]
},
"num": null,
"urls": [],
"raw_text": "Aur\u00e9lie Herbelot and Marco Baroni. 2017. High-risk learning: acquiring new word vectors from tiny data. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 304-309, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Honnibal",
"suffix": ""
},
{
"first": "Ines",
"middle": [],
"last": "Montani",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embed- dings, convolutional neural networks and incremen- tal parsing.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Few-shot representation learning for out-ofvocabulary words",
"authors": [
{
"first": "Ziniu",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Yizhou",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4102--4112",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1402"
]
},
"num": null,
"urls": [],
"raw_text": "Ziniu Hu, Ting Chen, Kai-Wei Chang, and Yizhou Sun. 2019. Few-shot representation learning for out-of- vocabulary words. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 4102-4112, Florence, Italy. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Domain adapted word embeddings for improved sentiment classification",
"authors": [
{
"first": "Yingyu",
"middle": [],
"last": "Prathusha Kameswara Sarma",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sethares",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP",
"volume": "",
"issue": "",
"pages": "51--59",
"other_ids": {
"DOI": [
"10.18653/v1/W18-3407"
]
},
"num": null,
"urls": [],
"raw_text": "Prathusha Kameswara Sarma, Yingyu Liang, and Bill Sethares. 2018. Domain adapted word embeddings for improved sentiment classification. In Proceed- ings of the Workshop on Deep Learning Approaches for Low-Resource NLP, pages 51-59, Melbourne. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A la carte embedding: Cheap but effective induction of semantic feature vectors",
"authors": [
{
"first": "Mikhail",
"middle": [],
"last": "Khodak",
"suffix": ""
},
{
"first": "Nikunj",
"middle": [],
"last": "Saunshi",
"suffix": ""
},
{
"first": "Yingyu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Tengyu",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Brandon",
"middle": [],
"last": "Stewart",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "12--22",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1002"
]
},
"num": null,
"urls": [],
"raw_text": "Mikhail Khodak, Nikunj Saunshi, Yingyu Liang, Tengyu Ma, Brandon Stewart, and Sanjeev Arora. 2018. A la carte embedding: Cheap but effective induction of semantic feature vectors. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 12-22, Melbourne, Australia. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Character-aware neural language models",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16",
"volume": "",
"issue": "",
"pages": "2741--2749",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim, Yacine Jernite, David Sontag, and Alexan- der M. Rush. 2016. Character-aware neural lan- guage models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16, page 2741-2749. AAAI Press.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Neural architectures for named entity recognition",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Kazuya",
"middle": [],
"last": "Kawakami",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "260--270",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1030"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 260-270, San Diego, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Multimodal word meaning induction from minimal exposure to natural text",
"authors": [
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Marelli",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2017,
"venue": "Cognitive Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1111/cogs.12481"
]
},
"num": null,
"urls": [],
"raw_text": "Angeliki Lazaridou, Marco Marelli, and Marco Baroni. 2017. Multimodal word meaning induction from minimal exposure to natural text. Cognitive Science, 41 Suppl 4.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Pointer sentinel mixture models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture mod- els. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Open- Review.net.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems",
"volume": "2",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed represen- tations of words and phrases and their composition- ality. In Proceedings of the 26th International Con- ference on Neural Information Processing Systems -Volume 2, NIPS'13, page 3111-3119, Red Hook, NY, USA. Curran Associates Inc.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "On first-order meta-learning algorithms",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Nichol",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Achiam",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Schulman",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Nichol, Joshua Achiam, and John Schulman. 2018. On first-order meta-learning algorithms.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Automatic differentiation in pytorch",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Mimicking word embeddings using subword RNNs",
"authors": [
{
"first": "Yuval",
"middle": [],
"last": "Pinter",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Guthrie",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "102--112",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1010"
]
},
"num": null,
"urls": [],
"raw_text": "Yuval Pinter, Robert Guthrie, and Jacob Eisenstein. 2017. Mimicking word embeddings using subword RNNs. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 102-112, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Optimization as a model for few-shot learning",
"authors": [
{
"first": "Sachin",
"middle": [],
"last": "Ravi",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sachin Ravi and Hugo Larochelle. 2017. Optimiza- tion as a model for few-shot learning. In 5th Inter- national Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Con- ference Track Proceedings. OpenReview.net.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Prototypical networks for few-shot learning",
"authors": [
{
"first": "Jake",
"middle": [],
"last": "Snell",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Swersky",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "4077--4087",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 4077-4087. Curran Associates, Inc.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Matching networks for one shot learning",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Blundell",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Lillicrap",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Daan",
"middle": [],
"last": "Wierstra",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "29",
"issue": "",
"pages": "3630--3638",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Charles Blundell, Timothy Lillicrap, ko- ray kavukcuoglu, and Daan Wierstra. 2016. Match- ing networks for one shot learning. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 29, pages 3630-3638. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Understanding short-horizon bias in stochastic meta-optimization",
"authors": [
{
"first": "Yuhuai",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mengye",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Renjie",
"middle": [],
"last": "Liao",
"suffix": ""
},
{
"first": "Roger",
"middle": [
"B"
],
"last": "Grosse",
"suffix": ""
}
],
"year": 2018,
"venue": "6th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuhuai Wu, Mengye Ren, Renjie Liao, and Roger B. Grosse. 2018. Understanding short-horizon bias in stochastic meta-optimization. In 6th Inter- national Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 -May 3, 2018, Conference Track Proceedings. OpenRe- view.net.",
"links": null
}
},
"ref_entries": {}
}
}