{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:17:04.295314Z"
},
"title": "Dynamic Memory Induction Networks for Few-Shot Text Classification",
"authors": [
{
"first": "Ruiying",
"middle": [],
"last": "Geng",
"suffix": "",
"affiliation": {
"laboratory": "Alibaba Group",
"institution": "",
"location": {
"settlement": "Beijing"
}
},
"email": "ruiying.gry@alibaba-inc.com"
},
{
"first": "Binhua",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "Alibaba Group",
"institution": "",
"location": {
"settlement": "Beijing"
}
},
"email": "binhua.lbh@alibaba-inc.com"
},
{
"first": "Yongbin",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {
"laboratory": "Alibaba Group",
"institution": "",
"location": {
"settlement": "Beijing"
}
},
"email": "jian.sun@alibaba-inc.com"
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": "",
"affiliation": {
"laboratory": "Alibaba Group",
"institution": "",
"location": {
"settlement": "Beijing"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper proposes Dynamic Memory Induction Networks (DMIN) for few-shot text classification. The model utilizes dynamic routing to provide more flexibility to memory-based few-shot learning in order to better adapt the support sets, which is a critical capacity of fewshot classification models. Based on that, we further develop induction models with query information, aiming to enhance the generalization ability of meta-learning. The proposed model achieves new state-of-the-art results on the miniRCV1 and ODIC dataset, improving the best performance (accuracy) by 2\u223c4%. Detailed analysis is further performed to show the effectiveness of each component.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper proposes Dynamic Memory Induction Networks (DMIN) for few-shot text classification. The model utilizes dynamic routing to provide more flexibility to memory-based few-shot learning in order to better adapt the support sets, which is a critical capacity of fewshot classification models. Based on that, we further develop induction models with query information, aiming to enhance the generalization ability of meta-learning. The proposed model achieves new state-of-the-art results on the miniRCV1 and ODIC dataset, improving the best performance (accuracy) by 2\u223c4%. Detailed analysis is further performed to show the effectiveness of each component.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Few-shot text classification, which requires models to perform classification with a limited number of training instances, is important for many applications but yet remains to be a challenging task. Early studies on few-shot learning (Salamon and Bello, 2017) employ data augmentation and regularization techniques to alleviate overfitting caused by data sparseness. More recent research leverages meta-learning (Finn et al., 2017; to extract transferable knowledge among meta-tasks in meta episodes.",
"cite_spans": [
{
"start": 235,
"end": 260,
"text": "(Salamon and Bello, 2017)",
"ref_id": "BIBREF30"
},
{
"start": 413,
"end": 432,
"text": "(Finn et al., 2017;",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A key challenge for few-shot text classification is inducing class-level representation from support sets (Gao et al., 2019) , in which key information is often lost when switching between meta-tasks. Recent solutions (Gidaris and Komodakis, 2018) leverage a memory component to maintain models' learning experience, e.g., by finding from a supervised stage the content that is similar to the unseen classes, leading to the state-of-the-art performance. However, the memory weights are static during inference and the capability of the model is still limited when adapted to new classes. Another prominent challenge is the instance-level diversity caused by various reasons (Gao et al., 2019; Geng et al., 2019) , resulting in the difficulty of finding a fixed prototype for a class (Allen et al., 2019) . Recent research has shown that models can benefit from query-aware methods (Gao et al., 2019) .",
"cite_spans": [
{
"start": 106,
"end": 124,
"text": "(Gao et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 218,
"end": 247,
"text": "(Gidaris and Komodakis, 2018)",
"ref_id": "BIBREF13"
},
{
"start": 674,
"end": 692,
"text": "(Gao et al., 2019;",
"ref_id": "BIBREF11"
},
{
"start": 693,
"end": 711,
"text": "Geng et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 783,
"end": 803,
"text": "(Allen et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 881,
"end": 899,
"text": "(Gao et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we propose Dynamic Memory Induction Networks (DMIN) to further tackle the above challenges. DMIN utilizes dynamic routing (Sabour et al., 2017; Geng et al., 2019) to render more flexibility to memory-based few-shot learning (Gidaris and Komodakis, 2018) in order to better adapt the support sets, by leveraging the routing component's capacity in automatically adjusting the coupling coefficients during and after training. Based on that, we further develop induction models with query information to identify, among diverse instances in support sets, the sample vectors that are more relevant to the query. These two modules are jointly learned in DMIN.",
"cite_spans": [
{
"start": 136,
"end": 157,
"text": "(Sabour et al., 2017;",
"ref_id": "BIBREF29"
},
{
"start": 158,
"end": 176,
"text": "Geng et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 238,
"end": 267,
"text": "(Gidaris and Komodakis, 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The proposed model achieves new state-of-theart results on the miniRCV1 and ODIC datasets, improving the best performance by 2\u223c4% accuracy. We perform detailed analysis to further show how the proposed network achieves the improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Few-shot learning has been studied in early work such as (Fe-Fei et al., 2003; Fei-Fei et al., 2006) and more recent work (Ba et al., 2016; Santoro et al., 2016; Munkhdalai and Yu, 2017; Ravi and Larochelle, 2016; Mishra et al., 2017; Finn et al., 2017; Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018; Allen et al., 2019) . Researchers have also investigated few-shot learning in various NLP tasks (Dou et al., 2019; Gu et al., 2018; Obamuyide and Vlachos, 2019; Hu et al., 2019) , including text classification (Yu et al., 2018; Rios and Kavuluru, 2018; Geng et al., 2019; Gao et al., 2019; Ye and Ling, 2019) .",
"cite_spans": [
{
"start": 57,
"end": 78,
"text": "(Fe-Fei et al., 2003;",
"ref_id": "BIBREF8"
},
{
"start": 79,
"end": 100,
"text": "Fei-Fei et al., 2006)",
"ref_id": "BIBREF9"
},
{
"start": 122,
"end": 139,
"text": "(Ba et al., 2016;",
"ref_id": "BIBREF1"
},
{
"start": 140,
"end": 161,
"text": "Santoro et al., 2016;",
"ref_id": "BIBREF31"
},
{
"start": 162,
"end": 186,
"text": "Munkhdalai and Yu, 2017;",
"ref_id": null
},
{
"start": 187,
"end": 213,
"text": "Ravi and Larochelle, 2016;",
"ref_id": "BIBREF27"
},
{
"start": 214,
"end": 234,
"text": "Mishra et al., 2017;",
"ref_id": "BIBREF21"
},
{
"start": 235,
"end": 253,
"text": "Finn et al., 2017;",
"ref_id": "BIBREF10"
},
{
"start": 254,
"end": 275,
"text": "Vinyals et al., 2016;",
"ref_id": "BIBREF38"
},
{
"start": 276,
"end": 295,
"text": "Snell et al., 2017;",
"ref_id": "BIBREF32"
},
{
"start": 296,
"end": 314,
"text": "Sung et al., 2018;",
"ref_id": "BIBREF35"
},
{
"start": 315,
"end": 334,
"text": "Allen et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 411,
"end": 429,
"text": "(Dou et al., 2019;",
"ref_id": "BIBREF7"
},
{
"start": 430,
"end": 446,
"text": "Gu et al., 2018;",
"ref_id": "BIBREF14"
},
{
"start": 447,
"end": 475,
"text": "Obamuyide and Vlachos, 2019;",
"ref_id": "BIBREF23"
},
{
"start": 476,
"end": 492,
"text": "Hu et al., 2019)",
"ref_id": "BIBREF40"
},
{
"start": 525,
"end": 542,
"text": "(Yu et al., 2018;",
"ref_id": "BIBREF43"
},
{
"start": 543,
"end": 567,
"text": "Rios and Kavuluru, 2018;",
"ref_id": "BIBREF28"
},
{
"start": 568,
"end": 586,
"text": "Geng et al., 2019;",
"ref_id": "BIBREF12"
},
{
"start": 587,
"end": 604,
"text": "Gao et al., 2019;",
"ref_id": "BIBREF11"
},
{
"start": 605,
"end": 623,
"text": "Ye and Ling, 2019)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Memory mechanism has shown to be very effective in many NLP tasks (Tang et al., 2016; Das et al., 2017; Madotto et al., 2018) . In the fewshot learning scenario, researchers have applied memory networks to store the encoded contextual information in each meta episode (Santoro et al., 2016; Cai et al., 2018; . Specifically Qi et al. (2018) and Gidaris and Komodakis (2018) build a two-stage training procedure and regard the supervisely learned class representation as a memory component.",
"cite_spans": [
{
"start": 66,
"end": 85,
"text": "(Tang et al., 2016;",
"ref_id": "BIBREF36"
},
{
"start": 86,
"end": 103,
"text": "Das et al., 2017;",
"ref_id": "BIBREF5"
},
{
"start": 104,
"end": 125,
"text": "Madotto et al., 2018)",
"ref_id": "BIBREF20"
},
{
"start": 268,
"end": 290,
"text": "(Santoro et al., 2016;",
"ref_id": "BIBREF31"
},
{
"start": 291,
"end": 308,
"text": "Cai et al., 2018;",
"ref_id": "BIBREF3"
},
{
"start": 324,
"end": 340,
"text": "Qi et al. (2018)",
"ref_id": "BIBREF3"
},
{
"start": 345,
"end": 373,
"text": "Gidaris and Komodakis (2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "An overview of our Dynamic Memory Induction Networks (DMIN) is shown in Figure 1 , which is built on the two-stage few-shot framework Gidaris and Komodakis (2018) . In the supervised learning stage (upper, green subfigure), a subset of classes in training data are selected as the base sets, consisting of C base number of base classes, which is used to finetune a pretrained sentence encoder and to train a classifier.",
"cite_spans": [
{
"start": 134,
"end": 162,
"text": "Gidaris and Komodakis (2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 72,
"end": 80,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Overall Architecture",
"sec_num": "3.1"
},
{
"text": "In the meta-learning stage (bottom, orange subfigure), we construct an \"episode\" to compute gradients and update our model in each training iteration. For a C-way K-shot problem, a training episode is formed by randomly selecting C classes from the training set and choosing K examples within each selected class to act as the support set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Architecture",
"sec_num": "3.1"
},
{
"text": "S = \u222a C c=1 {x c,s , y c,s } K s=1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Architecture",
"sec_num": "3.1"
},
{
"text": "A subset of the remaining examples serve as the query set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Architecture",
"sec_num": "3.1"
},
{
"text": "Q = {x q , y q } L q=1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Architecture",
"sec_num": "3.1"
},
{
"text": "Training on such episodes is conducted by feeding the support set S to the model and updating its parameters to minimize the loss in the query set Q.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Architecture",
"sec_num": "3.1"
},
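{
"text": "To make the episode construction concrete, the following minimal Python sketch (not the authors' released code) shows how a C-way K-shot episode with a fixed number of queries per class could be sampled; the dictionary label2texts, mapping each training class to its texts, is an assumed helper.\n\nimport random\n\ndef sample_episode(label2texts, C=5, K=5, n_query=10):\n    # Randomly pick C classes, then K support and n_query query texts per class.\n    classes = random.sample(list(label2texts), C)\n    support, query = [], []\n    for c, name in enumerate(classes):\n        texts = random.sample(label2texts[name], K + n_query)\n        support += [(t, c) for t in texts[:K]]   # S = union_c {x_{c,s}, y_{c,s}}_{s=1..K}\n        query += [(t, c) for t in texts[K:]]     # Q = {x_q, y_q}\n    return support, query",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Architecture",
"sec_num": "3.1"
},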
{
"text": "We expect that developing few-shot text classifier should benefit from the recent advance on pretrained models (Peters et al., 2018; Devlin et al., 2019; Radford et al.) . Unlike recent work (Geng et al., 2019) , we employ BERT-base (Devlin et al., 2019) for sentence encoding , which has been used in recent few-shot learning models (Bao et al., 2019; Soares et al., 2019) . The model architecture of BERT (Devlin et al., 2019 ) is a multi-layer bidi- ",
"cite_spans": [
{
"start": 111,
"end": 132,
"text": "(Peters et al., 2018;",
"ref_id": "BIBREF24"
},
{
"start": 133,
"end": 153,
"text": "Devlin et al., 2019;",
"ref_id": "BIBREF6"
},
{
"start": 154,
"end": 169,
"text": "Radford et al.)",
"ref_id": null
},
{
"start": 191,
"end": 210,
"text": "(Geng et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 233,
"end": 254,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 334,
"end": 352,
"text": "(Bao et al., 2019;",
"ref_id": "BIBREF2"
},
{
"start": 353,
"end": 373,
"text": "Soares et al., 2019)",
"ref_id": "BIBREF33"
},
{
"start": 407,
"end": 427,
"text": "(Devlin et al., 2019",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-trained Encoder",
"sec_num": "3.2"
},
{
"text": "f K e y S g g = = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" rectional Transformer encoder based on the original Transformer model (Vaswani et al., 2017) . A special classification embedding ([CLS]) is inserted as the first token and a special token ([SEP]) is added as the final token. We use the d-dimensional hidden vector output from the [CLS] as the representation e of a given text x: e = E(x|\u03b8). The pretrained BERT model provides a powerful contextdependent sentence representation and can be used for various target tasks, and it is suitable for the few-shot text classification task (Bao et al., 2019; Soares et al., 2019) .",
"cite_spans": [
{
"start": 150,
"end": 172,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF37"
},
{
"start": 612,
"end": 630,
"text": "(Bao et al., 2019;",
"ref_id": "BIBREF2"
},
{
"start": 631,
"end": 651,
"text": "Soares et al., 2019)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-trained Encoder",
"sec_num": "3.2"
},
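{
"text": "As an illustration, the following minimal sketch extracts the d-dimensional [CLS] vector e = E(x|\u03b8) for a text x, assuming the HuggingFace transformers implementation of BERT; the checkpoint name 'bert-base-uncased' is illustrative, since the paper only states that the Google pre-trained BERT-Base model is used.\n\nimport torch\nfrom transformers import BertTokenizer, BertModel\n\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\nencoder = BertModel.from_pretrained('bert-base-uncased')\n\ndef encode(text):\n    # Return the hidden state at the [CLS] position as the text representation e.\n    inputs = tokenizer(text, return_tensors='pt', truncation=True)\n    with torch.no_grad():\n        outputs = encoder(**inputs)\n    return outputs.last_hidden_state[:, 0]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-trained Encoder",
"sec_num": "3.2"
},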
{
"text": "We finetune the pre-trained BERT encoder in the supervised learning stage. For each input document x, the encoder E(x|\u03b8) (with parameter \u03b8) will output a vector e of d dimension. W base is a matrix that maintains a class-level vector for each base class, serving as a base memory for meta-learning. Both E(x|\u03b8) and W base will be further tuned in the meta training procedure. We will show in our experiments that replacing previous models with pre-trained encoder outperforms the corresponding state-of-the-art models, and the proposed DMIN can further improve over that.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-trained Encoder",
"sec_num": "3.2"
},
{
"text": "At the meta-learning stage, to induce class-level representations from given support sets, we develop a dynamic memory module (DMM) based on knowledge learned from the supervised learning stage through the memory matrix W base . Unlike static memory (Gidaris and Komodakis, 2018), DMM utilizes dynamic routing (Sabour et al., 2017) to render more flexibility to the memory learned from base classes to better adapt support sets. The routing component can automatically adjust the coupling coefficients during and after training, which inherently suits for the need of fewshot learning.",
"cite_spans": [
{
"start": 310,
"end": 331,
"text": "(Sabour et al., 2017)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Memory Module",
"sec_num": "3.3"
},
{
"text": "Specifically, the instances in the support sets are first encoded by the BERT into sample vectors {e c,s } K s=1 and then fed to the following dynamic memory routing process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Memory Module",
"sec_num": "3.3"
},
{
"text": "The algorithm of the dynamic memory routing process, denoted as DMR, is presented in Algorighm 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Memory Routing Process",
"sec_num": null
},
{
"text": "Given a memory matrix M (here W base ) and sample vector q \u2208 R d , the algorithm aims to adapt the sample vector based on memory M learned in the supervised learning stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Memory Routing Process",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "q = DM R(M, q).",
"eq_num": "(1)"
}
],
"section": "Dynamic Memory Routing Process",
"sec_num": null
},
{
"text": "First, for each entry m i \u2208 M , the standard matrix-transformation and squash operations in dynamic routing (Sabour et al., 2017) are applied on the inputs:",
"cite_spans": [
{
"start": 108,
"end": 129,
"text": "(Sabour et al., 2017)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Memory Routing Process",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m ij = squash(W j m i + b j ), (2) q j = squash(W j q + b j ),",
"eq_num": "(3)"
}
],
"section": "Dynamic Memory Routing Process",
"sec_num": null
},
{
"text": "where the transformation weights W j and bias b j are shared across the inputs to fit the few-shot learning scenario.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Memory Routing Process",
"sec_num": null
},
{
"text": "We then calculate the Pearson Correlation Coefficients (PCCs) (Hunt, 1986; betweenm i andq j . p ij = tanh(P CCs(m ij ,q j )), (4)",
"cite_spans": [
{
"start": 62,
"end": 74,
"text": "(Hunt, 1986;",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Memory Routing Process",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P CCs = Cov(x 1 , x 2 ) \u03c3 x 1 \u03c3 x 2 .",
"eq_num": "(5)"
}
],
"section": "Dynamic Memory Routing Process",
"sec_num": null
},
{
"text": "where the general formula of PCCs is given above for vectors x 1 and x 2 . Since PCCs values are in the range of [-1, 1], they can be used to encourage or penalize the routing parameters. The routing iteration process can now adjust coupling coefficients, denoted as d i , with regard to the input capsules m i , q and higher level capsules v j .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Memory Routing Process",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d i = sof tmax (\u03b1 i ) ,",
"eq_num": "(6)"
}
],
"section": "Dynamic Memory Routing Process",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1 ij = \u03b1 ij + p ijmi v j .",
"eq_num": "(7)"
}
],
"section": "Dynamic Memory Routing Process",
"sec_num": null
},
{
"text": "Since our goal is to develop dynamic routing mechanism over memory for few-shot learning, we add the PCCs with the routing agreements in every routing iteration as shown in Eq. 8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Memory Routing Process",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v j = n i=1 (d ij + p ij )m ij ,",
"eq_num": "(8)"
}
],
"section": "Dynamic Memory Routing Process",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v j = squash(v j ).",
"eq_num": "(9)"
}
],
"section": "Dynamic Memory Routing Process",
"sec_num": null
},
{
"text": "Algorithm 1 Dynamic Memory Routing Process",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Memory Routing Process",
"sec_num": null
},
{
"text": "Require: r, q, and memory M = {m_1, m_2, ..., m_n}. Ensure: v = {v_1, v_2, ..., v_l}, q'.\n1: for all m_i, v_j do\n2:    \\hat{m}_{ij} = squash(W_j m_i + b_j)\n3:    \\hat{q}_j = squash(W_j q + b_j)\n4:    \\alpha_{ij} = 0\n5:    p_{ij} = tanh(PCCs(\\hat{m}_{ij}, \\hat{q}_j))\n6: end for\n7: for r iterations do\n8:    d_i = softmax(\\alpha_i)\n9:    \\hat{v}_j = \\sum_{i=1}^{n} (d_{ij} + p_{ij}) \\hat{m}_{ij}\n10:   v_j = squash(\\hat{v}_j)\n11:   for all i, j: \\alpha_{ij} = \\alpha_{ij} + p_{ij} \\hat{m}_{ij} \\cdot v_j\n12:   for all j: \\hat{q}_j = (\\hat{q}_j + v_j) / 2\n13:   for all i, j: p_{ij} = tanh(PCCs(\\hat{m}_{ij}, \\hat{q}_j))\n14: end for\n15: q' = concat[v]\n16: return q'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Memory Routing Process",
"sec_num": null
},
{
"text": "We update the coupling coefficients \u03b1 ij and p ij with Eq. 6 and Eq. 7, and finally output the adapted vector q as in Algorithm 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Memory Routing Process",
"sec_num": null
},
{
"text": "The Dynamic Memory Module (DMM) aims to use DMR to adapt sample vectors e c,s , guided by the memory W base . That is, the resulting adapted sample vector is computed with e c,s = DM R(W base , e c,s ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Memory Routing Process",
"sec_num": null
},
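{
"text": "The following PyTorch sketch illustrates one possible implementation of the DMR step in Algorithm 1. The number of output capsules n_out, the per-capsule size d_out, and the shapes of the shared transformation W and bias b are illustrative assumptions, not the authors' released code.\n\nimport torch\nimport torch.nn.functional as F\n\ndef squash(x, dim=-1, eps=1e-8):\n    # squash(x) = (|x|^2 / (1 + |x|^2)) * x / |x|\n    sq = (x ** 2).sum(dim=dim, keepdim=True)\n    return (sq / (1.0 + sq)) * x / torch.sqrt(sq + eps)\n\ndef pccs(a, b, dim=-1, eps=1e-8):\n    # Pearson correlation coefficient along dim (Eq. 5).\n    a = a - a.mean(dim=dim, keepdim=True)\n    b = b - b.mean(dim=dim, keepdim=True)\n    num = (a * b).sum(dim=dim)\n    den = torch.sqrt((a ** 2).sum(dim=dim) * (b ** 2).sum(dim=dim)) + eps\n    return num / den\n\ndef dmr(memory, q, W, b, r=3):\n    # memory: (n, d) entries m_i; q: (d,) input vector.\n    # W: (n_out, d_out, d) and b: (n_out, d_out) are the shared transformation of Eq. 2-3.\n    m_hat = squash(torch.einsum('jod,nd->njo', W, memory) + b)   # (n, n_out, d_out)\n    q_hat = squash(torch.einsum('jod,d->jo', W, q) + b)          # (n_out, d_out)\n    alpha = torch.zeros(memory.size(0), W.size(0))               # (n, n_out)\n    p = torch.tanh(pccs(m_hat, q_hat.unsqueeze(0)))              # (n, n_out)\n    for _ in range(r):\n        d_coef = F.softmax(alpha, dim=-1)                               # Eq. 6\n        v = squash(((d_coef + p).unsqueeze(-1) * m_hat).sum(dim=0))     # Eq. 8-9\n        alpha = alpha + p * (m_hat * v.unsqueeze(0)).sum(dim=-1)        # Eq. 7\n        q_hat = (q_hat + v) / 2\n        p = torch.tanh(pccs(m_hat, q_hat.unsqueeze(0)))\n    return v.reshape(-1)                                          # adapted vector q' = concat[v]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Memory Routing Process",
"sec_num": null
},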
{
"text": "After the sample vectors e c,s s=1,...,K are adapted and query vectors {e q } L q=1 are encoded by the pretrained encoder, we now incorporate queries to build a Query-guided Induction Module (QIM). The aim is to identify, among (adapted) sample vectors of support sets, the vectors that are more relevant to the query, in order to construct classlevel vectors to better classify the query. Since dynamic routing can automatically adjusts the coupling coefficients to help enhance related (e.g., similar) queries and sample vectors, and penalizes unrelated ones, QIM reuses the DMR process by treating adapted sample vectors as memory of background knowledge about novel classes, and induces class-level representation from the adapted sample vectors that are more relevant/similar to the query under concern. e c = DM R( e c,s s=1,...,K , e q ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-enhanced Induction Module",
"sec_num": "3.4"
},
{
"text": "(10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-enhanced Induction Module",
"sec_num": "3.4"
},
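{
"text": "Under the assumptions of the DMR sketch above (the shared transformation W and b, the dmr function, and n_out * d_out equal to d so that adapted vectors keep the input dimension), DMM and QIM could be chained as follows for one class c; e_cs is an assumed (K, d) tensor of encoded support texts, e_q a query vector, and W_base the (C_base, d) base memory. This is an illustrative sketch, not the authors' implementation.\n\nimport torch\n\n# DMM: adapt each support vector of class c with the base memory (Section 3.3).\ne_cs_adapted = torch.stack([dmr(W_base, e, W, b) for e in e_cs])\n# QIM: induce the class-level vector from the adapted support vectors, guided by the query (Eq. 10).\ne_c = dmr(e_cs_adapted, e_q, W, b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-enhanced Induction Module",
"sec_num": "3.4"
},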
{
"text": "In the final classification stage, we then feed the novel class vector e c and query vector e q to the classifier discussed above in the supervised training stage and get the classification score. The standard setting for neural network classifiers is, after having extracted the feature vector e \u2208 R d , to estimate the classification probability vector p by first computing the raw classification score s k of each category k \u2208 [1, K * ] using the dot-product operator s k = e T w * k , and then applying softmax operator across all the K * classification scores. However, this type of classifiers do not fit few-shot learning due to completely novel categories. In this work, we compute the raw classification scores using a cosine similarity operator:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Classifier",
"sec_num": "3.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s k = \u03c4 \u2022 cos(e, w * k ) = \u03c4 \u2022 e T w * k ,",
"eq_num": "(11)"
}
],
"section": "Similarity Classifier",
"sec_num": "3.5"
},
{
"text": "where \\bar{e} = e / ||e|| and \\bar{w}_k^* = w_k^* / ||w_k^*||",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Classifier",
"sec_num": "3.5"
},
{
"text": "are l 2 \u2212normalized vectors, and \u03c4 is a learnable scalar value. After the base classifier is trained, all the feature vectors that belong to the same class must be very closely matched with the single classification weight vector of that class. So the base classification weights W base = {w b } C base b=1 trained in the 1st stage can be seen as the base classes' feature vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Classifier",
"sec_num": "3.5"
},
{
"text": "In the few-shot classification scenario, we feed the query vector e q and novel class vector e c to the classifier and get the classification scores in a unified manner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Classifier",
"sec_num": "3.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s q,c = \u03c4 \u2022 cos(e q , e c ) = \u03c4 \u2022 e T q e c .",
"eq_num": "(12)"
}
],
"section": "Similarity Classifier",
"sec_num": "3.5"
},
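{
"text": "A minimal sketch of this cosine-similarity classifier (Eq. 11-12) is given below; tau stands for the learnable scalar \u03c4, and the same scoring function is applied to base-class weight vectors and to induced novel-class vectors. The names are illustrative, not taken from the authors' code.\n\nimport torch\nimport torch.nn.functional as F\n\ndef cos_score(e, w, tau):\n    # s = tau * cos(e, w) computed with l2-normalized vectors (Eq. 11).\n    return tau * (F.normalize(e, dim=-1) * F.normalize(w, dim=-1)).sum(dim=-1)\n\n# Few-shot scenario: score of a query vector e_q against a novel class vector e_c (Eq. 12).\n# s_qc = cos_score(e_q, e_c, tau)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Classifier",
"sec_num": "3.5"
},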
{
"text": "In the supervised learning stage, the training objective is to minimize the cross-entropy loss on C base number of base classes given an input text x and its label y:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L 1 (x, y,\u0177) = \u2212 C base k=1 y k log(\u0177 k ),",
"eq_num": "(13)"
}
],
"section": "Objective Function",
"sec_num": "3.6"
},
{
"text": "where y is one-hot representation of the ground truth label, and\u0177 is the predicted probabilities of base classes with\u0177 k = sof tmax(s k ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.6"
},
{
"text": "In the meta-training stage, for each meta episode, given the support set S and query set Q = {x q , y q } L q=1 , the training objective is to minimize the cross-entropy loss on C novel classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L 2 (S, Q) = \u2212 1 C C c=1 1 L L q=1 y q log(\u0177 q ),",
"eq_num": "(14)"
}
],
"section": "Objective Function",
"sec_num": "3.6"
},
{
"text": "where\u0177 q = sof tmax(s q ) is the predicted probabilities of C novel classes in this meta episode, with s q = {s q,c } C c=1 from Equation 12. We feed the support set S to the model and update its parameters to minimize the loss in the query set Q in each meta episode.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.6"
},
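{
"text": "When the episode contains L query texts per class, Eq. 14 reduces to the mean cross-entropy over all C \u00d7 L query predictions, so a standard cross-entropy call suffices. In the sketch below, scores is an assumed (C*L, C) tensor of the raw scores s_{q,c} from Eq. 12 (before the softmax) and labels holds the corresponding gold class indices; this is an illustration, not the authors' code.\n\nimport torch.nn.functional as F\n\ndef episode_loss(scores, labels):\n    # Mean cross-entropy over the episode's query set (Eq. 14).\n    return F.cross_entropy(scores, labels)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.6"
},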
{
"text": "We evaluate our model on the miniRCV1 (Jiang et al., 2018) and ODIC dataset (Geng et al., 2019) . Following previous work (Snell et al., 2017; Geng et al., 2019) , we use few-shot classification accuracy as the evaluation metric. We average over 100 and 300 randomly generated meta-episodes from the testing set in miniRCV1 and ODIC, respectively. We sample 10 test texts per class in each episode for evaluation in both the 1-shot and 5-shot scenarios.",
"cite_spans": [
{
"start": 38,
"end": 58,
"text": "(Jiang et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 76,
"end": 95,
"text": "(Geng et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 122,
"end": 142,
"text": "(Snell et al., 2017;",
"ref_id": "BIBREF32"
},
{
"start": 143,
"end": 161,
"text": "Geng et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Evaluation Metrics",
"sec_num": "4.1"
},
{
"text": "We use Google pre-trained BERT-Base model as our text encoder, and fine-tune the model in the training procedure. The number of base classes C base on ODIC and miniRCV1 is set to be 100 and 20, respectively. The number of DMR interaction is 3. We build episode-based meta-training models with C = [5, 10] and K = [1, 5] for comparison. In addition to using K sample texts as the support set, the query set has 10 query texts for each of the C sampled classes in every training episode. For example, there are 10 \u00d7 5 + 5 \u00d7 5 = 75 texts in one training episode for a 5-way 5-shot experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.2"
},
{
"text": "We compare DMIN with various baselines and state-of-the-art models: BERT (Devlin et al., 2019) finetune, ATAML (Jiang et al., 2018) , Rel. Net (Sung et al., 2018) , Ind. Net (Geng et al., 2019) , HATT (Gao et al., 2019) , and LwoF (Gidaris and Komodakis, 2018) . Note that we re-implement them with the BERT sentence encoder for direct comparison.",
"cite_spans": [
{
"start": 73,
"end": 94,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 111,
"end": 131,
"text": "(Jiang et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 143,
"end": 162,
"text": "(Sung et al., 2018)",
"ref_id": "BIBREF35"
},
{
"start": 174,
"end": 193,
"text": "(Geng et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 201,
"end": 219,
"text": "(Gao et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 231,
"end": 260,
"text": "(Gidaris and Komodakis, 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "Overall Performance The accuracy and standard deviations of the models are shown in Table 1 and 2. We can see that DMIN consistently outperform all existing models and achieve new state-of-the-art results on both datasets. The differences between DMIN and all the other models are statistically significant under the one-tailed paired t-test at the 95% significance level. Note that LwoF builds a two-stage training procedure with a memory module learnt from the supervised learning and used in the meta-learning stage, but the memory mechanism is static after training, while DMIN uses dynamic memory routing to automatically adjust the coupling coefficients after training to generalize to novel classes, and outperform LwoF significantly. Note also that the performance of some of the baseline models (Rel. Net and Ind. Net) reported in Table 1 and 2 is higher than that in Geng et al. (2019) since we used BERT to replace BiLSTM-based encoders. The BERT encoder improves the baseline models by a powerful context meaning representation ability, and our model can further outperform these models with a dynamic memory routing method. Even with these stronger baselines, the proposed DMIN consistently outperforms them on both dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 840,
"end": 847,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "We analyze the effect of different components of DMIN on ODIC in Table 3 . Specifically, we remove DMM and QIM, and vary the number of DMR iterations. We see that the best performance is achieved with 3 iterations. The results show the effectiveness of both the dynamic memory module and the induction module with query information. Figure 2 is the t-SNE visualization (Maaten and Hinton, 2008) for support sample vectors before Table 3 : Ablation study of accuracy (%) on ODIC in a 5-way setup. and after DMM under a 10-way 5-shot setup on ODIC. We randomly select a support set with 50 texts (10 texts per class) from the ODIC testing set, and obtain the sample vectors before and after DMM, i.e., {e c,s } c=1,...5,s=1...10 and e c,s c=1,...5,s=1...10 . We can see that the support vectors produced by the DMM are better separated, demonstrating the effectiveness of DMM in leveraging the supervised learning experience to encode semantic relationships between lower level instance features and higher level class features for few-shot text classification.",
"cite_spans": [
{
"start": 369,
"end": 394,
"text": "(Maaten and Hinton, 2008)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 65,
"end": 72,
"text": "Table 3",
"ref_id": null
},
{
"start": 333,
"end": 341,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 429,
"end": 436,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": null
},
{
"text": "We propose Dynamic Memory Induction Networks (DMIN) for few-shot text classification, which builds on external working memory with dynamic routing, leveraging the latter to track previous learning experience and the former to adapt and generalize better to support sets and hence to unseen classes. The model achieves new state-of-the-art results on the miniRCV1 and ODIC datasets. Since dynamic memory can be a learning mechanism more general than what we have used here for fewshot learning, we will investigate this type of models in other learning problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "The authors would like to thank the organizers of ACL-2020 and the reviewers for their helpful suggestions. The research of the last author is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Infinite mixture prototypes for few-shot learning",
"authors": [
{
"first": "Kelsey",
"middle": [],
"last": "Allen",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Shelhamer",
"suffix": ""
},
{
"first": "Hanul",
"middle": [],
"last": "Shin",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Tenenbaum",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 36th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "232--241",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelsey Allen, Evan Shelhamer, Hanul Shin, and Joshua Tenenbaum. 2019. Infinite mixture prototypes for few-shot learning. In Proceedings of the 36th In- ternational Conference on Machine Learning, vol- ume 97 of Proceedings of Machine Learning Re- search, pages 232-241, Long Beach, California, USA. PMLR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Using fast weights to attend to the recent past",
"authors": [
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Volodymyr",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Joel",
"middle": [
"Z"
],
"last": "Leibo",
"suffix": ""
},
{
"first": "Catalin",
"middle": [],
"last": "Ionescu",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "4331--4339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jimmy Ba, Geoffrey E Hinton, Volodymyr Mnih, Joel Z Leibo, and Catalin Ionescu. 2016. Using fast weights to attend to the recent past. In Advances in Neural Information Processing Systems, pages 4331-4339.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Few-shot text classification with distributional signatures",
"authors": [
{
"first": "Yujia",
"middle": [],
"last": "Bao",
"suffix": ""
},
{
"first": "Menghua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Shiyu",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.06039"
]
},
"num": null,
"urls": [],
"raw_text": "Yujia Bao, Menghua Wu, Shiyu Chang, and Regina Barzilay. 2019. Few-shot text classification with distributional signatures. arXiv preprint arXiv:1908.06039.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Memory matching networks for oneshot image recognition",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Yingwei",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Chenggang",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Mei",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "4080--4088",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qi Cai, Yingwei Pan, Ting Yao, Chenggang Yan, and Tao Mei. 2018. Memory matching networks for one- shot image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition, pages 4080-4088.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Meta relational learning for few-shot link prediction in knowledge graphs",
"authors": [
{
"first": "Mingyang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Huajun",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4208--4217",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1431"
]
},
"num": null,
"urls": [],
"raw_text": "Mingyang Chen, Wen Zhang, Wei Zhang, Qiang Chen, and Huajun Chen. 2019. Meta relational learning for few-shot link prediction in knowledge graphs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 4208- 4217, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Question answering on knowledge bases and text using universal schema and memory networks",
"authors": [
{
"first": "Rajarshi",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Manzil",
"middle": [],
"last": "Zaheer",
"suffix": ""
},
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "358--365",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajarshi Das, Manzil Zaheer, Siva Reddy, and Andrew McCallum. 2017. Question answering on knowl- edge bases and text using universal schema and memory networks. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 358- 365, Vancouver, Canada. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Investigating meta-learning algorithms for low-resource natural language understanding tasks",
"authors": [
{
"first": "Zi-Yi",
"middle": [],
"last": "Dou",
"suffix": ""
},
{
"first": "Keyi",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "1192--1197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zi-Yi Dou, Keyi Yu, and Antonios Anastasopoulos. 2019. Investigating meta-learning algorithms for low-resource natural language understanding tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 1192- 1197.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A bayesian approach to unsupervised one-shot learning of object categories",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Fe-Fei",
"suffix": ""
}
],
"year": 2003,
"venue": "Computer Vision, 2003. Proceedings. Ninth IEEE International Conference on",
"volume": "",
"issue": "",
"pages": "1134--1141",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Fe-Fei et al. 2003. A bayesian approach to unsu- pervised one-shot learning of object categories. In Computer Vision, 2003. Proceedings. Ninth IEEE In- ternational Conference on, pages 1134-1141. IEEE.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Oneshot learning of object categories. IEEE transactions on pattern analysis and machine intelligence",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
},
{
"first": "Pietro",
"middle": [],
"last": "Perona",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "28",
"issue": "",
"pages": "594--611",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Fei-Fei, Rob Fergus, and Pietro Perona. 2006. One- shot learning of object categories. IEEE transac- tions on pattern analysis and machine intelligence, 28(4):594-611.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Model-agnostic meta-learning for fast adaptation of deep networks",
"authors": [
{
"first": "Chelsea",
"middle": [],
"last": "Finn",
"suffix": ""
},
{
"first": "Pieter",
"middle": [],
"last": "Abbeel",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Levine",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "1126--1135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th Interna- tional Conference on Machine Learning-Volume 70, pages 1126-1135. JMLR. org.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Hybrid attention-based prototypical networks for noisy few-shot relation classification",
"authors": [
{
"first": "Tianyu",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence,(AAAI-19)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianyu Gao, Xu Han, Zhiyuan Liu, and Maosong Sun. 2019. Hybrid attention-based prototypical networks for noisy few-shot relation classification. In Pro- ceedings of the Thirty-Second AAAI Conference on Artificial Intelligence,(AAAI-19), New York, USA.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Induction networks for few-shot text classification",
"authors": [
{
"first": "Ruiying",
"middle": [],
"last": "Geng",
"suffix": ""
},
{
"first": "Binhua",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yongbin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ping",
"middle": [],
"last": "Jian",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3895--3904",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1403"
]
},
"num": null,
"urls": [],
"raw_text": "Ruiying Geng, Binhua Li, Yongbin Li, Xiaodan Zhu, Ping Jian, and Jian Sun. 2019. Induction networks for few-shot text classification. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3895-3904, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Dynamic few-shot visual learning without forgetting",
"authors": [
{
"first": "Spyros",
"middle": [],
"last": "Gidaris",
"suffix": ""
},
{
"first": "Nikos",
"middle": [],
"last": "Komodakis",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "4367--4375",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Spyros Gidaris and Nikos Komodakis. 2018. Dynamic few-shot visual learning without forgetting. In Pro- ceedings of the IEEE Conference on Computer Vi- sion and Pattern Recognition, pages 4367-4375.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Meta-learning for lowresource neural machine translation",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3622--3631",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Yong Wang, Yun Chen, Victor OK Li, and Kyunghyun Cho. 2018. Meta-learning for low- resource neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3622-3631.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Few-shot representation learning for out-ofvocabulary words",
"authors": [
{
"first": "Ziniu",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Yizhou",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4102--4112",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1402"
]
},
"num": null,
"urls": [],
"raw_text": "Ziniu Hu, Ting Chen, Kai-Wei Chang, and Yizhou Sun. 2019. Few-shot representation learning for out-of- vocabulary words. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 4102-4112, Florence, Italy. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Percent agreement, pearson's correlation, and kappa as measures of inter-examiner reliability",
"authors": [
{
"first": "J",
"middle": [],
"last": "Ronald",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hunt",
"suffix": ""
}
],
"year": 1986,
"venue": "Journal of Dental Research",
"volume": "65",
"issue": "2",
"pages": "128--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald J Hunt. 1986. Percent agreement, pearson's correlation, and kappa as measures of inter-examiner reliability. Journal of Dental Research, 65(2):128- 130.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Attentive task-agnostic meta-learning for few-shot text classification",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Havaei",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Chartrand",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Chouaib",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Jesson",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Chapados",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Matwin",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Jiang, Mohammad Havaei, Gabriel Chartrand, Hassan Chouaib, Thomas Vincent, Andrew Jesson, Nicolas Chapados, and Stan Matwin. 2018. Atten- tive task-agnostic meta-learning for few-shot text classification.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Learning to remember rare events",
"authors": [
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Ofir",
"middle": [],
"last": "Nachum",
"suffix": ""
},
{
"first": "Aurko",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1703.03129"
]
},
"num": null,
"urls": [],
"raw_text": "\u0141ukasz Kaiser, Ofir Nachum, Aurko Roy, and Samy Bengio. 2017. Learning to remember rare events. arXiv preprint arXiv:1703.03129.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Visualizing data using t-sne",
"authors": [
{
"first": "Laurens",
"middle": [],
"last": "Van Der Maaten",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of machine learning research",
"volume": "9",
"issue": "",
"pages": "2579--2605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579-2605.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Mem2seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Madotto",
"suffix": ""
},
{
"first": "Chien-Sheng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1468--1478",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Mem2seq: Effectively incorporating knowl- edge bases into end-to-end task-oriented dialog sys- tems. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1468-1478.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A simple neural attentive metalearner",
"authors": [
{
"first": "Nikhil",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "Mostafa",
"middle": [],
"last": "Rohaninejad",
"suffix": ""
},
{
"first": "Xi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Pieter",
"middle": [],
"last": "Abbeel",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1707.03141"
]
},
"num": null,
"urls": [],
"raw_text": "Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. 2017. A simple neural attentive meta- learner. arXiv preprint arXiv:1707.03141.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Model-agnostic meta-learning for relation classification with limited supervision",
"authors": [
{
"first": "Abiola",
"middle": [],
"last": "Obamuyide",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5873--5879",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1589"
]
},
"num": null,
"urls": [],
"raw_text": "Abiola Obamuyide and Andreas Vlachos. 2019. Model-agnostic meta-learning for relation classifica- tion with limited supervision. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 5873-5879, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1202"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Low-shot learning with imprinted weights",
"authors": [
{
"first": "Hang",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "David G",
"middle": [],
"last": "Lowe",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "5822--5830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hang Qi, Matthew Brown, and David G Lowe. 2018. Low-shot learning with imprinted weights. In Pro- ceedings of the IEEE Conference on Computer Vi- sion and Pattern Recognition, pages 5822-5830.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Optimization as a model for few-shot learning",
"authors": [
{
"first": "Sachin",
"middle": [],
"last": "Ravi",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sachin Ravi and Hugo Larochelle. 2016. Optimization as a model for few-shot learning.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Fewshot and zero-shot multi-label learning for structured label spaces",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Rios",
"suffix": ""
},
{
"first": "Ramakanth",
"middle": [],
"last": "Kavuluru",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3132--3142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony Rios and Ramakanth Kavuluru. 2018. Few- shot and zero-shot multi-label learning for structured label spaces. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 3132-3142.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Dynamic routing between capsules",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Sabour",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Frosst",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3856--3866",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. 2017. Dynamic routing between capsules. In Ad- vances in Neural Information Processing Systems, pages 3856-3866.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Deep convolutional neural networks and data augmentation for environmental sound classification",
"authors": [
{
"first": "Justin",
"middle": [],
"last": "Salamon",
"suffix": ""
},
{
"first": "Juan",
"middle": [
"Pablo"
],
"last": "Bello",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE Signal Processing Letters",
"volume": "24",
"issue": "3",
"pages": "279--283",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Justin Salamon and Juan Pablo Bello. 2017. Deep con- volutional neural networks and data augmentation for environmental sound classification. IEEE Signal Processing Letters, 24(3):279-283.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Metalearning with memory-augmented neural networks",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Santoro",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Bartunov",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Botvinick",
"suffix": ""
},
{
"first": "Daan",
"middle": [],
"last": "Wierstra",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Lillicrap",
"suffix": ""
}
],
"year": 2016,
"venue": "International conference on machine learning",
"volume": "",
"issue": "",
"pages": "1842--1850",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. 2016. Meta- learning with memory-augmented neural networks. In International conference on machine learning, pages 1842-1850.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Prototypical networks for few-shot learning",
"authors": [
{
"first": "Jake",
"middle": [],
"last": "Snell",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Swersky",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "4077--4087",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. In Ad- vances in Neural Information Processing Systems, pages 4077-4087.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Matching the blanks: Distributional similarity for relation learning",
"authors": [
{
"first": "Livio",
"middle": [
"Baldini"
],
"last": "Soares",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "FitzGerald",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.03158"
]
},
"num": null,
"urls": [],
"raw_text": "Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learn- ing. arXiv preprint arXiv:1906.03158.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Meta-transfer learning for few-shot learning",
"authors": [
{
"first": "Qianru",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yaoyao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tat-Seng",
"middle": [],
"last": "Chua",
"suffix": ""
},
{
"first": "Bernt",
"middle": [],
"last": "Schiele",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "403--412",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qianru Sun, Yaoyao Liu, Tat-Seng Chua, and Bernt Schiele. 2019. Meta-transfer learning for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 403-412.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Learning to compare: Relation network for few-shot learning",
"authors": [
{
"first": "Flood",
"middle": [],
"last": "Sung",
"suffix": ""
},
{
"first": "Yongxin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Philip",
"middle": [
"H",
"S"
],
"last": "Torr",
"suffix": ""
},
{
"first": "Timothy",
"middle": [
"M"
],
"last": "Hospedales",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "1199--1208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. 2018. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1199-1208.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Aspect level sentiment classification with deep memory network",
"authors": [
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "214--224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duyu Tang, Bing Qin, and Ting Liu. 2016. Aspect level sentiment classification with deep memory net- work. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 214-224.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 5998-6008.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Matching networks for one shot learning",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Blundell",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Lillicrap",
"suffix": ""
},
{
"first": "Daan",
"middle": [],
"last": "Wierstra",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3630--3638",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Charles Blundell, Tim Lillicrap, Daan Wierstra, et al. 2016. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pages 3630-3638.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Learning to learn and predict: A metalearning approach for multi-label classification",
"authors": [
{
"first": "Jiawei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Wenhan",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4345--4355",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiawei Wu, Wenhan Xiong, and William Yang Wang. 2019. Learning to learn and predict: A meta- learning approach for multi-label classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 4345- 4355.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Openworld learning and application to product classification",
"authors": [
{
"first": "Hu",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2019,
"venue": "The World Wide Web Conference",
"volume": "",
"issue": "",
"pages": "3413--3419",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hu Xu, Bing Liu, Lei Shu, and P Yu. 2019. Open- world learning and application to product classifi- cation. In The World Wide Web Conference, pages 3413-3419. ACM.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Enhancing context modeling with a query-guided capsule network for document-level translation",
"authors": [
{
"first": "Zhengxin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jinchao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Fandong",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Shuhao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "1527--1537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhengxin Yang, Jinchao Zhang, Fandong Meng, Shuhao Gu, Yang Feng, and Jie Zhou. 2019. En- hancing context modeling with a query-guided cap- sule network for document-level translation. In Pro- ceedings of the 2019 Conference on Empirical Meth- ods in Natural Language Processing and the 9th In- ternational Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1527-1537.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Multi-level matching and aggregation network for few-shot relation classification",
"authors": [
{
"first": "Zhi-Xiu",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Zhen-Hua",
"middle": [],
"last": "Ling",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2872--2881",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1277"
]
},
"num": null,
"urls": [],
"raw_text": "Zhi-Xiu Ye and Zhen-Hua Ling. 2019. Multi-level matching and aggregation network for few-shot re- lation classification. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 2872-2881, Florence, Italy. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Diverse few-shot text classification with multiple metrics",
"authors": [
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Xiaoxiao",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Jinfeng",
"middle": [],
"last": "Yi",
"suffix": ""
},
{
"first": "Shiyu",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Saloni",
"middle": [],
"last": "Potdar",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Gerald",
"middle": [],
"last": "Tesauro",
"suffix": ""
},
{
"first": "Haoyu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1206--1215",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mo Yu, Xiaoxiao Guo, Jinfeng Yi, Shiyu Chang, Saloni Potdar, Yu Cheng, Gerald Tesauro, Haoyu Wang, and Bowen Zhou. 2018. Diverse few-shot text clas- sification with multiple metrics. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), pages 1206-1215.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Metagan: An adversarial approach to few-shot learning",
"authors": [
{
"first": "Ruixiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Yangqiu",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "2365--2374",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruixiang Zhang, Tong Che, Zoubin Ghahramani, Yoshua Bengio, and Yangqiu Song. 2018. Metagan: An adversarial approach to few-shot learning. In Advances in Neural Information Processing Systems, pages 2365-2374.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "t e x i t s h a 1 _ b a s e 6 4 = \"",
"num": null
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "e y S g g = = < / l a t e x i t > An overview of Dynamic Memory Induction Network with a 3-way 2-shot example.",
"num": null
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"text": "Effect of the Dynamic Memory Module in a 10-way 5-shot setup.",
"num": null
},
"TABREF0": {
"html": null,
"num": null,
"content": "<table><tr><td>Model</td><td>5-way Acc. 1-shot 5-shot</td><td>10-way Acc. 1-shot 5-shot</td></tr><tr><td colspan=\"3\">BERT 30.Table 1: Comparison of accuracy (%) on miniRCV1</td></tr><tr><td colspan=\"2\">with standard deviations.</td><td/></tr><tr><td>Model</td><td>5-way Acc. 1-shot 5-shot</td><td>10-way Acc. 1-shot 5-shot</td></tr><tr><td>BERT</td><td>38.</td><td/></tr></table>",
"type_str": "table",
"text": "79\u00b10.68 63.31\u00b10.73 23.48\u00b10.53 61.18\u00b10.82 ATAML 54.05\u00b10.14 72.79\u00b10.27 39.48\u00b10.23 61.74\u00b10.36 Rel. Net 59.19\u00b10.12 78.35\u00b10.27 44.69\u00b10.19 67.49\u00b10.23 Ind. Net 60.97\u00b10.16 80.91\u00b10.19 46.15\u00b10.26 69.42\u00b10.34 HATT 60.40\u00b10.17 79.46\u00b10.32 47.09\u00b10.28 68.58\u00b10.37 LwoF 63.35\u00b10.26 78.83\u00b10.38 48.61\u00b10.21 69.57\u00b10.35 DMIN 65.72\u00b10.28 82.39\u00b10.24 49.54\u00b10.31 72.52\u00b10.25 06\u00b10.27 64.24\u00b10.36 29.24\u00b10.19 64.53\u00b10.35 ATAML 79.60\u00b10.42 88.53\u00b10.57 63.52\u00b10.34 77.36\u00b10.57 Rel. Net 79.41\u00b10.42 87.93\u00b10.31 64.36\u00b10.58 78.62\u00b10.54 Ind. Net 81.28\u00b10.26 89.67\u00b10.28 64.53\u00b10.38 80.48\u00b10.25 HATT 81.57\u00b10.47 89.27\u00b10.58 65.75\u00b10.61 81.53\u00b10.56 LwoF 79.52\u00b10.29 87.34\u00b10.34 65.04\u00b10.43 80.69\u00b10.37 DMIN 83.46\u00b10.36 91.75\u00b10.23 67.31\u00b10.25 82.84\u00b10.38"
},
"TABREF1": {
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Comparison of accuracy(%) on ODIC with standard deviations."
}
}
}
}