{
"paper_id": "D19-1045",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:06:26.298325Z"
},
"title": "Hierarchical Attention Prototypical Networks for Few-Shot Text Classification",
"authors": [
{
"first": "Shengli",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Peking University",
"location": {}
},
"email": ""
},
{
"first": "Qingfeng",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Peking University",
"location": {}
},
"email": "sunqingfeng@pku.edu.cn"
},
{
"first": "Kevin",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Peking University",
"location": {}
},
"email": "kezhou@microsoft.com"
},
{
"first": "Tengchao",
"middle": [],
"last": "Lv",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Peking University",
"location": {}
},
"email": "lvtengchao@pku.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Most of the current effective methods for text classification task are based on large-scale labeled data and a great number of parameters, but when the supervised training data are few and difficult to be collected, these models are not available. In this paper, we propose a hierarchical attention prototypical networks (HAPN) for few-shot text classification. We design the feature level, word level, and instance level multi cross attention for our model to enhance the expressive ability of semantic space. We verify the effectiveness of our model on two standard benchmark fewshot text classification datasets-FewRel and CSID, and achieve the state-of-the-art performance. The visualization of hierarchical attention layers illustrates that our model can capture more important features, words, and instances separately. In addition, our attention mechanism increases support set augmentability and accelerates convergence speed in the training stage.",
"pdf_parse": {
"paper_id": "D19-1045",
"_pdf_hash": "",
"abstract": [
{
"text": "Most of the current effective methods for text classification task are based on large-scale labeled data and a great number of parameters, but when the supervised training data are few and difficult to be collected, these models are not available. In this paper, we propose a hierarchical attention prototypical networks (HAPN) for few-shot text classification. We design the feature level, word level, and instance level multi cross attention for our model to enhance the expressive ability of semantic space. We verify the effectiveness of our model on two standard benchmark fewshot text classification datasets-FewRel and CSID, and achieve the state-of-the-art performance. The visualization of hierarchical attention layers illustrates that our model can capture more important features, words, and instances separately. In addition, our attention mechanism increases support set augmentability and accelerates convergence speed in the training stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The dominant text classification models in deep learning (Kim, 2014; Zhang et al., 2015a; Yang et al., 2016; require a considerable amount of labeled data to learn a large number of parameters. However, such methods may have difficulty in learning the semantic space in the case that only few data are available. Few-shot learning has became an effective approach to solve this challenge, it can train a neural network with a few parameters using few data but achieve good performance. A typical example of this approach is prototypical networks (Snell et al., 2017) , which averages the vector of few support instances as the class prototype and computes distance between target query and each prototype, then classify the query to the nearest prototype's class. However, prototypical networks is rough and does not consider the adverse effects of various noises in the data, which weakens the discrimination and expressiveness of the prototype.",
"cite_spans": [
{
"start": 57,
"end": 68,
"text": "(Kim, 2014;",
"ref_id": "BIBREF7"
},
{
"start": 69,
"end": 89,
"text": "Zhang et al., 2015a;",
"ref_id": "BIBREF23"
},
{
"start": 90,
"end": 108,
"text": "Yang et al., 2016;",
"ref_id": "BIBREF20"
},
{
"start": 546,
"end": 566,
"text": "(Snell et al., 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a hierarchical attention prototypical networks for few-shot text classification by using attention mechanism in three levels. For feature level attention, we use convolutional neural networks to get the feature scores which is different for various classes. For word level attention, we adopt an attention mechanism to learn the importance of each word hidden state in an instance. For instance level multi cross attention, with the help of multi cross attention between support set and target query, we can determine the importance of different instances in the same class and enable the model to get a more discriminative prototype of each class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the actual scenario, we apply HAPN on intention detection of our open domain chatbots with different character. If we create a chatbot for old people, the user intentions will focus on children, health or expectation, so we can define specific intentions and supply related responses. And because of only few data are needed, we can expand the number of classes quickly. The model helps chatbot to identify user intentions precisely, makes the dialogue process smoother, more knowledgeable and more controllable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are three main parts of our contribution: first of all, we propose a hierarchical attention prototypical networks for few-shot text classification, then we achieve state-of-the-art performance on FewRel and CSID datasets, and the experiments prove our model is faster and more extensible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Text Classification is an important task in Natural Language Processing, and many models are proposed to solve it. The traditional methods mainly focus on feature engineerings such as bagof-words or n-grams (Wang and Manning, 2012) or SVMs (Tang et al., 2015) . The neural network based methods like Kim (2014) applies convolutional neural networks for sentence classification. Then, Johnson and Zhang (2015) use a one-hot word order CNN, and Zhang et al. (2015b) apply a character level CNN. C-LSTM (Zhou et al., 2015) combines CNN and RNN for sentence representation and text classification. Yang et al. (2016) explore the hierarchical structure of documents classification, they use a GRU-based attention to build representations of sentences and another GRU-based attention to aggregate them into a document representation. But above supervised learning methods require large-scale labeled data and can't classify unseen classes.",
"cite_spans": [
{
"start": 207,
"end": 231,
"text": "(Wang and Manning, 2012)",
"ref_id": "BIBREF19"
},
{
"start": 240,
"end": 259,
"text": "(Tang et al., 2015)",
"ref_id": "BIBREF16"
},
{
"start": 300,
"end": 310,
"text": "Kim (2014)",
"ref_id": "BIBREF7"
},
{
"start": 500,
"end": 519,
"text": "(Zhou et al., 2015)",
"ref_id": "BIBREF25"
},
{
"start": 594,
"end": 612,
"text": "Yang et al. (2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Classification",
"sec_num": "2.1"
},
{
"text": "Few-Shot Learning (FSL) aims to solve classification problems by training a classifier with few instances in each class, and it can apply to unseen classes. The early works aim to use transfer learning approaches, Caruana (1994) and Bengio (2011) adopt the target task from the pre-trained models. Then Koch et al. (2015) explore a method for learning siamese neural networks which employs an unique structure to rank similarity between inputs. Vinyals et al. (2016) use matching networks to map a small labeled support set and an unlabelled example to its label, and obviate the need for fine-tuning to adapt to new class types. Prototypical networks (Snell et al., 2017 ) learns a metric space in which the model can perform well by computing distance between query and prototype representations of each class and classify the query to the nearest prototype's class. Sung et al. (2018) propose a two-branch relation networks, which learns to compare query against few-shot labeled sample support data. Dual TriNet structure can efficiently and directly augment multi-layer visual features to boost the few-shot classification.But all of the above works mainly concentrate on computer vision field, the research and applications in NLP field are extremely limited. Recently, propose an adaptive metric learning approach that automatically determines the best weighted combination from a set of metrics obtained from meta-training tasks for a newly seen few-shot task such as intention classification, Han et al. (2018) present a relation classification dataset -FewRel, and adapt most recent state-of-the-art few-shot learning methods for it, Gao et al. (2019) propose a hybrid attention-based prototypical networks for noisy few-shot relation classification. However, these methods do not consider mining semantic information or reducing the impact of noise more precisely. And in most of the realistic settings, we may increase the number of instances gradually, so model capacity needs more attention.",
"cite_spans": [
{
"start": 214,
"end": 228,
"text": "Caruana (1994)",
"ref_id": "BIBREF1"
},
{
"start": 233,
"end": 246,
"text": "Bengio (2011)",
"ref_id": "BIBREF0"
},
{
"start": 303,
"end": 321,
"text": "Koch et al. (2015)",
"ref_id": "BIBREF8"
},
{
"start": 445,
"end": 466,
"text": "Vinyals et al. (2016)",
"ref_id": "BIBREF17"
},
{
"start": 652,
"end": 671,
"text": "(Snell et al., 2017",
"ref_id": "BIBREF13"
},
{
"start": 869,
"end": 887,
"text": "Sung et al. (2018)",
"ref_id": "BIBREF15"
},
{
"start": 1502,
"end": 1519,
"text": "Han et al. (2018)",
"ref_id": "BIBREF5"
},
{
"start": 1644,
"end": 1661,
"text": "Gao et al. (2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Few-Shot Learning",
"sec_num": "2.2"
},
{
"text": "In few-shot text classification task, our goal is to learn a function : G(D, S, x) \u2192 y. D is the labeled data, we divide D into three parts: D train , D validation , and D test , and each part has specific label space. We use D train to optimize parameters, D validation to select best hyper parameters, and D test to evaluate the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3"
},
{
"text": "The \"episode\" training strategy that Vinyals et al. (2016) proposed has proved to be effective. For each training episode, we first sample a label set L from D train , then use L to sample the support set S and the query set Q, finally, we feed S and Q to the model and minimize the loss. If L includes N different classes and each class of S contains K instances, we call the target problem N -way K-shot learning. For this paper, we consider N = 5 or 10, and K = 5 or 10.",
"cite_spans": [
{
"start": 37,
"end": 58,
"text": "Vinyals et al. (2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3"
},
{
"text": "For exactly, in an episode, we are given a support set S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "S = \\{(x_1^1, l_1), (x_1^2, l_1), \\ldots, (x_1^{n_1}, l_1), \\ldots, (x_m^1, l_m), (x_m^2, l_m), \\ldots, (x_m^{n_m}, l_m)\\}, \\quad l_1, l_2, \\ldots, l_m \\in L",
"eq_num": "(1)"
}
],
"section": "Task Definition",
"sec_num": "3"
},
{
"text": "which consists of n_i text instances for each class l_i \u2208 L; x_i^j denotes the j-th support instance belonging to class l_i, and instance x_i^j includes T_{i,j} words {w_1, w_2, . . . , w_{T_{i,j}}}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3"
},
{
"text": "Then x is an unlabeled instance of query set Q to classify, and y \u2208 L is the output label followed by the prediction of G.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3"
},
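{
"text": "To make the episode construction concrete, the following minimal Python sketch samples one N-way K-shot episode from a class-indexed corpus. The function name sample_episode, the query size Q, and the dictionary layout are illustrative assumptions rather than details given in the paper.\n\nimport random\n\ndef sample_episode(data, N=5, K=5, Q=5):\n    # data: dict mapping class label -> list of instances (hypothetical layout)\n    classes = random.sample(list(data.keys()), N)      # sample the label set L from D_train\n    support, query = [], []\n    for c in classes:\n        picked = random.sample(data[c], K + Q)\n        support += [(x, c) for x in picked[:K]]        # K-shot support set S\n        query += [(x, c) for x in picked[K:]]          # query set Q\n    return classes, support, query",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3"
},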
{
"text": "The overall architecture of the Hierarchical Attention Prototypical Networks is shown in Figure 1 . We introduce different components in the following subsections: Instance Encoder Each instance in support set or query set will be first represented to a input vector by transforming each word into embeddings. Considering the lightweight and speed of the model, we achieve this part with one layer convolutional neural networks (CNN). For ease of comparison, its details are the same as Han et al. (2018) proposed. Hierarchical Attention In order to get more important information from rare data, we adopt a hi-erarchical attention mechanism. Feature level attention enhances or reduces the importance of different feature in each class, word level attention highlight the important words for meaning of the instance, and instance level multi cross attention can extract the important support instances for different query instances, these three attention mechanisms work together to improve the classification performance of our model. Prototypical Networks Prototypical networks compute a prototype vector as the representation of each class, and this vector is the mean vector of the embedded support instances belonging to its class. We compare the distance between all prototype vectors and a target query vector, then classify this query to the nearest one.",
"cite_spans": [
{
"start": 487,
"end": 504,
"text": "Han et al. (2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 89,
"end": 97,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Model Overview",
"sec_num": "4.1"
},
{
"text": "The instance encoder part consists of two layers: embedding layer and instance encoding layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instance Encoder",
"sec_num": "4.2"
},
{
"text": "Given an instance x = {w t , w 2 , . . . , w T } with T words. We use an embedding matrix W E ,w t = W E w t to embed each word to a vector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Layer",
"sec_num": "4.2.1"
},
{
"text": "{w 1 , w 2 , . . . , w T }, w t \u2208 R d (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Layer",
"sec_num": "4.2.1"
},
{
"text": "where d is the word embedding dimension.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Layer",
"sec_num": "4.2.1"
},
{
"text": "Following we apply a convolutional neural network Zeng et al. (2014) as encoding layer to get the hidden annotations of each word by a convolution kernel with the window size m",
"cite_spans": [
{
"start": 50,
"end": 68,
"text": "Zeng et al. (2014)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": "4.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h t = CNN(w t\u2212 m\u22121 2 , . . . , w t\u2212 m+1 2 )",
"eq_num": "(3)"
}
],
"section": "Encoding Layer",
"sec_num": "4.2.2"
},
{
"text": "Especially, if the word w t has a position embedding p t , we should concat w t and p t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": "4.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "wp t = [w t \u2295 p t ]",
"eq_num": "(4)"
}
],
"section": "Encoding Layer",
"sec_num": "4.2.2"
},
{
"text": "where \u2295 is a concatation, the h t will be as follow",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": "4.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h t = CNN(wp t\u2212 m\u22121 2 , . . . , wp t\u2212 m+1 2 )",
"eq_num": "(5)"
}
],
"section": "Encoding Layer",
"sec_num": "4.2.2"
},
{
"text": "Then, we aggregate all h t to get the overall representation of instance x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": "4.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x = {h 1 , h 2 , . . . , h t }",
"eq_num": "(6)"
}
],
"section": "Encoding Layer",
"sec_num": "4.2.2"
},
{
"text": "Finally, we define those two layers as a comprehensive function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": "4.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x = g \u03b8 (x)",
"eq_num": "(7)"
}
],
"section": "Encoding Layer",
"sec_num": "4.2.2"
},
{
"text": "\u03b8 in this function are the networks parameters to be learned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": "4.2.2"
},
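{
"text": "As a concrete illustration of Eqs. (3)-(7), the following minimal NumPy sketch applies a windowed convolution over the (optionally position-augmented) word embeddings of one instance. The function name cnn_encode, the zero padding, and the ReLU non-linearity are our own assumptions, not details stated in the paper.\n\nimport numpy as np\n\ndef cnn_encode(word_vecs, W, b, m=3):\n    # word_vecs: (T, d_in) word [+ position] embeddings, i.e. w_t or wp_t\n    # W: (m * d_in, d_h) convolution filter, b: (d_h,) bias; m is the window size\n    T, d_in = word_vecs.shape\n    pad = (m - 1) // 2\n    padded = np.vstack([np.zeros((pad, d_in)), word_vecs, np.zeros((pad, d_in))])\n    H = np.stack([np.maximum(padded[t:t + m].reshape(-1) @ W + b, 0.0)  # Eq. (3)/(5), ReLU assumed\n                  for t in range(T)])\n    return H  # (T, d_h): the instance representation x = {h_1, ..., h_T} of Eqs. (6)-(7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": "4.2.2"
},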
{
"text": "The prototypical networks (Snell et al., 2017) has achieved excellent performance in few-shot image classification and few-shot text classification (Han et al., 2018; Gao et al., 2019) tasks respectively, so our model is based on prototypical networks and aims to get promotion. The fundamental idea of prototypical networks is simple but efficient: we can use a prototype vector c i as the representative feature of class l i , each prototype vector can be calculated by averaging all the embedded instances in its support set",
"cite_spans": [
{
"start": 26,
"end": 46,
"text": "(Snell et al., 2017)",
"ref_id": "BIBREF13"
},
{
"start": 148,
"end": 166,
"text": "(Han et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 167,
"end": 184,
"text": "Gao et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prototypical Networks",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c i = 1 n i n i j=1 g \u03b8 (x j i )",
"eq_num": "(8)"
}
],
"section": "Prototypical Networks",
"sec_num": "4.3"
},
{
"text": "Then the probability distribution over the classes in L can be produced by a softmax function over distances between all prototypes vector and the target query q",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prototypical Networks",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p \u03b8 (y = l i q) = exp(\u2212d(g \u03b8 (q), c i ) \u03a3 L l=1 exp(\u2212d(g \u03b8 (q), c l )",
"eq_num": "(9)"
}
],
"section": "Prototypical Networks",
"sec_num": "4.3"
},
{
"text": "As Snell et al. 2017mentioned, squared Euclidean distance is a reasonable choice, however, we will introduce a more effective method in section 4.4.1, which combines squared Euclidean distance with class feature scores, and achieves definite improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prototypical Networks",
"sec_num": "4.3"
},
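{
"text": "To make Eqs. (8)-(9) concrete, the following minimal NumPy sketch computes the class prototypes and the softmax over negative squared Euclidean distances. The function names prototypes and classify and the dictionary-based data layout are illustrative assumptions.\n\nimport numpy as np\n\ndef prototypes(support_emb):\n    # support_emb: dict mapping class label -> (K, d) array of encoded support instances g_theta(x)\n    return {c: e.mean(axis=0) for c, e in support_emb.items()}             # Eq. (8)\n\ndef classify(query_emb, protos):\n    # query_emb: (d,) encoded query g_theta(q); protos: dict class -> (d,) prototype c_i\n    classes = list(protos)\n    d = np.array([np.sum((query_emb - protos[c]) ** 2) for c in classes])  # squared Euclidean distance\n    p = np.exp(-d) / np.exp(-d).sum()                                      # Eq. (9)\n    return classes[int(np.argmax(p))], dict(zip(classes, p))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prototypical Networks",
"sec_num": "4.3"
},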
{
"text": "We focus on sentence-level text classification in this work. The proposed model gets a feature scores vector and transfers the support set of each class into a vector representation, on which we build a classifier to perform few-shot text classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Attention",
"sec_num": "4.4"
},
{
"text": "Obviously, the same dimension belonging to different classes has different importance when we calculate the euclidean distance. In other words, some feature dimensions are more discriminative for distinguishing specific class in the feature level space, and other features are confusing and useless at the same time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Level Attention",
"sec_num": "4.4.1"
},
{
"text": "So we apply a CNN-based feature attention mechanism similar to Gao et al. (2019) proposed as a class feature extractor. It depends on all the instances in the support set of each class and will dynamiclly change with different classes. Given a support set S i \u2208 R n i \u00d7T \u00d7d of class l i as the output of above instance encoder part",
"cite_spans": [
{
"start": 63,
"end": 80,
"text": "Gao et al. (2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Level Attention",
"sec_num": "4.4.1"
},
{
"text": "S i = {x 1 , x 2 , . . . , x n i } (10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Level Attention",
"sec_num": "4.4.1"
},
{
"text": "we apply a max pooling layer over each instance in S i to get a new feature map S ci \u2208 R n i \u00d7d . Then we use three convolution layers to obtain \u03bb i \u2208 R d , which is the scores vector of class l i . The specific structure of above class feature extractor is shown in Table 1. layer name kernel size stride output size So we get a new distance calculation method as follow",
"cite_spans": [],
"ref_spans": [
{
"start": 267,
"end": 275,
"text": "Table 1.",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Feature Level Attention",
"sec_num": "4.4.1"
},
{
"text": "pool T \u00d7 1 1 \u00d7 1 K \u00d7 d \u00d7 1 conv 1 K \u00d7 1 1 \u00d7 1 K \u00d7 d \u00d7 32 ReLU conv 2 K \u00d7 1 1 \u00d7 1 K \u00d7 d \u00d7 64 ReLU conv 3 K \u00d7 1 K \u00d7 1 1 \u00d7 d \u00d7 1 ReLU",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Level Attention",
"sec_num": "4.4.1"
},
{
"text": "d(c i , q \u2032 ) = (c i \u2212 q \u2032 ) 2 \u22c5 \u03bb i (11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Level Attention",
"sec_num": "4.4.1"
},
{
"text": "where q \u2032 is the query vector passed through the word level attention mechanism which will be introduced in the next subsection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Level Attention",
"sec_num": "4.4.1"
},
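{
"text": "A minimal NumPy sketch of the feature-weighted distance in Eq. (11). Producing the class score vector \u03bb_i with the Table 1 extractor is omitted here, and the function name is an illustrative assumption.\n\nimport numpy as np\n\ndef feature_weighted_distance(c_i, q, lam_i):\n    # c_i: (d,) class prototype; q: (d,) query vector after word level attention\n    # lam_i: (d,) class feature scores from the class feature extractor\n    return np.sum(((c_i - q) ** 2) * lam_i)  # Eq. (11): squared Euclidean distance reweighted per feature",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Level Attention",
"sec_num": "4.4.1"
},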
{
"text": "The importance of different words to the meanings of an instance is unequal, thus it is worth pointing out which words are useful and which words are useless. Therefore, we apply an attention mechanism (Yang et al., 2016) to get those important words and assemble them to compose a more informative instance vector s j , and the definitions are as follows",
"cite_spans": [
{
"start": 202,
"end": 221,
"text": "(Yang et al., 2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Level Attention",
"sec_num": "4.4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "u j t = tanh(W w h j t + b w )",
"eq_num": "(12)"
}
],
"section": "Word Level Attention",
"sec_num": "4.4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v j t = u j t \u22ba u w (13) \u03b1 j t = exp(v j t ) \u03a3 t exp(v j t )",
"eq_num": "(14)"
}
],
"section": "Word Level Attention",
"sec_num": "4.4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s^j = \\sum_t \\alpha_t^j h_t^j",
"eq_num": "(15)"
}
],
"section": "Word Level Attention",
"sec_num": "4.4.2"
},
{
"text": "where h j t is the t hidden word embedding of instance x j , it was encoded through the instance encoder, and has the same hidden size with x j .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Level Attention",
"sec_num": "4.4.2"
},
{
"text": "Firstly, the W w and b w followed by activation function tanh make up a MLP layer to transform h j t to the new hidden representation u j t . Immediately, we apply a dot product operation between u j t and a word level weight vector u w to compute similarity v j t as the importance weight of u j t . Then we use a softmax function to normalize v j t to \u03b1 j t . Finally, we calculate the instance level vector s j through the weighted sum of \u03b1 j t and h j t . As memory networks (Sukhbaatar et al., 2015) proposed, u w can help us to select the important words in each instance, it will be randomly initialized at the beginning of the training stage, and be optimized together with the networks parameters \u03b8.",
"cite_spans": [
{
"start": 479,
"end": 504,
"text": "(Sukhbaatar et al., 2015)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Level Attention",
"sec_num": "4.4.2"
},
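{
"text": "The following minimal NumPy sketch of Eqs. (12)-(15) turns the word hidden states of one instance into an attention-weighted instance vector. The function name word_attention and the explicit parameter shapes are illustrative assumptions.\n\nimport numpy as np\n\ndef word_attention(H, W_w, b_w, u_w):\n    # H: (T, d_h) word hidden states h_t^j from the instance encoder\n    # W_w: (d_h, d_a), b_w: (d_a,), u_w: (d_a,) learned word level attention parameters\n    U = np.tanh(H @ W_w + b_w)           # Eq. (12)\n    v = U @ u_w                          # Eq. (13)\n    alpha = np.exp(v) / np.exp(v).sum()  # Eq. (14): softmax over words\n    return alpha @ H                     # Eq. (15): instance vector s^j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Level Attention",
"sec_num": "4.4.2"
},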
{
"text": "The previous prototypical networks use the mean vector of support instances as the class prototype. Because of the diversity and lack of the support instances, the gap between each support vector and prototype maybe wide, meanwhile, different query instances can be expressed in several ways, so not every instance in a support set contributes equally to the class prototype when they face a target query instance. To highlight the importance of support instances which are useful clues to classify a query instance correctly, we propose a multi cross attention mechanism.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instance Level Multi Cross Attention",
"sec_num": "4.4.3"
},
{
"text": "Given a support set S \u2032 i \u2208 R n i \u00d7d for class l i and a query vector q \u2032 \u2208 R d , they are all encoded through the instance encoder and word level attention. We consider each support vector s j i in S \u2032 i has its own weight \u03b2 j i to query q \u2032 . So the formula (8) will be rewritten as follow",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instance Level Multi Cross Attention",
"sec_num": "4.4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c i = n i j=1 \u03b2 j i s j i",
"eq_num": "(16)"
}
],
"section": "Instance Level Multi Cross Attention",
"sec_num": "4.4.3"
},
{
"text": "where we define r j i = \u03b2 j i s j i as the weighted prototype vector and the definitions of \u03b2 j i are as follows",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instance Level Multi Cross Attention",
"sec_num": "4.4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b2 j i = exp(\u03b3 j i ) \u03a3 n i j=1 exp(\u03b3 j i )",
"eq_num": "(17)"
}
],
"section": "Instance Level Multi Cross Attention",
"sec_num": "4.4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b3 j i = sum{\u03c3(f \u03d5 (mca))} (18) mca = [s j i\u03c6 \u2295 q \u2032 \u03c6 \u2295 \u03c4 1 \u2295 \u03c4 2 ]",
"eq_num": "(19)"
}
],
"section": "Instance Level Multi Cross Attention",
"sec_num": "4.4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c4 1 = s j i\u03c6 \u2212 q \u2032 \u03c6 , \u03c4 2 = s j i\u03c6 \u2299 q \u2032 \u03c6",
"eq_num": "(20)"
}
],
"section": "Instance Level Multi Cross Attention",
"sec_num": "4.4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s j i\u03c6 = f \u03c6 (s j i ), q \u2032 \u03c6 = f \u03c6 (q \u2032 )",
"eq_num": "(21)"
}
],
"section": "Instance Level Multi Cross Attention",
"sec_num": "4.4.3"
},
{
"text": "where f \u03c6 is a linear layer, \u22c5 is element-wise absolute value and \u2299 is element-wise product, we use these two operation to get the difference information \u03c4 1 and \u03c4 2 between s j i and q \u2032 , then concatenate them all as the multi cross attention information mca, then f \u03d5 (\u22c5) is a linear layer, \u03c3(\u22c5) is a tanh activation function, sum{\u22c5} means a sum operation of all elements in the vector. Finally, \u03b3 j i is the weight of j instance in support set s i , and we use a softmax function to nomalize it to \u03b2 j i . Through the multi cross attention mechanism, the prototype can pay more attention to those query-related support instances and improve the capacity of support set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instance Level Multi Cross Attention",
"sec_num": "4.4.3"
},
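{
"text": "A minimal NumPy sketch of Eqs. (16)-(21): it projects the support and query vectors, builds the multi cross attention features, scores each support instance, and returns the query-aware prototype. The function name mca_prototype and the explicit weight matrices standing in for the two linear layers are illustrative assumptions.\n\nimport numpy as np\n\ndef mca_prototype(S, q, W_proj, b_proj, W_att, b_att):\n    # S: (n_i, d) support vectors s_i^j after word level attention; q: (d,) query vector q'\n    # W_proj, b_proj: projection linear layer of Eq. (21); W_att, b_att: scoring linear layer of Eq. (18)\n    S_f, q_f = S @ W_proj + b_proj, q @ W_proj + b_proj  # Eq. (21)\n    tau1, tau2 = np.abs(S_f - q_f), S_f * q_f            # Eq. (20)\n    mca = np.concatenate([S_f, np.broadcast_to(q_f, S_f.shape), tau1, tau2], axis=1)  # Eq. (19)\n    gamma = np.tanh(mca @ W_att + b_att).sum(axis=1)     # Eq. (18)\n    beta = np.exp(gamma) / np.exp(gamma).sum()           # Eq. (17)\n    return beta @ S                                      # Eq. (16): query-aware prototype c_i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instance Level Multi Cross Attention",
"sec_num": "4.4.3"
},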
{
"text": "In this section, we will introduce the experiment results of our model. Firstly, we evaluate our model on FewRel dataset and CSID dataset, and achieve state-of-the-art results, our model outperforms the best baselines models by 1.11% and 1.64% respectively on 10 way 5 shot setting. Then we will show how our model works by case study and visualization of attention layers. We further demonstrate that the hierarchical attention increases the augmentability of support set and the convergence speed of the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "FewRel Few-Shot Relation Classification (Han et al., 2018 ) is a new large-scale supervised dataset 1 . It consists of 70000 instances on 100 relations derived from Wikipedia, and each relation includes 700 instances. It also marks the head and tail entities in each instance, and the average number of tokens is 24.99. FewRel has 64 relations for training, 16 relations for validation, and 20 relations for test separately. CSID Character Studio Intention Detection is a dataset extracted from a real-world open domain chatbot. In character studio platform, this chatbot should transform its character style sometime so it can adapt to different user group and environment, thus dialog query intention detection turns into an important task. CSID consists of 24596 instances for 128 intentions, and each intention includes 30 to 260 instances, the average number of tokens in each instance is 11.52. We use 80, 18 and 30 intentions for training, validation, and test respectively. Table 2 : Accuracies (%) of different models on the CSID dataset on four different settings.",
"cite_spans": [
{
"start": 40,
"end": 57,
"text": "(Han et al., 2018",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 982,
"end": 989,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},
{
"text": "Firstly, we compare our model with several traditional models such as Finetune and kNN, Then we compare our model with five state-of-the-art fewshot learning models based on neural networks, they are MetaN (Munkhdalai and Yu, 2017) , GNN (Garcia and Bruna, 2018) , SNAIL (Mishra et al., 2018) , Proto (Snell et al., 2017) and PHATT (Gao et al., 2019) respectively.",
"cite_spans": [
{
"start": 206,
"end": 231,
"text": "(Munkhdalai and Yu, 2017)",
"ref_id": "BIBREF11"
},
{
"start": 238,
"end": 262,
"text": "(Garcia and Bruna, 2018)",
"ref_id": "BIBREF4"
},
{
"start": 271,
"end": 292,
"text": "(Mishra et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 301,
"end": 321,
"text": "(Snell et al., 2017)",
"ref_id": "BIBREF13"
},
{
"start": 332,
"end": 350,
"text": "(Gao et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.2"
},
{
"text": "We compare our models with seven baselines, and the implementation details are as follows. For FewRel dataset, we cite the results reported by Snell et al. (2017) which includes Finetune, kNN, MetaN, GNN, and SNAIL, then we cite the results reported by Gao et al. (2019) which includes Proto and PHATT. For a fair comparison, in our model, we use the same word embeddings and hyperparameters of instance encoder as PHATT proposed. In detail, we use the Glove (Pennington et al., 2014) consisting of 6B tokens and 400K vocabulary as our initialized word representation, and each word has a 50 dimensions vector. In addition, the position embedding dimension of a word is 10, the max length of each instance is 40. Finally, we evaluate all models on 5 way 5 shot and 10 way 5 shot settings.",
"cite_spans": [
{
"start": 143,
"end": 162,
"text": "Snell et al. (2017)",
"ref_id": "BIBREF13"
},
{
"start": 253,
"end": 270,
"text": "Gao et al. (2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation details",
"sec_num": "5.3"
},
{
"text": "For CSID dataset, we implement all above seven baseline models and our models. we use the Baidu Encyclopedia as our initialized word representation, it includes 745M tokens and 5422K vocabulary, and each word has a 300d dimensions vector, the max length of each instance is 20. Finally, we evaluate all models on 5 way 5 shot, 5 way 10 shot, 10 way 5 shot and 10 way 10 shot settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation details",
"sec_num": "5.3"
},
{
"text": "For the Finetune and kNN baselines, they learn the parameters on the support set with the CNN encoder. For the neural networks based baselines, we use the same hyper parameters as Han et al. (2018) proposed.",
"cite_spans": [
{
"start": 180,
"end": 197,
"text": "Han et al. (2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation details",
"sec_num": "5.3"
},
{
"text": "For our hierarchical attention prototypical networks, the window size of the CNN instance encoder is 3, the dimension of the hidden layer is 230, the learning rate is 0.1, the learning rate decay step is 3000 and the decay rate is 0.1. In addition, we train our model 12000 episodes and each episode consists of 20 classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation details",
"sec_num": "5.3"
},
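{
"text": "For reference, the hyperparameters listed above can be collected into a single configuration object; this is only an organizational sketch and the key names are our own.\n\n# Training configuration transcribed from the description above (key names are illustrative).\nHAPN_CONFIG = {\n    'cnn_window_size': 3,\n    'hidden_dim': 230,\n    'learning_rate': 0.1,\n    'lr_decay_step': 3000,\n    'lr_decay_rate': 0.1,\n    'train_episodes': 12000,\n    'classes_per_training_episode': 20,\n}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation details",
"sec_num": "5.3"
},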
{
"text": "In order to study the effects of different components, we refer to our models as HAPN-{FA,WA, IMCA}, FA indicates feature level attention, WA indicates word level attention and IMCA indicates instance level multi cross attention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation details",
"sec_num": "5.3"
},
{
"text": "The experimental accuracies on CSID and FewRel are shown in Tabel 2 and Table 4 respectively. In this subsection, we will show the effects of hierarchical attention and support set augmentability of three Proto-based models and the convergence speed comparison.",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 79,
"text": "Tabel 2 and Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and analysis",
"sec_num": "5.4"
},
{
"text": "Benefit from hierarchical attention, our model achieves excellent performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effects of hierarchical attention",
"sec_num": "5.4.1"
},
{
"text": "The case study of word level attention and instance level multi cross attention are shown in Table 3, this is a 2 way 3 shot task on FewRel dataset. The query instance is an instance of \"mother\" class in fact, and our model should classify it into \"mother\" class or \"child\" class. It is a difficult Table 4 : Accuracies (%) for 5 way 5 shot and 10 way 5 shot settings on FewRel test set. * reported by Han et al. (2018) and \u25c7 reported by Gao et al. (2019) .",
"cite_spans": [
{
"start": 402,
"end": 419,
"text": "Han et al. (2018)",
"ref_id": "BIBREF5"
},
{
"start": 438,
"end": 455,
"text": "Gao et al. (2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 299,
"end": 306,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effects of hierarchical attention",
"sec_num": "5.4.1"
},
{
"text": "task because of there are many similarities between the expressions of two classes. With the help of word level attention, we highlight the importance of the word \"daughter\", which appears in the query instance and the first support instance of class \"mother\" at the same time, then this support instance get the highest attention score and contributes more to the prototype vector of \"mother\" class, finally our model can classify the query instance into the correct class in this confusing task. As shown in Figure 2 , by using the feature level attention, we also get the feature attention scores of \"mother\" class and \"child\" class respectively. The features with high scores have deep color, and the features with low scores have light color. Obviously, different classes may have different feature score vector, in other words, the same feature of different classes have different importance. So our feature level attention can highlight importance of the useful features and weaken the importance of the noise features, then the distance between the prototype vector and the query vector will measure the difference between them more efficiently. We treat the final prototype embedding vector as the features of each instance, then we can get the distribution of features by principal pomponent analysis in feature space as shown in Figure 3 . As we can see, the instances without hierarchical attention are more distributed and may cross with each other, but the instances with hierarchical attention are more centralized and discriminative, which proves that our model learns a better semantic space, which helps to distinguish confus-ing data.. ",
"cite_spans": [],
"ref_spans": [
{
"start": 510,
"end": 518,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 1340,
"end": 1349,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Effects of hierarchical attention",
"sec_num": "5.4.1"
},
{
"text": "More support instances can contribute more useful information to the prototype vector, meanwhile, more noise will be added in.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Augmentability of support set",
"sec_num": "5.4.2"
},
{
"text": "In this section, we define the support set augmentability (SSA) as the additive value of accuracy when we increase the same number of the support set for different models. So we compare our model's SSA with other models such as Proto and PHATT on the 10 way FewRel task, and the shot number ranges from 5 to 25.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Augmentability of support set",
"sec_num": "5.4.2"
},
{
"text": "By using the hierarchical attention, our model obtaines a strong robustness and can pay more attention to the important information of support set and reduce those negative effects of noisy data, thus as shown in Figure 4 , the support set augmentability of our model is larger than other models. Benefit from the above advantages, we can deploy our model in the cold start stage, and gradually accumulate labeled support data in practical applications, then improve the performance of the model day by day, and thus improve the utilization rate of few data in realistic settings.",
"cite_spans": [],
"ref_spans": [
{
"start": 213,
"end": 221,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Augmentability of support set",
"sec_num": "5.4.2"
},
{
"text": "At the training stage, we also compare the convergence speed between Proto, PHATT, and HAPN on the 10 way 5 shot and 10 way 15 shot FewRel task. As shown in Figure 5 , our model can be optimized more quickly than the other models. From 10 way 5 shot task to 10 way 15 shot settings, the Proto model takes almost twice time to achieve 70% accuracy on validation set, in other words, the convergence speed will decrease sharply when we increase the number of support instances, but ",
"cite_spans": [],
"ref_spans": [
{
"start": 157,
"end": 165,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Convergence speed comparison",
"sec_num": "5.4.3"
},
{
"text": "Previous few-shot learning models for text classification roughly apply text representations or neglect the noisy information. We propose to do hierarchical attention prototypical networks consisting of feature level, word level and instance level multi cross attention, which highlight the important information of few data and learn a more discriminative prototype representation. In the experiments, our model achieves the state-of-theart performance on FewRel and CSID datasets. HAPN not only increases support set augmentability but also accelerates convergence speed in the training stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In the future, we will contribute new text dataset to few-shot learning, explore better feature extrac-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://github.com/thunlp/FewRel",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Sawyer Zeng and Yue Liu for providing valuable hardware support and useful advice, and thank Xuexiang Xu and Yang Bai for helping us test online FewRel dataset. This work is also supported by the National Key Research and Development Program of China (No. 2018YFB1402902 and No. 2018YFB1403002) and the Natural Science Foundation of Jiangsu Province (No. BK20151132).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Deep learning of representations for unsupervised and transfer learning",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2011,
"venue": "Unsupervised and Transfer Learning -Workshop held at ICML 2011",
"volume": "",
"issue": "",
"pages": "17--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio. 2011. Deep learning of representa- tions for unsupervised and transfer learning. In Un- supervised and Transfer Learning -Workshop held at ICML 2011, Bellevue, Washington, USA, July 2, 2011, pages 17-36.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning many related tasks at the same time with backpropagation",
"authors": [
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
}
],
"year": 1994,
"venue": "Advances in Neural Information Processing Systems",
"volume": "7",
"issue": "",
"pages": "657--664",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rich Caruana. 1994. Learning many related tasks at the same time with backpropagation. In Advances in Neural Information Processing Systems 7, [NIPS Conference, Denver, Colorado, USA, 1994], pages 657-664.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Semantic feature augmentation in few-shot learning",
"authors": [
{
"first": "Zitian",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yanwei",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Yinda",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yu-Gang",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Xiangyang",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Leonid",
"middle": [],
"last": "Sigal",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zitian Chen, Yanwei Fu, Yinda Zhang, Yu-Gang Jiang, Xiangyang Xue, and Leonid Sigal. 2018. Semantic feature augmentation in few-shot learning. volume abs/1804.05298.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Hybrid attention-based prototypical networks for noisy few-shot relation classification",
"authors": [
{
"first": "Tianyu",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu Xu Han",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Association for the Advancement of Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianyu Gao, Zhiyuan Liu Xu Han, and Maosong Sun. 2019. Hybrid attention-based prototypical networks for noisy few-shot relation classification. In Pro- ceedings of the Association for the Advancement of Artificial Intelligence.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Few-shot learning with graph neural networks",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Garcia",
"suffix": ""
},
{
"first": "Joan",
"middle": [],
"last": "Bruna",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Garcia and Joan Bruna. 2018. Few-shot learn- ing with graph neural networks. In Proceedings of ICLR.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Fewrel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation",
"authors": [
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Pengfei",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Ziyun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4803--4809",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018. Fewrel: A large-scale supervised few-shot relation classifica- tion dataset with state-of-the-art evaluation. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, Brussels, Bel- gium, October 31 -November 4, 2018, pages 4803- 4809.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Effective use of word order for text categorization with convolutional neural networks",
"authors": [
{
"first": "Rie",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2015,
"venue": "The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "103--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rie Johnson and Tong Zhang. 2015. Effective use of word order for text categorization with convolu- tional neural networks. In NAACL HLT 2015, The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, Colorado, USA, May 31 -June 5, 2015, pages 103-112.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1408.5822"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv:1408.5822.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Siamese neural networks for one-shot image recognition",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Koch",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2015,
"venue": "ICML Deep Learning workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregory Koch, Richard Zemel, and Ruslan Salakhut- dinov. 2015. Siamese neural networks for one-shot image recognition. In ICML Deep Learning work- shop.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Analogical reasoning on chinese morphological and semantic relations",
"authors": [
{
"first": "Shen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Renfen",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Wensi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xiaoyong",
"middle": [],
"last": "Du",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "138--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shen Li, Zhe Zhao, Renfen Hu, Wensi Li, Tao Liu, and Xiaoyong Du. 2018. Analogical reasoning on chi- nese morphological and semantic relations. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 138-143. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A simple neural attentive metalearner",
"authors": [
{
"first": "Nikhil",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "Mostafa",
"middle": [],
"last": "Rohaninejad",
"suffix": ""
},
{
"first": "Xi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Pieter",
"middle": [],
"last": "Abbeel",
"suffix": ""
}
],
"year": 2018,
"venue": "6th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. 2018. A simple neural attentive meta- learner. In 6th International Conference on Learn- ing Representations, ICLR 2018, Vancouver, BC, Canada, April 30 -May 3, 2018, Conference Track Proceedings.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Meta networks",
"authors": [
{
"first": "Tsendsuren",
"middle": [],
"last": "Munkhdalai",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2554--2563",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsendsuren Munkhdalai and Hong Yu. 2017. Meta networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Syd- ney, NSW, Australia, 6-11 August 2017, pages 2554- 2563.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1532-1543.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Prototypical networks for few-shot learning",
"authors": [
{
"first": "Jake",
"middle": [],
"last": "Snell",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Swersky",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"S"
],
"last": "Zemel",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "4080--4090",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jake Snell, Kevin Swersky, and Richard S. Zemel. 2017. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Sys- tems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 4080-4090.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Yara parser: A fast and accurate dependency parser",
"authors": [
{
"first": "Sainbayar",
"middle": [],
"last": "Sukhbaatar",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2015,
"venue": "End-to-end memory networks",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1503.08895"
]
},
"num": null,
"urls": [],
"raw_text": "Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. Yara parser: A fast and ac- curate dependency parser. End-to-end memory net- works, arXiv:1503.08895.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Learning to compare: Relation network for few-shot learning",
"authors": [
{
"first": "Flood",
"middle": [],
"last": "Sung",
"suffix": ""
},
{
"first": "Yongxin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "H",
"middle": [
"S"
],
"last": "Philip",
"suffix": ""
},
{
"first": "Timothy",
"middle": [
"M"
],
"last": "Torr",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hospedales",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "1199--1208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip H. S. Torr, and Timothy M. Hospedales. 2018. Learning to compare: Relation network for few-shot learning. In 2018 IEEE Conference on Computer Vi- sion and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 1199-1208.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Document modeling with gated recurrent neural network for sentiment classification",
"authors": [
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1422--1432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 1422-1432.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Matching networks for one shot learning",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Blundell",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Lillicrap",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Daan",
"middle": [],
"last": "Wierstra",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016",
"volume": "",
"issue": "",
"pages": "3630--3638",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Charles Blundell, Tim Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. 2016. Match- ing networks for one shot learning. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Sys- tems 2016, December 5-10, 2016, Barcelona, Spain, pages 3630-3638.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Joint embedding of words and labels for text classification",
"authors": [
{
"first": "Guoyin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chunyuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wenlin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dinghan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Xinyuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Henao",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018",
"volume": "1",
"issue": "",
"pages": "2321--2331",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guoyin Wang, Chunyuan Li, Wenlin Wang, Yizhe Zhang, Dinghan Shen, Xinyuan Zhang, Ricardo Henao, and Lawrence Carin. 2018. Joint embed- ding of words and labels for text classification. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 2321-2331.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Baselines and bigrams: Simple, good sentiment and topic classification",
"authors": [
{
"first": "Sida",
"middle": [
"I"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2012,
"venue": "The 50th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference",
"volume": "2",
"issue": "",
"pages": "90--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sida I. Wang and Christopher D. Manning. 2012. Base- lines and bigrams: Simple, good sentiment and topic classification. In The 50th Annual Meeting of the As- sociation for Computational Linguistics, Proceed- ings of the Conference, July 8-14, 2012, Jeju Island, Korea -Volume 2: Short Papers, pages 90-94.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Hierarchical attention networks for document classification",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"J"
],
"last": "Smola",
"suffix": ""
},
{
"first": "Eduard",
"middle": [
"H"
],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1480--1489",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alexander J. Smola, and Eduard H. Hovy. 2016. Hi- erarchical attention networks for document classifi- cation. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12- 17, 2016, pages 1480-1489.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Diverse few-shot text classification with multiple metrics",
"authors": [
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Xiaoxiao",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Jinfeng",
"middle": [],
"last": "Yi",
"suffix": ""
},
{
"first": "Shiyu",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Saloni",
"middle": [],
"last": "Potdar",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Gerald",
"middle": [],
"last": "Tesauro",
"suffix": ""
},
{
"first": "Haoyu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1206--1215",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mo Yu, Xiaoxiao Guo, Jinfeng Yi, Shiyu Chang, Saloni Potdar, Yu Cheng, Gerald Tesauro, Haoyu Wang, and Bowen Zhou. 2018. Diverse few-shot text clas- sification with multiple metrics. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Vol- ume 1 (Long Papers), pages 1206-1215.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Relation classification via convolutional deep neural network",
"authors": [
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Siwei",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Guangyou",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2014,
"venue": "COLING 2014, 25th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers",
"volume": "",
"issue": "",
"pages": "2335--2344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In COLING 2014, 25th International Conference on Computa- tional Linguistics, Proceedings of the Conference: Technical Papers, August 23-29, 2014, Dublin, Ire- land, pages 2335-2344.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Character-level convolutional networks for text classification",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Junbo",
"middle": [
"Jake"
],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "649--657",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015a. Character-level convolutional networks for text classification. In Advances in Neural Infor- mation Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 649-657.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Character-level convolutional networks for text classification",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Junbo",
"middle": [
"Jake"
],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "649--657",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015b. Character-level convolutional networks for text classification. In Advances in Neural Infor- mation Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 649-657.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A C-LSTM neural network for text classification",
"authors": [
{
"first": "Chunting",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Chonglin",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Francis",
"middle": [
"C M"
],
"last": "Lau",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chunting Zhou, Chonglin Sun, Zhiyuan Liu, and Fran- cis C. M. Lau. 2015. A C-LSTM neural network for text classification. volume abs/1511.08630.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Hierarchical Attention Prototypical Networks architecture",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "(a) Feature attention scores of \"mother\" class (b) Feature attention scores of \"child\" class",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "Feature attention scores of different classes",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF3": {
"text": "Instances distribution of embedding vector without hierarchical attention (a) and with hierarchical attention (b). The left blue points marked \u00d7 are instances of \"mother\" class and the right orange points marked \u2022 are instances of \"child\" class.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF4": {
"text": "Support set augmentability of Proto, PHATT and HAPN on FewRel validation set.this problem can be effectively alleviated when we use hierarchical attention mechanism. Training Proto, PHATT and HAPN on FewRel dataset. Lines marked denote loss on the training set and lines marked \u25b3 denote accuracy on the validation set.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"text": "",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF2": {
"text": "Cherie Gil is the daughter of Filipino actors Eddie Mesa and Rosemarie Gil, and sister of fellow actors, Michael de Mesa and the late Mark Gil.When they reachedadulthood, Pelias and Neleus found their mother Tyro and then killed her stepmother, Sidero, for having mistreated her. Mehmed died and his son Murad II refused to honour his father's obligations to the Byzantines.Henry Norreys was a lifelong friend of Queen Elizabeth and was the father of six sons, who included Sir John Norreys, a famous English soldier. Princess Dagmar of Demark, the daughter of Frederick VIII of Denmark and Louise of Sweden, lived on Kongestlund.",
"content": "<table><tr><td>Class</td><td>Word Attention</td><td>IMCAS</td></tr><tr><td/><td>Support Set</td><td/></tr><tr><td colspan=\"2\">(1) mother It was here that the Queen Consort Jetsun Pema gave birth to a son on 5 February 2016,</td><td/></tr><tr><td/><td>Jigme Namgyel Wangchuck.</td><td/></tr><tr><td>(2) child</td><td>In 1421 Jim Henson and his son Brian were impressed enough with Barron's style to offer him a</td><td/></tr><tr><td/><td>job directing the pilot episode of \"The Storyteller\".</td><td/></tr><tr><td/><td>Query</td><td/></tr><tr><td>(1) or (2)</td><td>From 1922 to 1963,</td><td/></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF3": {
"text": "",
"content": "<table><tr><td colspan=\"3\">: Visualization of word level and instance level multi cross attention scores (IMCAS) for 2 way 3 shot</td></tr><tr><td colspan=\"3\">setting, the bold words are head entities and tail entities.</td></tr><tr><td>Model</td><td colspan=\"2\">5 Way 5 Shot 10 Way 5 Shot</td></tr><tr><td>Finetune * kNN * MetaN * GNN * SNAIL * Proto \u25c7 PHATT \u25c7</td><td>68.66 \u00b1 0.41 68.77 \u00b1 0.41 80.57 \u00b1 0.48 81.28 \u00b1 0.62 79.40 \u00b1 0.22 89.05 \u00b1 0.09 90.12 \u00b1 0.04</td><td>55.04 \u00b1 0.31 55.87 \u00b1 0.31 69.23 \u00b1 0.52 64.02 \u00b1 0.77 68.33 \u00b1 0.25 81.46 \u00b1 0.13 83.05 \u00b1 0.05</td></tr><tr><td>HAPN-FA</td><td>89.79 \u00b1 0.13</td><td>82.47 \u00b1 0.20</td></tr><tr><td>HAPN-WA</td><td>90.86 \u00b1 0.12</td><td>83.79 \u00b1 0.19</td></tr><tr><td colspan=\"2\">HAPN-IMCA 90.92 \u00b1 0.11</td><td>84.07 \u00b1 0.19</td></tr><tr><td>HAPN</td><td>91.02 \u00b1 0.11</td><td>84.16 \u00b1 0.18</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
}
}
}
}