{
"paper_id": "D19-1047",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:57:31.536082Z"
},
"title": "Enhancing Local Feature Extraction with Global Representation for Neural Text Classification",
"authors": [
{
"first": "Guocheng",
"middle": [],
"last": "Niu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Baidu Inc",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "niuguocheng@baidu.com"
},
{
"first": "Hengru",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beijing University of Posts and Telecommunications",
"location": {}
},
"email": "xuhengru@bupt.edu.cn"
},
{
"first": "Bolei",
"middle": [],
"last": "He",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Baidu Inc",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "hebolei@baidu.com"
},
{
"first": "Xinyan",
"middle": [],
"last": "Xiao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Baidu Inc",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "xiaoxinyan@baidu.com"
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Baidu Inc",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "wuhua@baidu.com"
},
{
"first": "Sheng",
"middle": [],
"last": "Gao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beijing University of Posts and Telecommunications",
"location": {}
},
"email": "gaosheng@bupt.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "For text classification, traditional local feature driven models learn long dependency by deeply stacking or hybrid modeling. This paper proposes a novel Encoder1-Encoder2 architecture, where global information is incorporated into the procedure of local feature extraction from scratch. In particular, En-coder1 serves as a global information provider, while Encoder2 performs as a local feature extractor and is directly fed into the classifier. Meanwhile, two modes are also designed for their interactions. Thanks to the awareness of global information, our method is able to learn better instance specific local features and thus avoids complicated upper operations. Experiments conducted on eight benchmark datasets demonstrate that our proposed architecture promotes local feature driven models by a substantial margin and outperforms the previous best models in the fully-supervised setting.",
"pdf_parse": {
"paper_id": "D19-1047",
"_pdf_hash": "",
"abstract": [
{
"text": "For text classification, traditional local feature driven models learn long dependency by deeply stacking or hybrid modeling. This paper proposes a novel Encoder1-Encoder2 architecture, where global information is incorporated into the procedure of local feature extraction from scratch. In particular, En-coder1 serves as a global information provider, while Encoder2 performs as a local feature extractor and is directly fed into the classifier. Meanwhile, two modes are also designed for their interactions. Thanks to the awareness of global information, our method is able to learn better instance specific local features and thus avoids complicated upper operations. Experiments conducted on eight benchmark datasets demonstrate that our proposed architecture promotes local feature driven models by a substantial margin and outperforms the previous best models in the fully-supervised setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Text classification is a fundamental task in natural language processing, which is widely used in various applications such as spam detection, sentiment analysis and topic classification. One of the mainstream approaches firstly utilizes explicit local extractors to identity key local patterns and classifies based on them afterwards. In this paper, we call this line of research as local feature driven models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Lots of proposed methods can be grouped into this scope. Ngrams have been traditionally exploited in statistical machine learning approaches (Pang et al., 2002; Wang and Manning, 2012) . For deep neural networks, encoding local features into low-dimensional distributed ngrams ? These authors contributed equally to this work. \u2020 This work was done while the author was an intern at Baidu Inc.",
"cite_spans": [
{
"start": 141,
"end": 160,
"text": "(Pang et al., 2002;",
"ref_id": "BIBREF16"
},
{
"start": 161,
"end": 184,
"text": "Wang and Manning, 2012)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Case1: Apple is really amazing! I am fed up to carry my clunky camera. Case2: Apple is famous around world and deserves to be called \"nutritional powerhouses\". embeddings (Joulin et al., 2016; Qiao et al., 2018) and simply bagging of them have been proved effective and highly efficient. Convolutional Neural Networks (CNN) (LeCun et al., 2010) are promising methods for their strong capacities in capturing local invariant regularities (Kim, 2014) . More recently, Wang (2018) proposes the Disconnected Recurrent Neural Network (DRNN), which utilizes RNN to extract local features for larger windows and has reported best results on several benchmarks.",
"cite_spans": [
{
"start": 171,
"end": 192,
"text": "(Joulin et al., 2016;",
"ref_id": "BIBREF8"
},
{
"start": 193,
"end": 211,
"text": "Qiao et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 324,
"end": 344,
"text": "(LeCun et al., 2010)",
"ref_id": "BIBREF11"
},
{
"start": 437,
"end": 448,
"text": "(Kim, 2014)",
"ref_id": "BIBREF9"
},
{
"start": 466,
"end": 477,
"text": "Wang (2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite having good interpretability and remarkable performance, current local feature extraction still has one shortcoming. As shown in Table 1 , the real meaning of Apple can only be correctly recognized from overall view instead of narrow window. If the local extractor in charge of Apple cannot receive camera and nutritional from the very beginning, it would require complicated and costly upper structures to help revise the imprecisely local representation and create newer high-level features, such as deeply stacking (Johnson and Zhang, 2017; Conneau et al., 2016) and hybrid integration (Xiao and Cho, 2016) . To a certain extend, it is inefficient and hard to train especially in the case of insufficient corpus.",
"cite_spans": [
{
"start": 526,
"end": 551,
"text": "(Johnson and Zhang, 2017;",
"ref_id": "BIBREF7"
},
{
"start": 552,
"end": 573,
"text": "Conneau et al., 2016)",
"ref_id": "BIBREF3"
},
{
"start": 597,
"end": 617,
"text": "(Xiao and Cho, 2016)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 137,
"end": 144,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address this issue, we believe a more efficient approach is to optimize the local extraction process directly. In this paper, we propose a novel architecture named Encoder1-Encoder2 1 , which innovatively contains two encoders for the identical input sequence respectively, instead of using only one single encoder in previous work. Concretely, the Encoder1 can be any kind of neural network models designed for briefly grasping global background, while the Encoder2 should be a typical local feature driven model. The key point is, the earlier generated global representations from Encoder1 is then incorporated into the local extraction procedure of Encoder2. In this way, local extractors can notice more long-range information on the basis of its natural advantages. As a result, better instance specific local features can be captured and directly utilized for classification owing to global awareness, which means further upper complicated operations can be avoided.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We conduct experiments on eight public text classification datasets introduced by Zhang et al. (2015) .",
"cite_spans": [
{
"start": 82,
"end": 101,
"text": "Zhang et al. (2015)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The experimental results show that our proposed architecture promotes local feature driven models by a substantial margin. In fullysupervised settings, our best models achieves new state-of-the-art performances on all benchmark datasets. We further demonstrate the ability and generalization of our architecture in the semisupervised domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions can be concluded as follows: 1. We propose a novel Encoder1-Encoder2 architecture, where better instance specific local features are captured by incorporating global representations into local extraction procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. Our architecture has great flexibility. Different associations among Encoder1, Encoder2 and Interaction Modes are studied, where any kind of combination promotes vanilla CNN or DRNN significantly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. Our architecture is more robust to the window size of local extractors and the corpus scale.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Local Feature Driven Models FastText uses bag of n-grams embeddings as text representation (Joulin et al., 2016) , which has been proved effective and efficient. Qiao et al. (2018) propose a new method of learning and utilizing task specific n-grams embeddings to conquer data sparsity. CNN (LeCun et al., 2010) are representative methods of this category. Convolution operators are performed at every window based location to extract local features, interleaved with pooling layer for capturing invariant regularities. From Kim (2014) , CNN are widely used in text classification. In addition to shallow structure, very deep and more complex CNN based models have also been studied to establish long distance association. Examples are deep characterlevel CNNs Zhang et al. (2015) ; Conneau et al. (2016) , deep pyramid CNN Johnson and Zhang (2017) and convolution-recurrent networks Xiao and Cho (2016) , in which recurrent layers are designed on top of convolutional layers for learning long-term dependencies between local features. CNN use simple linear operations on n-gram vectors of each window, which enlightens researchers to capture higher order local non-linear feature using RNN. first replace convolution filters with LSTM for query classification. Wang (2018) proposes DRNN, which exploits large window size equipped with GRU.",
"cite_spans": [
{
"start": 91,
"end": 112,
"text": "(Joulin et al., 2016)",
"ref_id": "BIBREF8"
},
{
"start": 162,
"end": 180,
"text": "Qiao et al. (2018)",
"ref_id": "BIBREF19"
},
{
"start": 287,
"end": 311,
"text": "CNN (LeCun et al., 2010)",
"ref_id": null
},
{
"start": 525,
"end": 535,
"text": "Kim (2014)",
"ref_id": "BIBREF9"
},
{
"start": 761,
"end": 780,
"text": "Zhang et al. (2015)",
"ref_id": "BIBREF34"
},
{
"start": 783,
"end": 804,
"text": "Conneau et al. (2016)",
"ref_id": "BIBREF3"
},
{
"start": 820,
"end": 848,
"text": "CNN Johnson and Zhang (2017)",
"ref_id": null
},
{
"start": 884,
"end": 903,
"text": "Xiao and Cho (2016)",
"ref_id": "BIBREF29"
},
{
"start": 1262,
"end": 1273,
"text": "Wang (2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To make full use of local and global information, Zhao et al. (2018) propose a sandwich network by carding a CNN in the middle of two LSTM layers, where the output of CNN provides local semantic representations while the top LSTM supplies global structure representations. However, the global information they mainly focus on is the syntax part, which is produced by reorganizing the already obtained local features. Besides, both of them are directly used for final classification, while we use pre-acquired global representations to help capture better local features. To the best of our knowledge, we are the first to incorporate global representation into the extraction procedure of local features for text classification.",
"cite_spans": [
{
"start": 50,
"end": 68,
"text": "Zhao et al. (2018)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Other Neural Network Models Recurrent Neural Networks (RNN) are naturally good at modeling variable-length sequential data and capturing long-term dependencies (Hochreiter and Schmidhuber, 1997; Chung et al., 2014) . Global features are encoded by semantically synthesizing each word in the sequence in turn and there is no explicit small regions feature extraction procedure in this process. Lai et al. (2015) equip RNN with max-pooling to tackle the bias problem where later words are more dominant than earlier words. Tang et al. (2015) utilize LSTM to encode semantics of sentences and their relations in doc-",
"cite_spans": [
{
"start": 160,
"end": 194,
"text": "(Hochreiter and Schmidhuber, 1997;",
"ref_id": "BIBREF6"
},
{
"start": 195,
"end": 214,
"text": "Chung et al., 2014)",
"ref_id": "BIBREF2"
},
{
"start": 393,
"end": 410,
"text": "Lai et al. (2015)",
"ref_id": "BIBREF10"
},
{
"start": 521,
"end": 539,
"text": "Tang et al. (2015)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "CNN/RNN/ATTENTION CNN/DRNN CNN/DRNN CNN/DRNN S/A S/A S/A Encoder1 Encoder2 Softmax ! \" ! # ! $ ! % ! \" ! # ! $ ! # ! $ ! & ! %'# ! %'\" ! % ( \" ( # ( %'# Input sequence \u22ef \u22ef \u22ef \u22ef",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Figure 1: Encoder1-Encoder2 architecture mainly contains three components. (1) Encoder1 serves as a global information provider. (2) Encoder2 is a local feature driven model whose output is directly fed into the classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modes",
"sec_num": null
},
{
"text": "(3) Mode is the interaction manner between them. S and A are abbreviation of SAME and ATTEND respectively. ument representation. Tai et al. (2015) introduce a tree-structured LSTM for sentiment classification. The attention mechanism proposed by Bahdanau et al. (2014) has achieved great success in machine translation (Vaswani et al., 2017) . For text classification which only has single input sequence, attention based models mainly focus on applying attention mechanism on top of CNN or RNN for selecting the more important information (Yang et al., 2016; Er et al., 2016) . Letarte et al. (2018) and Shen et al. (2018) also explore self-attention networks which is CNN/RNN free.",
"cite_spans": [
{
"start": 129,
"end": 146,
"text": "Tai et al. (2015)",
"ref_id": "BIBREF24"
},
{
"start": 319,
"end": 341,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF26"
},
{
"start": 540,
"end": 559,
"text": "(Yang et al., 2016;",
"ref_id": "BIBREF31"
},
{
"start": 560,
"end": 576,
"text": "Er et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 579,
"end": 600,
"text": "Letarte et al. (2018)",
"ref_id": "BIBREF12"
},
{
"start": 605,
"end": 623,
"text": "Shen et al. (2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modes",
"sec_num": null
},
{
"text": "3 Encoder1-Encoder2 Architecture",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modes",
"sec_num": null
},
{
"text": "In this paper, we propose a novel neural network architecture named Encoder1-Encoder2 for text classification, which is illustrated in Figure 1 . The identical input sequence will be encoded twice by two encoders respectively, but only the output of Encoder2 is used directly for the classifier. In particular, the Encoder1 serves as a pioneer for providing global information, while the Encoder2 focuses on extracting better local features by incorporating the former into the local extraction procedure. Besides, two Interaction Modes are developed for more targeted absorption of global information.",
"cite_spans": [],
"ref_spans": [
{
"start": 135,
"end": 143,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},
{
"text": "Without loss of generality, we introduce three types of models for Encoder1 in our architecture, each of which can be an independent global information provider and they are compared in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder1: Global Information Provider",
"sec_num": "3.2"
},
{
"text": "CNN Let x t be the d-dimensional word vector corresponding to the t-th word in a sequence of length n, x t h+1:t refers to the concatenation of words x t h+1 , x t h+2 , . . . , x t with size h and k number of filters are applied to the input sequence to generate features. Formally, filters W f are applied to window x t h+1:t to compute h t :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder1: Global Information Provider",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h t = Conv(x t h+1 , x t h+2 , . . . , x t ) (1) = relu(W f x t h+1:t + b f )",
"eq_num": "(2)"
}
],
"section": "Encoder1: Global Information Provider",
"sec_num": "3.2"
},
{
"text": "By same padding, filters are applied to n possible windows in the sequence and the global representation can be represented as enc 1 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder1: Global Information Provider",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "enc 1 = [h 1 ; h 2 ; . . . ; h n ]",
"eq_num": "(3)"
}
],
"section": "Encoder1: Global Information Provider",
"sec_num": "3.2"
},
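{
"text": "To make Equations (1)-(3) concrete, the following is a minimal PyTorch sketch of such a CNN global encoder with same padding, so that one feature vector h_t is produced per position. This is our own illustration, not the authors' code; the module name Encoder1CNN and all hyperparameter values are assumptions.
import torch
import torch.nn as nn

class Encoder1CNN(nn.Module):
    # Produces enc_1 = [h_1; ...; h_n]: one k-dimensional feature per position (Eq. 1-3).
    def __init__(self, d, k, h):
        super().__init__()
        # padding keeps roughly n output positions for a window of size h
        self.conv = nn.Conv1d(d, k, kernel_size=h, padding=h // 2)

    def forward(self, x):
        # x: (batch, n, d) word embeddings
        z = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, k, ~n)
        z = z[:, :, :x.size(1)]                        # trim to exactly n positions
        return z.transpose(1, 2)                       # enc_1: (batch, n, k)

enc1 = Encoder1CNN(d=300, k=256, h=3)
print(enc1(torch.randn(2, 10, 300)).shape)             # torch.Size([2, 10, 256])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder1: Global Information Provider",
"sec_num": "3.2"
},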
{
"text": "GRU Gated recurrent units (GRU) are a gating mechanism in RNN . Two types of gates are used in GRU: reset gate decides how much new information is updated, while update gate controls the flow of previous information. The hidden state h t is computed iteratively based on h t 1 and x t . As a result, the all previous information can be encoded. For saving space, here we abbreviate it as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder1: Global Information Provider",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h t = GRU (x 1 , x 2 , . . . , x t )",
"eq_num": "(4)"
}
],
"section": "Encoder1: Global Information Provider",
"sec_num": "3.2"
},
{
"text": "The global representation produced by GRU is hidden states of all time steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder1: Global Information Provider",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "enc 1 = [h 1 ; h 2 ; . . . ; h n ]",
"eq_num": "(5)"
}
],
"section": "Encoder1: Global Information Provider",
"sec_num": "3.2"
},
{
"text": "Attention We also introduce attention mechanism on GRU for enhancing valuable information following Zhou et al. (2016) . Define a context vector u w to measure the importance of each hidden state h t in GRU, which is randomly initialized and learned during training. A normalized importance weight \u21b5 t is obtained through a softmax function:",
"cite_spans": [
{
"start": 100,
"end": 118,
"text": "Zhou et al. (2016)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder1: Global Information Provider",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u21b5 t = exp(tanh(h t ) > u w ) P t exp(tanh(h t ) > u w )",
"eq_num": "(6)"
}
],
"section": "Encoder1: Global Information Provider",
"sec_num": "3.2"
},
{
"text": "The global representation produced by this attention mechanism is expressed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder1: Global Information Provider",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "enc 1 = [\u21b5 1 h 1 ; \u21b5 2 h 2 ; . . . ; \u21b5 n h n ]",
"eq_num": "(7)"
}
],
"section": "Encoder1: Global Information Provider",
"sec_num": "3.2"
},
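{
"text": "As an illustration of Equations (4)-(7), here is a minimal PyTorch sketch (ours, not the paper's implementation) of the GRU global encoder with attention re-weighting; the module name and sizes are assumptions.
import torch
import torch.nn as nn

class Encoder1AttGRU(nn.Module):
    # GRU hidden states h_1..h_n re-weighted by a learned context vector u_w (Eq. 4-7).
    def __init__(self, d, hidden):
        super().__init__()
        self.gru = nn.GRU(d, hidden, batch_first=True)
        self.u_w = nn.Parameter(torch.randn(hidden))    # randomly initialized, learned

    def forward(self, x):
        h, _ = self.gru(x)                              # h: (batch, n, hidden)
        scores = torch.tanh(h) @ self.u_w               # (batch, n)
        alpha = torch.softmax(scores, dim=1)            # importance weights (Eq. 6)
        return alpha.unsqueeze(-1) * h                  # enc_1 = [a_1 h_1; ...; a_n h_n]

enc1 = Encoder1AttGRU(d=300, hidden=256)
print(enc1(torch.randn(2, 10, 300)).shape)              # torch.Size([2, 10, 256])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder1: Global Information Provider",
"sec_num": "3.2"
},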
{
"text": "Vanilla local feature extractor strictly focuses on a limited size region. Here we propose a variant method. Apart from the expected local context, global information distilled by Encoder1 is also absorbed by a local extractor. In this way, the local features extracted by Encoder2 can notice the global background while still maintaining the position-invariant local patterns. For Encoder2, we introduce two kinds of local feature driven models, i.e., CNN and DRNN. The former is good at capturing local spatial structure, while the latter is highlighted in capturing local temporal part. Set g t as the required global information for a certain size window starting from x t , which will be introduced in 3.4 in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder2: Variant Local Extractor",
"sec_num": "3"
},
{
"text": "CNN Here we treat each g t 2 R d as a faked extra global word, and do convolution with window words together. Based on Equation 1, features produced by filters for window x t h+1:t can be represented as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder2: Variant Local Extractor",
"sec_num": "3"
},
{
"text": "h t = Conv(g t , x t h+1 , x t h+2 , . . . , x t ) (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder2: Variant Local Extractor",
"sec_num": "3"
},
{
"text": "DRNN Different from CNN, DRNN utilizes RNN to extract local features for each window (Wang, 2018) . To introduce global information into DRNN, faked global word g t is filled in the head of each window like CNN does. Because of the sequential nature of RNN, even for a limited window, global information can be encoded into RNN from scratch and motivate the latter words. Here we use GRU as the local feature extractor, and features produced for window x t h+1:t can be represented as:",
"cite_spans": [
{
"start": 85,
"end": 97,
"text": "(Wang, 2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder2: Variant Local Extractor",
"sec_num": "3"
},
{
"text": "h t = GRU (g t , x t h+1 , x t h+2 , . . . , x t ) (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder2: Variant Local Extractor",
"sec_num": "3"
},
{
"text": "To maintain translation invariant, a max-overtime pooling layer is then applied to CNN or DRNN layer, the pooling result is regarded as the output of Encoder2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder2: Variant Local Extractor",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "enc 2 = maxpool([h 1 ; h 2 ; . . . ; h n ])",
"eq_num": "(10)"
}
],
"section": "Encoder2: Variant Local Extractor",
"sec_num": "3"
},
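{
"text": "The following is a minimal PyTorch sketch (our illustration, not the released code) of the DRNN-style Encoder2 of Equations (9)-(10): the fake global word g_t is prepended to each window before the GRU, and max-over-time pooling gives enc_2. The window handling and all names and sizes are assumptions.
import torch
import torch.nn as nn

class Encoder2DRNN(nn.Module):
    # For each window x_{t-h+1:t}, prepend g_t, run a GRU, take its last state as h_t,
    # then max-over-time pool the window features into enc_2 (Eq. 9-10).
    def __init__(self, d, hidden, h):
        super().__init__()
        self.gru = nn.GRU(d, hidden, batch_first=True)
        self.h = h

    def forward(self, x, g):
        # x: (batch, n, d) word embeddings; g: (batch, n, d), one global vector per window
        batch, n, d = x.shape
        pad = x.new_zeros(batch, self.h - 1, d)                 # left-pad so every t has h words
        wins = torch.cat([pad, x], dim=1).unfold(1, self.h, 1)  # (batch, n, d, h)
        wins = wins.permute(0, 1, 3, 2)                         # (batch, n, h, d)
        wins = torch.cat([g.unsqueeze(2), wins], dim=2)         # prepend g_t: (batch, n, h+1, d)
        out, _ = self.gru(wins.reshape(batch * n, self.h + 1, d))
        feats = out[:, -1, :].reshape(batch, n, -1)             # h_t for every window
        return feats.max(dim=1).values                          # enc_2: (batch, hidden)

enc2 = Encoder2DRNN(d=300, hidden=256, h=5)
print(enc2(torch.randn(2, 10, 300), torch.randn(2, 10, 300)).shape)  # torch.Size([2, 256])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder2: Variant Local Extractor",
"sec_num": "3.3"
},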
{
"text": "Set enc 1 as the global representation produced by Encoder1, required information for a certain window x t h+1:t with size h is defined as g t :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interaction Modes between Encoders",
"sec_num": "3.4"
},
{
"text": "g t = G(enc 1 , x t h+1 , x t h+2 , . . . , x t ) (11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interaction Modes between Encoders",
"sec_num": "3.4"
},
{
"text": "where G is a function of interaction mode. Two modes are devised from different point of views. SAME Treat enc 1 as a \"reference book\" provided by Encoder1. The basic idea of SAME Mode is each window in Encoder2 will get indiscriminate guidance regardless of the local information itself. For this purpose, max-over-time pooling is operated on enc 1 directly to extract the most important information:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interaction Modes between Encoders",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g t = maxpool(enc 1 )",
"eq_num": "(12)"
}
],
"section": "Interaction Modes between Encoders",
"sec_num": "3.4"
},
{
"text": "ATTEND Mode ATTEND utilizes global information from another perspective. According to different local contexts, the guidances from En-coder1 can be be more targeted. Specifically, we use attention mechanism.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interaction Modes between Encoders",
"sec_num": "3.4"
},
{
"text": "For window x t:t+h 1 with size h, the context vector is the average pooling of local words embeddings and the importance weight \u21b5 t for each hidden state h t in Encoder1 can be computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interaction Modes between Encoders",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u21b5 t = exp(tanh(h t ) > avg(x t:t+h 1 )) P t exp(tanh(h t ) > avg(x t:t+h 1 ))",
"eq_num": "(13)"
}
],
"section": "Interaction Modes between Encoders",
"sec_num": "3.4"
},
{
"text": "To maximize the profits obtained from En-coder1, we concatenate both of maxpooling results and attention results. Then\u011d t in ATTEND mode can be represented as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interaction Modes between Encoders",
"sec_num": "3.4"
},
{
"text": "g t = Concat(maxpool(enc 1 ), X t \u21b5 t h t ) (14)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interaction Modes between Encoders",
"sec_num": "3.4"
},
{
"text": "Finally, to keep consistent dimensions with words in the text, we compress\u011d t using MLP and formalized as g t , which can be easily embedded into the local feature extraction in Encoder2. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interaction Modes between Encoders",
"sec_num": "3.4"
},
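{
"text": "A minimal sketch of the two interaction modes of Equations (11)-(14), written by us for illustration and assuming, for simplicity, that the Encoder1 hidden size equals the word dimension d so that Equation (13) needs no extra projection; the final MLP compresses the ATTEND output back to dimension d.
import torch
import torch.nn as nn

def same_mode(enc1):
    # Eq. 12: every window receives the same max-pooled summary of enc_1.
    return enc1.max(dim=1).values                                # (batch, d)

def attend_mode(enc1, win_avg, mlp):
    # Eq. 13-14: per-window attention over enc_1, queried by the average of the
    # window word embeddings, concatenated with the max-pooled summary and
    # compressed by an MLP to the word dimension.
    scores = torch.einsum('bnd,bwd->bwn', torch.tanh(enc1), win_avg)
    alpha = torch.softmax(scores, dim=-1)                        # (batch, windows, n)
    attended = alpha @ enc1                                      # (batch, windows, d)
    pooled = enc1.max(dim=1).values.unsqueeze(1).expand_as(attended)
    return mlp(torch.cat([pooled, attended], dim=-1))            # g_t: (batch, windows, d)

d = 300
enc1 = torch.randn(2, 10, d)       # Encoder1 states (hidden size assumed equal to d)
win_avg = torch.randn(2, 10, d)    # average word embedding of each window
mlp = nn.Linear(2 * d, d)
print(same_mode(enc1).shape, attend_mode(enc1, win_avg, mlp).shape)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interaction Modes between Encoders",
"sec_num": "3.4"
},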
{
"text": "After incorporating the global information obtained from Encoder1 into the local feature extraction of Encoder2, the output vector of latter can be regarded as the representation of the entire text. The vector is then fed into a softmax classifier to predict the probability of each category and cross entropy is used as loss function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Layer",
"sec_num": "3.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y = sof tmax(W c enc 2 + b c ) (15) H(y,\u0177) = X i y i log\u0177 i",
"eq_num": "(16)"
}
],
"section": "Classification Layer",
"sec_num": "3.5"
},
{
"text": "where\u0177 i is the predicted probability and y i is the true probability of class i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Layer",
"sec_num": "3.5"
},
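{
"text": "For completeness, a small PyTorch sketch of Equations (15)-(16); note that nn.CrossEntropyLoss fuses the softmax with the negative log-likelihood, so logits are passed directly. The sizes below are placeholders of our choosing, not values from the paper.
import torch
import torch.nn as nn

num_classes, hidden = 4, 256
classifier = nn.Linear(hidden, num_classes)          # W_c and b_c in Eq. 15
criterion = nn.CrossEntropyLoss()                    # softmax + cross entropy (Eq. 16)

enc2 = torch.randn(8, hidden)                        # Encoder2 output for a batch of 8 texts
labels = torch.randint(0, num_classes, (8,))         # gold class indices
logits = classifier(enc2)
loss = criterion(logits, labels)                     # H(y, y_hat), averaged over the batch
probs = torch.softmax(logits, dim=-1)                # predicted class probabilities
print(loss.item(), probs.shape)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Layer",
"sec_num": "3.5"
},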
{
"text": "We report experiments with proposed models in comparison with previous methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Datasets Publicly available datasets from Zhang et al. (2015) are used to evaluate our models. These datasets contain various domains and sizes, corresponding to sentiment analysis, news classification, question answering, and ontology extraction, which are summarized in Table 2 .",
"cite_spans": [
{
"start": 42,
"end": 61,
"text": "Zhang et al. (2015)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [
{
"start": 272,
"end": 279,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments Settings",
"sec_num": "4.1"
},
{
"text": "Model Settings For data preprocessing, all the texts of datasets are tokenized by NLTKs tokenizer (Loper and Bird, 2002 Table 3 , all trainable parameters including embeddings of words are initialized randomly without any pre-trained techniques (Mikolov et al., 2013; Peters et al., 2018; Devlin et al., 2018) .",
"cite_spans": [
{
"start": 98,
"end": 119,
"text": "(Loper and Bird, 2002",
"ref_id": "BIBREF14"
},
{
"start": 245,
"end": 267,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF15"
},
{
"start": 268,
"end": 288,
"text": "Peters et al., 2018;",
"ref_id": "BIBREF18"
},
{
"start": 289,
"end": 309,
"text": "Devlin et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 120,
"end": 127,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experiments Settings",
"sec_num": "4.1"
},
{
"text": "Training and Validation For each dataset, we randomly split the full training corpus into training and validation set, where the validation size is the same as the corresponding test size. Then the validation set is fixed for all models for fair comparison. The reported test accuracy is evaluated in the model which has lowest validation error. AdaDelta (Zeiler, 2012) with \u21e2 = 0.95 and \u270f = 1e 6 is chosen to optimize all the trainable parameters. Gradient norm clipping is employed to avoid the gradient explosion problem. L2 normalization is used in all models which include RNN structures. The batch size is set to 64 for Yelp P. and Yelp F. while 128 for other datasets. We train all the models using early stopping with timedelay 10. Table 4 is the summary of the experimental results. We use underscores to represent the best published models, and bold the best records. Best models in our proposed architecture beat previous state-of-the-art models on all eight text classification benchmarks. For published models, best results are achieved almost all by local feature driven models including Region-emb, VDCNN and DRNN. Self-Attention model SANet performs well, but does not achieve advantageous results as in sequence to sequence Table 5 . For compared previous models, first block lists n-grams based models including bigram-FastText (Joulin et al., 2016) and region embedding (Qiao et al., 2018) . Self-attention Networks SANet (Letarte et al., 2018) is reported in the second block. RNN based models LSTM (Zhang et al., 2015) , D-LSTM (Yogatama et al., 2017) and CNN based models char-CNN (Zhang et al., 2015) and VDCNN (Conneau et al., 2016) are listed in third and forth block respectively. Strong local feature driven models CNN (Kim, 2014) and DRNN (Wang, 2018) are chosen as base model and directly compared with our architecture in last two blocks.",
"cite_spans": [
{
"start": 1346,
"end": 1367,
"text": "(Joulin et al., 2016)",
"ref_id": "BIBREF8"
},
{
"start": 1389,
"end": 1408,
"text": "(Qiao et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 1441,
"end": 1463,
"text": "(Letarte et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 1519,
"end": 1539,
"text": "(Zhang et al., 2015)",
"ref_id": "BIBREF34"
},
{
"start": 1549,
"end": 1572,
"text": "(Yogatama et al., 2017)",
"ref_id": "BIBREF32"
},
{
"start": 1603,
"end": 1623,
"text": "(Zhang et al., 2015)",
"ref_id": "BIBREF34"
},
{
"start": 1634,
"end": 1656,
"text": "(Conneau et al., 2016)",
"ref_id": "BIBREF3"
},
{
"start": 1746,
"end": 1757,
"text": "(Kim, 2014)",
"ref_id": "BIBREF9"
},
{
"start": 1767,
"end": 1779,
"text": "(Wang, 2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 740,
"end": 747,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 1241,
"end": 1248,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Experiments Settings",
"sec_num": "4.1"
},
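{
"text": "A minimal sketch of the optimization settings described above (AdaDelta with rho = 0.95 and eps = 1e-6, plus gradient norm clipping), written by us for illustration; the tiny placeholder model and the clipping threshold of 5.0 are assumptions, since the paper does not report the exact max norm.
import torch
import torch.nn as nn

model = nn.Linear(256, 4)                                         # placeholder for the full network
optimizer = torch.optim.Adadelta(model.parameters(), rho=0.95, eps=1e-6)
criterion = nn.CrossEntropyLoss()

x, y = torch.randn(64, 256), torch.randint(0, 4, (64,))           # one mini-batch (batch size 64)
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)  # avoid gradient explosion
optimizer.step()
print(loss.item())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments Settings",
"sec_num": "4.1"
},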
{
"text": "scenes, neither do RNN based methods. We argue that it is because key phrases and word order play an important role in text classification. For our models, the experimental results show that enhanced local extractors with global encoder outperform vanilla local models by a advantageous margin. When CNN is chosen as local extractor, the performance gains are particularly significant for relatively difficult tasks such as Amz. F.(+2.4%) and Yah. A.(+2.0%). Encoder1-CNN performs even better than VDCNN with 29 convolutional layers. The results are satisfying considering that our CNN used as local extractor here is a shallow model with only one layer. Moreover, complicated VDCNN performs best among published models on larger datasets Amz. P.(95.7%) and Amz. F.(63.0%) but not as expected on smaller AG(91.3%), while our Encoder1-CNN has stable superior performance on all datasets. When DRNN is chosen as local extractor, the bonus from the global encoder is not so big like CNN, but still considerable and stable. Encoder1-DRNN beats DRNN on all datasets with a highest gain up to 0.7%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results and Analysis",
"sec_num": "4.2"
},
{
"text": "To better analyze the impact of specific En-coder1 (global encoder) and different Interaction Modes on architecture performance, Table 5 details all combinations results of Encoder1-Encoder2-Mode on three datasets. We find the local extractor benefits quite a lot for any introduced global encoders. Overall, RNN and Attention based global encoders perform well-matched for local extractor, and both of them often perform better than CNN based global encoder. For example, Attention-CNN wins CNN-CNN 1.0% on Yelp F. and RNN-DRNN wins CNN-DRNN 0.4% on Yah. A. This is intuitive since RNN and Attention are more appropriate in capturing global information compared with CNN, which is critical for local extractor. The structures which specialize in modeling long-term dependency are more recommended as the global encoder.",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 137,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Experimental Results and Analysis",
"sec_num": "4.2"
},
{
"text": "For two Interaction Modes, we find ATTEND performs slightly better than SAME up to 0.4%, which can verify the differentiated motivation. En-coder1 (global encoder) can be viewed as a \"reference book\" about the whole text. Two Modes utilize the information from different perspectives to (Kim, 2014) . \"Concat\" simply concatenates the output of Encoder1 and Encoder2 and directly classifies according to the concatenated result. \"Same\" indicates the Interaction Mode in our architecture. assist the local extractor. SAME Mode selects the most important information of global encoder and provides same guidance for each window in En-coder2, while the ATTEND Mode tends to make use of the \"reference\" with purpose based on different local contexts as if we refer to a reference book with initiative questions.",
"cite_spans": [
{
"start": 287,
"end": 298,
"text": "(Kim, 2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results and Analysis",
"sec_num": "4.2"
},
{
"text": "In addition to introducing another encoder into vanilla local feature driven models, the greatest novelty of our architecture lies in that the global encoding is used to generate local features directly. Based on this motivation, the local features have the global awareness from the very beginning. To verify that our novel architecture makes key contribution to the performance improvement, we carry out model ablation experiments. Without loss of generality, we use CNN as local extractor here and validate on Yelp F. and Yah. A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Ablation",
"sec_num": "4.3"
},
{
"text": "datasets. The results are illustrated in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 41,
"end": 49,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Model Ablation",
"sec_num": "4.3"
},
{
"text": "Firstly, we list the results of Vanilla CNN, which is regarded as the most primitive state. Secondly, another additional encoder is introduced but they both process inputs independently and then their output representations are concatenated for classification. We call it \"Concat\", abbreviated as \"C\". For example, RNN-CNN-C stands for concatenating another RNN. Finally, we upgrade the way to use the introduced encoder as our proposed architecture. Here we list Mode SAME, abbreviated as \"S\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Ablation",
"sec_num": "4.3"
},
{
"text": "We find CNN-CNN-C loses 0.5% on Yelp P. but wins 0.4% on Yah. A. compared with vanilla CNN. CNN-CNN-C can be viewed as doubling convolution filters and we can observe that introducing more parameters does not always perform better. Meanwhile, RNN-CNN-C wins vanilla CNN 0.7% on Yelp P. and 0.9% on Yah. A. It makes sense since the classifier could use features from CNN and RNN simultaneously and different model structures complement each other for classification. In particular, our architecture performs best for both cases. CNN-CNN-S wins CNN-CNN-C 1.3% and 0.8%, and RNN-CNN-S wins RNN-CNN-C 0.6% and 1.0% on Yelp P. and Yah. A. respectively. In fact, CNN-CNN-S does not introduce new model structure or complicated operations and the number of parameters are almost the same. We attribute the great improvement to our novel mechanism where the global representation conduces to the local extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Ablation",
"sec_num": "4.3"
},
{
"text": "As an important hyperparameter, window size determines how much information can be seen in a specific window and often requires carefully tun- ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Window Size",
"sec_num": "4.4"
},
{
"text": "CNN \"Commission backs 5bn British Energy deal \"\" British Energy, the nuclear generator yesterday welcomed a decision by the European commission to approve a government-backed 5bn rescue plan .\" World 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Sentence Samples",
"sec_num": null
},
{
"text": "ATT-CNN \"Commission backs 5bn British :::::: Energy :::: deal \"\" British :::::: Energy, the :::::: nuclear generator, yesterday welcomed a decision by the European commission to approve a government-backed 5bn :::::",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Sentence Samples",
"sec_num": null
},
{
"text": "rescue ::::: plan .\" Business 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Sentence Samples",
"sec_num": null
},
{
"text": "The mac and cheese sticks were amazing ... highly recommend them . Overall, for the high price price pay here, I would rather be across the casino with at least a great view of the fountains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN",
"sec_num": null
},
{
"text": "Positive 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN",
"sec_num": null
},
{
"text": "The mac and cheese sticks were ::::::: amazing ... highly :::::::::: recommend them . ::::::: Overall, for the :::: high ::::: price you pay here, I :::::: would ::::: rather be across the casino with at least a :::: great view of the fountains. Negative 3 Table 6 : Visualization of chosen samples on AG News and Yelp Review Polarity dataset. We use SAME Interaction Mode in ATT-CNN where ATT is the abbreviation of Attention.",
"cite_spans": [],
"ref_spans": [
{
"start": 257,
"end": 264,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "ATT-CNN",
"sec_num": null
},
{
"text": "ing in traditional method. Small window sizes may result in the loss of some critical information whereas large windows result in an enormous parameter space, which could be difficult to train (Lai et al., 2015) .",
"cite_spans": [
{
"start": 193,
"end": 211,
"text": "(Lai et al., 2015)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ATT-CNN",
"sec_num": null
},
{
"text": "In this section, we analyze the impact of different window sizes on model performances. As shown in Figure 3 , both CNN and DRNN are very sensitive to window size, the optimal window size in DRNN can be much larger than CNN due to the sequential memory in RNN structure. Tuning these models is often challenging. In contrast, our Encoder1-Encoder2 architecture is insensitive to the parameter and achieves stable satisfied performance in various window sizes. We believe it is because the local extraction has been enhanced by global information and not strictly dependent on large windows to capture long range information.",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 108,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "ATT-CNN",
"sec_num": null
},
{
"text": "To investigate how our architecture makes a difference in details, we visualize the attending phrases by the neural model in Table 6 . Qualitatively, we display the contribution of phrases in Encoder2 to classification via max-pooling. The most important phrases are highlighted red where the intensity of the color indicates the contribution. Meanwhile, we use waves to roughly indicate the key phrases with high attention scores in Encoder1. Detailed visualization techniques have been introduced in Li et al. (2015) .",
"cite_spans": [
{
"start": 502,
"end": 518,
"text": "Li et al. (2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 125,
"end": 132,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Case Study and Visualization",
"sec_num": "4.5"
},
{
"text": "The first two lines compare CNN with our Attention-CNN on an example from AG News. CNN wrongly captures key phrases British Energy and the nuclear generator and thus misclassifies the example into World. In contrast, our Attention-CNN is able to correctly classify it into Business. The Encoder1 firstly captures the global description by Energy deal, nuclear, and rescue plan. Informed with these global information, Encoder2 reduces its attention to nuclear, which implies label World while captures key phrases British Energy deal and 5bn rescue plan. Accordingly the model makes a correct prediction labeled as Business. For the second example, the global representations include phrases high price and conjunction Overall, making Encoder2 activate I would rather while reduce its sensitivity to highly recommend them compared with CNN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study and Visualization",
"sec_num": "4.5"
},
{
"text": "In short, the global representations learned by Encoder1 provide a brief overall grasp of the whole text, which includes both semantic and structure information. It effectively helps En-coder2 capture better instance specific local features and improve model performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study and Visualization",
"sec_num": "4.5"
},
{
"text": "As shown above, our paper mainly focuses on fully-supervised domain where all model parameters are trained from scratch. Alternatively, substantial work has shown that pre-trained models are beneficial for various NLP tasks. Typically, they first pre-train neural networks on large-scale unlabeled text corpora, and then finetune the models or representations on downstream tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Pre-trained Models",
"sec_num": "4.6"
},
{
"text": "One kind of pre-trained models is the word embeddings, such as word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) . More recently, by utilizing larger-scale unsupervised corpus and deeper architecture, pre-trained language models have shown to be effective in learning common language representations and have achieved great success. Among them, OpenAI GPT (Radford et al., 2018) , BERT (Devlin et al., 2018) , XLNet (Yang et al., 2019) and ERNIE 2.0 (Sun et al., 2019) are the most remarkable examples.",
"cite_spans": [
{
"start": 72,
"end": 94,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF15"
},
{
"start": 105,
"end": 130,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF17"
},
{
"start": 374,
"end": 396,
"text": "(Radford et al., 2018)",
"ref_id": "BIBREF20"
},
{
"start": 404,
"end": 425,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 434,
"end": 453,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF30"
},
{
"start": 468,
"end": 486,
"text": "(Sun et al., 2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Pre-trained Models",
"sec_num": "4.6"
},
{
"text": "In this section, we generalize our architecture to semi-supervised domain which is equipped with pre-trained word embeddings and then compare with popular pre-trained based models. Specifically, we use GloVe vectors 2 with 300 dimensions to initialize the word embeddings in our architecture. BERT BASE 3 and ERNIE 2.0 BASE 4 with 12layer Transformer (Vaswani et al., 2017) are chosen for comparison. Here we report best model for each specialized Encoder2 with SAME Mode. Results on three datasets are listed in Table 7 .",
"cite_spans": [
{
"start": 351,
"end": 373,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 513,
"end": 520,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison to Pre-trained Models",
"sec_num": "4.6"
},
{
"text": "Overall, our architecture can be further boosted a lot by utilizing pre-trained word embeddings. For example, Encoder1-DRNN-S obtains a new score of 76.2%(+1.4%) on Yah. A. and Encoder1-CNN-S gets 94.1%(+1.6%) on AG. Vanilla local extractors also achieve better performance as expected in most instances while our models are still much better than them. Encoder1-CNN-S outperforms CNN by 0.9%, 1.8% and 2.0% on three datasets respectively, and Encoder1-DRNN-S outperforms DRNN by 0.4%, 0.6% and 0.7%. It shows that our architecture is well generalized and compatible with pre-training techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Pre-trained Models",
"sec_num": "4.6"
},
{
"text": "It is interesting to compare with stronger pretrained models. Although we obtain close scores on AG, BERT and ERNIE 2.0 indeed achieve 2 http://nlp.stanford.edu/projects/glove 3 https://github.com/google-research/bert 4 https://github.com/PaddlePaddle/ERNIE Table 7 : Semi-supervised generalization of our architecture and comparison with popular pre-trained models. Here \"n\" and \"y\" stand for initializing word embeddings randomly and with pre-trained GloVe vectors separately.",
"cite_spans": [],
"ref_spans": [
{
"start": 258,
"end": 265,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison to Pre-trained Models",
"sec_num": "4.6"
},
{
"text": "more advanced results on others and the latter performs best on all three datasets. Despite their superb accuracy, we argue that the huge models are resource-hungry in practice. Lightweight models still have advantages under some circumstances such as limited memory, longer text data to be processed and requirements of faster inference time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Pre-trained Models",
"sec_num": "4.6"
},
{
"text": "In this work, we demonstrate the local feature extraction can be significantly enhanced with global information. Instead of traditionally exploiting deeper and complicated operations in upper neural layers, our work innovatively provides another lightweight way for improving the ability of neural model. Specifically, we propose a novel architecture named Encoder1-Encoder2 with two Interaction Modes for their interacting. The architecture has high flexibility and our best models achieve new state-of-the-art performance in fullysupervised setting on all benchmark datasets. We also find that our architecture is insensitive to window size and enjoy a better robustness. In future work, we plan to validate its effectiveness for multi-label classification. Besides, we are interested in incorporating more powerful unsupervised methods into our architecture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Our code will be available at https://github.com/PaddleP addle/models/tree/develop/PaddleNLP/Research/EMNLP2019 -GELE. \"GELE\" is the abbreviation for Global Encoder and Local Encoder, i.e., Encoder1 and Encoder2 respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is supported by the Natural Science Foundation of China (No. 61533018). We gratefully thank the anonymous reviewers for their insightful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1406.1078"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Empirical evaluation of gated recurrent neural networks on sequence modeling",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.3555"
]
},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence model- ing. arXiv preprint arXiv:1412.3555.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Very deep convolutional networks for text classification",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.01781"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Holger Schwenk, Lo\u00efc Barrault, and Yann Lecun. 2016. Very deep convolutional networks for text classification. arXiv preprint arXiv:1606.01781.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Attention pooling-based convolutional neural network for sentence modelling",
"authors": [
{
"first": "Meng Joo",
"middle": [],
"last": "Er",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ning",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mahardhika",
"middle": [],
"last": "Pratama",
"suffix": ""
}
],
"year": 2016,
"venue": "formation Sciences",
"volume": "373",
"issue": "",
"pages": "388--403",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meng Joo Er, Yong Zhang, Ning Wang, and Mahard- hika Pratama. 2016. Attention pooling-based convo- lutional neural network for sentence modelling. In- formation Sciences, 373:388-403.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Deep pyramid convolutional neural networks for text categorization",
"authors": [
{
"first": "Rie",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "562--570",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rie Johnson and Tong Zhang. 2017. Deep pyramid convolutional neural networks for text categoriza- tion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), volume 1, pages 562-570.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bag of tricks for efficient text classification",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.01759"
]
},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1408.5882"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural net- works for sentence classification. arXiv preprint arXiv:1408.5882.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Recurrent convolutional neural networks for text classification",
"authors": [
{
"first": "Siwei",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Liheng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "Twenty-ninth AAAI conference on artificial intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. In Twenty-ninth AAAI conference on artificial intelligence.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Convolutional networks and applications in vision",
"authors": [
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Cl\u00e9ment",
"middle": [],
"last": "Farabet",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of 2010 IEEE International Symposium on Circuits and Systems",
"volume": "",
"issue": "",
"pages": "253--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yann LeCun, Koray Kavukcuoglu, and Cl\u00e9ment Fara- bet. 2010. Convolutional networks and applications in vision. In Proceedings of 2010 IEEE Interna- tional Symposium on Circuits and Systems, pages 253-256. IEEE.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Importance of selfattention for sentiment analysis",
"authors": [
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Letarte",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9rik",
"middle": [],
"last": "Paradis",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Gigu\u00e8re",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Laviolette",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "267--275",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ga\u00ebl Letarte, Fr\u00e9d\u00e9rik Paradis, Philippe Gigu\u00e8re, and Fran\u00e7ois Laviolette. 2018. Importance of self- attention for sentiment analysis. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: An- alyzing and Interpreting Neural Networks for NLP, pages 267-275.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Visualizing and understanding neural models in nlp",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xinlei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1506.01066"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2015. Visualizing and understanding neural models in nlp. arXiv preprint arXiv:1506.01066.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Nltk: the natural language toolkit",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Loper and Steven Bird. 2002. Nltk: the natural language toolkit. arXiv preprint cs/0205028.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Thumbs up?: sentiment classification using machine learning techniques",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Shivakumar",
"middle": [],
"last": "Vaithyanathan",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL-02 conference on Empirical methods in natural language processing",
"volume": "10",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10, pages 79-86. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.05365"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. arXiv preprint arXiv:1802.05365.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Anew method of region embedding for text classification",
"authors": [
{
"first": "Chao",
"middle": [],
"last": "Qiao",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Guocheng",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Daren",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Daxiang",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Dianhai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chao Qiao, Bo Huang, Guocheng Niu, Daren Li, Dax- iang Dong, Wei He, Dianhai Yu, and Hua Wu. 2018. Anew method of region embedding for text classi- fication. In International Conference on Learning Representations.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Salimans",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openai- assets/researchcovers/languageunsupervised/language understanding paper. pdf.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Disan: Directional self-attention network for rnn/cnn-free language understanding",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Tianyi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Long",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Shirui",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Chengqi",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2018. Disan: Di- rectional self-attention network for rnn/cnn-free lan- guage understanding. In Thirty-Second AAAI Con- ference on Artificial Intelligence.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Deep lstm based feature mapping for query classification",
"authors": [
{
"first": "Yangyang",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Kaisheng",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Le",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Daxin",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1501--1511",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yangyang Shi, Kaisheng Yao, Le Tian, and Daxin Jiang. 2016. Deep lstm based feature mapping for query classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1501-1511.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Ernie 2.0: A continual pre-training framework for language understanding",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Shuohuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yukun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shikun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.12412"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. 2019. Ernie 2.0: A continual pre-training framework for language un- derstanding. arXiv preprint arXiv:1907.12412.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Improved semantic representations from tree-structured long short-term memory networks",
"authors": [
{
"first": "Kai Sheng",
"middle": [],
"last": "Tai",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1503.00075"
]
},
"num": null,
"urls": [],
"raw_text": "Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory net- works. arXiv preprint arXiv:1503.00075.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Document modeling with gated recurrent neural network for sentiment classification",
"authors": [
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "1422--1432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duyu Tang, Bing Qin, and Ting Liu. 2015. Docu- ment modeling with gated recurrent neural network for sentiment classification. In Proceedings of the 2015 conference on empirical methods in natural language processing, pages 1422-1432.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 5998-6008.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Disconnected recurrent neural networks for text categorization",
"authors": [
{
"first": "Baoxin",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2311--2320",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baoxin Wang. 2018. Disconnected recurrent neural networks for text categorization. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), volume 1, pages 2311-2320.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Baselines and bigrams: Simple, good sentiment and topic classification",
"authors": [
{
"first": "Sida",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers",
"volume": "2",
"issue": "",
"pages": "90--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sida Wang and Christopher D Manning. 2012. Base- lines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the 50th Annual Meeting of the Association for Computational Lin- guistics: Short Papers-Volume 2, pages 90-94. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Efficient character-level document classification by combining convolution and recurrent layers",
"authors": [
{
"first": "Yijun",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1602.00367"
]
},
"num": null,
"urls": [],
"raw_text": "Yijun Xiao and Kyunghyun Cho. 2016. Efficient character-level document classification by combin- ing convolution and recurrent layers. arXiv preprint arXiv:1602.00367.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.08237"
]
},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretrain- ing for language understanding. arXiv preprint arXiv:1906.08237.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Hierarchical attention networks for document classification",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1480--1489",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchi- cal attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1480-1489.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Generative and discriminative text classification with recurrent neural networks",
"authors": [
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1703.01898"
]
},
"num": null,
"urls": [],
"raw_text": "Dani Yogatama, Chris Dyer, Wang Ling, and Phil Blun- som. 2017. Generative and discriminative text clas- sification with recurrent neural networks. arXiv preprint arXiv:1703.01898.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Adadelta: an adaptive learning rate method",
"authors": [
{
"first": "Matthew",
"middle": [
"D"
],
"last": "Zeiler",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1212.5701"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew D Zeiler. 2012. Adadelta: an adaptive learn- ing rate method. arXiv preprint arXiv:1212.5701.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Character-level convolutional networks for text classification",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Junbo",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "649--657",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. In Advances in neural information pro- cessing systems, pages 649-657.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Adaptive learning of local semantic and global structure representations for text classification",
"authors": [
{
"first": "Jianyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Zhiqiang",
"middle": [],
"last": "Zhan",
"suffix": ""
},
{
"first": "Qichuan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Changjian",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Zhensheng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Liuxin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhiqiang",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2033--2043",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianyu Zhao, Zhiqiang Zhan, Qichuan Yang, Yang Zhang, Changjian Hu, Zhensheng Li, Liuxin Zhang, and Zhiqiang He. 2018. Adaptive learning of lo- cal semantic and global structure representations for text classification. In Proceedings of the 27th Inter- national Conference on Computational Linguistics, pages 2033-2043.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Attentionbased bidirectional long short-term memory networks for relation classification",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Zhenyu",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Bingchen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hongwei",
"middle": [],
"last": "Hao",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "207--212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attention- based bidirectional long short-term memory net- works for relation classification. In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics (Volume 2: Short Papers), volume 2, pages 207-212.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"text": "Model ablation experiments. \"Vanilla\" is the traditional CNN",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "(a) The impact of window size on CNN.(b) The impact of window size on DRNN.",
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"text": "Window size experiments on Yelp F. ATT is the abbreviation of Attention.",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table/>",
"num": null,
"html": null,
"text": "Topic classification examples for Technology and Health, where Apple is ambiguous within local context.",
"type_str": "table"
},
"TABREF2": {
"content": "<table/>",
"num": null,
"html": null,
"text": "Datasets summary. C: Number of target classes. L: Average sentence length. N: Dataset size. Test: Test set size. In tasks, SA refers to sentiment analysis, and QA refers to question answering.",
"type_str": "table"
},
"TABREF4": {
"content": "<table><tr><td>extractor, corresponding to CNN and DRNN re-</td></tr><tr><td>spectively. In CNN (Encoder2), window sizes of</td></tr><tr><td>filters are of [3, 5, 7] with 128 feature maps each.</td></tr></table>",
"num": null,
"html": null,
"text": "Model settings. We limit the vocabulary size and set maximum sequence length. We also show the window size in DRNN followingWang (2018).",
"type_str": "table"
},
"TABREF6": {
"content": "<table/>",
"num": null,
"html": null,
"text": "",
"type_str": "table"
},
"TABREF8": {
"content": "<table/>",
"num": null,
"html": null,
"text": "Effect of Encoder1 and Interaction Mode.",
"type_str": "table"
},
"TABREF9": {
"content": "<table><tr><td>Model</td><td>Yelp F.</td><td>AG</td><td>Yah. A.</td></tr><tr><td colspan=\"2\">CNN(n/y) 64.7BERTBASE 67.9</td><td>94.2</td><td>76.4</td></tr><tr><td>ERNIE 2.0BASE</td><td>69.1</td><td>94.3</td><td>77.0</td></tr></table>",
"num": null,
"html": null,
"text": "/64.5 91.9/92.3 72.6/73.7 Encoder1-CNN-S(n/y) 66.2/66.6 92.5/94.1 74.5/75.7 DRNN(n/y) 66.4/66.8 92.9/93.6 74.3/75.5 Encoder1-DRNN-S(n/y) 66.8/67.2 93.0/94.2 74.8/76.2",
"type_str": "table"
}
}
}
}