{ "paper_id": "I17-1026", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:37:59.744638Z" }, "title": "A Sensitivity Analysis of (and Practitioners' Guide to) Convolutional Neural Networks for Sentence Classification", "authors": [ { "first": "Ye", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Texas at Austin", "location": {} }, "email": "yezhang@utexas.edu" }, { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Convolutional Neural Networks (CNNs) have recently achieved remarkably strong performance on the practically important task of sentence classification (Kim, 2014; Kalchbrenner et al., 2014; Johnson and Zhang, 2014; Zhang et al., 2016). However, these models require practitioners to specify an exact model architecture and set accompanying hyperparameters, including the filter region size, regularization parameters, and so on. It is currently unknown how sensitive model performance is to changes in these configurations for the task of sentence classification. We thus conduct a sensitivity analysis of one-layer CNNs to explore the effect of architecture components on model performance; our aim is to distinguish between important and comparatively inconsequential design decisions for sentence classification. We focus on one-layer CNNs (to the exclusion of more complex models) due to their comparative simplicity and strong empirical performance, which makes it a modern standard baseline method akin to Support Vector Machine (SVMs) and logistic regression. We derive practical advice from our extensive empirical results for those interested in getting the most out of CNNs for sentence classification in real world settings.", "pdf_parse": { "paper_id": "I17-1026", "_pdf_hash": "", "abstract": [ { "text": "Convolutional Neural Networks (CNNs) have recently achieved remarkably strong performance on the practically important task of sentence classification (Kim, 2014; Kalchbrenner et al., 2014; Johnson and Zhang, 2014; Zhang et al., 2016). However, these models require practitioners to specify an exact model architecture and set accompanying hyperparameters, including the filter region size, regularization parameters, and so on. It is currently unknown how sensitive model performance is to changes in these configurations for the task of sentence classification. We thus conduct a sensitivity analysis of one-layer CNNs to explore the effect of architecture components on model performance; our aim is to distinguish between important and comparatively inconsequential design decisions for sentence classification. We focus on one-layer CNNs (to the exclusion of more complex models) due to their comparative simplicity and strong empirical performance, which makes it a modern standard baseline method akin to Support Vector Machine (SVMs) and logistic regression. 
We derive practical advice from our extensive empirical results for those interested in getting the most out of CNNs for sentence classification in real world settings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Convolutional Neural Networks (CNNs) have recently been shown to achieve impressive results on the practically important task of sentence categorization (Kim, 2014; Kalchbrenner et al., 2014; Wang et al., 2015; Goldberg, 2015; Iyyer et al., 2015; Zhang et al., 2016 Zhang et al., , 2017 . CNNs can capitalize on distributed representations of words by first converting the tokens comprising each sentence into a vector, forming a matrix to be used as input (e.g., see Fig. 1 ). The models need not be complex to realize strong results: Kim (2014) , for example, proposed a simple one-layer CNN that achieved state-of-the-art (or comparable) results across several datasets. The very strong results achieved with this comparatively simple CNN architecture suggest that it may serve as a drop-in replacement for well-established baseline models, such as SVM (Joachims, 1998) or logistic regression. While more complex deep learning models for text classification will undoubtedly continue to be developed, those deploying such technologies in practice will likely be attracted to simpler variants, which afford fast training and prediction times.", "cite_spans": [ { "start": 153, "end": 164, "text": "(Kim, 2014;", "ref_id": "BIBREF22" }, { "start": 165, "end": 191, "text": "Kalchbrenner et al., 2014;", "ref_id": "BIBREF21" }, { "start": 192, "end": 210, "text": "Wang et al., 2015;", "ref_id": "BIBREF36" }, { "start": 211, "end": 226, "text": "Goldberg, 2015;", "ref_id": "BIBREF14" }, { "start": 227, "end": 246, "text": "Iyyer et al., 2015;", "ref_id": "BIBREF17" }, { "start": 247, "end": 265, "text": "Zhang et al., 2016", "ref_id": "BIBREF40" }, { "start": 266, "end": 286, "text": "Zhang et al., , 2017", "ref_id": "BIBREF39" }, { "start": 536, "end": 546, "text": "Kim (2014)", "ref_id": "BIBREF22" }, { "start": 856, "end": 872, "text": "(Joachims, 1998)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 468, "end": 474, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Unfortunately, a downside to CNN-based models -even simple ones -is that they require practitioners to specify the exact model architecture to be used and to set the accompanying hyperparameters. In practice, tuning all of these hyperparameters is simply not feasible, especially because parameter estimation is computationally intensive. Emerging research has begun to explore hyperparameter optimization methods, including random search (Bengio, 2012) , and Bayesian optimization (Yogatama and Smith, 2015; Bergstra et al., 2013) . 
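To make concrete what such search procedures involve, a minimal random-search loop might look as follows. This sketch is purely illustrative: the hyperparameter names, the candidate values, and the train_and_evaluate callable are assumptions of ours, not components prescribed by the cited methods.

```python
import random

# Hypothetical search space for a one-layer CNN; the names and ranges are
# illustrative assumptions for this sketch, not recommended settings.
SEARCH_SPACE = {
    "filter_region_size": [1, 3, 5, 7, 10],
    "num_feature_maps": [50, 100, 200, 400, 600],
    "dropout_rate": [0.0, 0.1, 0.3, 0.5],
    "l2_norm_constraint": [1, 3, 5, 9],
}

def sample_config(space):
    """Draw one configuration uniformly at random from the search space."""
    return {name: random.choice(values) for name, values in space.items()}

def random_search(train_and_evaluate, space, num_trials=20, seed=0):
    """Try `num_trials` random configurations and return the best one.

    `train_and_evaluate` is an assumed user-supplied callable that maps a
    configuration dict to a validation score (e.g., mean 10-fold CV accuracy).
    """
    random.seed(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(num_trials):
        config = sample_config(space)
        score = train_and_evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score
```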
However, these sophisticated search methods still require knowing which hyperparameters are worth exploring to begin with (and reasonable ranges for each).", "cite_spans": [ { "start": 439, "end": 453, "text": "(Bengio, 2012)", "ref_id": "BIBREF1" }, { "start": 482, "end": 508, "text": "(Yogatama and Smith, 2015;", "ref_id": "BIBREF38" }, { "start": 509, "end": 531, "text": "Bergstra et al., 2013)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work our aim is to identify empirically the settings that practitioners should expend effort tuning, and those that are either inconsequential with respect to performance or that seem to have a 'best' setting independent of the specific dataset, and provide a reasonable range for each hyperpa-rameter. We take inspiration from previous empirical analyses of neural models due to Coates et al. (2011) and Breuel (2015) , which investigated factors in unsupervised feature learning and hyperparameter settings for Stochastic Gradient Descent (SGD), respectively. Here we report the results of a large number of experiments exploring different configurations of CNNs run over nine sentence classification datasets. Most previous work in this area reports only mean accuracies calculated via cross-validation. But there is substantial variance in the performance of CNNs, even on the same folds and with model configuration held constant. Therefore, in our experiments we perform replications of cross-validation and report accuracy/Area Under Curve (AUC) score means and ranges over these.", "cite_spans": [ { "start": 388, "end": 408, "text": "Coates et al. (2011)", "ref_id": "BIBREF9" }, { "start": 413, "end": 426, "text": "Breuel (2015)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Deep and neural learning methods are now well established in machine learning (LeCun et al., 2015; Bengio, 2009) . They have been especially successful for image and speech processing tasks. More recently, such methods have begun to overtake traditional sparse, linear models for NLP (Goldberg, 2015; Bengio et al., 2003; Mikolov et al., 2013; Collobert and Weston, 2008; Collobert et al., 2011; Kalchbrenner et al., 2014; Socher et al., 2013) .", "cite_spans": [ { "start": 78, "end": 98, "text": "(LeCun et al., 2015;", "ref_id": "BIBREF24" }, { "start": 99, "end": 112, "text": "Bengio, 2009)", "ref_id": "BIBREF0" }, { "start": 284, "end": 300, "text": "(Goldberg, 2015;", "ref_id": "BIBREF14" }, { "start": 301, "end": 321, "text": "Bengio et al., 2003;", "ref_id": "BIBREF2" }, { "start": 322, "end": 343, "text": "Mikolov et al., 2013;", "ref_id": "BIBREF27" }, { "start": 344, "end": 371, "text": "Collobert and Weston, 2008;", "ref_id": "BIBREF10" }, { "start": 372, "end": 395, "text": "Collobert et al., 2011;", "ref_id": "BIBREF11" }, { "start": 396, "end": 422, "text": "Kalchbrenner et al., 2014;", "ref_id": "BIBREF21" }, { "start": 423, "end": 443, "text": "Socher et al., 2013)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Background and Preliminaries", "sec_num": "2" }, { "text": "Recently, word embeddings have been exploited for sentence classification using CNN architectures. Kalchbrenner (2014) proposed a CNN architecture with multiple convolution layers, positing latent, dense and low-dimensional word vectors (initialized to random values) as inputs. 
Kim (2014) defined a one-layer CNN architecture that performed comparably. This model uses pre-trained word vectors as inputs, which may be treated as static or non-static. In the former approach, word vectors are treated as fixed inputs, while in the latter they are 'tuned' for a specific task. Elsewhere, Johnson and Zhang (2014) proposed a similar model, but swapped in high dimensional 'one-hot' vector representations of words as CNN inputs. Their focus was on classification of longer texts, rather than sentences (but of course the model can be used for sentence classification).", "cite_spans": [ { "start": 99, "end": 118, "text": "Kalchbrenner (2014)", "ref_id": "BIBREF21" }, { "start": 279, "end": 289, "text": "Kim (2014)", "ref_id": "BIBREF22" }, { "start": 587, "end": 611, "text": "Johnson and Zhang (2014)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Background and Preliminaries", "sec_num": "2" }, { "text": "The relative simplicity of Kim's architecturewhich is largely the same as that proposed by Johnson and Zhang (2014) , modulo the word vec-tors -coupled with observed strong empirical performance makes this a strong contender to supplant existing text classification baselines such as SVM and logistic regression. But in practice one is faced with making several model architecture decisions and setting various hyperparameters. At present, very little empirical data is available to guide such decisions; addressing this gap is our aim here.", "cite_spans": [ { "start": 91, "end": 115, "text": "Johnson and Zhang (2014)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Background and Preliminaries", "sec_num": "2" }, { "text": "We begin with a tokenized sentence which we then convert to a sentence matrix, the rows of which are word vector representations of each token. These might be, e.g., outputs from trained word2vec (Mikolov et al., 2013) or GloVe (Pennington et al., 2014) models. We denote the dimensionality of the word vectors by d. If the length of a given sentence is s, then the dimensionality of the sentence matrix is s \u00d7 d. Suppose that there is a filter matrix w with region size h; w will contain h \u2022 d parameters to be estimated. We denote the sentence matrix by A \u2208 R s\u00d7d , and use A[i : j] to represent the sub-matrix of A from row i to row j. The output sequence o \u2208 R s\u2212h+1 of the convolution operator is obtained by repeatedly applying the filter on sub-matrices of A:", "cite_spans": [ { "start": 196, "end": 218, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF27" }, { "start": 228, "end": 253, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "CNN Architecture", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "o i = w \u2022 A[i : i + h \u2212 1],", "eq_num": "(1)" } ], "section": "CNN Architecture", "sec_num": "2.1" }, { "text": "where i = 1 . . . s \u2212 h + 1, and \u2022 is the dot product between the sub-matrix and the filter (a sum over element-wise multiplications). 
We add a bias term b \u2208 R and an activation function f to each o i , inducing the feature map c \u2208 R s\u2212h+1 for this filter:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CNN Architecture", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c i = f (o i + b).", "eq_num": "(2)" } ], "section": "CNN Architecture", "sec_num": "2.1" }, { "text": "One may use multiple filters for the same region size to learn complementary features from the same regions. One may also specify multiple kinds of filters with different region sizes (i.e., 'heights'). The dimensionality of the feature map generated by each filter will vary as a function of the sentence length and the filter region size. A pooling function is thus applied to each feature map to induce a fixed-length vector. A common strategy is 1-max pooling (Boureau et al., 2010b) , which extracts a scalar from each feature map. Together, the outputs generated from each filter map can be concatenated into a fixed-length, 'top-level' feature vector, which is then fed through a softmax function to generate the final classification. At this softmax layer, one may apply 'dropout as a means of regularization. This entails randomly setting values in the weight vector to 0. One may also impose an l2 norm constraint, i.e., linearly scale the l2 norm of the vector to a pre-specified threshold when it exceeds this. Fig. 1 provides a schematic illustrating the model architecture just described. The training objective to be minimized is the categorical cross-entropy loss. The parameters to be estimated include the weight vector(s) of the filter(s), the bias term in the activation function, and the weight vector of the softmax function. In the 'non-static' approach, one also tunes the word vectors. Optimization is performed using SGD and back-propagation (Rumelhart et al., 1988) .", "cite_spans": [ { "start": 464, "end": 487, "text": "(Boureau et al., 2010b)", "ref_id": "BIBREF5" }, { "start": 1469, "end": 1493, "text": "(Rumelhart et al., 1988)", "ref_id": "BIBREF31" } ], "ref_spans": [ { "start": 1023, "end": 1030, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "CNN Architecture", "sec_num": "2.1" }, { "text": "We use nine sentence classification datasets in all; seven of which were also used by Kim (2014) . Briefly, these are summarized as follows. (1) MR: sentence polarity dataset from (Pang and Lee, 2005) . (2) SST-1: Stanford Sentiment Treebank (Socher et al., 2013) . To make input representations consistent across tasks, we only train and test on sentences, in contrast to the use in (Kim, 2014) , wherein models were trained on both phrases and sentences. (3) SST-2: Derived from SST-1, but pared to only two classes. We again only train and test models on sentences, excluding phrases. (4) Subj: Subjectivity dataset (Pang and Lee, 2005 ). (5) TREC: Question classification dataset (Li and Roth, 2002) . (6) CR: Customer review dataset (Hu and Liu, 2004) . 7MPQA: Opinion polarity dataset (Wiebe et al., 2005) . Additionally, we use (8) Opi: Opinosis Dataset, which comprises sentences extracted from user reviews on a given topic, e.g. \"sound quality of ipod nano\". There are 51 such topics and each topic contains approximately 100 sentences (Ganesan et al., 2010) . (9) Irony (Wallace et al., 2014): this contains 16,006 sentences from reddit labeled as ironic (or not). 
The dataset is imbalanced (relatively few sentences are ironic). Thus before training, we under-sampled negative instances to make classes sizes equal. 1 For this dataset we report the Area Under Curve (AUC), rather than accuracy, because it is imbalanced.", "cite_spans": [ { "start": 86, "end": 96, "text": "Kim (2014)", "ref_id": "BIBREF22" }, { "start": 180, "end": 200, "text": "(Pang and Lee, 2005)", "ref_id": "BIBREF28" }, { "start": 242, "end": 263, "text": "(Socher et al., 2013)", "ref_id": "BIBREF32" }, { "start": 384, "end": 395, "text": "(Kim, 2014)", "ref_id": "BIBREF22" }, { "start": 619, "end": 638, "text": "(Pang and Lee, 2005", "ref_id": "BIBREF28" }, { "start": 684, "end": 703, "text": "(Li and Roth, 2002)", "ref_id": "BIBREF25" }, { "start": 738, "end": 756, "text": "(Hu and Liu, 2004)", "ref_id": "BIBREF16" }, { "start": 791, "end": 811, "text": "(Wiebe et al., 2005)", "ref_id": "BIBREF37" }, { "start": 1046, "end": 1068, "text": "(Ganesan et al., 2010)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3" }, { "text": "1 Empirically, under-sampling outperformed oversampling in mitigating imbalance, see also Wallace (2011).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3" }, { "text": "We give a baseline CNN configuration described in Table 1 . We argue that it is critical to assess the variance due strictly to the parameter estimation procedure. Most prior work, unfortunately, has not reported such variance, despite a highly stochastic learning procedure. This variance is attributable to estimation via SGD, random dropout, and random weight parameter initialization.", "cite_spans": [], "ref_spans": [ { "start": 50, "end": 57, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Baseline Configuration", "sec_num": "4.1" }, { "text": "Values input word vectors Google word2vec filter region size (3,4,5) feature maps 100 activation function ReLU pooling 1-max pooling dropout rate 0.5 l2 norm constraint 3 Then we consider the effect of different architecture decisions and hyperparameter settings. To this end, we hold all other settings constant (as per Table 1 ) and vary only the component of interest. For every configuration that we consider, we replicate the experiment 10 times, where each replication constitutes a run of 10-fold CV. We report average CV means and associated ranges achieved over the replicated CV runs.", "cite_spans": [], "ref_spans": [ { "start": 321, "end": 328, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Description", "sec_num": null }, { "text": "A nice property of sentence classification models that start with distributed representations of words as inputs is the flexibility such architectures afford to swap in different pre-trained word vectors during model initialization. Therefore, we first explore the sensitivity of CNNs for sentence classification with respect to the input representations used. Specifically, we replaced word2vec with GloVe representations. Google word2vec uses a local context window model trained on 100 billion words from Google News (Mikolov et al., 2013) , while GloVe is a model based on global wordword co-occurrence statistics (Pennington et al., 2014) . We used a GloVe model trained on a cor- Figure 1: Illustration of a CNN architecture for sentence classification. We depict three filter region sizes: 2, 3 and 4, each of which has 2 filters. 
Filters perform convolutions on the sentence matrix and generate (variable-length) feature maps; 1-max pooling is performed over each map, i.e., the largest number from each feature map is recorded. Thus a univariate feature vector is generated from all six maps, and these 6 features are concatenated to form a feature vector for the penultimate layer. The final softmax layer then receives this feature vector as input and uses it to classify the sentence; here we assume binary classification and hence depict two possible output states. pus of 840 billion tokens of web data. For both word2vec and GloVe we induce 300-dimensional word vectors. We report results achieved using GloVe representations in Table 2 . Here we only report non-static GloVe results (which uniformly outperformed the static variant).", "cite_spans": [ { "start": 520, "end": 542, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF27" }, { "start": 618, "end": 643, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF30" } ], "ref_spans": [ { "start": 1544, "end": 1551, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Effect of input word vectors", "sec_num": "4.2" }, { "text": "We also experimented with concatenating word2vec and GloVe representations, thus creating 600-dimensional word vectors to be used as input to the CNN. Pre-trained vectors may not always be available for specific words (either in word2vec or GloVe, or both); in such cases, we randomly initialized the corresponding sub-vectors. Results are reported in the final column of Table 2 . The relative performance achieved using GloVe versus word2vec depends on the dataset, and, unfortunately, simply concatenating these representations does not necessarily seem helpful. For how to better utilize multiple sets of embeddings, we refer the reader to (Zhang et al., 2016) .", "cite_spans": [ { "start": 629, "end": 649, "text": "(Zhang et al., 2016)", "ref_id": "BIBREF40" } ], "ref_spans": [ { "start": 372, "end": 379, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Effect of input word vectors", "sec_num": "4.2" }, { "text": "We also experimented with using long, sparse one-hot vectors as input word representations, in the spirit of Johnson and Zhang (2014) . In this strategy, each word is encoded as a one-hot vector, with dimensionality equal to the vocabulary size. Though this representation combined with a one-layer CNN achieves good results on document classification, it is still unknown whether this is useful for sentence classification. We keep the other settings the same as in the basic configuration. We first explore the effect of filter region size when using only one region size, and we set the number of feature maps for this region size to 100 (as in the baseline configuration). We consider region sizes of 1, 3, 5, 7, 10, 15, 20, 25 and 30, and record the means and ranges over 10 replications of 10-fold CV for each. We report results in Table 3 and Fig. 2 . Because we are only interested in the trend of the accuracy as we alter the region size (rather than the absolute performance on each task), we show only the percent change in accuracy (AUC for Irony) from an arbitrary baseline point (here, a region size of 3). Figure 2 : Effect of the region size (using only one).", "cite_spans": [ { "start": 109, "end": 133, "text": "Johnson and Zhang (2014)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 838, "end": 844, "text": "Fig. 2", "ref_id": null }, { "start": 981, "end": 989, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Effect of input word vectors", "sec_num": "4.2" }, { "text": "From the results, one can see that each dataset has its own optimal filter region size. Practically, this suggests performing a coarse grid search over a range of region sizes; the figure here suggests that a reasonable range for sentence classification might be from 1 to 10. However, for datasets comprising longer sentences, such as CR (maximum sentence length is 105, whereas it ranges from 36-56 on the other sentiment datasets used here), the optimal region size may be larger. We also explored the effect of combining different filter region sizes, while keeping the number of feature maps for each region size fixed at 100. We found that combining several filters with region sizes close to the optimal single region size can improve performance, but adding region sizes far from the optimal range may hurt performance. For example, when using a single filter size, one can observe that the optimal single region size for the MR dataset is 7. We therefore combined several different filter region sizes close to this optimal range, and compared this to approaches that use region sizes outside of this range. From Table 5 , one can see that using (5,6,7), (7,8,9), and (6,7,8,9), i.e., sets near the best single region size, produces the best results. The difference is especially pronounced when comparing to the baseline setting of (3,4,5). Note that even only using a single good filter region size (here, 7) results in better performance than combining different sizes (3,4,5). The best performing strategy is to simply use many feature maps (here, 400) all with region size equal to 7, i.e., the single best region size. However, we note that in some cases (e.g., for the TREC dataset), using multiple different, but near-optimal, region sizes performs best. We report these results in Table 4.", "cite_spans": [], "ref_spans": [ { "start": 1122, "end": 1130, "text": "Table 5", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Effect of input word vectors", "sec_num": "4.2" }, { "text": "In light of these observations, we believe it advisable to first perform a coarse line-search over a single filter region size to find the 'best' size for the dataset under consideration, and then explore the combination of several region sizes near this single best size, including combining both different region sizes and copies of the optimal sizes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multiple region size", "sec_num": null }, { "text": "We again hold other configurations constant, and thus have three filter region sizes: 3, 4 and 5. We change only the number of feature maps for each of these relative to the baseline of 100; we consider values \u2208 {10, 50, 100, 200, 400, 600, 1000, 2000}. We report results in Fig. 3 . The 'best' number of feature maps for each filter region size depends on the dataset. However, it would seem that increasing the number of maps beyond 600 yields at best very marginal returns, and often hurts performance (likely due to overfitting). Another salient practical point is that it takes a longer time to train the model when the number of feature maps is increased.", "cite_spans": [], "ref_spans": [ { "start": 275, "end": 281, "text": "Fig. 
3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Effect of number of feature maps for each filter region size", "sec_num": "4.4" }, { "text": "In practice, the evidence here suggests perhaps searching over a range of 100 to 600. Note that this range is only provided as a possible standard trick when one is faced with a new similar sentence classification problem; it is of course possible that in some cases more than 600 feature maps will be beneficial, but the evidence here suggests expending the effort to explore this is probably not worth it. In practice, one should consider whether the best observed value falls near the border of the range searched over; if so, it is probably worth exploring beyond that border, as suggested in (Bengio, 2012).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of number of feature maps for each filter region size", "sec_num": "4.4" }, { "text": "We consider seven different activation functions in the convolution layer, including: ReLU (as per the baseline configuration), hyperbolic tangent (tanh), Sigmoid function (Maas et al., 2013) , SoftPlus function (Dugas et al., 2001) , Cube function (Chen and Manning, 2014) , and tanh cube function (Pei et al., 2015) . We use 'Iden' to denote the identity function, which means not using any activation function.", "cite_spans": [ { "start": 172, "end": 191, "text": "(Maas et al., 2013)", "ref_id": "BIBREF26" }, { "start": 212, "end": 232, "text": "(Dugas et al., 2001)", "ref_id": "BIBREF12" }, { "start": 249, "end": 273, "text": "(Chen and Manning, 2014)", "ref_id": "BIBREF8" }, { "start": 299, "end": 317, "text": "(Pei et al., 2015)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Effect of activation function", "sec_num": "4.5" }, { "text": "We show the numerical results of tanh, Softplus, Iden and ReLU in table 6. For 8 out of 9 datasets, the best activation function is one of Iden, ReLU and tanh. The SoftPlus function outperform these on only one dataset (MPQA). Sigmoid, Cube, and tanh cube all consistently performed worse than alternative activation functions. The performance of the tanh function may be due to its zero centering property (compared to Sigmoid). ReLU has the merits of a non-saturating form compared to Sigmoid, and it has been observed to accelerate the convergence of SGD . One interesting result is that not applying any activation function (Iden) sometimes helps. This indicates that on some datasets, a linear transformation is enough to capture the correlation between the word embedding and the output label. However, if there are multiple hidden layers, Iden may be less suitable than non-linear activation functions. Practically, with respect to the choice of the activation function in one-layer CNNs, our results suggest experimenting with ReLU and tanh, and perhaps also Iden.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of activation function", "sec_num": "4.5" }, { "text": "We next investigated the effect of the pooling strategy and the pooling region size. We fixed the filter region sizes and the number of feature maps as in the baseline configuration, thus changing only the pooling strategy or pooling region size.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of pooling strategy", "sec_num": "4.6" }, { "text": "In the baseline configuration, we performed 1max pooling globally over feature maps, inducing a feature vector of length 1 for each filter. 
However, pooling may also be performed over small equal-sized local regions rather than over the entire feature map (Boureau et al., 2011) . Each small local region on the feature map will generate a single number from pooling, and these numbers can be concatenated to form a feature vector for one feature map. The following step is the same as 1-max pooling: we concatenate all the feature vectors together to form a single feature vector for the classification layer. We experimented with local region sizes of 3, 10, 20, and 30, and found that 1-max pooling outperformed all local max pooling configurations. This result held across all datasets.", "cite_spans": [ { "start": 256, "end": 278, "text": "(Boureau et al., 2011)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Effect of pooling strategy", "sec_num": "4.6" }, { "text": "We also considered a k-max pooling strategy similar to (Kalchbrenner et al., 2014) , in which the maximum k values are extracted from the entire feature map, and the relative order of these values is preserved. We explored k \u2208 {5, 10, 15, 20}, and again found 1-max pooling fared best, consistently outperforming k-max pooling.", "cite_spans": [ { "start": 55, "end": 82, "text": "(Kalchbrenner et al., 2014)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Effect of pooling strategy", "sec_num": "4.6" }, { "text": "Next, we considered taking an average, rather than the max, over regions (Boureau et al., 2010a) . We experimented with local average pooling region sizes {3, 10, 20, 30}. We found that average pooling uniformly performed (much) worse than max pooling, at least on the CR and TREC datasets.", "cite_spans": [ { "start": 73, "end": 96, "text": "(Boureau et al., 2010a)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Effect of pooling strategy", "sec_num": "4.6" }, { "text": "Our analysis of pooling strategies shows that 1-max pooling consistently performs better than alternative strategies for the task of sentence classification. This may be because the location of predictive contexts does not matter, and certain n-grams in the sentence can be more predictive on their own than the entire sentence considered jointly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of pooling strategy", "sec_num": "4.6" }, { "text": "Two common regularization strategies for CNNs are dropout and l2 norm constraints; we explore the effect of these here. 'Dropout' is applied to the input to the penultimate layer. We experimented with varying the dropout rate from 0.0 to 0.9, fixing the l2 norm constraint to 3, as per the baseline configuration. The results for non-static CNN are shown in Fig. 4 , with 0.5 designated as the baseline. We also report the accuracy achieved when we remove both dropout and the l2 norm constraint (i.e., when no regularization is performed), denoted by 'None'.", "cite_spans": [], "ref_spans": [ { "start": 361, "end": 367, "text": "Fig. 4", "ref_id": null } ], "eq_spans": [], "section": "Effect of regularization", "sec_num": "4.7" }, { "text": "Separately, we considered the effect of the l2 norm imposed on the weight vectors that parametrize the softmax function. Recall that the l2 norm of a weight vector is linearly scaled to a constraint c when it exceeds this threshold, so a smaller c implies stronger regularization. (Like dropout, this strategy is applied only to the penultimate layer.) 
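As a concrete illustration, a minimal sketch of this rescaling step is given below (in numpy, purely for illustration; the function name and the default constraint value are our own choices). In practice, this projection would be applied to each weight vector of the softmax layer after every gradient update.

```python
import numpy as np

def apply_l2_norm_constraint(w, c=3.0):
    """Rescale weight vector w so that its l2 norm does not exceed c.

    If ||w||_2 > c, w is linearly scaled back onto the ball of radius c;
    otherwise it is returned unchanged. A smaller c means stronger
    regularization.
    """
    norm = np.linalg.norm(w)
    if norm > c:
        w = w * (c / norm)
    return w
```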
We show the relative effect of varying c on non-static CNN in Figure 5 , where we have fixed the dropout rate to 0.5; 3 is the baseline here (again, arbitrarily).", "cite_spans": [], "ref_spans": [ { "start": 415, "end": 423, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Effect of regularization", "sec_num": "4.7" }, { "text": "From Figures 4 and 5, one can see that non-zero dropout rates can help (though very little) at some points from 0.1 to 0.5, depending on datasets. But", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of regularization", "sec_num": "4.7" }, { "text": "Softplus Iden ReLU MR 81.28 (81.07, 81.52) 80.58 (80.17, 81.12) 81.30 (81.09, 81.52) 81.16 (80.81, 47.73) 46.95 (46.43, 47.45) 46.73 (46.24, 47.18) 47.13 (46.39, 85.85) 84.61 (84.19, 84.94) 85.26 (85.11, 85.45) 85.31 (85.93, 85.66) Subj 93.15 (92.93, 93.34) 92.43 (92.21, 92.61) 93.11 (92.92, 93.22) 93.13 (92.93, 93.23) TREC 91.18 (90.91, 91.47) 91.05 (90.82, 91.29) 91.11 (90.82, 91.34) 91.54 (91.17, 91.84) CR 84.28 (83.90, 85.11) 83.67 (83.16, 84.26) 84.55 (84.21, 84.69) 83.83 (83.18, 84.21) MPQA 89.48 (89.16, 89.84) 89.62 (89.45, 89.77) 89.57 (89.31, 89.88) 89.35 (88.88, 89.58 Figure 4 : Effect of dropout rate. The accuracy when the dropout rate is 0.9 on the Opi dataset is about 10% worse than baseline, and thus is not visible on the figure at this point.", "cite_spans": [ { "start": 14, "end": 35, "text": "ReLU MR 81.28 (81.07,", "ref_id": null }, { "start": 36, "end": 56, "text": "81.52) 80.58 (80.17,", "ref_id": null }, { "start": 57, "end": 77, "text": "81.12) 81.30 (81.09,", "ref_id": null }, { "start": 78, "end": 98, "text": "81.52) 81.16 (80.81,", "ref_id": null }, { "start": 99, "end": 119, "text": "47.73) 46.95 (46.43,", "ref_id": null }, { "start": 120, "end": 140, "text": "47.45) 46.73 (46.24,", "ref_id": null }, { "start": 141, "end": 161, "text": "47.18) 47.13 (46.39,", "ref_id": null }, { "start": 162, "end": 182, "text": "85.85) 84.61 (84.19,", "ref_id": null }, { "start": 183, "end": 203, "text": "84.94) 85.26 (85.11,", "ref_id": null }, { "start": 204, "end": 224, "text": "85.45) 85.31 (85.93,", "ref_id": null }, { "start": 225, "end": 250, "text": "85.66) Subj 93.15 (92.93,", "ref_id": null }, { "start": 251, "end": 271, "text": "93.34) 92.43 (92.21,", "ref_id": null }, { "start": 272, "end": 292, "text": "92.61) 93.11 (92.92,", "ref_id": null }, { "start": 293, "end": 313, "text": "93.22) 93.13 (92.93,", "ref_id": null }, { "start": 314, "end": 339, "text": "93.23) TREC 91.18 (90.91,", "ref_id": null }, { "start": 340, "end": 360, "text": "91.47) 91.05 (90.82,", "ref_id": null }, { "start": 361, "end": 381, "text": "91.29) 91.11 (90.82,", "ref_id": null }, { "start": 382, "end": 402, "text": "91.34) 91.54 (91.17,", "ref_id": null }, { "start": 403, "end": 426, "text": "91.84) CR 84.28 (83.90,", "ref_id": null }, { "start": 427, "end": 447, "text": "85.11) 83.67 (83.16,", "ref_id": null }, { "start": 448, "end": 468, "text": "84.26) 84.55 (84.21,", "ref_id": null }, { "start": 469, "end": 489, "text": "84.69) 83.83 (83.18,", "ref_id": null }, { "start": 490, "end": 515, "text": "84.21) MPQA 89.48 (89.16,", "ref_id": null }, { "start": 516, "end": 536, "text": "89.84) 89.62 (89.45,", "ref_id": null }, { "start": 537, "end": 557, "text": "89.77) 89.57 (89.31,", "ref_id": null }, { "start": 558, "end": 578, "text": "89.88) 89.35 (88.88,", "ref_id": null }, { "start": 579, "end": 584, "text": "89.58", "ref_id": null } ], "ref_spans": [ { "start": 585, "end": 
593, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Dataset tanh", "sec_num": null }, { "text": "imposing an l2 norm constraint generally does not improve performance much (except on Opi), and even adversely affects performance on at least one dataset (CR). We then also explored the effect of the dropout rate when increasing the number of feature maps. We increased the number of feature maps for each filter size from 100 to 500, and set the max l2 norm constraint to 3. The effect of dropout rate is shown in Fig. 6 . We see that the effect of dropout rate is almost the same as when the number of feature maps is 100, and it does not help much. But we observe that for the SST-1 dataset, a dropout rate of 0.7 actually helps. Referring to Fig. 3 , we can see that when the number of feature maps is larger than 100, it hurts the performance possibly due to overfitting, so it is reasonable that in this case dropout would mitigate this effect.", "cite_spans": [], "ref_spans": [ { "start": 400, "end": 406, "text": "Fig. 6", "ref_id": null }, { "start": 637, "end": 643, "text": "Fig. 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Dataset tanh", "sec_num": null }, { "text": "We also experimented with applying dropout only to the convolution layer, but still setting the max norm constraint on the classification layer to 3, keeping all other settings exactly the same. This means we randomly set elements of the sentence matrix to 0 during training with probability p, and then multiplied the sentence matrix by p at test time. Figure 6 : Effect of dropout rate when using 500 feature maps.", "cite_spans": [], "ref_spans": [ { "start": 299, "end": 307, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Dataset tanh", "sec_num": null }, { "text": "The effect of dropout rate on the convolution layer is shown in Fig. 7 . Again we see that dropout on the convolution layer helps little, and a large dropout rate dramatically hurts performance. To summarize, contrary to some of the existing literature (Srivastava et al., 2014) , we found that dropout had little beneficial effect on CNN performance. We attribute this observation to the fact that a one-layer CNN has a smaller number of parameters than multi-layer deep learning models. Another possible explanation is that using word embeddings helps to prevent overfitting (compared to bag-of-words based encodings). However, we are not advocating completely foregoing regularization. Practically, we suggest setting the dropout rate to a small value (0.0-0.5) and using a relatively large max norm constraint, while increasing the number of feature maps to see whether more features might help. When further increasing the number of feature maps seems to degrade performance, it is probably worth increasing the dropout rate. Figure 7 : Effect of dropout rate on the convolution layer (the accuracy when the dropout rate is 0.9 on the Opi dataset is not visible on the figure at this point, as in Fig. 4 ).", "cite_spans": [ { "start": 308, "end": 333, "text": "(Srivastava et al., 2014)", "ref_id": "BIBREF33" } ], "ref_spans": [ { "start": 121, "end": 127, "text": "Fig. 7", "ref_id": null }, { "start": 591, "end": 599, "text": "Figure 7", "ref_id": null }, { "start": 762, "end": 769, "text": "Fig. 4)", "ref_id": null } ], "eq_spans": [], "section": "Dataset tanh", "sec_num": null }, { "text": "We have conducted an extensive experimental analysis of CNNs for sentence classification. 
We conclude here by summarizing our main findings and deriving from them practical guidance for researchers and practitioners looking to use and deploy CNNs in real-world sentence classification scenarios. From our experimental analysis we draw several conclusions that we hope will guide future work and be useful for researchers new to using CNNs for sentence classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "\u2022 We find that, even when tuning them to the task at hand, the choice of input word vector representation (e.g., between word2vec and GloVe) has an impact on performance; however, different representations perform better for different tasks. At least for sentence classification, both seem to perform better than using one-hot vectors directly. Consider starting with the basic configuration described in Table 1 and using non-static word2vec or GloVe (a minimal implementation sketch of this baseline configuration is given below).", "cite_spans": [], "ref_spans": [ { "start": 404, "end": 411, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "\u2022 The filter region size can have a large effect on performance, and should be tuned. Perform a line-search over the single filter region size to find the 'best' single region size. A reasonable range might be 1\u223c10. However, for datasets with very long sentences like CR, it may be worth exploring larger filter region sizes. Once this 'best' region size is identified, it may be worth exploring combining multiple filters using region sizes near this single best size, given that empirically multiple 'good' region sizes always outperformed using only the single best region size. \u2022 1-max pooling uniformly outperforms other pooling strategies. \u2022 Consider different activation functions if possible: ReLU and tanh are the best overall candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "\u2022 Alter the number of feature maps for each filter region size from 100 to 600, and when this is being explored, use a small dropout rate (0.0-0.5) and a large max norm constraint. Pay attention to whether the best value found is near the border of the range (Bengio, 2012) . If the best value is near 600, it may be worth trying larger values. \u2022 When assessing the performance of a model (or a particular configuration thereof), it is imperative to consider variance. Therefore, replications of the cross-validation procedure should be performed, and variances and ranges should be considered.", "cite_spans": [ { "start": 256, "end": 270, "text": "(Bengio, 2012)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "Of course, the above suggestions are applicable only to datasets comprising sentences with similar properties to those considered in this work. And there may be examples that run counter to our findings here. 
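To make the recommended starting point concrete, the sketch below wires up the baseline configuration of Table 1. It is written in PyTorch purely for illustration; the original work does not prescribe a toolkit, and the class and argument names are our own. Initializing the embedding layer from pre-trained word2vec or GloVe vectors, applying the max norm constraint to the softmax weights during training, and the training loop that minimizes categorical cross-entropy are left to the surrounding code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OneLayerCNN(nn.Module):
    """One-layer CNN for sentence classification with the Table 1 baseline:
    filter region sizes (3, 4, 5), 100 feature maps per size, ReLU,
    1-max pooling, and dropout 0.5 before the final (softmax) layer."""

    def __init__(self, vocab_size, embed_dim=300, num_classes=2,
                 region_sizes=(3, 4, 5), feature_maps=100, dropout=0.5):
        super().__init__()
        # In practice, initialize from word2vec/GloVe and keep trainable ('non-static').
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            [nn.Conv2d(1, feature_maps, kernel_size=(h, embed_dim)) for h in region_sizes]
        )
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(feature_maps * len(region_sizes), num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, sentence_length); sentences must be at least as
        # long as the largest region size (pad shorter ones).
        x = self.embedding(token_ids).unsqueeze(1)                      # (batch, 1, s, d)
        maps = [F.relu(conv(x)).squeeze(3) for conv in self.convs]      # (batch, maps, s-h+1)
        pooled = [F.max_pool1d(m, m.size(2)).squeeze(2) for m in maps]  # 1-max pooling
        features = self.dropout(torch.cat(pooled, dim=1))               # penultimate feature vector
        return self.fc(features)  # logits; softmax/cross-entropy applied in the loss
```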
Nonetheless, we believe these suggestions are likely to provide a reasonable starting point for researchers or practitioners looking to apply a simple one-layer CNN to real world sentence classification tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "We recognize that manual and grid search over hyperparameters is sub-optimal, and note that our suggestions here may also inform hyperparameter ranges to explore in random search or Bayesian optimization frameworks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Learning deep architectures for ai", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2009, "venue": "Machine Learning", "volume": "2", "issue": "", "pages": "1--127", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio. 2009. Learning deep architectures for ai. Foundations and trends in Machine Learning, 2(1):1-127.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Practical recommendations for gradient-based training of deep architectures", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2012, "venue": "Neural Networks: Tricks of the Trade", "volume": "", "issue": "", "pages": "437--478", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio. 2012. Practical recommendations for gradient-based training of deep architectures. In Neural Networks: Tricks of the Trade, pages 437- 478. Springer.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A neural probabilistic language model", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "R\u00e9jean", "middle": [], "last": "Ducharme", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Janvin", "suffix": "" } ], "year": 2003, "venue": "The Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "1137--1155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic lan- guage model. The Journal of Machine Learning Re- search, 3:1137-1155.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures", "authors": [ { "first": "James", "middle": [], "last": "Bergstra", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Yamins", "suffix": "" }, { "first": "David", "middle": [ "Daniel" ], "last": "Cox", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Bergstra, Daniel Yamins, and David Daniel Cox. 2013. 
Making a science of model search: Hyperpa- rameter optimization in hundreds of dimensions for vision architectures.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning mid-level features for recognition", "authors": [ { "first": "Y-Lan", "middle": [], "last": "Boureau", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Bach", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Ponce", "suffix": "" } ], "year": 2010, "venue": "Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on", "volume": "", "issue": "", "pages": "2559--2566", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y-Lan Boureau, Francis Bach, Yann LeCun, and Jean Ponce. 2010a. Learning mid-level features for recognition. In Computer Vision and Pattern Recog- nition (CVPR), 2010 IEEE Conference on, pages 2559-2566. IEEE.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A theoretical analysis of feature pooling in visual recognition", "authors": [ { "first": "Y-Lan", "middle": [], "last": "Boureau", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Ponce", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 27th International Conference on Machine Learning (ICML-10)", "volume": "", "issue": "", "pages": "111--118", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y-Lan Boureau, Jean Ponce, and Yann LeCun. 2010b. A theoretical analysis of feature pooling in visual recognition. In Proceedings of the 27th Interna- tional Conference on Machine Learning (ICML-10), pages 111-118.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Ask the locals: multi-way local pooling for image recognition", "authors": [ { "first": "Y-Lan", "middle": [], "last": "Boureau", "suffix": "" }, { "first": "Nicolas", "middle": [ "Le" ], "last": "Roux", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Bach", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Ponce", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" } ], "year": 2011, "venue": "Computer Vision (ICCV), 2011 IEEE International Conference on", "volume": "", "issue": "", "pages": "2651--2658", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y-Lan Boureau, Nicolas Le Roux, Francis Bach, Jean Ponce, and Yann LeCun. 2011. Ask the locals: multi-way local pooling for image recognition. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 2651-2658. IEEE.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The effects of hyperparameters on sgd training of neural networks", "authors": [ { "first": "M", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "", "middle": [], "last": "Breuel", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.02788" ] }, "num": null, "urls": [], "raw_text": "Thomas M Breuel. 2015. The effects of hyperparam- eters on sgd training of neural networks. 
arXiv preprint arXiv:1508.02788.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A fast and accurate dependency parser using neural networks", "authors": [ { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "1", "issue": "", "pages": "740--750", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural net- works. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), volume 1, pages 740-750.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "An analysis of single-layer networks in unsupervised feature learning", "authors": [ { "first": "Adam", "middle": [], "last": "Coates", "suffix": "" }, { "first": "Y", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Honglak", "middle": [], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2011, "venue": "International conference on artificial intelligence and statistics", "volume": "", "issue": "", "pages": "215--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Coates, Andrew Y Ng, and Honglak Lee. 2011. An analysis of single-layer networks in unsuper- vised feature learning. In International conference on artificial intelligence and statistics, pages 215- 223.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 25th international conference on Machine learning", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Pro- ceedings of the 25th international conference on Machine learning, pages 160-167. ACM.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "The Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. 
The Journal of Machine Learning Re- search, 12:2493-2537.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Incorporating second-order functional knowledge for better option pricing", "authors": [ { "first": "Charles", "middle": [], "last": "Dugas", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "B\u00e9lisle", "suffix": "" }, { "first": "Claude", "middle": [], "last": "Nadeau", "suffix": "" }, { "first": "Ren\u00e9", "middle": [], "last": "Garcia", "suffix": "" } ], "year": 2001, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "472--478", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles Dugas, Yoshua Bengio, Fran\u00e7ois B\u00e9lisle, Claude Nadeau, and Ren\u00e9 Garcia. 2001. Incorpo- rating second-order functional knowledge for bet- ter option pricing. Advances in Neural Information Processing Systems, pages 472-478.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Opinosis: a graph-based approach to abstractive summarization of highly redundant opinions", "authors": [ { "first": "Kavita", "middle": [], "last": "Ganesan", "suffix": "" }, { "first": "Chengxiang", "middle": [], "last": "Zhai", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "340--348", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. Opinosis: a graph-based approach to abstrac- tive summarization of highly redundant opinions. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 340-348. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A primer on neural network models for natural language processing", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1510.00726" ] }, "num": null, "urls": [], "raw_text": "Yoav Goldberg. 2015. A primer on neural network models for natural language processing. arXiv preprint arXiv:1510.00726.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Improving neural networks by preventing coadaptation of feature detectors", "authors": [ { "first": "Nitish", "middle": [], "last": "Geoffrey E Hinton", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ruslan R", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1207.0580" ] }, "num": null, "urls": [], "raw_text": "Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing co- adaptation of feature detectors. 
arXiv preprint arXiv:1207.0580.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Mining and summarizing customer reviews", "authors": [ { "first": "Minqing", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "168--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minqing Hu and Bing Liu. 2004. Mining and summa- rizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowl- edge discovery and data mining, pages 168-177. ACM.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Deep unordered composition rivals syntactic methods for text classification", "authors": [ { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Varun", "middle": [], "last": "Manjunatha", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum\u00e9 III. 2015. Deep unordered compo- sition rivals syntactic methods for text classification.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Text categorization with support vector machines: Learning with many relevant features", "authors": [ { "first": "Thorsten", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Joachims. 1998. Text categorization with sup- port vector machines: Learning with many relevant features. Springer.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Effective use of word order for text categorization with convolutional neural networks", "authors": [ { "first": "Rie", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.1058" ] }, "num": null, "urls": [], "raw_text": "Rie Johnson and Tong Zhang. 2014. Effective use of word order for text categorization with convolutional neural networks. arXiv preprint arXiv:1412.1058.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Semi-supervised convolutional neural networks for text categorization via region embedding", "authors": [ { "first": "Rie", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2015, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "919--927", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rie Johnson and Tong Zhang. 2015. Semi-supervised convolutional neural networks for text categoriza- tion via region embedding. 
In Advances in neural information processing systems, pages 919-927.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A convolutional neural network for modelling sentences", "authors": [ { "first": "Nal", "middle": [], "last": "Kalchbrenner", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "655--665", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nal Kalchbrenner, Edward Grefenstette, and Phil Blun- som. 2014. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 655-665, Baltimore, Maryland. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1408.5882" ] }, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural net- works for sentence classification. arXiv preprint arXiv:1408.5882.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Imagenet classification with deep convolutional neural networks", "authors": [ { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2012, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "1097--1105", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hin- ton. 2012. Imagenet classification with deep con- volutional neural networks. In Advances in neural information processing systems, pages 1097-1105.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Deep learning", "authors": [ { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2015, "venue": "Nature", "volume": "521", "issue": "7553", "pages": "436--444", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature, 521(7553):436-444.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Learning question classifiers", "authors": [ { "first": "Xin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 19th international conference on Computational linguistics", "volume": "1", "issue": "", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xin Li and Dan Roth. 2002. Learning question clas- sifiers. In Proceedings of the 19th international conference on Computational linguistics-Volume 1, pages 1-7. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Rectifier nonlinearities improve neural network acoustic models", "authors": [ { "first": "L", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "", "middle": [], "last": "Maas", "suffix": "" }, { "first": "Y", "middle": [], "last": "Awni", "suffix": "" }, { "first": "Andrew Y", "middle": [], "last": "Hannun", "suffix": "" }, { "first": "", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2013, "venue": "Proc. ICML", "volume": "30", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. 2013. Rectifier nonlinearities improve neural net- work acoustic models. In Proc. ICML, volume 30.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang and Lillian Lee. 2005. Seeing stars: Exploit- ing class relationships for sentiment categorization with respect to rating scales. In Proceedings of the ACL.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "An effective neural network model for graph-based dependency parsing", "authors": [ { "first": "Wenzhe", "middle": [], "last": "Pei", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Ge", "suffix": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2015, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenzhe Pei, Tao Ge, and Baobao Chang. 2015. An effective neural network model for graph-based de- pendency parsing. In Proc. of ACL.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Empiricial Methods in Natural Language Processing", "volume": "12", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D Manning. 2014. 
Glove: Global vectors for word representation. Proceedings of the Empiricial Methods in Natural Language Processing (EMNLP 2014), 12:1532-1543.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Learning representations by backpropagating errors", "authors": [ { "first": "Geoffrey", "middle": [ "E" ], "last": "David E Rumelhart", "suffix": "" }, { "first": "Ronald J", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "", "middle": [], "last": "Williams", "suffix": "" } ], "year": 1988, "venue": "Cognitive modeling", "volume": "5", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. 1988. Learning representations by back- propagating errors. Cognitive modeling, 5:3.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "Y", "middle": [], "last": "Jean", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Wu", "suffix": "" }, { "first": "", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Y", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the conference on empirical methods in natural language processing", "volume": "1631", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceedings of the conference on empirical methods in natural language processing (EMNLP), volume 1631, page 1642. Citeseer.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Dropout: A simple way to prevent neural networks from overfitting", "authors": [ { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2014, "venue": "The Journal of Machine Learning Research", "volume": "15", "issue": "1", "pages": "1929--1958", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. 
The Journal of Machine Learning Research, 15(1):1929-1958.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Humans require context to infer ironic intent (so computers probably do, too)", "authors": [ { "first": "C", "middle": [], "last": "Byron", "suffix": "" }, { "first": "Laura Kertz Do Kook", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Choe", "suffix": "" }, { "first": "", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "512--516", "other_ids": {}, "num": null, "urls": [], "raw_text": "Byron C Wallace, Laura Kertz Do Kook Choe, and Eu- gene Charniak. 2014. Humans require context to infer ironic intent (so computers probably do, too). In Proceedings of the Annual Meeting of the Asso- ciation for Computational Linguistics (ACL), pages 512-516.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Class imbalance, redux", "authors": [ { "first": "C", "middle": [], "last": "Byron", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "Carla", "middle": [ "E" ], "last": "Small", "suffix": "" }, { "first": "Thomas", "middle": [ "A" ], "last": "Brodley", "suffix": "" }, { "first": "", "middle": [], "last": "Trikalinos", "suffix": "" } ], "year": 2011, "venue": "Data Mining (ICDM), 2011 IEEE 11th International Conference on", "volume": "", "issue": "", "pages": "754--763", "other_ids": {}, "num": null, "urls": [], "raw_text": "Byron C Wallace, Kevin Small, Carla E Brodley, and Thomas A Trikalinos. 2011. Class imbalance, re- dux. In Data Mining (ICDM), 2011 IEEE 11th In- ternational Conference on, pages 754-763. IEEE.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Semantic clustering and convolutional neural network for short text categorization", "authors": [ { "first": "Peng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jiaming", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Chenglin", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Fangyuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Hongwei", "middle": [], "last": "Hao", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "2", "issue": "", "pages": "352--357", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peng Wang, Jiaming Xu, Bo Xu, Chenglin Liu, Heng Zhang, Fangyuan Wang, and Hongwei Hao. 2015. Semantic clustering and convolutional neural net- work for short text categorization. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 352-357, Beijing, China. Association for Computational Linguistics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Annotating expressions of opinions and emotions in language. 
Language resources and evaluation", "authors": [ { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2005, "venue": "", "volume": "39", "issue": "", "pages": "165--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emo- tions in language. Language resources and evalua- tion, 39(2-3):165-210.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Bayesian optimization of text representations", "authors": [ { "first": "Dani", "middle": [], "last": "Yogatama", "suffix": "" }, { "first": "A", "middle": [], "last": "Noah", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1503.00693" ] }, "num": null, "urls": [], "raw_text": "Dani Yogatama and Noah A Smith. 2015. Bayesian optimization of text representations. arXiv preprint arXiv:1503.00693.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Exploiting domain knowledge via grouped weight sharing with application to text categorization", "authors": [ { "first": "Ye", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Lease", "suffix": "" }, { "first": "Byron C", "middle": [], "last": "Wallace", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1702.02535" ] }, "num": null, "urls": [], "raw_text": "Ye Zhang, Matthew Lease, and Byron C Wallace. 2017. Exploiting domain knowledge via grouped weight sharing with application to text categoriza- tion. arXiv preprint arXiv:1702.02535.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Mgnc-cnn: A simple approach to exploiting multiple word embeddings for sentence classification", "authors": [ { "first": "Ye", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Roller", "suffix": "" }, { "first": "Byron", "middle": [], "last": "Wallace", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1603.00968" ] }, "num": null, "urls": [], "raw_text": "Ye Zhang, Stephen Roller, and Byron Wallace. 2016. Mgnc-cnn: A simple approach to exploiting mul- tiple word embeddings for sentence classification. 
arXiv preprint arXiv:1603.00968.", "links": null } }, "ref_entries": { "FIGREF1": { "num": null, "type_str": "figure", "text": "Effect of the number of feature maps.", "uris": null }, "FIGREF2": { "num": null, "type_str": "figure", "text": "Figure 5: Effect of the l2 norm constraint on weight vectors.", "uris": null }, "TABREF0": { "num": null, "text": "", "html": null, "type_str": "table", "content": "" }, "TABREF2": { "num": null, "text": "Dataset Non-static word2vec-CNN Non-static GloVe-CNN Non-static GloVe+word2vecCNN MR 81.24 (80.69, 81.56) 81.03 (80.68,81.48) 81.02 (80.75,81.32) SST-1 47.08 (46.42,48.01) 45.65 (45.09,45.94) 45.98 (45.49,46.65) SST-2 85.49 (85.03, 85.90) 85.22 (85.04,85.48) 85.45 (85.03,85.82) Subj 93.20 (92.97, 93.45) 93.64 (93.51,93.77) 93.66 (93.39,93.87) TREC 91.54 (91.15, 91.92) 90.38 (90.19,90.59) 91.37 (91.13,91.62) CR 83.92 (82.95, 84.56) 84.33 (84.00,84.67) 84.65 (84.21,84.96) MPQA 89.32 (88.84, 89.73) 89.57 (89.31,89.78) 89.55 (89.22,89.88)", "html": null, "type_str": "table", "content": "
Opi      64.93 (64.23, 65.58)    65.68 (65.29, 66.19)    65.65 (65.15, 65.98)
Irony    67.07 (65.60, 69.00)    67.20 (66.45, 67.96)    67.11 (66.66, 68.50)
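The third results column ("GloVe+word2vec") refers to the variant that combines both sets of pretrained vectors. A minimal sketch of that input construction, under the assumption that the combination is a per-token concatenation (as the column name suggests) and that the pretrained vectors have already been loaded into plain Python dicts `w2v` and `glove`; the 300-d dimensions and the uniform out-of-vocabulary initialization are illustrative, not prescribed by the table:

```python
import numpy as np

def sentence_matrix(tokens, w2v, glove, dim_w2v=300, dim_glove=300, rng=np.random):
    """Build one sentence's input matrix by concatenating, per token, its
    word2vec and GloVe vectors; unknown tokens get small random vectors."""
    rows = []
    for tok in tokens:
        v1 = w2v.get(tok, rng.uniform(-0.25, 0.25, dim_w2v))
        v2 = glove.get(tok, rng.uniform(-0.25, 0.25, dim_glove))
        rows.append(np.concatenate([v1, v2]))
    return np.stack(rows)  # shape: (sentence length, dim_w2v + dim_glove)

# Toy usage: each sentence becomes a (length x 600) matrix for the convolutional filters to slide over.
w2v = {"good": np.full(300, 0.1)}
glove = {"good": np.full(300, 0.2)}
print(sentence_matrix("a good movie".split(), w2v, glove).shape)  # (3, 600)
```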
" }, "TABREF3": { "num": null, "text": "Performance using non-static word2vec-CNN, non-static GloVe-CNN, and non-static GloVe+word2vec CNN, respectively. Each cell reports the mean (min, max) of summary performance measures calculated over multiple runs of 10-fold cross-validation. We will use this format for all tables involving replications figuration, and the one-hot vector is fixed during training. Compared to using embeddings as input to the CNN, we found the one-hot approach to perform poorly for sentence classification tasks.", "html": null, "type_str": "table", "content": "
4.3 Effect of filter region size
Region size    MR accuracy (%), mean (min, max)
1              77.85 (77.47, 77.97)
3              80.48 (80.26, 80.65)
5              81.13 (80.96, 81.32)
7              81.65 (81.45, 81.85)
10             81.43 (81.28, 81.75)
15             81.26 (81.01, 81.43)
20             81.06 (80.87, 81.30)
25             80.91 (80.73, 81.10)
30             80.91 (80.72, 81.05)
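The region size swept here is the height of the convolution filter slid over the sentence matrix: each filter spans the full embedding dimension, its output passes through a nonlinearity, and the resulting feature map is max-pooled over time. A minimal PyTorch sketch of this one-layer architecture, supporting one or several region sizes in parallel (the class name, 100 feature maps, ReLU, and the 0.5 dropout default are illustrative choices, not the authors' exact implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OneLayerCNN(nn.Module):
    """One-layer sentence CNN: parallel convolutions with different region
    (filter) sizes, ReLU, max-over-time pooling, dropout, then a linear classifier."""
    def __init__(self, emb_dim=300, region_sizes=(3, 4, 5), n_feature_maps=100,
                 n_classes=2, dropout=0.5):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(1, n_feature_maps, kernel_size=(r, emb_dim)) for r in region_sizes
        )
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(n_feature_maps * len(region_sizes), n_classes)

    def forward(self, x):                      # x: (batch, sentence length, emb_dim)
        x = x.unsqueeze(1)                     # add channel dim: (batch, 1, length, emb_dim)
        feats = []
        for conv in self.convs:
            c = F.relu(conv(x)).squeeze(3)     # (batch, n_feature_maps, length - r + 1)
            feats.append(F.max_pool1d(c, c.size(2)).squeeze(2))  # max over time
        h = self.dropout(torch.cat(feats, dim=1))
        return self.fc(h)                      # class logits

# Example: a batch of 8 sentences, 50 tokens each, 300-d embeddings.
logits = OneLayerCNN()(torch.randn(8, 50, 300))
print(logits.shape)  # torch.Size([8, 2])
```

Passing a single region size, e.g. `region_sizes=(7,)`, corresponds to the single-region settings in the table above; passing several at once corresponds to the combined-region configurations reported for TREC below.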
" }, "TABREF4": { "num": null, "text": "Effect of single filter region size. Due to space constraints, we report results for only one dataset here, but these are generally illustrative.", "html": null, "type_str": "table", "content": "" }, "TABREF6": { "num": null, "text": "", "html": null, "type_str": "table", "content": "
Effect of filter region size with several region sizes, using non-static word2vec-CNN on the TREC dataset.
" }, "TABREF8": { "num": null, "text": "", "html": null, "type_str": "table", "content": "" }, "TABREF10": { "num": null, "text": "Performance of different activation functions", "html": null, "type_str": "table", "content": "
[Figure: change in accuracy (%) as a function of dropout rate (None, 0.0, 0.1, 0.3, 0.5, 0.7, 0.9), shown for the MR, SST-1, SST-2, Subj, TREC, CR, MPQA, Opi, and Irony datasets.]
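The caption of this last entry refers to swapping the nonlinearity applied to the convolution output, while the placeholder above records a companion sweep over the dropout rate applied to the penultimate feature vector. A short sketch of how such an activation sweep can be wired in; the candidate set and tensor sizes are illustrative stand-ins rather than the exact configurations evaluated:

```python
import torch
import torch.nn.functional as F

# Candidate activation functions applied to the convolution output before
# max-over-time pooling; this particular set is illustrative of the sweep.
activations = {
    "relu": F.relu,
    "tanh": torch.tanh,
    "sigmoid": torch.sigmoid,
    "softplus": F.softplus,
    "identity": lambda t: t,
}

conv_out = torch.randn(8, 100, 48)  # (batch, feature maps, positions), arbitrary sizes
for name, act in activations.items():
    # Max-over-time pooling after each candidate nonlinearity.
    pooled = F.max_pool1d(act(conv_out), kernel_size=conv_out.size(2)).squeeze(2)
    print(name, tuple(pooled.shape))  # each yields an (8, 100) feature vector per sentence
```

Keeping the sweep outside the model definition keeps every run identical except for the single hyperparameter being probed, which is the pattern used throughout these tables.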
" } } } }