{
"paper_id": "I17-1034",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:39:21.925044Z"
},
"title": "Text Sentiment Analysis based on Fusion of Structural Information and Serialization Information",
"authors": [
{
"first": "Ling",
"middle": [],
"last": "Gan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Chongqing University of Posts and Telecommunications",
"location": {
"postCode": "400065",
"settlement": "Chongqing",
"country": "China"
}
},
"email": "ganling@cqupt.edu.cn"
},
{
"first": "Houyu",
"middle": [],
"last": "Gong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Chongqing University of Posts and Telecommunications",
"location": {
"postCode": "400065",
"settlement": "Chongqing",
"country": "China"
}
},
"email": "gonghouyub103@163.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Tree-structured Long Short-Term Memory (Tree-LSTM) has been proved to be an effective method in the sentiment analysis task. It extracts structural information on text, and uses Long Short-Term Memory (LSTM) cell to prevent gradient vanish. However, though combining the LSTM cell, it is still a kind of model that extracts the structural information and almost not extracts serialization information. In this paper, we propose three new models in order to combine those two kinds of information: the structural information generated by the Constituency Tree-LSTM and the serialization information generated by Long-Short Term Memory neural network. Our experiments show that combining those two kinds of information can give contributes to the performance of the sentiment analysis task compared with the single Constituency Tree-LSTM model and the LSTM model.",
"pdf_parse": {
"paper_id": "I17-1034",
"_pdf_hash": "",
"abstract": [
{
"text": "Tree-structured Long Short-Term Memory (Tree-LSTM) has been proved to be an effective method in the sentiment analysis task. It extracts structural information on text, and uses Long Short-Term Memory (LSTM) cell to prevent gradient vanish. However, though combining the LSTM cell, it is still a kind of model that extracts the structural information and almost not extracts serialization information. In this paper, we propose three new models in order to combine those two kinds of information: the structural information generated by the Constituency Tree-LSTM and the serialization information generated by Long-Short Term Memory neural network. Our experiments show that combining those two kinds of information can give contributes to the performance of the sentiment analysis task compared with the single Constituency Tree-LSTM model and the LSTM model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Text sentiment analysis, namely Opinion mining, is an important research direction in the field of Natural Language Processing (NLP). It aims to extract the author's subjective information from text and provide useful values for us. In recent years, there were more and more researchers paying attention to the study of text sentiment analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Up to date, a variety of methods have been developed for improving the performance of sentiment analysis models. The distributed representation for words has been proposed in 2003 (Bengio et al., 2003) . This model trained by two kinds of three-layer neural networks generates vectors to represent words. Glove, which is the improvement of the model mentioned above, has been proposed in 2014 (Pennington et al., 2014) . The improved representation of words gives contribution to the research of NLP tasks including sentiment analysis, and are used as the input of sentiment analysis models. There are several kinds of deep learning methods to extract text features. Sequential models such as Recurrent Neural Network (Schmidhuber, 1990) , Bidirectional Recurrent Neural Networks (Member et al., 1997) , Long Short-Term Memory (Hochreiter and Schmidhuber, 2012) and Gated Recurrent Unit (Cho et al., 2014) mainly extract serialization information of text. Multi-layer sequential models have also been proposed for sentiment analysis (Wang et al., 2016) (Tang et al., 2015) (He et al., 2016) . Treestructured models extract structural information. The first tree-structured model named Recursive Neural Network has been proposed in 2012 (Socher et al., 2012b) , followed by more models such as Matrix-Vector Recursive Neural Network (Socher et al., 2012a) and Recursive Neural Tensor Network (Socher et al., 2013) . In 2015, Tree-structured Long Short-Term Memory Neural Network (Tree-LSTM), which combines the LST-M cell and tree-structured models, has been proposed and it outperforms the traditional LSTM and tree-structured neural networks (Le and Zuidema, 2015) (Tai et al., 2015) (Zhu et al., 2015) . Different from the traditional tree-structured model, Tree-LSTM uses the LSTM cell to control the information from bottom to top so that it can effectively prevent the vanishing gradient problem.",
"cite_spans": [
{
"start": 180,
"end": 201,
"text": "(Bengio et al., 2003)",
"ref_id": null
},
{
"start": 393,
"end": 418,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF8"
},
{
"start": 718,
"end": 737,
"text": "(Schmidhuber, 1990)",
"ref_id": "BIBREF9"
},
{
"start": 780,
"end": 801,
"text": "(Member et al., 1997)",
"ref_id": "BIBREF7"
},
{
"start": 827,
"end": 861,
"text": "(Hochreiter and Schmidhuber, 2012)",
"ref_id": "BIBREF3"
},
{
"start": 887,
"end": 905,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF1"
},
{
"start": 1033,
"end": 1052,
"text": "(Wang et al., 2016)",
"ref_id": "BIBREF16"
},
{
"start": 1053,
"end": 1072,
"text": "(Tang et al., 2015)",
"ref_id": "BIBREF14"
},
{
"start": 1073,
"end": 1090,
"text": "(He et al., 2016)",
"ref_id": "BIBREF2"
},
{
"start": 1236,
"end": 1258,
"text": "(Socher et al., 2012b)",
"ref_id": "BIBREF12"
},
{
"start": 1332,
"end": 1354,
"text": "(Socher et al., 2012a)",
"ref_id": "BIBREF11"
},
{
"start": 1391,
"end": 1412,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF10"
},
{
"start": 1643,
"end": 1665,
"text": "(Le and Zuidema, 2015)",
"ref_id": "BIBREF5"
},
{
"start": 1666,
"end": 1684,
"text": "(Tai et al., 2015)",
"ref_id": "BIBREF13"
},
{
"start": 1685,
"end": 1703,
"text": "(Zhu et al., 2015)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As mentioned above, though combining the L-STM cell, Tree-LSTM does not really combine the structural information and serialization information. In this paper, we introduce three models: Tree-Composition LSTM (TC-LSTM), Leaf-Tree LSTM (LT-LSTM) and Leaf-Composition-Tree L-STM (LCT-LSTM). Those models combine those two kinds of information and experiments show that they perform better than the traditional LSTM and the Constituency Tree-LSTM model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is organized as follows: Section 2 introduces the LSTM and Tree-LSTM model which are related to our work. Section 3 introduces the models proposed in this paper. Experimental results are shown in Section 4 and in Section 5 we give the conclusions and future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recurrent Neural Network (RNN) (Schmidhuber, 1990 ) encodes text information according to time. Giving a sentence, its words are encoded by the model in chronological order. For example, x t represents the vector of input word at time step t, the hidden unit of time step t can be calculated as follows:",
"cite_spans": [
{
"start": 31,
"end": 49,
"text": "(Schmidhuber, 1990",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Long Short-Term Memory",
"sec_num": "2"
},
{
"text": "h t = tanh(W x t + U h t\u22121 + b).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Long Short-Term Memory",
"sec_num": "2"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Long Short-Term Memory",
"sec_num": "2"
},
{
"text": "h t represents the hidden layer at time step t, h t\u22121 represents the hidden state at time step t \u2212 1, W is the weight matrix of the input layer, U is the weight matrix between h t\u22121 and h t , b represents the bias and tanh is the activation function which can normalize output information:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Long Short-Term Memory",
"sec_num": "2"
},
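To make Eqs. (1)-(3) concrete, here is a minimal NumPy sketch of one RNN step. The dimensions, random initialization, and five-way output are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of Eqs. (1)-(3): h_t = tanh(W x_t + U h_{t-1} + b), o_t = softmax(V h_t + b_o).
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

d_in, d_hid, n_classes = 300, 150, 5            # assumed sizes, for illustration only
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(d_hid, d_in))   # input-to-hidden weights
U = rng.normal(scale=0.1, size=(d_hid, d_hid))  # hidden-to-hidden (recurrent) weights
V = rng.normal(scale=0.1, size=(n_classes, d_hid))
b, b_o = np.zeros(d_hid), np.zeros(n_classes)

def rnn_step(x_t, h_prev):
    """One time step of the vanilla RNN."""
    h_t = np.tanh(W @ x_t + U @ h_prev + b)
    o_t = softmax(V @ h_t + b_o)
    return h_t, o_t

# Run over a toy "sentence" of four random word vectors; the final h is the sentence feature.
h = np.zeros(d_hid)
for x in rng.normal(size=(4, d_in)):
    h, o = rnn_step(x, h)
```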
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "tanh(x) = sinh(x) cosh(x) = e x \u2212 e \u2212x e x + e \u2212x .",
"eq_num": "(2)"
}
],
"section": "Background 2.1 Long Short-Term Memory",
"sec_num": "2"
},
{
"text": "At each time step, the hidden layer produces an output layer:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Long Short-Term Memory",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "o t = sof tmax(V h t + b).",
"eq_num": "(3)"
}
],
"section": "Background 2.1 Long Short-Term Memory",
"sec_num": "2"
},
{
"text": "Usually, the output of the final time step can be used to represent the feature of a sentence. Then, after forward propagation, the weights of model are trained by the backward propagation. Traditional RNN model has the problem of gradient vanish. Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 2012) Neural Network has effectively solved the problem. The model has four gates which help to selectively forget or remember information. A memory cell has also been added to memory information transmitted over time steps. The information calculated by an LSTM unit can be shown as follows (Tai et al., 2015) :",
"cite_spans": [
{
"start": 278,
"end": 312,
"text": "(Hochreiter and Schmidhuber, 2012)",
"ref_id": "BIBREF3"
},
{
"start": 599,
"end": 617,
"text": "(Tai et al., 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Long Short-Term Memory",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "i t = \u03c3(W (i) x t + U i h t\u22121 + b i ), (4) f t = \u03c3(W (f ) x t + U f h t\u22121 + b f ),",
"eq_num": "(5)"
}
],
"section": "Background 2.1 Long Short-Term Memory",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "o t = \u03c3(W (o) x t + U o h t\u22121 + b o ),",
"eq_num": "(6)"
}
],
"section": "Background 2.1 Long Short-Term Memory",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "u t = tanh(W (u) x t + U u h t\u22121 + b u ), (7) c t = i t * u t + f t * c t\u22121 ,",
"eq_num": "(8)"
}
],
"section": "Background 2.1 Long Short-Term Memory",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h t = o t * tanh(c t ).",
"eq_num": "(9)"
}
],
"section": "Background 2.1 Long Short-Term Memory",
"sec_num": "2"
},
{
"text": "Here, i t , f t , u t , o t denote the four gates, c t is the memory cell, * represents element-wise multiplication. Intuitively, input gate (i t ) and update gate (u t ) denote how much the memory cell update information, forget gate (f t ) determines how much the memory cell forget history information and output gate (o t ) controls how much the hidden unit get information from the cell. \u03c3 represents the sigmoid function. The weights of different layers are different but they are shared at each time step in the same layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Long Short-Term Memory",
"sec_num": "2"
},
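As a companion to Eqs. (4)-(9), the following NumPy sketch computes one LSTM step. Weight shapes and initialization are illustrative assumptions and are not taken from the paper.

```python
# Sketch of one LSTM step (Eqs. (4)-(9)); the four gates share the x_t / h_{t-1} inputs.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d_in, d_hid = 300, 150                                   # assumed sizes
rng = np.random.default_rng(0)
W = {g: rng.normal(scale=0.1, size=(d_hid, d_in)) for g in "ifou"}   # input weights per gate
U = {g: rng.normal(scale=0.1, size=(d_hid, d_hid)) for g in "ifou"}  # recurrent weights per gate
b = {g: np.zeros(d_hid) for g in "ifou"}

def lstm_step(x_t, h_prev, c_prev):
    i = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])   # input gate,  Eq. (4)
    f = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])   # forget gate, Eq. (5)
    o = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])   # output gate, Eq. (6)
    u = np.tanh(W["u"] @ x_t + U["u"] @ h_prev + b["u"])   # candidate,   Eq. (7)
    c = i * u + f * c_prev                                 # memory cell, Eq. (8)
    h = o * np.tanh(c)                                     # hidden state, Eq. (9)
    return h, c

# Example: one step from the zero state over a random word vector.
h, c = lstm_step(rng.normal(size=d_in), np.zeros(d_hid), np.zeros(d_hid))
```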
{
"text": "Dependency Tree-LSTM and Constituency Tree-LSTM are two types of Tree-LSTM structures. We discuss the latter because it achieves a better performance in the sentiment analysis task (Tai et al., 2015) . Constituency Tree-LSTM includes three types of layers. Input layer includes the leaf nodes, it consists of the words in the sentence, each word is represented by a vector. Composition layer acts as the hidden layer which composes the information flowing from the leaf nodes to the root node. Each composition unit can be seen as the structural feature of its leaf nodes. The final composition node (root node) represents the structural feature of the whole sentence, it is the input of output layer. LSTM cell is used to control the information flowed bottom-up. Different from the cell in sequential models, the hidden information of a composition node comes from their two child nodes:",
"cite_spans": [
{
"start": 181,
"end": 199,
"text": "(Tai et al., 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-LSTM",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "i j = \u03c3(W (i) x j + N \u2211 l=1 U (i) l h jl + b (i) ),",
"eq_num": "(10)"
}
],
"section": "Tree-LSTM",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f jk = \u03c3(W (f ) x j + N \u2211 l=1 U (f ) kl h jl + b (f ) ), (11) o j = \u03c3(W (o) x j + N \u2211 l=1 U (o) l h jl + b (o) ),",
"eq_num": "(12)"
}
],
"section": "Tree-LSTM",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "u j = tanh(W (u) x j + N \u2211 l=1 U (u) l h jl + b (u) ), (13) c j = i j * u j + N \u2211 l=1 f jl * c jl ,",
"eq_num": "(14)"
}
],
"section": "Tree-LSTM",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h j = o j * tanh(c j ).",
"eq_num": "(15)"
}
],
"section": "Tree-LSTM",
"sec_num": "2.2"
},
{
"text": "It is worth noting that, in the input layer, the information composing the gates only include the input words (x j ) but do not have hidden information (h jl ). In the composition layer and output layer, only hidden information from two sub nodes participates in the construction of gates. The structure of Constituency Tree-LSTM model is shown in Figure 1 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 348,
"end": 356,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Tree-LSTM",
"sec_num": "2.2"
},
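As a sketch of Eqs. (10)-(15) for the binary constituency case (N = 2), the following NumPy function computes one composition node; as noted above, a leaf receives the word vector x_j and zero child states, while a composition node receives x_j = 0 and its children's states. Shapes and initialization are illustrative assumptions only.

```python
# One binary Constituency Tree-LSTM composition step (Eqs. (10)-(15) with N = 2).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d_in, d_hid, N = 300, 150, 2
rng = np.random.default_rng(0)
W = {g: rng.normal(scale=0.1, size=(d_hid, d_in)) for g in "ifou"}
U = {g: [rng.normal(scale=0.1, size=(d_hid, d_hid)) for _ in range(N)] for g in "iou"}
U_f = [[rng.normal(scale=0.1, size=(d_hid, d_hid)) for _ in range(N)] for _ in range(N)]
b = {g: np.zeros(d_hid) for g in "ifou"}

def tree_lstm_node(x_j, child_h, child_c):
    """child_h / child_c hold the two children's hidden and cell states."""
    i = sigmoid(W["i"] @ x_j + sum(U["i"][l] @ child_h[l] for l in range(N)) + b["i"])
    o = sigmoid(W["o"] @ x_j + sum(U["o"][l] @ child_h[l] for l in range(N)) + b["o"])
    u = np.tanh(W["u"] @ x_j + sum(U["u"][l] @ child_h[l] for l in range(N)) + b["u"])
    # One forget gate per child k, each built from both children's hidden states (Eq. 11).
    f = [sigmoid(W["f"] @ x_j + sum(U_f[k][l] @ child_h[l] for l in range(N)) + b["f"])
         for k in range(N)]
    c = i * u + sum(f[l] * child_c[l] for l in range(N))   # Eq. (14)
    h = o * np.tanh(c)                                     # Eq. (15)
    return h, c

# A leaf: word vector plus zero child states; a composition node: x_j = 0 plus children's (h, c).
zero_state = [np.zeros(d_hid), np.zeros(d_hid)]
h_leaf, c_leaf = tree_lstm_node(rng.normal(size=d_in), zero_state, zero_state)
```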
{
"text": "In this section, we discuss three new models which combine the structural information generated by tree-structured model (Constituency Tree-LSTM) and serialization information generated by sequential model (LSTM).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our model",
"sec_num": "3"
},
{
"text": "Constituency Tree-LSTM uses composition node at the root of the tree to represent the feature of sentence. We propose a new model named Tree-Composition LSTM (TC-LSTM) which generates a new feature taking all the leaf nodes, composition nodes and their sequential information into account. Firstly, we use postorder traversal to get all the nodes in the tree, and those nodes are treated as a sequence. The sequence contains not only the words in the sentence, but also the structural information in their parent composition nodes. Then we put the sequence into LSTM model for training, thus obtaining the serialization information of those hidden nodes with structural information. It is worth noting that, though sharing the same hidden nodes, in our first proposed model, the weights of the original Tree-LSTM module and the new added LSTM module are trained independently. The input word vectors firstly perform forward propagation on the Tree-LSTM, and then, the sequence mentioned above is obtained. Then the forward propagation of the sequence is performed on the LSTM model. The output error and gradient of the two modules are obtained through the training label separately. Finally, the backward propagation performs independently of each other. The gradients of back propagation updating the word vectors of input layer only flows from Tree-LSTM module. TC-LSTM model is shown in Figure 2 . After training, we only use the output of LSTM module to test on test data in order to verify the performance of sequential information extracted from tree-structured model. Figure 2 : TC-LSTM model. New LSTM module is added to the original Tree-LSTM model. They shared the same hidden nodes but trained separately. Only the output of LSTM module is used to do the prediction.",
"cite_spans": [],
"ref_spans": [
{
"start": 1391,
"end": 1399,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1576,
"end": 1584,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "TC-LSTM",
"sec_num": "3.1"
},
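A small sketch of the linearization step described above: a postorder traversal collects every leaf and composition node so their Tree-LSTM hidden states can be fed, in that order, to the added LSTM module. The Node class and the helper names in the trailing comments are hypothetical, introduced only for illustration; they are not the paper's code.

```python
# Postorder traversal of a binary constituency tree, collecting node hidden states.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    h: object                       # hidden state computed by the Tree-LSTM forward pass
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def postorder_states(node: Node) -> List[object]:
    """Return the hidden states of all nodes in postorder (children before their parent)."""
    states: List[object] = []
    if node.left is not None:
        states.extend(postorder_states(node.left))
    if node.right is not None:
        states.extend(postorder_states(node.right))
    states.append(node.h)           # leaf or composition node
    return states

# After the Tree-LSTM forward pass (hypothetical helpers, shown only to indicate data flow):
#   sequence = postorder_states(root)
#   h_last = run_lstm(sequence)          # the added LSTM module, trained separately
#   prediction = output_layer(h_last)    # only this LSTM output is used at test time
```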
{
"text": "We propose the Leaf-Tree LSTM (LT-LSTM) model to give a combination of the structural information and sequential information of a sentence. Similar to TC-LSTM, this model has t-wo modules but it only has one output. The first module is the same to Tree-LSTM, and the second module is the LSTM module which takes the leaf nodes of the tree, namely only the words of a sentence as input. During the forward propagation, we add the output of two modules and take the result as the output of the whole model. Gradient of the whole model is generated by the output, and then, assigned to the output of two modules for their backward propagation. Figure 3 shows the structure of the LT-LSTM model.",
"cite_spans": [],
"ref_spans": [
{
"start": 641,
"end": 649,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "LT-LSTM",
"sec_num": "3.2"
},
{
"text": "The output represents the combination of those two kinds of information: structural information and serialization information. Correspondingly, The gradient of the output layer contains the error information of the Tree-LSTM module and L-STM module. Letting the gradient propagate topdown through those two modules can make them learn from each other, thus having the chance to make the comprehensive performance of the whole model better. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LT-LSTM",
"sec_num": "3.2"
},
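A minimal sketch of the fusion just described, under our own naming: the Tree-LSTM root feature and the leaf-only LSTM feature are summed before the output layer, and because the derivative of a sum with respect to each term is the identity, both modules receive the same upstream gradient during backpropagation.

```python
# Output fusion in LT-LSTM: sum the two sentence features, then score with one output layer.
import numpy as np

def lt_lstm_logits(h_tree_root, h_lstm_last, V, b):
    """Element-wise sum of the two module outputs, followed by the shared output layer."""
    fused = h_tree_root + h_lstm_last
    return V @ fused + b

def split_fusion_gradient(grad_logits, V):
    """Gradient w.r.t. the fused feature, passed identically to both modules."""
    grad_fused = V.T @ grad_logits
    return grad_fused, grad_fused     # (to the Tree-LSTM module, to the LSTM module)
```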
{
"text": "The third model proposed by us is Leaf-Composition-Tree LSTM (LCT-LSTM).Different from the LT-LSTM and TC-LSTM, this model not only takes the composition nodes into consideration when building the LSTM layer, but also makes sum of the outputs of two modules mentioned above. That is, LCT-LSTM can be seen as the composition of TC-LSTM and LT-LSTM for the reason of not only building the sequential feature for the composition nodes which contain the structural information, but also combining the sequential feature and the structural feature of the input sentence. The structure of LCT-LSTM is shown in Figure 4 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 604,
"end": 612,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "LCT-LSTM",
"sec_num": "3.3"
},
{
"text": "We evaluate our proposed models on the Stanford Sentiment Tree Bank (SST) dataset, which contains sentences collected from movie reviews. The sentences in the dataset are split into three parts: 8544 for training, 1101 for development and 2210 for test. SST dataset has two classification tasks, one for fine-grained classification (five categories: very negative, negative, neutral, positive, and very positive) and the other for binary classification (two categories: negative and positive). The finegrained subtask is evaluated on 8544/1101/2210 splits, and the binary classification is evaluated on 6920/872/1821 splits (there are fewer sentences because the neutral examples are excluded). Every sentence in the dataset is processed into tree structure, and every phrase (corresponding to the nodes in the tree) in the sentence is also labeled.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "We use the Glove vectors of 300 dimension (Pennington et al., 2014) to represent the input words. Word embeddings are fine-tuned during training and the learning rate used for the input layer is 0.1, for the other layers is 0.05. Adagrad LSTM 271955 271653 271200 Tree-LSTM 317555 317253 316800 TC-LSTM 499510 498906 498755 LT-LSTM 499510 498906 498755 LCT-LSTM 499510 498906 498755 Table 1 : Parameters of models. \u03b8 \u2212 all represents the number of all the parameters in a model for F (fine-grained) tasks and B (binary tasks). \u03b8 \u2212 com represents parameters of composition layer. TC-LSTM, LT-LSTM and LCT-LSTM have the same number of parameters but they are trained in different ways.",
"cite_spans": [
{
"start": 42,
"end": 67,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 238,
"end": 399,
"text": "LSTM 271955 271653 271200 Tree-LSTM 317555 317253 316800 TC-LSTM 499510 498906 498755 LT-LSTM 499510 498906 498755 LCT-LSTM 499510 498906 498755 Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Hyperparameters and Training Details",
"sec_num": "4.2"
},
{
"text": "algorithm is used for training, the minibatch size set by us is 25, L2-regularization is used for each batch using the value of 1e-4, and the dropout for the output layer is 0.5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameters and Training Details",
"sec_num": "4.2"
},
{
"text": "The dimension of the input layer is the same as the word vector, and the hidden layer consisted of tree nodes has the dimension of 150. For the sequential module, both of the inputs and the hidden layer have the dimension of 150, the vectors of leaf nodes are projected into the 150 dimension when put into the sequential part. The numbers of parameters for all the models are shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 386,
"end": 393,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Hyperparameters and Training Details",
"sec_num": "4.2"
},
{
"text": "Every model is trained on the training set for 20 epochs, and tested on the development set for validation after finishing every epoch. We choose the parameters performing best among them to do the evaluation on the test set. For every model, we repeat experiments for 8 times and take the average of their results as the final performance of the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameters and Training Details",
"sec_num": "4.2"
},
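For reference, the training settings reported in this subsection can be collected in one place. This is simply a summary dictionary with field names of our own choosing, not the authors' configuration file.

```python
# Hyperparameters reported in Section 4.2 (field names are ours, values are from the paper).
hyperparams = {
    "word_vectors": "GloVe, 300-dimensional, fine-tuned during training",
    "learning_rate_input_layer": 0.1,
    "learning_rate_other_layers": 0.05,
    "optimizer": "Adagrad",
    "minibatch_size": 25,
    "l2_regularization": 1e-4,
    "dropout_output_layer": 0.5,
    "hidden_dimension": 150,
    "epochs": 20,
    "runs_averaged": 8,
}
```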
{
"text": "The models proposed by us fuse the structural information and serialization information, so we compare those models with other models which do not combine those two kinds of information. We choose the Constituency Tree-LSTM, LSTM and BiLSTM mentioned in 2015 (Tai et al., 2015) as the baseline models, other tree-structure models such as RNN, MV-RNN and RNTN are also used for comparison.",
"cite_spans": [
{
"start": 259,
"end": 277,
"text": "(Tai et al., 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "4.3"
},
{
"text": "The results of experiment are shown in Table 2 . We use accuracy to measure the performance of models.",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 46,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Result",
"sec_num": "4.4"
},
{
"text": "Fine-grained Binary LSTM (Tai et al., 2015) 46.4 84.9 Bi-LSTM (Tai et al., 2015) 49.1 87.5 RNN (Socher et al., 2013) 43.2 82.4 MV-RNN (Socher et al., 2013) 44.4 82.9 RNTN (Socher et al., 2013) 45 From Table 2 , we can see that on the whole, the models fusing structural and serialization information outperform other models which do not combine those two kinds of information. LT-LSTM achieves the best performance among our compared models in the fine-grained subtask and LCT-LSTM has the best performance in the binary subtask. TC-LSTM performs slightly better than Constituency Tree-LSTM in the binary subtask but worse than fine-grained subtask, but it still performs better than other single sequential models and tree-structured models.",
"cite_spans": [
{
"start": 25,
"end": 43,
"text": "(Tai et al., 2015)",
"ref_id": "BIBREF13"
},
{
"start": 62,
"end": 80,
"text": "(Tai et al., 2015)",
"ref_id": "BIBREF13"
},
{
"start": 95,
"end": 116,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF10"
},
{
"start": 134,
"end": 160,
"text": "(Socher et al., 2013) 44.4",
"ref_id": null
},
{
"start": 171,
"end": 192,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 201,
"end": 208,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Models",
"sec_num": null
},
{
"text": "We find that building the serialization feature for the nodes in tree-structure (TC-LSTM) does not really help the tree-structural models, but fusing the structural information and serialization information gives help to it. While fusing, adding the hidden nodes containing the structural information to the sequential model (LCT-LSTM) performs better in the binary subtask, but slightly worse in the fine-grained subtask compared to the model does not do so (LT-LSTM).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": null
},
{
"text": "In this paper, we propose three new models in order to explore the effect of fusing the structural and sequential information. We evaluate our models on the Stanford Sentiment Tree Bank (SST). Experiments show that fusing the structural information and sequential information is an effective way to improve the performance of models proposed before. Future work can be focused on finding better ways to fusing those two features. Other models, such as the Bidirectional Long Short-Term Memory (BiLSTM), Bidirectional Tree-LSTM (Teng and Zhang, 2016) and TreeGRU (Kokkinos and Potamianos, 2017 ) can be used in place of the tree-structured model and the sequential model used in our models. Attention mechanism (Luong et al., 2015) can also be used to do some improvement.",
"cite_spans": [
{
"start": 527,
"end": 549,
"text": "(Teng and Zhang, 2016)",
"ref_id": "BIBREF15"
},
{
"start": 562,
"end": 592,
"text": "(Kokkinos and Potamianos, 2017",
"ref_id": "BIBREF4"
},
{
"start": 710,
"end": 730,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Computer Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart Van Merrienboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. Computer Sci- ence .",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Yzu-nlp team at semeval-2016 task 4: Ordinal sentiment classification using a recurrent convolutional network",
"authors": [
{
"first": "Yunchao",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Chih",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Chin",
"middle": [
"Sheng"
],
"last": "Yu",
"suffix": ""
},
{
"first": "K",
"middle": [
"Robert"
],
"last": "Yang",
"suffix": ""
},
{
"first": "Weiyi",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "251--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yunchao He, Liang Chih Yu, Chin Sheng Yang, K. Robert Lai, and Weiyi Liu. 2016. Yzu-nlp team at semeval-2016 task 4: Ordinal sentiment clas- sification using a recurrent convolutional network. In International Workshop on Semantic Evaluation. pages 251-255.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Long shortterm memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2012,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and Schmidhuber. 2012. Long short- term memory. Neural Computation 9(8):1735- 1780.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Structural attention neural networks for improved sentiment analysis",
"authors": [
{
"first": "Filippos",
"middle": [],
"last": "Kokkinos",
"suffix": ""
},
{
"first": "Alexandros",
"middle": [],
"last": "Potamianos",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Filippos Kokkinos and Alexandros Potamianos. 2017. Structural attention neural networks for improved sentiment analysis .",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Compositional distributional semantics with long short term memory",
"authors": [
{
"first": "Phong",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2015,
"venue": "Computer Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phong Le and Willem Zuidema. 2015. Compositional distributional semantics with long short term mem- ory. Computer Science .",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Effective approaches to attentionbased neural machine translation",
"authors": [
{
"first": "Minh",
"middle": [
"Thang"
],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Computer Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention- based neural machine translation. Computer Sci- ence .",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Bidirectional recurrent neural networks",
"authors": [
{
"first": "Ieee",
"middle": [],
"last": "Member",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Kuldip",
"middle": [
"K"
],
"last": "Paliwal",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE Transactions on Signal Processing",
"volume": "45",
"issue": "11",
"pages": "2673--2681",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Member, IEEE, Mike Schuster, and Kuldip K. Pali- wal. 1997. Bidirectional recurrent neural net- works. IEEE Transactions on Signal Processing 45(11):2673-2681.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Conference on Empirical Meth- ods in Natural Language Processing. pages 1532- 1543.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Recurrent networks adjusted by adaptive critics",
"authors": [
{
"first": "",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schmidhuber. 1990. Recurrent networks adjusted by adaptive critics .",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "J",
"middle": [
"Y"
],
"last": "Wu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "A",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Socher, A. Perelygin, J. Y. Wu, J. Chuang, C. D. Manning, A. Y. Ng, and C. Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank .",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Semantic compositionality through recursive matrix-vector spaces",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Brody",
"middle": [],
"last": "Huval",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Andrew Y",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "1201--1211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. 2012a. Semantic compositional- ity through recursive matrix-vector spaces. In Joint Conference on Empirical Methods in Natural Lan- guage Processing and Computational Natural Lan- guage Learning. pages 1201-1211.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Parsing natural scenes and natural language with recursive neural networks",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Chiung",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Lin",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2011,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "129--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Chiung Yu Lin, Andrew Y. Ng, and Christopher D. Manning. 2012b. Parsing natural scenes and natural language with recursive neural networks. In International Conference on Machine Learning, ICML 2011, Bellevue, Washington, Usa, June 28 -July. pages 129-136.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Improved semantic representations from tree-structured long short-term memory networks",
"authors": [
{
"first": "Kai Sheng",
"middle": [],
"last": "Tai",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Computer Science",
"volume": "5",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representation- s from tree-structured long short-term memory net- works. Computer Science 5(1):: 36.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Document modeling with gated recurrent neural network for sentiment classification",
"authors": [
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1422--1432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duyu Tang, Bing Qin, and Ting Liu. 2015. Documen- t modeling with gated recurrent neural network for sentiment classification. In Conference on Empiri- cal Methods in Natural Language Processing. pages 1422-1432.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Bidirectional tree-structured lstm with head lexicalization",
"authors": [
{
"first": "Zhiyang",
"middle": [],
"last": "Teng",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiyang Teng and Yue Zhang. 2016. Bidirectional tree-structured lstm with head lexicalization .",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Dimensional sentiment analysis using a regional cnn-lstm model",
"authors": [
{
"first": "Jin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "K",
"middle": [
"Robert"
],
"last": "Liang Chih Yu",
"suffix": ""
},
{
"first": "Xuejie",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2016,
"venue": "Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "225--230",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jin Wang, Liang Chih Yu, K. Robert Lai, and Xuejie Zhang. 2016. Dimensional sentiment analysis using a regional cnn-lstm model. In Meeting of the Asso- ciation for Computational Linguistics. pages 225- 230.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Hierarchical attention networks for document classification",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1480--1489",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 1480-1489.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Long short-term memory over tree structures",
"authors": [
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Parinaz",
"middle": [],
"last": "Sobhani",
"suffix": ""
},
{
"first": "Hongyu",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodan Zhu, Parinaz Sobhani, and Hongyu Guo. 2015. Long short-term memory over tree structures .",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Constituency Tree-LSTM model. The upward arrows represent the direction of forward propagation, the downward arrows represent the direction of the backward propagation.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "LT-LSTM model. The new added LST-M model only takes the leaf nodes as input. The output represent the fusion of two kinds of information, and its gradient trains the whole model.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "LCT-LSTM model.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": "The result of accuracy on the test set. Fine-grained represents the five-category classification and the Binary represents the positive/negative classification."
}
}
}
}