|
{ |
|
"paper_id": "2019", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:30:11.652218Z" |
|
}, |
|
"title": "Converting Sentiment Annotated Data to Emotion Annotated Data", |
|
"authors": [ |
|
{ |
|
"first": "Manasi", |
|
"middle": [], |
|
"last": "Kulkarni", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IT Veermata Jijabai Technological Institute Mumbai", |
|
"location": {} |
|
}, |
|
"email": "manasi@cse.iitb.ac.in" |
|
}, |
|
{ |
|
"first": "Pushpak", |
|
"middle": [], |
|
"last": "Bhattacharyya", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Existing supervised solutions for emotion classification demand a large amount of emotion-annotated data. Such resources may not be available for many languages. However, it is common to have sentiment-annotated data available in these languages. The sentiment information (+1 or -1) is useful to distinguish between positive and negative emotions. In this paper, we propose an unsupervised approach for emotion recognition that takes advantage of the sentiment information: given a sentence and its sentiment information, recognize the best possible emotion for it. For every sentence, the semantic relatedness between the words of the sentence and a set of emotion-specific words is calculated using cosine similarity. An emotion vector representing the emotion score for each emotion category of Ekman's model is created. It is further improved with dependency relations, and the best possible emotion is predicted. The results show a significant improvement in F-score for text with sentiment information as input over our baseline of text without sentiment information. We report the weighted F-score on three different datasets with Ekman's emotion model. This supports the claim that, by leveraging the sentiment value, better emotion-annotated data can be created.",
|
"pdf_parse": { |
|
"paper_id": "2019", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Existing supervised solutions for emotion classification demand a large amount of emotion-annotated data. Such resources may not be available for many languages. However, it is common to have sentiment-annotated data available in these languages. The sentiment information (+1 or -1) is useful to distinguish between positive and negative emotions. In this paper, we propose an unsupervised approach for emotion recognition that takes advantage of the sentiment information: given a sentence and its sentiment information, recognize the best possible emotion for it. For every sentence, the semantic relatedness between the words of the sentence and a set of emotion-specific words is calculated using cosine similarity. An emotion vector representing the emotion score for each emotion category of Ekman's model is created. It is further improved with dependency relations, and the best possible emotion is predicted. The results show a significant improvement in F-score for text with sentiment information as input over our baseline of text without sentiment information. We report the weighted F-score on three different datasets with Ekman's emotion model. This supports the claim that, by leveraging the sentiment value, better emotion-annotated data can be created.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "An emotion or a feeling represents a state of mind of a person. Various researchers have put forward classifications of emotions into categories, such as the Plutchik emotion model with eight basic emotions (Plutchick 1980) and the Ekman model with six basic emotions: anger, disgust, fear, happiness, surprise, and sadness (Ekman 1972) . Users readily share their experiences, opinions, and emotions on various topics and product reviews on social platforms such as Twitter, Facebook, and WhatsApp. Understanding the emotions expressed in such short posts can facilitate many important downstream applications, such as emotion-aware chatbots, analysis of user reviews, personalized recommendations, and support for psychologically ill patients. Therefore, it is important to develop effective emotion recognition models that automatically identify emotions in such text or messages.",
|
"cite_spans": [ |
|
{ |
|
"start": 209, |
|
"end": 225, |
|
"text": "(Plutchick 1980)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 318, |
|
"end": 330, |
|
"text": "(Ekman 1972)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The task of emotion detection is typically modelled as a supervised multi-class or multi-label classification task. Supervised models need large amounts of annotated data. Such datasets may not be readily available and are costly to obtain (Jianfei Yu, 2018) . When annotated data is unavailable, unsupervised learning approaches (A Agrawal, 2012; Milagros Fern\u00e1ndez-Gavilanes, 2015) can be an ideal solution for emotion recognition.",
|
"cite_spans": [ |
|
{ |
|
"start": 250, |
|
"end": 268, |
|
"text": "(Jianfei Yu, 2018)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "However, we assume that text annotated with sentiment information (positive, negative, or neutral) is easily available. Sentiment classification predicts the positive or negative sentiment polarity of a sentence, whereas emotion classification labels the sentence at a finer-grained level with one of the emotions, such as happy, surprise, anger, fear, disgust, or sadness. Happy and surprise are termed positive-sentiment emotions, and anger, disgust, fear, and sadness are termed negative-sentiment emotions. For example, in the sentence 'Passed an exam by two points . . .' (+1), the sentiment information provided with the sentence helps to confirm that the sentence expresses a positive emotion such as happiness or surprise rather than a negative emotion.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A close friend of mine have not contacted me long time. . . .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The sentiment value of \u22121 excludes the positive emotions, reducing the chance that the emotion recognition system confuses the sentence's emotion with a positive one. The sentiment information helps to narrow the choices down to one of the negative emotions for the sentence: it must be sadness, fear, anger, or disgust.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(-1)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Therefore, we aim to use this sentiment information along with the sentence to create an emotion-labelled dataset by recognizing emotions in an unsupervised way. To be precise, we create emotion-labelled resources with the help of available sentiment-labelled resources, as a further fine-grained emotion analysis task.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(-1)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper, we propose an unsupervised approach based on A Agrawal (2012), with our modifications as discussed later in detail. We use sentiment-labelled data, that is, a sentence and its respective sentiment information, as input and recognize the best possible emotion for it. This approach uses word embeddings to represent the words in the sentences as well as the emotion-specific words. The cosine similarity measure is used to calculate the semantic relatedness between a sentence and the emotion-specific words. A vector of scores, one for every emotion category, is calculated for each sentence, and the emotion scores are then modified using the dependency relations of the open-class words from that sentence.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(-1)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The rest of the paper is organized as follows. Section 2 describes related supervised, unsupervised, and hybrid approaches previously proposed in the literature. Section 3 discusses the methodology of the proposed system. Section 4 describes the experimental setup, the datasets used, and the modifications made to the datasets for experimentation. The results are discussed in Section 5.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(-1)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The state-of-the-art approaches for the emotion recognition task are supervised, and labeled training data is a crucial resource for building such systems. Due to the lack of large human-annotated datasets, many emotion classification tasks have been performed on text data gathered from social media such as Twitter, with hashtags, emojis, or emoticons used as the emotion labels.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Since unsupervised approaches do not need annotated datasets, researchers have also explored a variety of them.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A Agrawal (2012) used open-class words, which they termed NAVA words (Noun, Adjective, Verb, Adverb), in a Pointwise Mutual Information (PMI) based model with syntactic dependencies to perform emotion recognition. Shoushan Li and Zhou (2015) created a Dependence Factor Graph (DFG) learning model based on label dependence and context dependence. Hybrid approaches use unsupervised methods for feature creation and pattern extraction, and later feed these into supervised classification models for emotion classification. Carlos Argueta (2015) proposed an unsupervised graph-based approach for bootstrapping Twitter-specific emotion-bearing patterns and then used them for the classification task. Li and Xu (2014) used predefined linguistic patterns to extract emotion causes and considered them as features for classification using SVM.",
|
"cite_spans": [ |
|
{ |
|
"start": 551, |
|
"end": 565, |
|
"text": "Argueta (2015)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 725, |
|
"end": 741, |
|
"text": "Li and Xu (2014)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Jianfei Yu (2018) used a transfer learning approach for the sentiment classification task followed by the emotion classification task. A few researchers have also contributed towards the creation of emotion-aware embeddings: a distant supervision and Recurrent Neural Network (RNN) based approach has been proposed for learning emotion-enriched representations (Ameeta Agrawal, 2018).",
|
"cite_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 17, |
|
"text": "Yu (2018)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The emotion recognition framework for the unsupervised approach is shown in Figure 1 . The input sentence is pre-processed to obtain its open-class words. The second module computes the semantic relatedness, using cosine similarity, between the word embeddings of the words in the sentence and the emotion-specific words. The third module modifies the score for every emotion in the vector computed by module two. Finally, module four averages over the emotion vectors of all words of the sentence to find the resulting emotion expressed in that sentence.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 83, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Proposed System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Let S be the sentence,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "S = <w_1, w_2, ..., w_n>",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "and S_s be the respective sentiment value (+1, -1, or 0). Let E be the set of possible emotions from the selected emotion model, E = {e_1, e_2, ..., e_m}. To every emotion category we have assigned a few affect-bearing words which represent that emotion; Table-1 shows some of the affect-bearing words used for each emotion category. The aim is to predict the best possible emotion E_s belonging to E for the sentence S. Eventually, one of the emotions from Ekman's emotion model (anger, disgust, fear, happy, sadness, or surprise) will be predicted for every sentence. ",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 264, |
|
"end": 271, |
|
"text": "Table-1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Proposed System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We use pre-trained word embeddings, as they capture the co-occurrence information of words well. The words in a given sentence and the emotion-specific words are represented using their respective word embeddings, and the semantic relatedness between them is computed using cosine similarity. Let A and B be the word vector representations of two words; then:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computing Semantic Relatedness using Cosine Similarity", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "sim(A, B) = cos(\u03b8) = (A \u00b7 B) / (||A|| \u00b7 ||B||)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computing Semantic Relatedness using Cosine Similarity", |
|
"sec_num": "3.1" |
|
}, |
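The cosine-similarity computation in Section 3.1 can be sketched in a few lines. This is only an illustration with toy vectors, not the authors' code; actual runs use pre-trained Word2Vec/GloVe/FastText embeddings, and the helper name is our own:

```python
import math

def cosine_similarity(a, b):
    """sim(A, B) = (A . B) / (||A|| * ||B||) for two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0  # guard against zero vectors
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" standing in for pre-trained vectors.
v_happy = [0.9, 0.1, 0.3]
v_joyful = [0.8, 0.2, 0.25]
print(cosine_similarity(v_happy, v_joyful))
```

In the real system the vectors come from a pre-trained embedding lookup, so two semantically related words yield a similarity close to 1.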
|
{ |
|
"text": "The emotion score vector for every open-class word",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computing Vector of Scores for Emotion Categories", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "{w_1, w_2, ..., w_n} of a sentence is created.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computing Vector of Scores for Emotion Categories", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The emotion vector has six values, corresponding to the six emotions of Ekman's model.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computing Vector of Scores for Emotion Categories", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The emotion vector for a word w_i is computed by finding the cosine similarity of word w_i with every emotion-specific word from each emotion category. Let there be m emotion categories and let {EW_1, EW_2, ..., EW_m} be the sets of l emotion-specific words, one set for each emotion e_j. Then the emotion score ES for w_i is calculated as: ",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computing Vector of Scores for Emotion Categories", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "ES(w_i, e_j) = \\sum_{k=1}^{l} sim(w_i, EW_j^k)",
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Computing Vector of Scores for Emotion Categories", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "EV_{w_i} = {ES(w_i, e_1), ES(w_i, e_2), ..., ES(w_i, e_m)}",
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Computing Vector of Scores for Emotion Categories", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Here, the sentiment value plays a crucial role. Since we consider Ekman's emotion model, happy and surprise are emotions with positive sentiment, while fear, disgust, anger, and sadness are emotions with negative sentiment. Hence, if sentiment = +1, then",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computing Vector of Scores for Emotion Categories", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "EV_{w_i} = {ES(w_i, happy), ES(w_i, surprise), 0, 0, 0, 0}",
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Computing Vector of Scores for Emotion Categories", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "if sentiment = \u22121 then,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computing Vector of Scores for Emotion Categories", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "EV_{w_i} = {0, 0, ES(w_i, anger), ES(w_i, disgust), ES(w_i, fear), ES(w_i, sadness)}",
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Computing Vector of Scores for Emotion Categories", |
|
"sec_num": "3.2" |
|
}, |
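The scoring and sentiment masking of Equations (1)-(4) can be sketched as follows. The emotion word lists and the stand-in similarity function here are hypothetical placeholders; the real system scores each word against Table-1-style affect words using embedding cosine similarity:

```python
EMOTIONS = ["happy", "surprise", "anger", "disgust", "fear", "sadness"]

# Hypothetical emotion-specific word lists; illustrative only.
EMOTION_WORDS = {
    "happy": ["joy", "cheerful"],
    "surprise": ["astonish", "amaze"],
    "anger": ["rage", "furious"],
    "disgust": ["revolting", "nasty"],
    "fear": ["terror", "scared"],
    "sadness": ["grief", "sorrow"],
}

def emotion_vector(word, sim):
    """Eq. (1)-(2): sum of similarities of `word` to each category's l words."""
    return [sum(sim(word, ew) for ew in EMOTION_WORDS[e]) for e in EMOTIONS]

def mask_by_sentiment(ev, sentiment):
    """Eq. (3)-(4): zero out the scores incompatible with the sentiment."""
    if sentiment == 1:       # positive: keep happy, surprise
        return ev[:2] + [0.0] * 4
    if sentiment == -1:      # negative: keep anger, disgust, fear, sadness
        return [0.0] * 2 + ev[2:]
    return ev                # neutral (0): keep all scores

# Stand-in similarity: 1.0 on exact match, small value otherwise.
toy_sim = lambda a, b: 1.0 if a == b else 0.1
ev = emotion_vector("joy", toy_sim)
print(mask_by_sentiment(ev, 1))
```

The masking step is what the sentiment label contributes: with sentiment +1 only the happy/surprise slots survive, so the later argmax can never pick a negative emotion.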
|
{ |
|
"text": "With the intuition that dependency relationships can contribute more towards emotion detection, we use the relations among open-class words to modify the emotion vector EV of the dependent word. The Stanford CoreNLP dependency parser is used to find dependencies between the open-class words (nouns, adjectives, verbs, and adverbs), which are considered for further processing.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Re-scoring Scores in Emotion Vector of a Word", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Let sd(w_1, w_2) be a syntactic dependency relation between word w_1 as dependent and word w_2 as modifier. For example, in the adjectival modifier relation amod(life, happy), the dependent word is 'life' and the modifier word is 'happy'. We find these syntactic dependencies of the sentence at preprocessing time and use them for re-scoring.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Re-scoring Scores in Emotion Vector of a Word", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Let D be the dependent word from sentence S and M be the respective modifier word from sentence S. The emotion vector EV_{D_p} of the p-th dependent word is then modified by taking the average over the emotion vectors of the dependent word D_p and its modifier word M_p. This helps strengthen every emotion score related to that word of sentence S (A Agrawal, 2012).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Re-scoring Scores in Emotion Vector of a Word", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "EV_{D_p} = (EV_{D_p} + EV_{M_p}) / 2",
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Re-scoring Scores in Emotion Vector of a Word", |
|
"sec_num": "3.3" |
|
}, |
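The re-scoring step of Equation (5) is a simple element-wise average of the dependent word's vector with its modifier's vector. A minimal sketch (the dependency pairs would come from the CoreNLP parse; the vectors below are made up for illustration):

```python
def rescore(ev_dependent, ev_modifier):
    """Eq. (5): average the dependent word's emotion vector with its modifier's."""
    return [(d + m) / 2 for d, m in zip(ev_dependent, ev_modifier)]

# e.g. amod(life, happy): 'life' is the dependent, 'happy' the modifier.
ev_life = [0.2, 0.1, 0.1, 0.1, 0.1, 0.1]
ev_happy = [0.8, 0.3, 0.0, 0.0, 0.0, 0.0]
print(rescore(ev_life, ev_happy))  # the 'happy' score of 'life' is strengthened
```

Averaging lets a strongly emotional modifier pull a neutral dependent word towards its emotion, which is the intended strengthening effect.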
|
{ |
|
"text": "With the growing usage of social media, text messages are often accompanied by a suitable emoji. Hence, an emoji given as input can contribute towards detecting emotion. Every emoji is assigned a CLDR short name by the Unicode Common Locale Data Repository to describe it, for example, 'grinning face' or 'beaming face with smiling eyes'. The same procedure as described in sections 3.1 to 3.3 is followed to create an emotion vector M_{EV} for every emoji.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Processing Emojis", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "The emotion vector S_{EV} for the sentence S is calculated by taking the average over the emotion vectors EV_{w_i} of all words from that sentence, together with the emoji emotion vector M_{EV}.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computing Emotion Vector for Sentence", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "The emotion vector of sentence S is:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Resultant Emotion Prediction", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "S_{EV} = <S_{e_1}, S_{e_2}, ..., S_{e_m}>",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Resultant Emotion Prediction", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "where the emotion vector for the emoji, if present in the sentence, is:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Resultant Emotion Prediction", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "M_{EV} = <M_{e_1}, M_{e_2}, ..., M_{e_m}> and S_{EV} = (1/n) \\sum_{i=1}^{n} EV_{w_i} + M_{EV}",
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Resultant Emotion Prediction", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "and the best possible predicted emotion E_s for S is:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Resultant Emotion Prediction", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "E_s = argmax_i(S_{e_i})",
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Resultant Emotion Prediction", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "\u2200 i = 1, ..., m",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Resultant Emotion Prediction", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "The datasets used for testing and recognizing emotions are the ISEAR dataset (ise), the Twitter Emotion Corpus (Mohammad, 2012) and the SemEval-2018 Affect in Tweets English test dataset (Saif M. Mohammad, 2018).",
|
"cite_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 119, |
|
"text": "(Mohammad, 2012)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "ISEAR dataset (ise): The \"International Survey on Emotion Antecedents and Reactions\" dataset published by Scherer and Wallbott was built by collecting questionnaires answered by people with different cultural backgrounds (Bostan and Klinger, 2018) . It contains a total of 7,665 sentences, each labeled with a single emotion. The labels are joy, fear, anger, sadness, disgust, shame, and guilt.",
|
"cite_spans": [ |
|
{ |
|
"start": 219, |
|
"end": 245, |
|
"text": "(Bostan and Klinger, 2018)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The Twitter Emotion Corpus (Mohammad, 2012) was prepared with emotion-word hashtags as emotion labels. These are termed noisy labels, as they are assigned by the users themselves. The corpus contains 21,050 sentences, each labelled with one of the emotions from Ekman's emotion model. ",
|
"cite_spans": [ |
|
{ |
|
"start": 23, |
|
"end": 39, |
|
"text": "(Mohammad, 2012)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "A few modifications are incorporated before using the datasets for testing and experiments.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modification in Datasets for Testing", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Not all of these datasets are labelled with the Ekman emotions. Researchers follow different emotion models, such as the Plutchik model or Parrott's emotion model, and a few use emotion categorizations tailored to the requirements of their system and data. Hence, we have mapped these emotion labels to the most suitable Ekman emotion, as shown in Table-2 . This is a coarse-grain mapping: Ekman's model represents six basic emotions (happy, surprise, anger, disgust, fear, and sadness), and all other emotions can be mapped onto one of them directly.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 365, |
|
"end": 372, |
|
"text": "Table-2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Mapping of Emotion Labels to Ekman's Emotion Model", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "Not all the above-mentioned datasets are annotated with sentiment values. Hence, to illustrate this problem definition, we labelled these datasets with sentiment values based on the already available emotion labels of the sentences. Sentences with positive emotions such as happy, love, joy, and surprise are labelled with a positive (+1) sentiment value, and sentences with negative emotions such as anger, disgust, fear, and sadness are mapped to a negative (-1) sentiment value. The datasets are then in the required format for further processing and testing. The format of every testing example is:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Labeling Datasets with Sentiment Values", |
|
"sec_num": "4.1.2" |
|
}, |
|
{ |
|
"text": "<sentence, sentiment value, emotion>. While testing the system, the sentence and the sentiment value from the modified datasets are taken as input and the best possible emotion is recognized. The predicted emotions are then compared with the gold emotion labels to evaluate the accuracy of the system.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Labeling Datasets with Sentiment Values", |
|
"sec_num": "4.1.2" |
|
}, |
|
{ |
|
"text": "\u2022 The sentence is pre-processed to remove stopwords, hyperlinks, hashtags, usernames, and any special characters. Part-of-speech tagging is then done to obtain the open-class words (nouns, verbs, adjectives, and adverbs), using the NLTK PoS tagger and the WordNet word categories. As closed-class words do not contribute towards emotions, they are not considered for further processing. The syntactic dependencies of the input sentence are retrieved using the Stanford CoreNLP dependency parser.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4.2" |
|
}, |
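The filtering logic of the preprocessing step can be sketched as follows. This is an illustration only: in the paper the tags come from the NLTK PoS tagger (Penn Treebank tag set) with WordNet categories, and dependencies from CoreNLP; the function and the sample tokens here are hypothetical:

```python
import re

# Penn Treebank prefixes for the open-class (NAVA) categories.
OPEN_CLASS_PREFIXES = ("NN", "VB", "JJ", "RB")  # noun, verb, adjective, adverb

def preprocess(tagged_tokens, stopwords):
    """Keep only open-class words; drop stopwords, usernames,
    hashtags, and hyperlinks."""
    kept = []
    for token, tag in tagged_tokens:
        if token.lower() in stopwords:
            continue                      # closed-class / stopword
        if re.match(r"^(@|#|https?://)", token):
            continue                      # username, hashtag, or hyperlink
        if tag.startswith(OPEN_CLASS_PREFIXES):
            kept.append(token)
    return kept

tagged = [("I", "PRP"), ("passed", "VBD"), ("an", "DT"),
          ("exam", "NN"), ("#luck", "NN"), ("happily", "RB")]
print(preprocess(tagged, {"i", "an"}))
```

Only the NAVA words survive this step; everything downstream (scoring, masking, re-scoring) operates on this filtered list.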
|
{ |
|
"text": "\u2022 We have experimented on the different datasets mentioned in Table-2 using different pre-trained word embeddings: Google Word2Vec (Tomas Mikolov, 2013), GloVe (Jeffrey Pennington and Manning, 2014), and FastText (Joulin et al., 2016) . \u2022 The experiments are performed on text both with and without sentiment information. The weighted F-score, precision, and recall are used as metrics to evaluate the accuracy of the system.",
|
"cite_spans": [ |
|
{ |
|
"start": 215, |
|
"end": 236, |
|
"text": "(Joulin et al., 2016)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 63, |
|
"text": "Table-", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "As shown in Table-3 , the experiments are performed in two different ways: first by considering only the text/sentences as input, and second by considering the text and its sentiment value as input.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 18, |
|
"text": "Table-", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Discussions", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The experiments conducted with only sentences as input serve as the baseline against the experiments using sentences with sentiment information.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussions", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Using the sentence with its respective sentiment information as input shows a significant improvement in the weighted F-score. The results are shown in Table-3 . It is observed that the Google Word2Vec word vectors perform better than the other word embeddings.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 152, |
|
"text": "Table-", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Discussions", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The SemEval-2018 Task-1 dataset (Saif M. Mohammad, 2018) is a multi-label dataset in which the annotated emotions are assigned independently. Since our task is multi-class emotion recognition, we consider a prediction 'correct' if any one of the assigned emotions is predicted by our system. Table-4 shows that with sentences and sentiment values as input, the F-score of every individual emotion category improves drastically. This shows better prospects for such emotion recognition and for the conversion process of creating new resources with fine-grained labels, from sentiment to emotion.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 281, |
|
"end": 287, |
|
"text": "Table-", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Discussions", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The recall values for the datasets using Google Word2Vec are shown in Table- . Recall for the method with sentiment information increases by approximately 50% over the method without sentiment values. This shows a significant improvement in correctly predicting emotions when sentiment information is used.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 72, |
|
"text": "Table-", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "It is visible in", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Table-6 illustrates the confusion matrices for the results of emotion recognition on the ISEAR dataset. It can be seen that, when sentiment information is used, positive and negative emotions are rarely confused with each other. The recall and precision also increase for every emotion. However, emotions belonging to the same sentiment polarity still need to be distinguished with better accuracy. The emotion 'surprise' is not among the ISEAR emotion labels, hence the zeros in the 'surprise' row.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 4, |
|
"end": 11, |
|
"text": "Table-6", |
|
"ref_id": "TABREF10" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "It is visible in", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The proposed system suggests a way to create one resource from other available resources. The use of more easily available sentiment-labelled data for creating emotion-annotated data is significant, and the use of sentiment information for recognizing emotion is a good example of a fine-grained labeling task.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The proposed approach shows much better accuracy for text labelled with a sentiment value than for the baseline of text without sentiment information. The sentiment information helps, at an initial level, to separate emotions of different polarity.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As word vectors are based on the distributional hypothesis, antonyms such as 'happy' and 'sad' may have high cosine similarity, while synonyms may have a very low cosine similarity value. This can affect the overall accuracy of the system. Rare words may not contribute much, and very common words may get very high cosine similarity even with opposite words. Hence, it is necessary to select a better list of emotion-specific words. More processing and linguistic information may be added to improve the accuracy of this system. ",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Unsupervised emotion detection from text using semantic and syntactic relations", |
|
"authors": [ |
|
{ |
|
"first": "Aijun", |
|
"middle": [], |
|
"last": "An", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Agrawal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "IEEE/WIC/ACM International Joint Conferences on Web Intelligence and Intelligent Agent Technology", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "346--353", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aijun An A Agrawal. 2012. Unsupervised emotion de- tection from text using semantic and syntactic re- lations. IEEE/WIC/ACM International Joint Con- ferences on Web Intelligence and Intelligent Agent Technology, 1:346-353.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Learning emotion-enriched word representations", |
|
"authors": [ |
|
{ |
|
"first": "Manos", |
|
"middle": [], |
|
"last": "Papagelis Ameeta Agrawal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aijun", |
|
"middle": [], |
|
"last": "An", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "The 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "950--961", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manos Papagelis Ameeta Agrawal, Aijun An. 2018. Learning emotion-enriched word representations. The 27th International Conference on Computa- tional Linguistics, Santa Fe, New Mexico, USA, page 950-961.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Emotion detection in text: a review", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Wlodek Zadrozny Armin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Narges", |
|
"middle": [], |
|
"last": "Seyeditabari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Tabari", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1806.00674v1" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wlodek Zadrozny Armin Seyeditabari, Narges Tabari. 2018. Emotion detection in text: a review. arXiv:1806.00674v1 [cs.CL], 2nd Jun 2018.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "An analysis of annotated corpora for emotion classification in text", |
|
"authors": [ |
|
{ |
|
"first": "Laura-Ana-Maria", |
|
"middle": [], |
|
"last": "Bostan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Klinger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "The 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2104--2119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Laura-Ana-Maria Bostan and Roman Klinger. 2018. An analysis of annotated corpora for emotion classi- fication in text. The 27th International Conference on Computational Linguistics,Santa Fe, New Mex- ico, USA, page 2104-2119.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Unsupervised graph-based patterns extraction for emotion classification", |
|
"authors": [ |
|
{ |
|
"first": "Yi-Shin Chen Carlos", |
|
"middle": [], |
|
"last": "Argueta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elvis", |
|
"middle": [], |
|
"last": "Saravia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "336--341", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yi-Shin Chen Carlos Argueta, Elvis Saravia. 2015. Un- supervised graph-based patterns extraction for emo- tion classification. IEEE/ACM International Con- ference on Advances in Social Networks Analysis and Mining, pages 336-341.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Using youtube comments for text-based emotion recognition", |
|
"authors": [], |
|
"year": 2016, |
|
"venue": "The 7th International Conference on Ambient Systems, Networks and Technologies (ANT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "292--299", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Al Moatassime Hassan Douiji yasmina, Mousannif Ha- jar. 2016. Using youtube comments for text-based emotion recognition. The 7th International Confer- ence on Ambient Systems, Networks and Technolo- gies (ANT), pages 292-299.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Distant supervision for emotion classification with discrete binary values", |
|
"authors": [ |
|
{ |
|
"first": "Nancy", |
|
"middle": [], |
|
"last": "Ide", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jared", |
|
"middle": [], |
|
"last": "Suttles", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nancy Ide Jared Suttles. Distant supervision for emo- tion classification with discrete binary values. CI- CLING, March 2013.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Socher Jeffrey Pennington and Christopher D. Manning. 2014. Glove: Global vectors for word representation.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Improving multilabel emotion classification via sentiment classification with dual attention transfer network", |
|
"authors": [ |
|
{ |
|
"first": "Jing Jiang Pradeep Karuturi-William Brendel Jianfei", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lu\u0131s", |
|
"middle": [], |
|
"last": "Marujo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1097--1102", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jing Jiang Pradeep Karuturi-William Brendel Jian- fei Yu, Lu\u0131s Marujo. 2018. Improving multilabel emotion classification via sentiment classification with dual attention transfer network. Conference on Empirical Methods in Natural Language Processing (EMNLP), page 1097-1102.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Bag of tricks for efficient text classification", |
|
"authors": [ |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1607.01759" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Text-based emotion classification using emotion cause extraction. Expert Systems with Applications", |
|
"authors": [ |
|
{ |
|
"first": "Weiyuan", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hua", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "41", |
|
"issue": "", |
|
"pages": "1742--1749", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Weiyuan Li and Hua Xu. 2014. Text-based emotion classification using emotion cause extraction. Ex- pert Systems with Applications, 41(4):1742-1749.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Gti: An unsupervised approach for sentiment analysis in twitter", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan Juncal-Mart\u00ednez Enrique Costa-Montenegro Milagros", |
|
"middle": [], |
|
"last": "Fern\u00e1ndez-Gavilanes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Tamara\u00e0lvarez-L\u00f3pez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "9th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "533--538", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Juncal-Mart\u00ednez Enrique Costa-Montenegro Milagros Fern\u00e1ndez-Gavilanes, Tamara\u00c0lvarez- L\u00f3pez. 2015. Gti: An unsupervised approach for sentiment analysis in twitter. 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 533-538.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Understanding emotions: A dataset of tweets to study interactions between affect categories", |
|
"authors": [ |
|
{ |
|
"first": "Svetlana", |
|
"middle": [], |
|
"last": "Kiritchenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Saif", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "11th Edition of the Language Resources and Evaluation Conference (LREC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kiritchenko Svetlana Mohammad, Saif M. 2018. Un- derstanding emotions: A dataset of tweets to study interactions between affect categories. In 11th Edi- tion of the Language Resources and Evaluation Conference (LREC),Miyazaki, Japan.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "The First Joint Conference on Lexical and Computational Semantics (*Sem)", |
|
"authors": [ |
|
{ |
|
"first": "Saif", |
|
"middle": [], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "246--255", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saif Mohammad. 2012. emotional tweets. The First Joint Conference on Lexical and Computational Se- mantics (*Sem), Montreal, Canada., page 246-255.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Semeval-2018 Task 1: Affect in tweets. The International Workshop on Semantic Evaluation (SemEval-2018)", |
|
"authors": [ |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Salameh-Svetlana Kiritchenko Saif", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felipe", |
|
"middle": [], |
|
"last": "Bravo-Marquez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "246--255", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohammad Salameh-Svetlana Kiritchenko Saif M. Mohammad, Felipe Bravo-Marquez. 2018. Semeval-2018 Task 1: Affect in tweets. The International Workshop on Semantic Evaluation (SemEval-2018),New Orleans, LA, USA, page 246-255.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Sentence-level emotion classification with label and context dependence", |
|
"authors": [ |
|
{ |
|
"first": "Rong", |
|
"middle": [], |
|
"last": "Wang Shoushan Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guodong", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1045--1053", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rong Wang Shoushan Li, Lei Huang and Guodong Zhou. 2015. Sentence-level emotion classification with label and context dependence. 53rd Annual Meeting of the Association for Computational Lin- guistics and the 7th International Joint Conference on Natural Language Processing, 1:1045-1053.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Distributed representations of words and phrases and their compositionality. Advances in neural information processing systems", |
|
"authors": [ |
|
{ |
|
"first": "Kai Chen Greg Corrado-Jeffrey Dean Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kai Chen Greg Corrado-Jeffrey Dean Tomas Mikolov, Ilya Sutskever. 2013. Distributed representations of words and phrases and their compositionality. Ad- vances in neural information processing systems.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Overview of System", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"text": "An emotion score vector EV for every word is created using a sentiment value and an emotion scores of corresponding emotions with given", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"2\">Emotion Emotion Words</td></tr><tr><td>Anger</td><td>anger, angry, annoy, irritate, frus-</td></tr><tr><td/><td>trate</td></tr><tr><td>Disgust</td><td>disgust, hate, dislike, ill, sick</td></tr><tr><td>Fear</td><td>fear, worry, terrify, afraid, frighten</td></tr><tr><td colspan=\"2\">Happiness happiness, happy, love, joy, glad</td></tr><tr><td>Sadness</td><td>sadness, sad, hurt, cry, bad</td></tr><tr><td>Surprise</td><td>surprise, amazing, astonish, won-</td></tr><tr><td/><td>derful, incredible</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF1": { |
|
"text": "Few affect-bearing words used sentiment value. So, an emotion-score vector for word w i is,", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"text": "Mapping of original emotion labels to Ekman's emotions", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"text": "5 for illustration.", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"2\">Sr No Word Embedding</td><td colspan=\"2\">ISEAR dataset</td><td colspan=\"2\">Twitter Emo-</td><td colspan=\"2\">Semeval-18</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">tion Corpus</td><td colspan=\"2\">Task-1 Dataset</td></tr><tr><td/><td/><td>w/o Sen-</td><td>with</td><td>w/o Sen-</td><td>with</td><td>w/o Sen-</td><td>with</td></tr><tr><td/><td/><td>timent</td><td>Senti-</td><td>timent</td><td>Senti-</td><td>timent</td><td>Senti-</td></tr><tr><td/><td/><td>Value</td><td>ment</td><td>Value</td><td>ment</td><td>Value</td><td>ment</td></tr><tr><td/><td/><td/><td>Value</td><td/><td>Value</td><td/><td>Value</td></tr><tr><td>1</td><td colspan=\"2\">Google Word Vectors 0.37</td><td>0.56</td><td>0.32</td><td>0.52</td><td>0.52</td><td>0.76</td></tr><tr><td>2</td><td>GLOVE vectors</td><td>0.23</td><td>0.33</td><td>0.25</td><td>0.45</td><td>0.38</td><td>0.67</td></tr><tr><td>3</td><td>Fast Text Word Vec-</td><td>0.34</td><td>0.49</td><td>0.30</td><td>0.49</td><td>0.51</td><td>0.74</td></tr><tr><td/><td>tors</td><td/><td/><td/><td/><td/><td/></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF6": { |
|
"text": "Weighted F-score using different word embedding and with / without Sentiment Value", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"2\">Sr No Method (Input)</td><td colspan=\"6\">Anger Disgust Fear Happy Sadness Surprise</td></tr><tr><td/><td/><td colspan=\"2\">ISEAR Dataset</td><td/><td/><td/><td/></tr><tr><td>1</td><td>Sentence</td><td>0.35</td><td>0.27</td><td>0.43</td><td>0.45</td><td>0.33</td><td>-</td></tr><tr><td>2</td><td>Sentence and Sentiment Value</td><td>0.45</td><td>0.39</td><td>0.51</td><td>0.97</td><td>0.52</td><td>-</td></tr><tr><td/><td/><td colspan=\"3\">Twitter Emotion Corpus</td><td/><td/><td/></tr><tr><td>3</td><td>Sentence</td><td>0.21</td><td>0.11</td><td>0.28</td><td>0.55</td><td>0.14</td><td>0.10</td></tr><tr><td>4</td><td>Sentence and Sentiment Value</td><td>0.32</td><td>0.18</td><td>0.42</td><td>0.79</td><td>0.55</td><td>0.15</td></tr><tr><td/><td colspan=\"5\">Semeval-2018 Task-1 Affects in Tweets Dataset</td><td/><td/></tr><tr><td>5</td><td>Sentence</td><td>0.40</td><td>0.43</td><td>0.39</td><td>0.66</td><td>0.51</td><td>0.21</td></tr><tr><td>6</td><td>Sentence and Sentiment Value</td><td>0.67</td><td>0.65</td><td>0.53</td><td>0.92</td><td>0.76</td><td>0.32</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF7": { |
|
"text": "Emotion category-wise F-score for emotion recognition using Google Word2Vec vectors", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF9": { |
|
"text": "Emotion category-wise Recall values for emotion recognition using Google Word2Vec vectors", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td>Without Sentiment Information</td><td>With Sentiment Information</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF10": { |
|
"text": "Confusion Matrix for ISEAR dataset using Google Word2Vec Vector", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |