%
% File emnlp2018.tex
%
%% Based on the style files for EMNLP 2018, which were
%% Based on the style files for ACL 2018, which were
%% Based on the style files for ACL-2015, with some improvements
%%  taken from the NAACL-2016 style
%% Based on the style files for ACL-2014, which were, in turn,
%% based on ACL-2013, ACL-2012, ACL-2011, ACL-2010, ACL-IJCNLP-2009,
%% EACL-2009, IJCNLP-2008...
%% Based on the style files for EACL 2006 by 
%%e.agirre@ehu.es or Sergi.Balari@uab.es
%% and that of ACL 08 by Joakim Nivre and Noah Smith

\documentclass[11pt,a4paper]{article}
\usepackage[hyperref]{emnlp2018}
\usepackage{times}
\usepackage{latexsym}

\usepackage{url}
\usepackage{graphicx}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{multirow}

%\aclfinalcopy % Uncomment this line for the final submission
%\def\aclpaperid{***} %  Enter the acl Paper ID here

%\setlength\titlebox{5cm}
% You can expand the titlebox if you need extra space
% to show all the authors. Please do not make the titlebox
% smaller than 5cm (the original size); we will check this
% in the camera-ready version and ask you to change it back.

\newcommand\BibTeX{B{\sc ib}\TeX}
\newcommand\confname{EMNLP 2018}
\newcommand\conforg{SIGDAT}
\newcommand{\mychar}{%
  \begingroup\normalfont
  \includegraphics[height=\fontcharht\font`\B]{fig6.pdf}%
  \endgroup
}


\title{Investigating Fine-grained Interactions between Emojis and Sentiments in Social Media}

\author{First Author \\
  Affiliation / Address line 1 \\
  Affiliation / Address line 2 \\
  Affiliation / Address line 3 \\
  {\tt email@domain} \\\And
  Second Author \\
  Affiliation / Address line 1 \\
  Affiliation / Address line 2 \\
  Affiliation / Address line 3 \\
  {\tt email@domain} \\}

\date{}

\begin{document}
\maketitle
\begin{abstract}
Emojis are increasingly used to express users' emotions, feelings and moods in microblogs. Most existing sentiment analysis models treat emojis as natural labels for expanding the training corpus or as model features. However, emojis are ambiguous, which introduces noisy labels and noisy features. Based on the cognitive finding that emojis in social media interact with the sentiment expression of texts, we investigate these fine-grained interactions and propose a neural network model with emoji-based attention for microblog sentiment classification. We use three types of features: text features, emoji features and interactive features between texts and emojis. The interactive features are obtained by an emoji-based attention network, which implicitly disambiguates emojis. To train and evaluate our approach, we construct a Chinese microblog corpus. Experimental results show that our model achieves significant improvements over the baselines.
\end{abstract}

\section{Introduction}
Nowadays, microblogs such as Twitter and Sina Weibo have become some of the most important social media platforms for users to express their feelings and emotions and to share their opinions. Sentiment classification is one of the most fundamental tasks and key topics of microblog research; it aims to predict the overall sentiment polarity of a microblog post.

In face-to-face communication, sentiment can often be deduced from visual cues such as smiling. In microblogs, however, such visual cues are lost. Over the years, people have embraced so-called emoticons and emojis as an alternative to face-to-face visual cues in social media, and emojis in particular have been widely used in microblogs. According to Emojipedia\footnote{https://emojipedia.org/twitter}, emojis have been used over ten billion times on Twitter. Therefore, many researchers treat emojis as an important factor in the sentiment classification task, applying them as model features~\cite{Chikersal2015SeNTU,Jiang2014Microblog} or as natural annotations~\cite{Huang2017Multimodal} to expand the training corpus.

Research shows that emojis suffer from ambiguity: the same emoji can express different emotions in different contexts. For instance, Figure \ref{emoji_po} shows four Sina Weibo posts that use the crying emoji \mychar{} to express different sentiments. The crying emoji \mychar{} in 1) typically conveys sadness, a negative sentiment, but it can also express positive sentiments, namely being touched, sympathy and gratefulness in 2), 3) and 4), respectively. These examples demonstrate that an emoji can be ambiguous, which makes it difficult for previous methods~\cite{zhao2012moodlens,jiang2013every,Jiang2014Microblog} that treat emojis as features or sentiment labels to disambiguate them.
\begin{figure}[hp]
\centering
\small
\includegraphics[width=0.40\textwidth]{fig1.pdf}
\caption{\label{emoji_po} Different polarities of the same emoji}
\end{figure}

In the field of social cognition, studies suggest that emojis, which express the sentiment of communicators, are similar to the non-verbal components of human communication (Walther and D'Addario, 2001), such as facial expressions and gestures. These studies hold that emojis in social media interact with the sentiment expression of texts and can even change the sentiment polarity of the text. Figure~\ref{change} shows how the sentiment of the text changes when it interacts with the emoji for two of the above instances. Such interactions between emojis and texts implicitly disambiguate the sentiment of emojis.
\begin{figure}[tp]
\centering
\includegraphics[width=0.45\textwidth]{fig2.pdf}
\caption{\label{change} Sentiment changes in microblog texts}
\end{figure}

In order to investigate such correlations quantitatively, we build a corpus of 10,042 microblogs, whose statistics are shown in Table \ref{corpus-table}. The sentiment of the text changes after interaction with the emoji in 38.40\% of the posts. Furthermore, we investigate the sentiment distribution of the top 10 emojis, as shown in Figure \ref{emoji_sent}. These observations demonstrate that emojis strongly influence the sentiment of texts and can even change it.

This sentiment change process is similar to an attention mechanism that selectively focuses on useful information in texts~\cite{Jiang2014Microblog}. Each word of a microblog text carries a different weight, and emojis influence those weights, changing the expression of the text and, in turn, its sentiment polarity.

Inspired by the cognitive fact above and the attention mechanism, we propose a neural model with emoji-based attention for microblog sentiment classification, as illustrated in Figure~\ref{figure 2}. We use three kinds of features: the text feature, the emoji feature and the interactive feature between them. We first obtain the representation of a text with a bidirectional long short-term memory (Bi-LSTM) model, then compute the interactive feature between the text and its emoji by applying emoji-based attention over the Bi-LSTM states. Finally, we concatenate the three features as the input of the classification layer and obtain the sentiment polarity of the text.

\begin{figure}[tp]
\centering
\includegraphics[width=0.45\textwidth]{fig3.pdf}
\caption{\label{figure 2}The neural model with emoji attention}
\end{figure}

The main contributions of our work can be summarized as follows: 
\begin{itemize}
\item We construct an emoji-based, sentiment-annotated microblog corpus, which contains 10,042 microblog posts. 
\item We train embeddings for emojis and words simultaneously when processing the corpus, obtaining representation vectors of emojis that contain contextual information.
\item Since emojis influence texts, we propose an attention mechanism that captures the crucial parts of a text in response to a given emoji. Experimental results show the effectiveness of our model compared with the baselines.
\end{itemize}

\section{Dataset Creation}
In most annotated sentiment corpora, emojis are usually filtered out as noise. To train and evaluate our approach, in this section we describe the process of collecting and annotating our microblog dataset, including the pure-text polarity and the overall polarity of each microblog with its emoji.

\subsection{Data Collection}
Sina Weibo is the most widely used Chinese microblog website, and we used it to gather microblogs that express sentiment. We first downloaded 300 thousand microblogs using the public streaming Weibo API\footnote{https://api.weibo.com} and extracted the microblogs containing emojis, obtaining 110 thousand microblogs. We then counted the occurrences of each emoji and kept the emojis with at least 10 occurrences. Finally, we split each microblog by its emojis and retained the microblogs containing only one emoji.

Automatic filtering is applied to remove URLs, user mentions and hashtags. We retained microblogs longer than 5 tokens, obtaining 80 thousand microblogs, from which we randomly sampled 15 thousand for the subsequent labelling step. The Jieba Chinese text segmentation tool\footnote{https://github.com/fxsjy/jieba} is used to segment the texts of the dataset.
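The filtering step above can be sketched as follows; this is a minimal illustration with simple regular expressions standing in for the actual filters, and a whitespace split standing in for Jieba segmentation (the example post is a made-up placeholder):

```python
import re

def clean_microblog(text):
    """Remove URLs, user mentions and Weibo-style #topic# hashtags."""
    text = re.sub(r'https?://\S+', '', text)  # URLs
    text = re.sub(r'@\S+', '', text)          # user mentions
    text = re.sub(r'#[^#]+#', '', text)       # hashtags / topics
    return ' '.join(text.split())             # squeeze whitespace

def keep(tokens, min_len=5):
    """Retain only microblogs longer than min_len tokens."""
    return len(tokens) > min_len

post = "@user 今天 天气 真好 开心 极了 哈哈 #日常# https://t.cn/abc"
tokens = clean_microblog(post).split()  # stand-in for jieba.lcut(...)
print(tokens, keep(tokens))
```

In practice the split would be replaced by `jieba.lcut` on the cleaned text.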

\subsection{Annotation}
To construct this corpus, we hired three student annotators: one senior linguistics student and two computer science students.
Sentiment polarities are classified into positive, neutral and negative, denoted by 0, 1 and 2, respectively. A label must be assigned by at least two annotators to be accepted. 

The annotation work is divided into two parts. In the first part, the annotators were asked to label the polarity of each microblog without considering the emoji; in other words, only the text of each microblog was used as evidence for the text polarity. In the second part, they were asked to label each microblog while also considering the basic polarity of the emoji, obtaining the overall polarity of the microblog. Because emoji polarities are ambiguous, the inter-annotator consistency of the overall polarities is lower than that of the text polarities. We finally obtained 10,042 labelled microblogs with emojis and the 10,042 corresponding microblogs without emojis. Table~\ref{corpus-table} shows the corpus statistics.

\begin{table}[tp]
\small
\begin{center}
\begin{tabular}{|p{2.0cm}|p{0.83cm}|p{0.8cm}|p{0.93cm}|p{1.38cm}|}
\hline Corpus & Positive & Neutral & Negative & Consistency\\ \hline
Text polarity & 3827 &3618 &2597 & 85\%\\
Overall polarity &5771 &942 &3329 & 72\%\\
\hline
\end{tabular}
\caption{\label{corpus-table} Corpus statistics. Rows 1 and 2 denote the sentiment of microblogs without and with emojis, respectively.}
\end{center}
\end{table}

Emojis are ambiguous: the same emoji has different polarities in different contexts. We count the occurrences of each emoji in the annotated corpus and select the top 10 emojis by frequency. Statistics of the emoji polarities are shown in Figure \ref{emoji_sent}. We can see that the same emoji has different polarities in different microblogs; for example, Row 2 shows that the emoji \mychar{} has all three polarities. Therefore, we need to eliminate the ambiguity of emojis.
\begin{figure}[tp]
\centering
\includegraphics[width=0.45\textwidth]{emoji_sent.pdf}
\caption{\label{emoji_sent} Statistics for emoji polarities.}
\end{figure}

\section{Model}
In this section, we introduce our emoji-based attention neural network model for sentiment analysis in detail. First, we introduce the Bi-LSTM-based sentiment analysis model. We then discuss the emoji-based attention Bi-LSTM sentiment analysis model and the concatenated features. Finally, we describe model training.

\subsection{Bidirectional LSTM (Bi-LSTM) Based Sentiment Analysis Model}
Bidirectional LSTM (Bi-LSTM) is a recurrent neural network (RNN) model designed mainly to process sequence data, and it has been widely used in natural language processing (NLP) tasks. In sentiment analysis, a Bi-LSTM model is usually applied to learn the representation of a sentence and to classify the sentiment of the text according to that representation. \citet{Yang2017Hierarchical} apply Bi-LSTM to document classification and achieve good performance. The Bi-LSTM model is briefly described below.

LSTM is useful for capturing long-range dependencies in sequences. An LSTM model consists of multiple LSTM cells, each of which models a digital memory in the neural network; gates allow the LSTM to store and access information over time. Given a short text with words $w_{t}$, $t \in [1,T]$, the words are embedded into vectors through an embedding matrix $W_{e}$: $x_{t} = W_{e} w_{t}$, $x_{t} \in \mathbb{R}^d$, where $d$ is the dimension of the word embeddings. The inputs of each LSTM cell are the word embedding $x_{t}$, the previous cell state $c_{t-1}$ and the previous hidden state $h_{t-1}$; the outputs are $h_{t}$ and $c_{t}$. Formally, each cell in the LSTM is computed as follows:
\begin{align} 
 i_{t} &=\sigma(W_{i}x_{t} + U_{i}h_{t-1} +b_{i}) \nonumber\\
 f_{t} &=\sigma(W_{f}x_{t} + U_{f}h_{t-1} +b_{f}) \nonumber\\
\tilde{c} &=\tanh(W_{c}x_{t} + U_{c}h_{t-1} +b_{c})\nonumber\\
 c_{t} &= f_{t}\odot c_{t-1} + i_{t}\odot \tilde{c} \nonumber\\
 o_{t} &=\sigma(W_{o}x_{t} + U_{o}h_{t-1} +b_{o})\nonumber\\
 h_{t} &= o_{t} \odot \tanh(c_{t})
\end{align}
where $i$, $f$ and $o$ are the input gate, forget gate and output gate, respectively. $W_{i},W_{f},W_{o},W_{c},U_{i},U_{f},U_{o},U_{c},b_{i},b_{f},b_{o},b_{c}$ are the parameters to be trained, $\odot$ denotes element-wise multiplication, and $\sigma$ is the sigmoid function. $x_{t}$ is the input of the LSTM cell and $h_{t}$ is the hidden-state vector.
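The cell computation in Equation (1) can be sketched as follows; a minimal NumPy illustration of a single step with randomly initialized, untrained parameters (the toy dimensions are stand-ins, not the values used in our experiments):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x_t, h_prev, c_prev, params):
    """One LSTM step following Eq. (1): gates, candidate, new cell and hidden state."""
    W, U, b = params  # dicts keyed by gate name: i, f, c, o
    i = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])        # input gate
    f = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])        # forget gate
    c_tilde = np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])  # candidate memory
    c_t = f * c_prev + i * c_tilde                              # element-wise mix
    o = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])        # output gate
    h_t = o * np.tanh(c_t)
    return h_t, c_t

d, hdim = 4, 3  # toy sizes; the paper uses 200-d embeddings and 100-d states
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(hdim, d)) for k in 'ifco'}
U = {k: rng.normal(size=(hdim, hdim)) for k in 'ifco'}
b = {k: np.zeros(hdim) for k in 'ifco'}
h, c = np.zeros(hdim), np.zeros(hdim)
h, c = lstm_cell(rng.normal(size=d), h, c, (W, U, b))
print(h.shape)
```

Running the cell over $t = 1, \dots, T$ (and again in reverse) yields the forward and backward state sequences used below.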

A standard LSTM has the disadvantage that the text can only be read forward, so our model adopts a bidirectional LSTM that reads the text in both directions: Bi-LSTM contains a forward $\overrightarrow{LSTM}$ that reads the text from $x_{1}$ to $x_{T}$ and a backward $\overleftarrow{LSTM}$ that reads it from $x_{T}$ to $x_{1}$.
\begin{align} 
 x_{t} &= W_{e}w_{t}, t \in \left[1,T \right] \nonumber\\
 \overrightarrow{h_{t}} &= \overrightarrow{LSTM}(x_{t}),t \in \left[1,T \right] \nonumber\\
 \overleftarrow{h_{t}} &= \overleftarrow{LSTM}(x_{t}), t \in \left[1,T \right] 
\end{align}
The Bi-LSTM layer maps each word $w_{t}$ to a pair of hidden vectors $\overrightarrow{h_{t}}$ and $\overleftarrow{h_{t}}$. We obtain the representation of a word $w_{t}$ by concatenating them, that is, $h_{t} = [\overrightarrow{h_{t}}, \overleftarrow{h_{t}}]$. We thus obtain $[h_{1}, h_{2}, \cdots, h_{T}]$ and feed the hidden states to an average pooling layer to obtain a sentence representation $s$.
\subsection{Emoji-based Attention Bi-LSTM Sentiment Analysis Model}
To model the effects of emojis on texts, we propose an emoji-based attention mechanism. In a given microblog, each word contributes unequally to the sentiment polarity of the microblog, and the effects of emojis are likewise unequal. The emoji attention mechanism measures the weight of each word of the microblog after incorporating both the words and the emoji.

A microblog is denoted $\left\{ w_{1}, w_{2}, \cdots, w_{T};E \right\}$, where $w_{i}$ is a token of the microblog text after tokenization and $E$ is the emoji of the microblog. First, both $w_{i}$ and $E$ are converted to embedding representations, $w_{i} \in \mathbb{R}^d$, $E \in \mathbb{R}^d$, where $d$ is the dimension of the embeddings. Since many users post multiple identical emojis in the same microblog, we adopt a single emoji vector $v_{e}$ per microblog.

Different from Section 3.1, here we let $[h_{1}, h_{2}, h_{3}, \cdots, h_{T}]$ be the representation of the text $\{ w_{1}, w_{2}, \cdots, w_{T}\}$ produced by the Bi-LSTM layer. We aggregate the representations of the informative words to form the sentence representation. Formally, the sentence representation is a weighted sum of the hidden states:
\begin{align} 
s &= \sum\limits_{t=1}^Ta_{t}h_{t}
\end{align}
where $a_{t}$ measures the importance of the $t$-th word in combination with the emoji. The attention weight $a_{t}$ for each hidden state is defined as:
\begin{align} 
a_{t} &= \frac{\exp(score(h_{t}, v_{e}))}{\sum_{j=1}^{T}\exp(score(h_{j},v_{e}))}
\end{align}
where $score$ is a score function that measures the importance of each word for composing the sentence representation. The score function is defined as:
\begin{align} 
score(h_{t}, v_{e}) &= v^{T}\tanh(W_{h}h_{t}+W_{E}v_{e}+b)
\end{align}
where $W_{h}$ and $W_{E}$ are weight matrices, $b$ is a bias vector, $v$ is a weight vector and $v^{T}$ denotes its transpose.

We call $s$ the interactive feature, because it contains both the text feature and the emoji feature. Finally, we concatenate the three types of features, which is defined as:
\begin{align} 
l_{c} &= [\overrightarrow{h_{0}}, \overleftarrow{h_{T}}] \oplus s \oplus v_{e}
\end{align} 
where $\overrightarrow{h_{0}}$ and $\overleftarrow{h_{T}}$ are the forward LSTM output at the first word and the backward LSTM output at the last word of the microblog text, respectively.
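The attention and concatenation steps in Equations (3)--(6) can be sketched as a toy NumPy computation; the parameters are random, untrained stand-ins, `H` holds the Bi-LSTM hidden states, and the first and last rows of `H` stand in for $\overrightarrow{h_{0}}$ and $\overleftarrow{h_{T}}$:

```python
import numpy as np

def emoji_attention(H, v_e, W_h, W_E, v, b):
    """Emoji-based attention: score each hidden state against the emoji vector
    (Eq. 5), softmax-normalize (Eq. 4), and return the weighted sum s (Eq. 3)."""
    scores = np.array([v @ np.tanh(W_h @ h_t + W_E @ v_e + b) for h_t in H])
    a = np.exp(scores - scores.max())  # max-shift for numerical stability
    a /= a.sum()                       # attention weights sum to 1
    s = (a[:, None] * H).sum(axis=0)   # interactive feature s
    return s, a

T, hdim, d = 6, 4, 4  # toy sizes: T words, hidden dim, emoji embedding dim
rng = np.random.default_rng(1)
H = rng.normal(size=(T, hdim))  # h_1 ... h_T from the Bi-LSTM
v_e = rng.normal(size=d)        # emoji vector
s, a = emoji_attention(H, v_e,
                       rng.normal(size=(hdim, hdim)),
                       rng.normal(size=(hdim, d)),
                       rng.normal(size=hdim), np.zeros(hdim))
# Eq. (6): concatenate the boundary states, interactive feature and emoji vector
l_c = np.concatenate([H[0], H[-1], s, v_e])
print(l_c.shape)
```

The resulting `l_c` is the input to the classification layer described in Section 3.3.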

\subsection{Training}
After introducing the emoji-based attention mechanism, we obtain the vector representation $l_{c}$ of the concatenated features. To classify the sentiment of the text, our model uses a non-linear function to project $l_{c}$ into the target space of $C$ classes: 
\begin{align} 
d_{c} &= \tanh(W_{c}l_{c}+b_{c})
\end{align}
Afterwards, we use a softmax layer to obtain the text sentiment distribution:
\begin{align} 
p_{c} &= \frac{\exp(d_{c})}{\sum_{k=1}^{C}\exp(d_{k})}
\end{align}
where $C$ is the number of sentiment labels and $p_{c}$ is the predicted probability of sentiment label $c$. 

We train our network by minimizing the cross-entropy loss. Let $D$ denote the training set of microblog texts; the loss function of our model is defined as:
\begin{align} 
L &= -\sum_{d \in D}\sum_{c=1}^{C} p_{c}^{g}(d) \log(p_{c}(d))
\end{align}
where $p_{c}^{g}(d)$ is the gold probability of class $c$ for document $d$ (1 for the gold class and 0 otherwise) and $p_{c}(d)$ is the predicted probability.
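The classification and loss computations of Equations (7)--(9) can be sketched numerically; a toy illustration with random, untrained parameters and a one-hot gold distribution:

```python
import numpy as np

def predict(l_c, W_c, b_c):
    """Project the concatenated feature (Eq. 7) and apply softmax (Eq. 8)."""
    d_c = np.tanh(W_c @ l_c + b_c)
    e = np.exp(d_c - d_c.max())  # max-shift for numerical stability
    return e / e.sum()

def cross_entropy(p_gold, p_pred):
    """Cross-entropy loss for one document (one term of the sum in Eq. 9)."""
    return -np.sum(p_gold * np.log(p_pred))

C, dim = 3, 8  # three sentiment classes, toy feature size
rng = np.random.default_rng(2)
p = predict(rng.normal(size=dim), rng.normal(size=(C, dim)), np.zeros(C))
gold = np.array([0.0, 1.0, 0.0])  # one-hot gold label: neutral
print(cross_entropy(gold, p))
```

Summing this term over all training documents gives the loss $L$ minimized during training.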

\section{Experiment}
In this section, we introduce the experimental settings, results and analysis.
\subsection{Experiment Setting}

\textbf{Embedding} To obtain embedding representations of the words and emojis in microblogs, the word and emoji embeddings are trained on a large-scale Chinese microblog corpus consisting of 3.5 million random microblogs collected through the Sina Weibo API. Text words and emojis are trained simultaneously using the SkipGram model \cite{mikolov2013distributed} of word2vec\footnote{https://code.google.com/p/word2vec}. The vocabulary size of the word embeddings is 252,267. We randomly initialize the vectors of words and emojis that are not in the vocabulary and perform supervised fine-tuning on the training corpus.\\
\textbf{Evaluation Metrics}
We perform five-fold cross-validation experiments and report the overall performance. The whole dataset is split into five equal sections, each decoded by the model trained on the remaining four sections. We randomly choose one of the four training sections as the development set to tune hyper-parameters. Classification results are measured by accuracy, which is defined as:
\begin{align} 
Accuracy &= \frac{T}{N}
\end{align}
where $T$ is the number of predicted sentiment labels that are identical to the gold labels and $N$ is the number of documents.\\
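The evaluation protocol can be sketched as follows; a minimal pure-Python illustration of the five-fold split and the accuracy metric above (the indices and labels are dummy placeholders):

```python
def five_fold(indices, k=5):
    """Split indices into k equal folds; each fold is decoded by a model
    trained on the remaining folds (one of which serves as the dev set)."""
    size = len(indices) // k
    folds = [indices[i * size:(i + 1) * size] for i in range(k)]
    for i, test in enumerate(folds):
        train = [x for j, f in enumerate(folds) if j != i for x in f]
        yield train, test

def accuracy(gold, pred):
    """Accuracy = T / N: matching predictions over total documents."""
    T = sum(g == p for g, p in zip(gold, pred))
    return T / len(gold)

indices = list(range(20))
splits = list(five_fold(indices))
print(len(splits), accuracy([0, 1, 2, 1], [0, 1, 1, 1]))  # → 5 0.75
```

The reported numbers are then averaged over the five test folds.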
\textbf{Parameter settings} We set the dimensions of the word and emoji embeddings to 200 and the dimensions of the hidden states and cell states of our LSTM cells to 100. We use Adadelta~\cite{Zeiler2012ADADELTA} as our optimization method during training, with a batch size of 16, a momentum of 0.9 and an initial learning rate of 0.01.

\subsection{Baselines}
To verify the influence of emojis on texts and to eliminate the ambiguity of emojis, we compare our model with several baseline methods: Emoji, Bi-LSTM (text), Bi-LSTM (text+emoji) and Bi-LSTM (text+emoji)$^\#$.\\ 
\textbf{Emoji:} We use only the self-polarity of each emoji, which is manually annotated, as the basis of judgement.\\
\textbf{Bi-LSTM (text):} The texts of the microblogs, with emojis removed, are fed into the Bi-LSTM layer.\\
\textbf{Bi-LSTM (text+emoji):} The whole microblogs, containing both texts and emojis, are the inputs of the Bi-LSTM sentiment analysis model.\\
\textbf{Bi-LSTM (text+emoji)$^\#$:} To illustrate the ambiguity of emojis, we extend the training corpus with additional microblog data labelled automatically by their emojis.

The representation of the microblog text produced by the Bi-LSTM layer of each model is treated as the feature vector for classification. The optimization objective and the training method of the four baselines are the same. 
\begin{table}[tp]
\small
\begin{center}
\begin{tabular}{|p{2.0cm}|p{0.9cm}|p{0.7cm}|p{1.5cm}|}
\hline Models & Polarity & F(\%)& Accuracy(\%)\\ \hline
Emoji & - & - & 84.65\\ \hline
\multirow{3}{*}{Bi-LSTM(text)} & Positive & 75.19 & \multirow{3}{*}{76.55}\\
& Neutral & 62.51 &\\ 
& Negative & 63.50 &\\ \hline
\multirow{3}{*}{Bi-LSTM(text+emoji)} & Positive & 89.50 & \multirow{3}{*}{86.23}\\
& Neutral & 37.08 &\\
& Negative & 89.92 &\\ \hline
\multirow{3}{*}{Bi-LSTM(text+emoji)$^\#$} & Positive & 89.49 & \multirow{3}{*}{85.51} \\
& Neutral & 23.73 \\
& Negative & 89.26\\ \hline
\multirow{3}{*}{Bi-LSTM(attention)} & Positive & \textbf{91.49} & \multirow{3}{*}{\textbf{87.61}} \\
& Neutral & 36.70 \\
& Negative & \textbf{90.54}\\ \hline
\end{tabular}
\caption{\label{results} {\small Results of different models.}}
\end{center}
\end{table}

\subsection{Experiment Results}
Table~\ref{results} shows the experimental results of the different models for microblog sentiment analysis. We analyze each model according to the results in Table~\ref{results}.\\
\textbf{Emoji} improves the performance of the sentiment classifier using emojis alone, which indicates that emojis influence microblog sentiment.\\
\textbf{Bi-LSTM (text)}, the standard bidirectional LSTM, cannot take advantage of the emoji information in the sentence, so, not surprisingly, it has the worst performance.\\ 
\textbf{Bi-LSTM (text+emoji)} improves the performance of the sentiment classifier by treating emojis as part of the microblog. In other words, the model uses both text vectors and emoji embeddings and obtains a better performance.\\
\textbf{Bi-LSTM (text+emoji)$^\#$}: although this model extends the training data, its performance is not improved but reduced by 0.72\%, which shows that emoji sentiment in microblogs is ambiguous. The added training data are labelled automatically according to the polarity of the emojis themselves, so the accuracy decreases.\\
\textbf{Bi-LSTM (attention)} uses the emoji-based attention mechanism while integrating the text and emoji features; the three kinds of information are concatenated and fed into the hidden layer of our network, followed by a softmax layer. We obtain the best accuracy, 87.61\%. Our model not only uses text features, emoji features and interactive features of texts and emojis, but also captures the most important text information in response to a given emoji.

Moreover, the results show that the emoji-based attention model obtains a considerable improvement over the model without the attention mechanism, which shows that the emoji-based attention model can effectively use the mutual information between emojis and texts. The result of Bi-LSTM (text+emoji) also proves the importance of selecting more meaningful words in sentiment classification, which is a main reason for introducing emoji information in the form of attention.

Table~\ref{results} also lists the F-scores of the different sentiment polarities. We can see that the negative F-score improves greatly and surpasses the positive F-score, while the neutral F-score is the lowest. This shows that negative emojis have the greatest influence on text polarity and that the sentiment of neutral texts is often changed by emojis: emojis have little effect on the accuracy of neutral microblogs but significantly increase the recall.

\subsection{Analysis}
\textbf{Interaction of emojis and texts} We take the test results of the Bi-LSTM (text+emoji) and Bi-LSTM (attention) models and analyze the accuracies of the different sentiment polarities from the pure-text polarity to the overall polarity of the microblog with its emoji. Table~\ref{accuracy} shows the results of the two models on microblogs whose sentiment does not change and those whose sentiment changes. 

\begin{table}[tp]
\small
\centering
\begin{tabular}{|p{1.5cm}|p{1.0cm}|p{1.0cm}|p{1.5cm}|p{1.5cm}|}
\hline
\multicolumn{1}{c|}{\multirow{2}{*}{Sentiment}}& \multicolumn{2}{|c|}{Polarity} & \multicolumn{1}{c|}{\multirow{2}{*}{Bi-LSTM}}& \multicolumn{1}{c|}{\multirow{2}{*}{Bi-LSTM}}\\
\cline{2-3}
\multicolumn{1}{c|}{} & {Text} & {Overall}&\multicolumn{1}{c|}{(text)} & \multicolumn{1}{|c}{(attention)}\\
\hline
\multicolumn{1}{c|}{\multirow{3}{*}{non-change}} & positive & positive & 95.29 & 94.19 \\
& neutral & neutral &45.98  & 56.32 \\
& negative & negative &87.68 & 88.15 \\\cline{2-5}
& \multicolumn{2}{|c|}{average} &89.20 &\textbf{89.45} \\
\hline
\multicolumn{1}{c|}{\multirow{3}{*}{change}} & positive & neutral & 20 & 21.43 \\
& positive & negative &76.47 & 88.24 \\
& neutral & positive &93.26  & 96.85 \\
& neutral & negative &87.27  & 97.27 \\
& negative & positive &100  &100 \\
& negative & neutral &11.54 & 12.82 \\\cline{2-5}
& \multicolumn{2}{|c|}{average} & 81.71 & \textbf{87.07} \\
\hline
\end{tabular}
\caption{\label{accuracy} {\small Results for the different sentiment polarities. Columns 4--5 give the accuracies of Bi-LSTM (text) and Bi-LSTM (attention), respectively.}}
\end{table}
From Table~\ref{accuracy}, we can see that for unchanged sentiment polarities our model, Bi-LSTM (attention), improves the neutral and negative accuracies by 10.34\% and 0.47\%, respectively, compared with the baseline model, and its average accuracy is also higher. \\
For changed sentiment polarities, our model outperforms the baseline in most cases. In particular, the accuracies of our model improve significantly by 11.78\%, 10\% and 3.59\% for the changes from positive text to negative overall polarity, from neutral text to negative overall polarity, and from neutral text to positive overall polarity, respectively. This further demonstrates that our model makes better use of the interactive information between emojis and texts for sentiment classification.\\
\textbf{Performance of microblogs with different emojis} We study the performance on microblogs containing different emojis, using the above two sets of test results and the ten most frequent emojis in the corpus. Figure \ref{emoji_accu} shows the accuracies for the different emojis. We can see that our model obtains better accuracies than Bi-LSTM (text+emoji) and further reduces the ambiguity of emojis. For example, on the third emoji \mychar{}, our model reaches an accuracy of 88.37\%. This is because our model uses three kinds of information: the Bi-LSTM outputs (the concatenation of the states of the first and last words of the text), the emoji-based attention feature and the emoji embedding, whereas Bi-LSTM (text+emoji) uses only the text and the emoji without their interactive information. Therefore, our model can effectively reduce the ambiguity of emojis.

\begin{figure}[tp]
\centering
\includegraphics[width=0.45\textwidth]{emoji_accu.pdf}
\caption{\label{emoji_accu} Accuracies of different emojis}
\end{figure}

\textbf{Influence of the three types of features}
We explore the accuracies of the three types of features in our model, Bi-LSTM (attention). That is, we use the concatenation of the text feature and the interactive feature, the concatenation of the interactive feature and the emoji feature, and the concatenation of the text feature and the emoji feature, respectively. Table~\ref{feature} shows the accuracies of the different combinations. The best result is Row 3, which uses the text feature and the interactive feature between the emoji and the text; the lowest is Row 2, which uses only the text and emoji features without the interactive feature. However, all three accuracies are lower than the 87.61\% of our full model. Therefore, combining all the features helps microblog sentiment analysis.
\begin{table}[t!]
\small
\begin{center}
\begin{tabular}{|l|c|}
\hline \ Features & \ Accuracy (\%) \\ \hline
interaction and emoji & 86.31 \\
text and emoji & 83.56 \\
text and interaction & 86.75 \\
\hline
\end{tabular}
\caption{\label{feature} Influences of three type features.}
\end{center}
\end{table}

\subsection{Case Study}
We sample several microblogs at random to demonstrate the difference between our model and Bi-LSTM (text+emoji), as shown in Figure \ref{case_study}. Columns 2--4 give the gold polarity, the polarity predicted by Bi-LSTM (text+emoji) and the polarity predicted by Bi-LSTM (attention), respectively. Bi-LSTM (text+emoji) predicts these samples wrongly while our model predicts them correctly: Bi-LSTM (text+emoji) treats emojis and texts equally, whereas our model accounts for the weights of words together with the emoji information.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.45\textwidth]{case_study.pdf}
\caption{\label{case_study} Microblog samples that Bi-LSTM (attention) predicts correctly but Bi-LSTM (text+emoji) predicts wrongly.}
\end{figure}

It is enlightening to analyze which words decide the sentiment polarity of a microblog with an emoji. We obtain the attention weights $a$ in Equation (4) and visualize them accordingly. Figure \ref{weight_cases} shows how attention focuses on the words of a microblog under the interaction with its emoji.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.45\textwidth]{weight_case.pdf}
\caption{\label{weight_cases} Attention Visualizations}
\end{figure}

We use a histogram to represent the weight of each word: the vertical axis indicates the weight, and the horizontal axis lists the words of the microblog text. The height of a column indicates the importance of the corresponding weight in the attention vector $a$; the higher the column, the more important the word.

\section{Related Work}
Sentiment analysis, or opinion mining, is the computational study of people's opinions, feelings, attitudes and emotions towards events, issues, entities, products, individuals, topics and their attributes~\cite{liu2012sentiment}. Sentiment analysis is usually treated as a classification problem, for which various classifiers such as SVMs and decision trees are widely used. Most of these models rely on features such as words, word counts, parts of speech, syntax and manually annotated sentiment lexicons.

Neural network approaches have been widely used in sentiment analysis in recent years, aiming to reduce the need for hand-crafted features. Such approaches mostly adopt convolutional neural network (CNN) and recurrent neural network (RNN) models. \citet{chen2016neural} propose a hierarchical neural network that incorporates user and product attention into sentiment analysis. \citet{zhang2016gated} use gated neural networks for targeted sentiment analysis. These models mainly use texts as input and do not consider the effects of emojis on texts. 

Microblogs, such as Twitter and Sina Weibo, are popular social media in which a large number of emojis are used to express people's emotions and opinions. Current strategies for applying emojis can be roughly classified into the following three kinds: 

The first strategy~\cite{davidov2010enhanced,mohammad2015using} takes emojis as natural annotations and supposes that emojis express the users' emotions and opinions independently. \citet{purver2012experimenting} leverage hashtags and emoticons in Twitter data to generate training data; their experiments show that the method is suitable for some emotions (happiness, sadness and anger) but less able to distinguish others. \citet{wijeratne2016emojinet} regard an emoji character as taking on different polarities based on its context. Training corpora constructed from emojis contain a lot of noise, and this noise has side effects on model training.

The second strategy~\cite{zhao2012moodlens} incorporates emojis as features into the classification model. \citet{jiang2013every} combine Weibo structural features, sentence structure features and facial expression features in an SVM model and classify Weibo posts into positive, neutral and negative. This strategy also does not reflect the sentimental effect of emojis on texts.

The third strategy treats emojis and texts as two parallel sources of information. \citet{hogenboom2013exploiting} believe that emoticons influence text sentiment and distinguish two types of influence, negative and positive, but do not consider ambiguity; they simply treat texts and emoticons as a linear relationship. Other studies divide social media texts into two parts, emojis and texts, use different models to compute their respective sentiments, and finally combine the two sentiments to obtain the final text sentiment.

In summary, although the models of these three strategies take emojis into consideration, they do not consider the interaction between emojis and texts.

The attention mechanism was first applied to image processing~\cite{mnih2014recurrent}. Recently, it has been increasingly applied to natural language processing, where its role is to select crucial information from a large amount of information. Studies have shown that the attention mechanism greatly improves the performance of machine translation~\cite{luong2015effective}, question answering~\cite{tan2016improved} and sentiment analysis~\cite{Long2017A}.

This paper proposes a neural network model with an emoji-based attention mechanism that obtains the interactive information between texts and emojis and integrates the text, emoji and interactive information. This models the cognitive fact that emojis influence the sentiment of texts.

\section{Conclusion and Future Work}

Inspired by the cognitive fact that emojis influence the sentiment expression of texts, we propose a novel cognition-based attention model to improve sentiment analysis. In the model, each word of a sentence combined with an emoji is assigned a different weight, which makes the semantic expression of the sentence more accurate. Our model takes full advantage of the interactive information between texts and emojis as well as the text and emoji information themselves. We train emojis and texts at the same time to obtain the word embeddings and emoji embeddings, so the emoji embeddings also contain contextual features. More importantly, to study the interaction between texts and emojis in microblogs, we annotate a microblog corpus that contains pure-text polarities and overall polarities of microblogs with emojis.

Experimental results validate the effectiveness of our method in sentiment analysis: it clearly outperforms baseline methods that use only word embeddings and emoji embeddings. Our emoji-based attention mechanism can further combine texts and emojis to improve performance. Future work includes handling microblogs with multiple emojis for whole-microblog sentiment analysis.

\bibliography{emnlp2018}
\bibliographystyle{acl_natbib_nourl}
\end{document}
