\subsection{The Architecture}\label{sec:impl}
The system architecture from a bird's eye view is presented in Figure~\ref{fig:architecture}. In a nutshell, a document enters the analysis phase, where topic inference and sentiment scoring take place, resulting in \(\langle\)topic, sentiment\(\rangle\)-pairs. During the subsequent generation phase, these are intersected with the \(\langle\)topic, sentiment\(\rangle\)-pairs in the user agenda. This intersection, possibly augmented with a knowledge graph, forms the input for a template-based generation component.
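The two-phase pipeline described above can be sketched as an end-to-end skeleton. All function names, the \(\langle\)topic, sentiment\(\rangle\) encodings, and the response wording below are illustrative stubs, not the system's actual interfaces:

```python
# End-to-end skeleton of the two-phase pipeline; every name here is an
# illustrative stub, not the system's actual interface.
def analyze(document):
    """Analysis phase: topic inference + sentiment scoring.
    Stubbed: the real system runs LDA inference and lexicon-based
    sentiment scoring to produce <topic, sentiment> pairs."""
    return [("battery", "pos")]

def generate(doc_pairs, agenda, kb=None):
    """Generation phase: intersect the document's <topic, sentiment>
    pairs with the user agenda, then realize a response per match."""
    responses = []
    for topic, senti in doc_pairs:
        if topic in agenda:  # intersection with the user agenda
            responses.append(f"On {topic}: document says {senti}, "
                             f"agenda says {agenda[topic]}.")
    return responses

agenda = {"battery": "neg"}
responses = generate(analyze("The battery life is great."), agenda)
```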

\begin{figure*}
\centering
\scalebox{0.52}{\includegraphics{figure-arch-06.jpg}}
\caption{The system architecture from a bird's eye view. Grayed out components are executed offline.} \label{fig:architecture}
\end{figure*}

\paragraph{Analysis phase}
To infer the topics of a document we use topic modeling:
a probabilistic generative modeling technique for discovering abstract topics over a large body of documents \cite{Papadimitriou:1998:LSI:275487.275505,Hofmann:1999:PLS:312624.312649,Blei:2003:LDA:944919.944937}.
Given a new document and a trained model, the inference method yields a weighted mix of topics for that document, where each topic is represented as a vector of keywords with associated probabilities.
Specifically, we use topic modeling based on {\em Latent Dirichlet Allocation} (LDA) \cite{Blei:2003:LDA:944919.944937,Blei:2012:PTM:2133806.2133826}.
For training the topic model and inferring the topics of new documents we use {\em Gensim} \cite{urek_software_2010}, a fast and easy-to-use implementation of LDA.

Next, we wish to infer the sentiment expressed in the text in relation to the topics identified in the document.
We use the semantic/lexical method implemented by \newcite{Kathuria-2012-WSD}: a word-sense disambiguation (WSD) sentiment classifier that uses the SentiWordNet \cite{BACCIANELLA10.769} database and calculates the positivity and negativity scores of a document based on the positivity and negativity of its individual words.
The result of the sentiment analysis is a pair of values indicating the positive and negative sentiment of the document, aggregated from the scores of individual words. We use the larger of these two values as the sentiment value for the whole document.\footnote{Clearly, this is a simplifying assumption. We discuss this assumption further in Section~\ref{sec:future}.}
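The aggregation step can be sketched as follows. The tiny word-to-\((pos, neg)\) lexicon is a hypothetical stand-in for SentiWordNet, and the plain averaging is a simplification of the WSD-based classifier:

```python
# Sketch of the document-level sentiment score; the tiny word->(pos, neg)
# lexicon is a hypothetical stand-in for SentiWordNet, and plain averaging
# simplifies the WSD-based classifier.
TOY_LEXICON = {
    "great":    (0.75, 0.0),
    "love":     (0.6,  0.0),
    "terrible": (0.0,  0.8),
    "slow":     (0.1,  0.5),
}

def document_sentiment(tokens):
    """Average the per-word positivity/negativity scores and return
    (pos, neg) plus the larger value as the document sentiment."""
    scored = [TOY_LEXICON[t] for t in tokens if t in TOY_LEXICON]
    if not scored:
        return 0.0, 0.0, 0.0
    pos = sum(p for p, _ in scored) / len(scored)
    neg = sum(n for _, n in scored) / len(scored)
    return pos, neg, max(pos, neg)

pos, neg, senti = document_sentiment("i love this great phone".split())
```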





\paragraph{Generation phase}
Our generation function first intersects the set of topics in the document with the set of topics in the agenda, in order to discover relevant topics for which the system should generate responses. A response may in principle integrate content from a range of topics in the topic-model distribution, but for the sake of generating concise responses our implementation focuses on a single, most prevalent, topic: we pick the highest-scoring word of the highest-scoring topic as the topic of the document and intersect it with the topics in the agenda.
The system then generates a response based on the topic, the sentiment found for that topic in the document, and the sentiment for that topic in the user agenda.
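The selection of the generation input can be sketched as follows. The agenda format and the helper name are illustrative, not the system's actual data structures:

```python
# Sketch of the document/agenda intersection; the agenda format and the
# helper name are illustrative, not the system's actual data structures.
def select_response_input(doc_topics, doc_sentiment, agenda):
    """doc_topics: [(score, [(word, prob), ...]), ...] -- weighted topic mix.
    agenda: {topic_word: agenda_sentiment}.
    Returns (topic_word, doc_sentiment, agenda_sentiment), or None when
    the intersection is empty (no response is generated)."""
    # Highest-scoring word of the highest-scoring topic.
    _, keywords = max(doc_topics, key=lambda t: t[0])
    topic_word, _ = max(keywords, key=lambda kw: kw[1])
    if topic_word not in agenda:
        return None
    return topic_word, doc_sentiment, agenda[topic_word]

agenda = {"battery": +1, "camera": -1}
doc_topics = [(0.7, [("battery", 0.4), ("screen", 0.2)]),
              (0.3, [("camera", 0.5)])]
result = select_response_input(doc_topics, 0.675, agenda)
```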


The generation component relies on a template-based approach similar to \newcite{Reiter:1997:BAN:974487.974490} and \newcite{VanDeemter:2005}. This approach is suitable for our task as it requires only content determination; templates are then applied to these content elements to generate the user responses.
%This approach is flexible enough, allowing us to easily add random variation in expressing the same content.

Templates are essentially subtrees with leaves that are placeholders for other templates or for generation functions \cite{Theune:2001:DSG:973927.973930}.
These functions receive (relevant parts of) the input and emit the part-of-speech (POS) sequence that realizes the relevant referring expressions.
The resulting POS sequences are ultimately placeholders for words from a lexicon \(\Sigma\). In order to generate a variety of expression forms --- nouns, adjectives and verbs --- these items are selected randomly from a fine-grained lexicon we defined. The sentiment (positive or negative) expressed in the response is determined by the intersection described above, but the way it is expressed is also chosen at random via the same process.
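The template mechanism can be sketched as follows. Our actual implementation realizes templates through SimpleNLG; the template shapes and lexicon entries below are illustrative only:

```python
# Sketch of template expansion with random lexical choice; the actual
# system realizes templates via SimpleNLG, and these templates and
# lexicon entries are illustrative only.
import random

LEXICON = {  # fine-grained lexicon keyed by POS slot and sentiment
    ("V_belief", None):  ["think", "believe"],
    ("VP_senti", "pos"): ["is great", "works really well"],
    ("VP_senti", "neg"): ["is disappointing", "falls short"],
}

def express_item_sentiment(sentiment):
    """Generation function: realize the sentiment as a randomly chosen VP."""
    return random.choice(LEXICON[("VP_senti", sentiment)])

def s_response(item_ref, sentiment):
    """Template S_response -> 'I <V_belief> that <itemRef> <VP_senti>.'"""
    belief = random.choice(LEXICON[("V_belief", None)])
    return f"I {belief} that {item_ref} {express_item_sentiment(sentiment)}."

random.seed(0)
response = s_response("the battery", "pos")
```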



\begin{figure*}
\small
\begin{tabular}{c}
\Tree[.S_{response} [.NP I ] [.VP V\(\downarrow\)\\\(_{belief}\)  [.SBAR that  (S_{response}) ]  ] ]
\Tree[.S_{response} S_{article}  S_{item}  (S_{relation})  ]
\\ \\
\Tree[.S_{item} NP\(\downarrow\)\\\(_{itemRef}\)  VP\(\downarrow\)\\\(_{senti_{i}}\)   ]
\Tree[.S_{article} NP\(\downarrow\)\\\(_{articleRef}\)  VP\(\downarrow\)\\\(_{senti_{a}}\)   ]
\Tree[.S_{relation} NP\(\downarrow\)\\\(_{relationRef}\)  VP\(\downarrow\)\\\(_{senti_{r}}\)   ]
\end{tabular}
\begin{tabular}{|ll}
articleRef &\(\leftarrow\)  ExpressArticle(...)\\
itemRef & \(\leftarrow\) ExpressItem(...)  \\
relationRef & \(\leftarrow\)  ExpressRelation(...)  \\
sentiment\(_{a}\) & \(\leftarrow\) ExpressArticleSentiment(...)  \\
sentiment\(_{i}\) & \(\leftarrow\)  ExpressItemSentiment(...)  \\
sentiment\(_{r}\) & \(\leftarrow\)  ExpressRelationSentiment(...) \\
belief &\(\leftarrow\)   ExpressBelief(...) \\
\end{tabular}
\normalsize
\caption{Template-based response generation. The templates are on the left. The Express* functions on the right use regular expressions over the arguments and vocabulary items from a closed lexicon.}\label{gen}
\end{figure*}

Our generation implementation is based on SimpleNLG \cite{Gatt:2009:SRE:1610195.1610208}, a surface-realization API that allows us to create the desired templates and functions and to aggregate content into coherent sentences.
The templates and functions that we defined are depicted in Figure~\ref{gen}.
In addition, we handcrafted a simple knowledge graph containing the words in a set of pre-defined user agendas. Table~\ref{tab:knowledge_graph} shows a snippet of the constructed knowledge graph, which can be used to expand the response: the topic of the response is a node in the $KB$, and we randomly select one of its outgoing edges to create a related statement whose subject is the target node of that relation. The related sentence is generated using the same template-based mechanism as before.
In principle, this process may be repeated any number of times to express larger parts of the $KB$. However, to keep the responses concise, we add only a single knowledge-base relation per response.
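The expansion step can be sketched as follows. The triples mirror the snippet in Table~\ref{tab:knowledge_graph}, while the phrasing table stands in for the template-based mechanism:

```python
# Sketch of response expansion over the knowledge graph; the triples
# mirror the paper's snippet, while the phrasing table is an
# illustrative stand-in for the template-based mechanism.
import random

KB = [("Apple", "CompetesWith", "Samsung"),
      ("Apple", "CompetesWith", "Google"),
      ("Apple", "Creates", "iOS")]

def expand_with_relation(topic):
    """Pick one outgoing edge of the topic node and phrase a related
    statement whose subject is the target node of that relation."""
    edges = [(rel, tgt) for src, rel, tgt in KB if src == topic]
    if not edges:
        return None
    relation, target = random.choice(edges)
    phrasing = {"CompetesWith": "competes with", "Creates": "is made by"}
    return f"{target} {phrasing[relation]} {topic}."

random.seed(1)
related = expand_with_relation("Apple")
```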
%
The code, templates, lexicon and KB are provided as supplementary materials (and are available at \url{www.authors.com}).






\begin{table}
\centering
\scalebox{0.8}{
\begin{tabular}{|l|l|l|}
\hline
Source  & Relation  & Target \\ \hline\hline
Apple   & CompetesWith  & Samsung \\ \hline
Apple   & CompetesWith  & Google \\ \hline
Apple   & Creates     & iOS \\ \hline\hline
\end{tabular}}
\caption{A knowledge graph snippet.}\label{tab:knowledge_graph}
\end{table} 