\begin{abstract}
Natural language traffic in social media (blogs, microblogs, talkbacks) is the subject of extensive monitoring and {\em analysis} efforts. However, the question of whether computer systems can {\em generate} such content in order to interact effectively with humans has received only sparse attention. This paper presents an architecture for generating subjective responses to opinionated articles based on users' agendas, topics, sentiment, and a knowledge graph. We present an empirical evaluation method for quantifying the human-likeness and relevance of the generated responses. We show that responses generated with additional world knowledge in the input are judged more human-like than those relying on topic, sentiment, and agenda alone, whereas the use of world knowledge does not affect perceived relevance.
\end{abstract} 