<!DOCTYPE html>
<html>
	<head>
		<title>Generating Subjective Responses to Opinionated Articles in Social Media</title>
		<style>
			body {
				font-family: Helvetica, Arial, sans-serif;
			}
			td {
				 vertical-align: top;			
			}
			
			tr.sep {
				height: 0.5em;
				background-color:grey;
			}
			
			.paths
			{
				font-family: "Courier New", monospace;
			}
		</style>
	</head>
	<body>
		<div id="title">
			<h1>Generating Subjective Responses to Opinionated Articles in Social Media:</h1>
			<h2>An Agenda-Driven Architecture and a Turing-Like Test</h2>
			<div>Tomer Cagan, Stefan L. Frank and Reut Tsarfaty</div>
		</div>	
		
		<div id="general">
			<p>This page contains supplementary materials for our article 
			<b><i>"Generating Subjective Responses to Opinionated Articles in Social Media: An Agenda-Driven Architecture and a Turing-Like Test"</i></b> (ACL-NLPSD-2014). 
			It includes the code and data that were used for the analysis of on-line documents 
			and for response generation, as well as the evaluation data and analysis scripts.</p>
			
			<p>You can jump directly to <a href="#appendices">appendices</a>, <a href="#code">code</a> or <a href="#data">data</a>.</p>
		</div>
		
		<div id="appendices">
			<h2>Appendices</h2>
			<p>The appendices, which list additional relevant information not included in the article itself, can be downloaded from <a href="acl2014appendice.pdf">here</a>.</p>
		</div>
		
		<div id="code">
			<h2>Code Modules</h2>
			<p>The code used for the article can be downloaded from <a href="acl2014.software.zip">here</a>.</p>
			<p>The following is a summary of the project's code modules. Each file is self-documented.</p>
			
			<div id="code-data">
				<h3>Data Collection</h3>
				<p>Script used for collection of data for experiments.</p>
				<table>
					<tr>
						<td><span class="paths">/create_rss_datasets/create_rss_datasets.py</span> -</td>
						<td>(Optional) Downloads the articles listed in a given file of RSS feed addresses,<br/>
						  and normalizes the output into a standard form that can also be used with Mallet or similar libraries.</td>
					</tr>
					<tr>
						<td></td>
						<td>Input: path_to_rss_feeds_file</td>
					</tr>
					<tr>
						<td></td>
						<td>Output: rss_dataset_file</td>
					</tr>
				</table>
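				<p>As a rough illustration of the normalization step above (the exact dataset format is defined in the script itself; the one-article-per-line layout and the helper names below are assumptions for illustration only):</p>

```python
import re

def normalize_article(html_text):
    """Strip HTML markup and collapse whitespace so each article
    becomes a single lowercase line, suitable for line-oriented
    corpora such as those read by Mallet or gensim."""
    text = re.sub(r"<[^>]+>", " ", html_text)  # drop tags
    text = re.sub(r"\s+", " ", text).strip()   # collapse whitespace
    return text.lower()

def write_dataset(articles, path):
    """Write one normalized article per line (assumed dataset shape)."""
    with open(path, "w", encoding="utf-8") as f:
        for article in articles:
            f.write(normalize_article(article) + "\n")
```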
			</div>
			<div id="code-analysis">
				<h3>Analysis</h3>
				<p>Scripts used for analysis of the data: topic learning and document processing (topic inference and sentiment analysis).</p>
				<table>
					<tr>
						<td><span class="paths">/train_topic_models/train_topic_models.py</span> -</td>
						<td>Trains topic models on a document corpus (e.g. the 
						 one created by create_rss_datasets). The number of 
						 trained topics can be limited.</td>
					</tr>
					<tr>
						<td></td>
						<td>Input: path_to_rss_dataset_file [-n number_of_topics_limit]</td>
					</tr>
					<tr>
						<td></td>
						<td>Output: words.dict, corpus.mm, and model.lda, which contains the trained LDA model.</td>
					</tr>
					<tr class="sep"></tr>
					<tr>
						<td><span class="paths">/convert_models/convert_models.py</span> -</td>
						<td>Converts a trained LDA topic model into a textual 
							representation (for manual inspection).</td>
					</tr>
					<tr>
						<td></td>
						<td>Input: path_to_trained_lda_model_file.</td>
					</tr>
					<tr>
						<td></td>
						<td>Output: textual representation of the LDA model.</td>
					</tr>
					<tr class="sep"></tr>
					<tr>
						<td><span class="paths">/process_docs/process_docs.py</span> -</td>
						<td>Processes new documents (a file) with the trained topic 
							model, computing their sentiment and inferring their 
							topic distribution.</td>
					</tr>
					<tr>
						<td></td>
						<td>Input: docs_to_process.txt path_to_lda_model path_to_corpus_mm_file.</td>
					</tr>
					<tr>
						<td></td>
						<td>Output: a .results file, which contains the topic inference and the sentiment 
									analysis results for the given documents.</td>
					</tr>
					<tr class="sep"></tr>
					<tr>
						<td><span class="paths">/get_topic_keywords/get_topic_keywords.py</span>-</td>
						<td> Detects matching topic keywords in the processed docs 
						 according to a predefined keyword agenda.</td>
					</tr>
					<tr>
						<td></td>
						<td>Input: docs_to_process.txt path_to_lda_model path_to_keywords_agenda_file.txt.</td>
					</tr>
					<tr>
						<td></td>
						<td>Output: a .keyword_probs file, which contains the keyword probabilities for each
							processed document.</td>
					</tr>
					<tr class="sep"></tr>
					<tr>
						<td><span class="paths">/common/*.py</span>-</td>
						<td>Common utility code.</td>
					</tr>
				</table>
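				<p>The keyword-probability step above can be sketched in miniature. The mixture formula below (the probability of a keyword in a document is the sum over topics of the document's topic probability times the topic's keyword probability) is standard for LDA; all names and numbers are invented for illustration and are not taken from the scripts:</p>

```python
def keyword_prob(keyword, doc_topics, topic_words):
    """P(keyword | doc) under an LDA mixture:
    sum over topics t of P(t | doc) * P(keyword | t)."""
    return sum(p_t * topic_words[t].get(keyword, 0.0)
               for t, p_t in doc_topics.items())

# Toy model: per-topic word probabilities and one document's
# inferred topic mixture (all numbers invented for illustration).
topic_words = {
    0: {"economy": 0.30, "tax": 0.20},
    1: {"election": 0.40, "tax": 0.05},
}
doc_topics = {0: 0.7, 1: 0.3}
print(keyword_prob("tax", doc_topics, topic_words))  # 0.7*0.20 + 0.3*0.05 ≈ 0.155
```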
			</div>
			<div id="code-generation">
				<h3>Generation</h3>
			<p>A Java program used for generating responses to the analyzed documents.</p>
				<table>
					<tr>
						<td><span class="paths">/nlg/ResponseGenerator/</span>-</td>
						<td>Java based response generation component (simulation), 
							using the SimpleNLG package.</td>
					</tr>
					<tr>
						<td></td>
						<td>Input: processing results file, keyword probabilities file, keywords agenda file, model text file.</td>
					</tr>
					<tr>
						<td></td>
						<td>Output: the generation results, including the sentiment and KB usage, written to a file.</td>
					</tr>
				</table>
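				<p>The actual component is Java-based and uses SimpleNLG for surface realization; the fragment below is only a minimal sketch of the agenda-driven selection idea, with invented placeholder templates rather than the real realizations:</p>

```python
def pick_response(sentiment, keyword):
    """Select a canned response frame from the document sentiment
    and the highest-probability agenda keyword.  The templates are
    invented placeholders, not the SimpleNLG-realized outputs."""
    if sentiment > 0:
        return "I agree, {} really matters.".format(keyword)
    if sentiment < 0:
        return "I disagree; {} is not the real issue.".format(keyword)
    return "Interesting point about {}.".format(keyword)

print(pick_response(0.8, "tax"))  # prints the agreeing template
```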
			</div>
		</div>	<!-- end of code listing -->
		
		<div id="data">	
			<h2>Experiment Data</h2>
			<p>The data and analysis scripts used for the article can be downloaded from <a href="acl2014.data.zip">here</a>.</p>
			<p>The following is a description of the files that were used or produced as part of
				generation, experiments, and analysis:</p>
				
			<div id="data-datafiles">	
				<h3>Data files (<span class="paths">/articles_data</span>)</h3>
				<div id="data-datafiles-training">
					<h4>Training (<span class="paths">/articles_data/training</span>):</h4>
					<p>Files that have to do with data collection and training of topic models.</p>
					<table>
						<tr>
							<td><span class="paths">rss_list.txt</span></td>
							<td>- A list of RSS feeds for data extraction.</td>
						</tr>
						<tr>
							<td><span class="paths">rss_list.dataset</span></td>
							<td>- Downloaded and formatted RSS articles for training.</td>
						</tr>
						<tr>
							<td><span class="paths">words.dict</span></td>
							<td>- Words dictionary, internally used for gensim training.</td>
						</tr>
						<tr>
							<td><span class="paths">corpus.mm</span></td>
							<td>- Corpus extracted from the dataset file (gensim format).</td>
						</tr>
						<tr>
							<td><span class="paths">corpus.mm.index</span></td>
							<td>- Index for the corpus extracted from the dataset file (gensim format).</td>
						</tr>
						<tr>
							<td><span class="paths">model.lda</span></td>
							<td>- Trained LDA model, in gensim format.</td>
						</tr>
						<tr>
							<td><span class="paths">model.txt</span></td>
							<td>- Textual representation of the trained LDA model.</td>
						</tr>			
					</table>
				</div> <!-- end of data files - training -->
				
				<div id="data-datafiles-inference">
					<h4>Inference/Generation (<span class="paths">/articles_data/inference</span>):</h4>
					<p>Files that are part of the inference/generation workflow.</p>
					<table>
						<tr>
							<td><span class="paths">keywords_agenda.txt</span></td>
							<td>- List of keywords used as agenda. Note that the sentiment is currently ignored (as we simulate all sentiments).</td>
						</tr>
						<tr>
							<td><span class="paths">test_docs.keyword_probs</span></td>
							<td>- Probability of each keyword for each processed doc (after topic inference).</td>
						</tr>
						<tr>
							<td><span class="paths">test_docs.txt</span></td>
							<td>- Test documents to generate responses for.</td>
						</tr>
						<tr>
							<td><span class="paths">test_docs.results</span></td>
							<td>- Results of the analysis (topic model + sentiment) of new documents (using topic models and sentiment classifiers).</td>
						</tr>
						<tr>
							<td><span class="paths">test_docs.gen</span></td>
							<td>- Generated responses for experiment (results of the system simulated execution).</td>
						</tr>
					</table>
				</div>  <!-- end of data files - inference/generation -->
				<p>* For more details about each data file, see the corresponding script that generates it.</p>
			</div>	<!-- end of data-files -->
			
			<div id="data-experiment">
				<h3>Experiment data (<span class="paths">/experiments</span>):</h3>
				<p>Data used for the surveys.</p>
				<table>
					<tr>
						<td><span class="paths">survey_run_data.xlsx</span></td>
						<td>
							All the data for running the experiments:
							<ul>
								<li>Documents.</li>
								<li>Responses (human and generated).</li>
								<li>Comprehension questions/answers.</li>
							</ul>
						</td>
					</tr>
				</table>
			</div> <!-- end of data-experiment -->
			
			<div id="data-results">
				<h3>Experiment Results (<span class="paths">/experiments</span>):</h3>
				<p>Experiment results data.</p>
				
				<div id="data-results-survey1">
					<h4>Computer-likeness (survey 1) (<span class="paths">/experiments/human_computer_survey1</span>):</h4>
					<p>Results of the computer-likeness rating survey.</p>
					<table>
						<tr>
							<td><span class="paths">human_computer_survey_data.csv</span></td>
							<td>- Survey 1 experiment data and scores.</td>
						</tr>
						<tr>
							<td><span class="paths">Responses and scores.txt</span></td>
							<td>- Results of the survey, showing the texts and scores.</td>
						</tr>			  
						<tr>
							<td><span class="paths">Responses and scores.xlsx</span></td>
							<td>- Same as above but in MS Excel 2010 format.</td>
						</tr>
					</table>
				</div> <!-- end of data results - survey 1 -->
				
				<div id="data-results-survey2">
					<h4>Relevance (survey 2) (<span class="paths">/experiments/relevance_survey2</span>):</h4>
					<p>Results of the relevance rating survey.</p>
					<table>
						<tr>
							<td><span class="paths">relevance_survey_data.csv</span></td>
							<td>- Survey 2 experiment data and scores.</td>
						</tr>
					</table>
				</div> <!-- end of data results - survey 2 -->						
			</div> <!-- end of data-results -->
			
			<div id="data-analysis">
				<h3>Analysis Scripts (<span class="paths">/analysis</span>):</h3>
				<p>Analysis scripts (in R) for both sets of survey data.</p>
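				<p>The released analysis scripts are in R; as a rough sketch of the kind of comparison involved (the <code>source</code> and <code>score</code> column names below are assumptions for illustration, not the actual CSV schema):</p>

```python
import csv
import io
import statistics

def mean_scores_by_source(csv_text):
    """Average rating per response source (e.g. human vs. generated).
    Column names 'source' and 'score' are illustrative and do not
    necessarily match the released CSV files."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return {src: statistics.mean(float(r["score"]) for r in rows
                                 if r["source"] == src)
            for src in {r["source"] for r in rows}}

sample = "source,score\nhuman,4\nhuman,5\ngenerated,3\ngenerated,2\n"
print(mean_scores_by_source(sample))  # mean per source, e.g. human: 4.5
```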
			</div> <!-- end of data-analysis -->
		</div> <!-- end of data -->
	</body>
</html>
