%The report should contain:
%   + Use a template: this latex template (IEEE style with 10 point font size) or similar in style. Please do not squizze the font size or layout beyond the style file.
%   + Information on first page: name, study number, title.
%   + Your programming code in an appendix. You may exclude code that contains excessive GUI-setup, long wordlists, etc.
%   + You are allowed to use code that you found on the Internet (provider that the author distributes it under a suitable license such as BSD or GPL), but remember to give full and clear attribution to the author. Any text citations should also be fully referenced and in quotation marks. Paraphrased citations should also be fully referenced. 
%
%There are length requirements for the report:
%    Two-persons report: 3 pages
%
%This limit does not apply for the appendix, which can be any page length. The appendix may contain code and automated generated content, e.g., from Epydoc, pylint results or other.
%
%The report could contain, e.g.:
%    Discussion of the design of the program
%    Description of the implementation.
%    Table or graphical overviews of modules and/or classes
%    Database schema description
%    Description of central parts of the code
%    Screenshots of the program
%    Plots of results
%    Plots of code performance
%    ... 
%
%When looking into the code in the report apart from the actual functionality we may possibly examine the below items:
%    How well-structured it is (modules/classes/functions)?
%    Is it effective? Does it use the Python language constructions effectively? For example sometimes you can avoid a standard "for" loop writing it the code a single line. Vectorized NumPy operations might be faster than standard Python, see, e.g., my blog post.
%    Is it secure? In a web application you should sanitize input and escape during output. If your program receives strange input does it crash?
%    Is the code documented? A structured way would use the __doc__ variable and, e.g., pydoc. See the "documentation" part of the introduction slides
%    Is the code tested? Are errors and exceptions handled well. You have likely run the program and seen that it works. A more structured approach would also utilize some of the testing functionality in Python. See "Testing" part of the introduction slides.
%    Is the coding "nice looking" and consistent? For inspiration see Style Guide for Python Code. 


\documentclass[10pt]{IEEEtran}
\pdfoutput=1

\usepackage{graphicx}
\usepackage{hyperref}
\usepackage[utf8]{inputenc}
\usepackage{listings}
\usepackage[table]{xcolor}
\usepackage{pdfpages}
\usepackage{subcaption}

\hypersetup{colorlinks=true,citecolor=[rgb]{0,0.4,0}}


\title{Mining Stock Market News}
\author{Magdalena Anna Furman (s110848) \& Helge Munk Jacobsen (s082940)}

\begin{document}
\maketitle

\begin{abstract}
The main problem that is addressed by this project is to explore and analyse the influence of the news articles on stock prices of corresponding companies. Since there exist many factors that influence the stock prices, it might be possible that there is no relationship all. The data has to be collected by use of existing API's --- Google Finance Company News RSS feed and Yahoo Finance API. 
\end{abstract}

\section{Introduction}
%(Discussion of the problem that is going to get solved, the data available and its features.)
%Lets write this as the last thing.
The aim of this project is to discover and analyze the correlation between stock prices and news articles related to the corresponding companies. The text data from news articles, appearing in the Google Finance Company News RSS Feed, have been collected, as well as the stock price data from the Yahoo Finance API. Moreover, it has been investigated whether there exists any correlation between these two data sources and if some of the words, topics or sentiment are correlated. This information have been used to try to predict how a given news article may affect the price of the corresponding stock.

\section{Implementation}
%(Discussion of the design of the program)
% + graphical desc of modules / classes)
In this section the design of the packages as well as the key features implemented are described in detail. Moreover it consist of specification of the implemented packages and scripts.

\subsection{Design}
As the main goal of this project is experimental and scientific, it has been decided to build the application as a few encapsulated data mining packages, that can be reused for different tasks and easily interfaced by various scripts. Each of these packages is dedicated to perform a well defined task. The packages are accessed through various scripts that manage, explore and use the data in order to perform prediction tasks and visualize the results.

\subsubsection{Data mining package --- stocknews.py}
This package supports fetching the news related to the appropriate stocks. It uses Google Finance as source for pointing out the relevant article URLs, and then fetches these URLs in parallel. Behind the scene it also provides automatic data preprocessing making the data directly applicable for machine learning models, such as filtering out the article content form the noisy html page and counting the words after normalizing (tokenization, words stemming and stop words removal). 

Moreover, it provides an easy interface which allows to iterate the data. The available attributes are the short name of the stock, the date and time when the article was published, the title of the article, the content represented as bag-of-words, the url of the article, the relevance of the article (provided by the order in which the Google Finance API lists the articles), the content of the article scraped from the HTML page as well as the plain HTML content. Each of these attributes or any combination of multiple attributes can be requested as an iterator. The iterator can also be limited to only iterate over articles related to specified stocks.

\begin{figure}[htbp]
\centering
\includegraphics[width = 0.9\columnwidth]{pictures/packages.png}
\caption{Graphical representation of created classes and scripts}
\end{figure}

\paragraph{Raw Data vs. Enclosed Object}
One challenge is to deal with large amounts of data that needs to be downloaded in order to be easily accessible. This implies storing the data somehow. The question is how enclosed the data should be. One extreme is to expose the data completely to the programmer, providing a package with static functions. The downside of this solution is that the programmer would have to study the complex data structure and would have provide the database filename each time a function is used. Another extreme is to completely enclose the data in an object that provides an API for accessing the data. This solution implies loss of flexibility, but offers ease of use, which is desirable. Unfortunately, the data can not necessarily be loaded into the memory, so such an object has to store a reference to a database. However, completely enclosing the data would suggest that database is removed on \texttt{\_\_del\_\_()}, and this quickly becomes a mess, since we can not always guarantee that \texttt{\_\_del\_\_()} is called e.g. if the console is closed.

It has been chosen to encapsulate the data in object, that provides easy way to interact with, but still exposes the database to the programmer and making him aware that it exists. Some of the ways in which a programmer can interact with the data is through the magic methods for container objects: \texttt{\_\_iter\_\_()}, \texttt{\_\_contains\_\_()}.

\paragraph{Scraping the Textual Content}
Fetching all text from an URL in order to collect an article often includes navigation elements, advertisements, comments and many other unnecessary content. The main challenge is here to retrieve the interesting part --- article content --- and reject the additional unnecessary parts at the same time. A recursive algorithm which counts the number of words in each XML tag and selects the one tag with most words has been developed. The screenshots in the Fig.~\ref{fig:scrape} shows how well the algorithm performs. The green dashed border marks the XML tag that has been chosen, whereas the red dashed border marks the tags that are not included even though they are inside the chosen tag. The example on the right shows that the algorithm is able to filter away the advertisements that looks like text.

\begin{figure}[htbp]
\centering
\includegraphics[width = \columnwidth]{pictures/scrape_both.png}
\caption{Sample articles showing how the article content is retrieved from plain HTML}\label{fig:scrape}
\end{figure}

\paragraph{Parallel Content Acquisition}
Waiting for multiple HTTP responses is very time consuming, so downloading the HTML is done in parallel to speed the process up.

\paragraph{Iterating the data by ''yield''}
As mentioned above, the stocknews package provides an easy way to iterate through an otherwise complex data structure. This is implemented using the Python feature ''yield''. It is implemented in a way that all combinations of data attributes and stocks can be requested without additional time usage and without using any memory, since it is handling over one value at a time.

\subsubsection{Data mining package --- stockdata.py}
This package supports fetching the stock data related to the appropriate stocks. It uses Yahoo Finance API as data source. It provides the data preprocessing such as filtering of unnecessary dates (e.g. weekends and bank holidays) as well as a function for approximating the stock price change. Since the data is not as large as in the previous case, the data is not stored in a database. The stock data is stored in the memory as pandas DataFrame for easy manipulation.
Thanks to this, the data is indexed by the dates to which the stock prices are associated, which implies easy access to data. The package supports also saving/loading data to/from a csv file. The package also provides a method for getting the change in stock price averaged over a specified timespan before and after a specified date.

\subsubsection{Scripts}
Since we address a very scientific and exploratory task, the main interfacing with our two data mining packages will be done through custom scripts. We provide 3 scripts showing how the data mining packages can be used to deal with downloading data (scripts\_get\_data.py), performing a sentiment analysis by training a model based on the data and testing the result (scripts\_train\_models.py), and using this model to visualize the sentiment of new articles (scripts\_classify\_articles.py). 

In addition, there exist two scripts (test\_stocknews.py and test\_stockdata.py), which consist unit tests for both stocknews and stockdata packages.

\section{Data Mining}
This section describes the data mining problem that was addressed by the project and the data that have been acquired.

\subsection{The Problem}
The main problem which is addressed by this project was to discover the relationship between the stock prices and the articles published on Yahoo Finance. We have done this by turning the problem into a classification problem. For each article we look at the date and calculates a the smoothened change of the price over a few days. This change is then turned into three classes; negative, neutral and positive, divied by a specified threshold. We perform a sentiment analysis by training 6 different models as well as at random classifier thatpreservs the balance of the classes.

\subsection{The Data}
The data for this particular task for finding correlation between news articles and stock data has some build-in challenges worth elaborating on. 

Stock prices are by nature extremely noisy and hence is really hard to use as a measure for a sentiment analysis. This suggests that lots of data is needed in order to discover a pattern, if it even exists.

New articles represented as bag-of-words (or ngrams for that matter) has an extremely high dimensionality. This makes the task of avoiding overfitting rather hard. With hundreds of thousands of words it is very likely that we will find some words that correlates nicely, so it is important not to confuse correlation with causality.

\section{Development Process}
% (Details of the development process, e.g., editor, IDE, revision control system, operating system, cloud service, ... )
The project has been hosted on Google Code, which is a free environment for open source projects. Subversion has been used as the revision control system while project development phase. The whole development process have been performed under Windows 7x64 OS. This implies the use of WinPython, which is a portable distribution of the Python programming language, dedicated for Windows XP/7/8. It includes an interactive development environment, Spyder, which have been used for writing, debugging, testing and profiling the code. Spyder was one of the IDE’s that has been used while developing the code. The PyCharm Community Edition IDE has been also used as coding environment. It includes an integrated tool for managing the SVN. In case of Spyder, it was sufficient to use an external subversion client (TortoiseSVN in this case).

\section{Testing}
%(Plots of code performance + Coverage)
%	Pylint (Reference to output in appendix)
%	Unittest (How we structured our tests) Docstrings
%	Profilling (nice graph! and a bit of what pycallgraph does)
In this section, elaboration (testing, profiling, convention checking) on the implemented code is presented. We used pylint to check our coding quality. Both packages consist of sample usage which is presented by doctests. In addition to this, two packages consisting unit tests have been created (test\_stockdata.py and test\_stocknews.py). The code have also been documented by using sphinx package (See Appendix~\ref{app:doc}).

\subsection{Code Checking}
The analysis by pylint tool showed that the implemented code have been written according to the python convention. The errors regarding too long lines have been ignored. In stockdata package pylint points out an error that ''a tuple has no column member'', but it is not an error, since it confuses a pandas DataFrame with a tuple. In stocknews package pylint complains about a use of deprecated module string. The usage is motivated by the fact, that str module does not contain the list of upper case or lower case letters. The pylint output is attached in the Appendix~\ref{app:codecheck}.

\subsection{Unit Tests}
The testing of the implemented packages have been performed by using nose. It has been decided to use nose, since it is simpler than the built-in unittests and provides descriptive error messages at acceptable level. The tests cover almost fully the two implemented packages stockdata and stocknews (see Appendix~\ref{app:codecheck}).
 
%\begin{table}
%\begin{center}
%\begin{tabular}{|c|c|}
%\hline 
%Package & Coverage \\ \hline 
%stockdata & 99\% \\ \hline 
%stocknews & 96\% \\ \hline 
%\end{tabular}
%\end{center}
%\caption{The results of code coverage on stockdata and stocknews packages}\label{tab:cover}
%\end{table}
 
\subsection{Profiling}
It has been chosen to profile the program using ''pycallgraph''. This package monitors the calls between functions and outputs a graph that visualises the interactions between the functions, classes and class methods. It also displays the time used at each of these functions/methods as well as the number of function calls.

\section{The Results}
%(Screenshots of the program + Plots of results)
%	Classification
%		Confusion matrices
%		Stock price and articles
%		Most positive / negative / descriptive words

We have employed six different classification models on around 27000 articles and tested the performance on 16000 articles using 200 features. The results show that none of the classifiers performed better than a random classifier. This is probably caused by the noisy nature of stock prices and the curse of dimensionality. Figure~\ref{fig:chart} presents the collected data and the information that was extracted from stock prices and news articles: stock change and articles sentiment. The green bars represent the articles classified as positive class, whereas the red bard represent the articles classified as negative class. Figure~\ref{fig:score} presents the performance of the classifiers.

\begin{figure}[htbp]
\centering
\includegraphics[width = \columnwidth]{pictures/barchart.png}
\caption{Stock price, stock change and articles sentiment}\label{fig:chart}
\end{figure}

\begin{figure}[htbp]
\caption{Classification accuracy}
\centering
\includegraphics[width = \columnwidth]{pictures/score.png}\label{fig:score}
\end{figure}

\section{Discussion and Future Work}
There is always room for improvements. We here discuss of few options for improving on the current codebase:
\begin{itemize}
\item Logging instead of printing to the console: When providing finished packages that is meant to be used by others, it might not be such a good idea to pollute the console with print statements. A way to avoid this is by using the Python logging module. This allows the user of the package to decide what level of messages to output.
\item Using docopt: When providing scripts it can sometimes be helpful to provide a way to interface with them. This can be done using docopt. However this only makes sense in the case where you want to perform the task over and over with different settings.
\item Different underlying representation of the data: The internal representation of the news article data ended up being a bit complex. This is the price to pay for making it efficient. We chose to accommodate the problem by encapsulation and by providing an interface for highly customized iteration.
\end{itemize}

\section{Conclusion}
We have successfully defined a data mining problem, acquired the data and visualized it. It was a difficult task to discover the relationship between the stock prices and news articles. The classification accuracy scored around 30\%, which means that there is no relationship between the published articles and the stock prices. Apart from data mining context, the project also addresses learning how to program in Python. We have successfully implemented two packages, which can be reused by others (stockdata, stocknews), profiled them using pycallgraph package, tested them using nose package and documented the code using sphinx package.  

\bibliographystyle{IEEEtran}
\bibliography{biblio}

\clearpage
\onecolumn
\appendices
\section{Code listings}

\definecolor{darkgreen}{rgb}{0, 0.4, 0}
\lstset{language=Python,
  numbers=left,
  frame=bottomline,
  basicstyle=\scriptsize,
  identifierstyle=\color{blue},
  keywordstyle=\bfseries,
  commentstyle=\color{darkgreen},
  stringstyle=\color{red},
  literate={Ö}{{\"O}}1 {é}{{\'e}}1 {Å}{{\AA}}1,
  breaklines=true
}
\lstlistoflistings

\label{listing:stockdata}\lstinputlisting[caption=Content of stockdata.py]{../Code/stockdata.py}
\label{listing:stocknews}\lstinputlisting[caption=Content of stocknews.py]{../Code/stocknews.py}
\label{listing:sca}\lstinputlisting[caption=Content of script\_classify\_articles.py]{../Code/script_classify_articles.py}
\label{listing:sgn}\lstinputlisting[caption=Content of script\_get\_news.py]{../Code/script_get_news.py}
\label{listing:stm}\lstinputlisting[caption=Content of script\_train\_models.py]{../Code/script_train_models.py}
\label{listing:tstockdata}\lstinputlisting[caption=Content of test\_stockdata.py]{../Code/test_stockdata.py}
\label{listing:tstocknews}\lstinputlisting[caption=Content of test\_stocknews.py]{../Code/test_stocknews.py}

\newpage
\section{Code checking}
\label{app:codecheck}
\lstset{numbers=none}

\label{listing:pylint_stockdata}\lstinputlisting[caption=Output of pylint analysis on stockdata.py]{out/pylint_stockdata.txt}

\newpage
\label{listing:pylint_stocknews}\lstinputlisting[caption=Output of pylint analysis on stocknews.py]{out/pylint_stocknews.txt}

\newpage
\label{listing:nose}\lstinputlisting[caption=Output of code coverage on both stocknews and stockdata packages]{out/nose.txt}

\begin{figure}[htbp]
\centering
\includegraphics[width = \textwidth]{pictures/pycallgraph.png}
\caption{Output of pycallgraph when run on script\_get\_data.py and script\_classify\_articles.py. The training of the models are omitted because the Scikit packages are very huge and contains lots of cross-package calls.}
\end{figure}

\newpage
\section{Most predictive words}
\label{app:words}
\label{listing:words}\lstinputlisting[caption=Output from script\_train\_models]{out/output_script_train_models.txt}

\newpage
\section{Automatic generation of documentation}
\label{app:doc}
\begin{figure}[htbp]
\centering
\includegraphics[width = \textwidth]{pictures/sphinx.PNG}
\caption{The screen shot of generated documentation}
\end{figure}

\end{document}
