\documentclass[11pt]{article}
\usepackage{amsmath, hyperref, multicol, fullpage}
\usepackage{graphicx}
\usepackage{tikz}
\usetikzlibrary{matrix,chains,positioning,decorations.pathreplacing,arrows}
\usepackage{float}


\title{The Neural Network of Beer}    
\author{Team 1\\ Josh Keeler, Michelina Pallone, Michael Weston}
\date{May 2, 2014}

\begin{document}

\begin{titlepage}
\begin{center}



\textsc{\large Johns Hopkins University}\\
\textsc{\large Class 605.447 - Neural Networks}\\[1.5cm]


% Title
\rule{\textwidth}{1.6pt}\vspace*{-\baselineskip}\vspace*{2pt} % Thick horizontal line
\rule{\textwidth}{0.4pt}\\[\baselineskip] % Thin horizontal line

{ \huge \bfseries The Neural Network of Beer\\[0.4cm] }
{\Large \emph{Tapping} into Data Science}

\rule{\textwidth}{1.6pt}\vspace*{-\baselineskip}\vspace*{2pt} % Thick horizontal line
\rule{\textwidth}{0.4pt}\\[\baselineskip] % Thin horizontal line

% Author and supervisor
\begin{minipage}{0.4\textwidth}
\begin{flushleft} \large
\emph{Authors:}\\
\textsc{Josh Keeler}\\
\textsc{Michelina Pallone}\\
\textsc{Michael Weston}\\
\end{flushleft}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\begin{flushright} \large
\emph{Professor:} \\
\textsc{Mark Fleischer}
\end{flushright}
\end{minipage}

\vfill
\includegraphics[totalheight=0.5\textheight]{beertap.jpg}


% Bottom of the page
{\large \today}

\end{center}
\end{titlepage}

\setlength\parindent{0pt}

\newpage

\section{Executive Summary}

This project tackles the deliciously difficult challenge of determining whether someone will enjoy drinking a beer he or she has never tried.  We created a neural network that predicts the rating of a beer, on a scale of $1-5$, based upon a user's previous preferences.  The network was trained with the Feedforward-Backpropagation algorithm, using several properties, or classifiers, of each beer as inputs.  Once the network has been trained, a user can input the properties of a new beer, and the network will output a predicted rating.\\

Information about beer classifiers and ratings for training data were pulled from two publicly available databases, Untappd.com and BreweryDB.com.  The final classifiers selected for training and testing were alcohol by volume, bitterness, and style of beer.  Data was divided into a training and testing set.  We tried several variations of the training to testing percentage ratio and found that results were consistent across the board.\\

We set up three different neural networks: a single layer, a simple multi-layer, and a multi-layer with an extra hidden layer.  Initial weights for the neural network were chosen randomly between $-2.50$ and $2.50$.  We used $1000$ iterations of the Feedforward-Backpropagation algorithm to train the weights.  We used the final weights to predict ratings for beers in our testing set.\\

To compare the different variations of the neural networks we used the mean square error (MSE).  First we computed the MSE when the predicted rating was a random integer value between $1$ and $5$; in this case, we found the MSE to be $3.77$.  For a single perceptron network we were able to consistently produce an MSE of approximately $1.45$.  Given that ratings are out of $5$, this represents a fairly significant error, although it is also a significant improvement over a random guess.  For the two-layer neural network we had $\mathrm{MSE}=0.11$, and for our hidden-layer neural network we had $\mathrm{MSE}=0.093$.\\


All of our networks performed significantly better than the random results.  Our hidden layer neural network provides noticeably better performance than the single-layer and marginally better performance than the multi-layer network.


\newpage


\section{Introduction}
% This is basically our project proposal
Bar hoppers and inexperienced beer drinkers are faced with a tough decision when presented with a large beer menu and no reviews.  Instead of sitting indecisively while attempting to decode the difference between an ale and a lager, it would be nice if they had a neural network that could predict whether they will like a given beer.  We set up, trained, and tested a neural network that takes beer classifiers as inputs and produces a rating as an output. \\

Ideally, this neural network would be trained on an individual's own preferences and would give personalized recommendations.  However, due to time and budget considerations, data to train and test the network was compiled from existing sources including Untappd.com and Brewerydb.com.  Untappd.com and Brewerydb.com provided detailed information about hundreds of beers, including an average public rating.  These databases also store personalized ratings but these are visible only to the individual who made them.  For the purpose of gathering data quickly we chose to use the mean rating stored in the publicly available portions of the databases.  Since the network is trained against the crowd-sourced ``average rating" the output of the neural network reflects a prediction of the crowd-sourced ``global rating" instead of an individualized prediction. \\


Three variations of a neural network were tried: a single perceptron, a simple multi-layer network, and a multi-layer network with an additional hidden layer.  The Feedforward-Backpropagation algorithm was used to train each network.  The multi-layer network with the hidden layer performed best, although not significantly better than the multi-layer network without the extra hidden layer.



\section{Data Set}

\subsection{Sources}
% need better description of untapped/brewerydb their differences, what information we got from where, etc.  also should have more precise numbers

Collecting the data to train the network was not a simple task and proved a limiting factor on the scope of the project.  While there have been some efforts to create a repository of ratings and information about beer, these collections are far from complete.  Most databases contain records that are not found in other databases, and these records are not standardized; the different pieces of information stored about any given beer will depend a great deal on the person doing the data entry.  The ratings we used were stored in a different database than the classifiers we used as inputs to the network.  Because of this, one of the major challenges encountered was tying the specific classifiers of beer to the community ratings given to that beer.\\

The BreweryDB database was used to collect information about the specific classifiers of beer.  Using the API made available through BreweryDB.com, we assembled a database containing a list of beers along with information about breweries and styles.  All of the information used as classifiers for beers within the network came from the BreweryDB database.\\

Untappd was used to assemble a list of ratings for each beer.  Untappd is a website and smartphone application that is building a community allowing users to rate beers contained within the Untappd database.  Using the Untappd web API we were able to assemble a list of beers along with the average rating for each beer on a scale of one to five.\\

In order to assemble input vectors for our network we needed to associate the data records pulled from each site.  Untappd and BreweryDB do not contain a common identifier for each beer, and beer names are not unique.  To join the data we used a combination of beer name and brewery name.  Unfortunately, under this scheme many beers did not have matching records in both databases.  So, while there was some continuity in the names between the databases, a large amount of error checking and manual consolidation of the data was required.  \\

The manual nature of the data consolidation prevented us from feasibly joining the entire BreweryDB and Untappd databases, but we have assembled nearly $1000$ records, which were  used to train, test, and validate the network.\\

\subsection{The Classifiers}
% we may want to not where each piece of data come from?
For each beer entry, we had the following information:
\begin{itemize}
  \item Name
  \item Brewery
  \item Rating (on a scale of $1-5$)
  \item Alcohol by Volume (ABV)
  \item International Bitterness Unit (IBU)--A measure of the bitterness in the beer
  \item Category
  \item Style
\end{itemize}

ABV and IBU were already in a numerical format which was easy to feed into the neural network.  However, the other inputs needed to be converted into a numerical form.  We approached this by making an input node for each possible value of a categorical classifier, with a boolean input for each: most inputs are $0$, and only the input corresponding to the applicable value is $1$.  This has the benefit (and hazard) of training each individual value separately.  \\
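The boolean-input scheme described above can be sketched in Python, the project's implementation language.  The style list and function names here are hypothetical illustrations, not the project's actual code:

```python
# Hypothetical list of style values; the real data had 13 style inputs.
STYLES = ["IPA", "Stout", "Wheat", "Lager"]

def encode_style(style):
    """Return a boolean (one-hot) vector: 1 for the matching style, 0 elsewhere."""
    return [1 if s == style else 0 for s in STYLES]

def build_input(abv, ibu, style):
    """Concatenate the numeric inputs with the boolean style inputs."""
    return [abv, ibu] + encode_style(style)

# build_input(5.5, 40, "Stout") -> [5.5, 40, 0, 1, 0, 0]
```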


\section{Methodology}
\subsection{Technology stack}
There were a number of different scripts that were required throughout the lifecycle of this project.  In order to effectively use the APIs developed by BreweryDB.com and Untappd.com to access their data, scripts needed to be written to parse the JSON data coming from each site.  This was done through a community-developed Java wrapper for the BreweryDB API and a Perl wrapper for the Untappd API, both of which were expanded upon for our specific data needs.\\

When it came to the actual implementation of the data partitioning and neural network, our team chose to use the Python programming language.  A number of factors came into play when making this choice.  All of our team members felt comfortable with Python, having already developed some form of application using it, or having worked with very similar scripting languages.  Additionally, being able to utilize numerical libraries that have already been developed for Python enabled us to spend more time iterating on the prediction algorithm itself, rather than on preliminary work.
 
\subsection{Creating a training and testing set}

Using a script and a random number generator, the data was partitioned into testing and training sets.  We were unsure exactly what percentage of the data should be used for training versus testing, so we tried several different splits.  We tried an 85:15 training-to-testing ratio with several different permutations of the data, and the network produced a very reasonable prediction.  We then narrowed the training/testing ratio to 50:50 and 40:60 with nearly identical results.
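The partitioning step can be sketched as follows.  This is a minimal illustration of the idea, not the actual randomSplit.py script; the function name and seed are assumptions:

```python
import random

def split_records(records, train_fraction=0.85, seed=None):
    """Randomly partition records into a training set and a testing set."""
    rng = random.Random(seed)
    shuffled = records[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# An 85:15 split of 100 records yields 85 training and 15 testing records.
train, test = split_records(list(range(100)), train_fraction=0.85, seed=1)
```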


\subsection{The Network Set Up}
We tried three different variations on the set up of a neural network.  One was a single perceptron, one was a two-layer network, and the third had a hidden layer of perceptrons.  The perceptrons used were ``standard" perceptrons, with a scaled summation acting as the activation function, and a 0-1 sigmoidal function as the activity function.  Diagrammed below is a single perceptron.
\begin{center}
\begin{figure}[h!]
\begin{tikzpicture}[
init/.style={
  draw,
  circle,
  inner sep=2pt,
  font=\Huge,
  join = by -latex
},
squa/.style={
  draw,
  inner sep=2pt,
  font=\Large,
  join = by -latex
},
start chain=2,node distance=13mm
]
\node[on chain=2]
  (x2) {$x_2$};
\node[on chain=2,join=by o-latex]
  {$w_2$};
\node[on chain=2,init] (sigma)
  {$\small y$};
\node[on chain=2,squa,label=above:{\parbox{2cm}{\centering Activity \\ function}}]   
  {$f$};
\node[on chain=2,label=above:Output,join=by -latex]
  {$r$};
\begin{scope}[start chain=1]
\node[on chain=1] at (0,1.5cm)
  (x1) {$x_1$};
\node[on chain=1,join=by o-latex]
  (w1) {$w_1$};
\end{scope}
\begin{scope}[start chain=3]
\node[on chain=3] at (0,-1.5cm)
  (x3) {$x_3$};
\node[on chain=3,label=below:Weights,join=by o-latex]
  (w3) {$w_3$};
\end{scope}
\node[label=above:\parbox{2cm}{\centering Bias \\ $b$}] at (sigma|-w1) (b) {};

\draw[-latex] (w1) -- (sigma);
\draw[-latex] (w3) -- (sigma);
\draw[o-latex] (b) -- (sigma);

\draw[decorate,decoration={brace,mirror}] (x1.north west) -- node[left=10pt] {Inputs} (x3.south west);
\end{tikzpicture}
\caption{A close-up of a single perceptron}
\end{figure}
\begin{figure}[h]
$$y = \sum_{i=1}^{3}x_iw_i +b,\ \ \  f(y) = \frac{1}{1+e^{-y}}$$
\caption{Activation, $y$, and Activity, $f(y)$, functions}
\end{figure}
\end{center}
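The activation and activity functions above can be sketched in Python, the project's implementation language.  The function names here are illustrative, not the project's actual code:

```python
import math

def activation(inputs, weights, bias):
    """Scaled summation: y = sum(x_i * w_i) + b."""
    return sum(x * w for x, w in zip(inputs, weights)) + bias

def activity(y):
    """0-1 sigmoid: f(y) = 1 / (1 + e^(-y))."""
    return 1.0 / (1.0 + math.exp(-y))

def perceptron(inputs, weights, bias):
    """Full perceptron output: activity applied to the activation."""
    return activity(activation(inputs, weights, bias))
```

Note that the sigmoid constrains every perceptron's output to $(0,1)$, which is why the target ratings must be scaled when training against the $1-5$ scale.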


\subsubsection{A Single-Layer Neural Network}
This network consisted of a single perceptron and three inputs: ABV, IBU, and Category.  Technically, ``Category" is represented by multiple inputs, each with its own weight; all of the category inputs have the value $0$ except the one representing the beer's particular category.  This network performed with a mean squared error of approximately 1.45, which is better than randomly selecting a rating.  However, while significantly better than the random MSE of 3.77, and given that the output range is only between 1 and 5, this is a very significant error and insufficient to predict a person's taste.
\begin{center}
\def\layersep{2.5cm}

\begin{figure}[H]
\begin{tikzpicture}[shorten >=1pt,->,draw=black!50, node distance=\layersep]
    \tikzstyle{every pin edge}=[<-,shorten <=1pt]
    \tikzstyle{neuron}=[circle,fill=black!25,minimum size=17pt,inner sep=0pt]
    \tikzstyle{input neuron}=[neuron, fill=white!50,  minimum size=2pt];
    \tikzstyle{output neuron}=[neuron, fill=red!50];
    \tikzstyle{hidden neuron}=[neuron, fill=blue!50];
    \tikzstyle{annot} = [text width=4em, text centered]

    % Draw the input layer nodes
    %\foreach \name / \y in {1,...,4}
    % This is the same as writing \foreach \name / \y in {1/1,2/2,3/3,4/4}
        \node[input neuron, pin=left:ABV] (I-1) at (0,-1) {};
        \node[input neuron, pin=left: IBU] (I-2) at (0,-2) {};
        \node[input neuron, pin=left: Category of Beer] (I-3) at (0,-3) {};


    % Draw the output layer node
    \node[output neuron,pin={[pin edge={->}]right:Rating}, right of=I-2] (O) {};

    % Connect every node in the input layer with every node in the
    % hidden layer.
            \path (I-1) edge (O);
            \path (I-2) edge (O);
            \path (I-3) edge (O);


\end{tikzpicture}
\caption{The single-layer network}
\end{figure}
\end{center}

\subsubsection{A Multi-Layer Neural Network}

We next tried a multi-layer network in which each input feeds its own perceptron, and the outputs of those perceptrons feed a single output perceptron.  The inputs were the same as for the single-layer network.  For ABV and IBU the corresponding node has only a single input; for the Category input, the node represents an aggregation of 13 different inputs, which are not drawn.  This network performed with an MSE of approximately 0.11.  While the single perceptron network was not useful for predicting preferences, a network with an MSE of only 0.11 is predicting accurately with marginal noise.  Given that a user's input is limited to the discrete set [1,2,3,4,5], this level of error is trivial: the user cannot enter the value 3.5, so a prediction of 3.4 is significant only in that it is closer to 3 than to 4.

\begin{center}
\def\layersep{2.5cm}

\begin{figure}[H]
\begin{tikzpicture}[shorten >=1pt,->,draw=black!50, node distance=\layersep]
    \tikzstyle{every pin edge}=[<-,shorten <=1pt]
    \tikzstyle{neuron}=[circle,fill=green!50,minimum size=17pt,inner sep=0pt]
    \tikzstyle{input neuron}=[neuron, fill=white!50,  minimum size=5pt];
    \tikzstyle{output neuron}=[neuron, fill=red!50];
    \tikzstyle{hidden neuron}=[neuron, fill=blue!50];
    \tikzstyle{annot} = [text width=4em, text centered]

    % Draw the input layer nodes
    %\foreach \name / \y in {1,...,4}
    % This is the same as writing \foreach \name / \y in {1/1,2/2,3/3,4/4}
        \node[neuron, pin=left:ABV] (I-1) at (0,-1) {};
        \node[neuron, pin=left: IBU] (I-2) at (0,-2) {};
        \node[neuron, pin=left: Category] (I-3) at (0,-3) {};


    % Draw the output layer node
    \node[output neuron,pin={[pin edge={->}]right:Rating}, right of=I-2] (O) {};

    % Connect every node in the input layer with every node in the
    % hidden layer.
            \path (I-1) edge (O);
            \path (I-2) edge (O);
            \path (I-3) edge (O);


\end{tikzpicture}
\caption{Multi-Layer Network}
\end{figure}
\end{center}


\subsubsection{The Hidden Layer Network}
The third, and most complicated, network expanded upon the multi-layer network by using a hidden layer to correlate pairs of inputs.  This performed marginally better than the multi-layer network, with an MSE of 0.093.  However, no analysis of exactly how much better was conducted; if the multi-layer network is already predicting accurately with trivial noise, tiny reductions in the noise are not considered significant improvements.

\begin{center}
\def\layersep{2.5cm}

\begin{figure}[H]
\begin{tikzpicture}[shorten >=1pt,->,draw=black!50, node distance=\layersep]
    \tikzstyle{every pin edge}=[<-,shorten <=1pt]
    \tikzstyle{neuron}=[circle,fill=black!25,minimum size=17pt,inner sep=0pt]
    \tikzstyle{input neuron}=[neuron, fill=green!50];
    \tikzstyle{output neuron}=[neuron, fill=red!50];
    \tikzstyle{hidden neuron}=[neuron, fill=blue!50];
    \tikzstyle{annot} = [text width=4em, text centered]

    % Draw the input layer nodes
    %\foreach \name / \y in {1,...,4}
    % This is the same as writing \foreach \name / \y in {1/1,2/2,3/3,4/4}
        \node[input neuron, pin=left:ABV] (I-1) at (0,-1) {};
\node[input neuron, pin=left:IBU] (I-2) at (0,-2) {};
\node[input neuron, pin=left:Category] (I-3) at (0,-3) {};

    % Draw the hidden layer nodes
    %foreach \name / \y in {1,...,3}
        %\path[yshift=0.5cm]
           \node[hidden neuron] (H-1) at (3 cm,-1  cm) {};
           \node[hidden neuron] (H-2) at (3 cm,-2 cm) {};
           \node[hidden neuron] (H-3) at (3 cm,-3 cm) {};

    % Draw the output layer node
    \node[output neuron,pin={[pin edge={->}]right:Rating}, right of= H-2] (O) {};

    % Connect every node in the input layer with every node in the
    % hidden layer.
           \path (I-1) edge (H-1);
           \path (I-1) edge (H-2);
           \path (I-2) edge (H-1);
           \path (I-2) edge (H-3);
           \path (I-3) edge (H-2);
           \path (I-3) edge (H-3);

    % Connect every node in the hidden layer with the output layer
    %\foreach \source in {1,...,5}
        \path (H-1) edge (O);
        \path (H-2) edge (O);
        \path (H-3) edge (O);

    % Annotate the layers
   % \node[annot,above of=H-1, node distance=1cm] (hl) {Hidden layer};
    %\node[annot,left of=hl] {Input layer};
    %\node[annot,right of=hl] {Output layer};
\end{tikzpicture}
\caption{This is a multi-layer network with hidden layer}
\end{figure}
\end{center}

\subsection{Updating the Weights:  the  Feedforward-Backpropagation Algorithm}

We tried several different initializations of the weights: a constant value of $0.00$, a constant value of $0.50$, and several different random selections, with ranges extending as far as $-10$ to $10$.  The initial weights proved to have little impact on the final weights, although in some cases the network would have a high error unless more iterations were used for training.  The final selection was a random value chosen between $-2.50$ and $2.50$.  We used $1000$ iterations of the basic Feedforward-Backpropagation algorithm.  For an iteration $k$ and a weight $w_j$, the weights were updated with the following formula:
$$
w_j(k+1) = w_j(k) -\eta\nabla E_j.
$$

We set $\eta = 1$.  Additionally, $\nabla E = (\frac{\partial E}{\partial w_{ij}})$ with $\frac{\partial E}{\partial w_{ij}} = -e_j[1-y_j]y_jx_i$, where $y_j$ is the value of the activation function for node $j$, $e_j$ is the error at node $j$, and $x_i$ is the value of the input from node $i$.
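A minimal sketch of this update rule in Python, for the weights feeding a single node (the function and variable names are illustrative, not the project's actual code):

```python
ETA = 1.0  # learning rate eta used in the project

def update_weights(weights, inputs, output, error, eta=ETA):
    """Apply w_ij(k+1) = w_ij(k) + eta * e_j * (1 - y_j) * y_j * x_i
    for every weight feeding node j (the minus signs in the formula cancel)."""
    delta = eta * error * (1.0 - output) * output
    return [w + delta * x for w, x in zip(weights, inputs)]
```

Note the sign cancellation: the gradient $\frac{\partial E}{\partial w_{ij}} = -e_j[1-y_j]y_jx_i$ is subtracted, so the weight moves in the direction that reduces the error $e_j$.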

\section{Results}

\subsection{Performance of the network}

After training each network, we ran the testing sets through to find a predicted output rating for each beer in the testing set.  We expect that if our neural network works perfectly, the output rating will be exactly the same as the rating from Untappd.com.\\

To analyze the results we used the Mean Square Error.  Let $n$ be the number of entries in the testing set and $Y_i$ be the rating for beer $i$ from Untappd.com.  Let $Y_i^*$ be the predicted rating for beer $i$ from our neural network.  Then the mean square error, MSE, is
$$
\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^n(Y_i-Y_i^*)^2.
$$

The mean square error is the average of the squared errors.  It is used as a means to estimate the accuracy of predictions, and is frequently used to compare two sets of results, with the lower MSE identifying the better set.  We first calculated an MSE using random values between $1$ and $5$ for the predicted rating, $Y_i^*$, and compared that MSE to the MSE of our outputs.\\
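The computation is straightforward; a sketch in Python (the function name is illustrative):

```python
def mean_squared_error(actual, predicted):
    """MSE = (1/n) * sum over i of (Y_i - Y_i*)^2."""
    n = len(actual)
    return sum((y - yp) ** 2 for y, yp in zip(actual, predicted)) / n

# Perfect predictions give an MSE of 0; larger errors are penalized quadratically.
```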

For random values of $Y_i^*$ we had $\mathrm{MSE} = 3.77$.  For a single perceptron network we were able to consistently produce an MSE of approximately $1.45$.  Given that ratings are out of $5$, this represents a fairly significant error; as the rating has a continuous range of only four, an MSE above 1 is poor.  However, while this is a significant error, it does show an improvement over a random guess. \\


Using the same input vector, the two-layer neural network had a $\mathrm{MSE} = 0.11$.  This network produces less than a quarter of the noise that the single perceptron network does.  Our hidden-layer neural network had $\mathrm{MSE} = 0.093$, even better than the two-layer network.  However, while the two-layer network typically produced an error of 0.11, the random selection of both weights and inputs did cause variance.  The best performance of the two-layer network occasionally overlapped the worst performance of the hidden-layer network, so it is difficult to say that the hidden-layer network was unequivocally better than the multi-layer network.  \\

Here we present a sample of the results.  The following tables list the trained weights for each layer of the hidden-layer network.\\

\begin{center}
\begin{table}[h]
\begin{tabular}{|c c c c c c c c c c c c c c|}
\hline
Layer 0 &&&&&&&&&&&&&\\
Node & $w_{0}$ & $w_{1}$ & $w_2$ & $w_3$ & $w_4$ & $w_5$ & $w_6$ & $w_7$ & $w_8$ & $w_9$ & $w_{10}$ & $w_{11}$ & $w_{12}$\\
\hline
IBU & 22020 &&&&&&&&&&&& \\
ABV & 4562&&&&&&&&&&&&\\
Style & 1.30 & 2.00 & 0.80 &  0.90 & 1.60&-2.50&-4.10&1.00&-2.50&-21.60&3.20&-1.30&-0.70\\
\hline
\hline
Layer 1 & & &&&&&&&&&&&\\
Node & $w_0$ & $w_1$& &&&&&&&&&&\\
\hline
ABV + IBU & 3.13 & -7.35 & &&&&&&&&&&\\
ABV + Style & 6.17 & 4.13&&&&&&&&&&&\\
IBU + Style & 5.10  & -6.43&&&&&&&&&&&\\
\hline
\hline
Layer 2 &&&&&&&&&&&&&\\
Node & $w_0$ & $w_1$  & $w_2$&&&&&&&&&&\\
\hline
Output &-2.52& 0.70&1.62&&&&&&&&&&\\
\hline
\end{tabular}
\caption{Weights for layers of the network }
\end{table}
\end{center}
The hidden-layer network with the above weights had $\mathrm{MSE}=0.11$; compare this to the $\mathrm{MSE}=3.77$ obtained with random predictions.

\subsection{Conclusion}
 In the multi-layer network, the weights of ABV and IBU are so high that the node is essentially on or off; there is no continuum.  This means that the network is actually representing the existence of the data; the actual value of the data has little meaning.  It also means that style is the strongest predictor of public rating.  While a little disappointing, since it prevents a definitive proof of concept, style as a primary predictor makes a great deal of sense.  The discrete nature of the ratings probably contributes to this: a user who likes wheat beer probably rates most or all wheat beers similarly.\\
 
 For a public mean rating, this on-off behavior for ABV and IBU makes a great deal of sense; since these databases are crowd-sourced, it seems likely that the most popular beers have the most information gathered.  Therefore, the existence of the data may be a predictor that the beer has a high public rating, while remaining meaningless to an individual's preference.  Additionally, beers of a given style tend to have similar alcohol levels and bitterness, so when many people contribute to a mean rating, the influence of ABV and IBU is probably masked.\\
 
Individual preferences may be much more nuanced; the ABV and IBU features probably have more value on an individual scale, although it seems likely that additional features are essential to accurately predicting user preferences.  Individual preferences also pose a new challenge: the network will need to be effective with a much smaller training set, because it is unlikely that a user has tried 1000 beers.  Even a user with a large database to mine will likely quit using the application if it proves wildly ineffective the first hundred times.  The network's low error on the public average is very promising, and it definitely seems worthwhile to begin experimenting with individualized results. \\


\section{Future Work}

This section presents some potential improvements to the network that might have significant impact upon the results.  We could modify our current neural network in a few different ways.  The first, and probably most important, is to add additional classifiers.  As discussed earlier in the document, additional classifiers will hopefully improve predictive ability, and will likely prove important to accurate prediction of individual preferences.  Listed below are a few classifiers we've considered.\\

\begin{itemize}
 \item Keywords: We can search the descriptions of each beer for keywords which can be used as classifiers.
 \item Brewery: If someone likes a beer from a specific brewery, they may like another.
 \item Glass Type:  If a beer comes in a fun glass, it may impact the opinion of the person drinking it.
\end{itemize}

In addition to new classifiers, we'd like to experiment with adding a momentum term to the FFBP algorithm in order to influence the rate of convergence.  It may reduce the number of iterations to reach weight convergence and prevent oscillations in the training process.  To further improve the training process, instead of just doing 1000 iterations of weight updates, we can track the error in the FFBP algorithm to find exactly where the error is the lowest and when it begins to increase.  This may include creating a validation set in addition to the training and testing sets so that the network doesn't ``remember" inputs.\\
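The momentum idea can be sketched as follows.  This is a hypothetical variant we have not tested; the momentum coefficient `alpha` is an assumed value, not one drawn from our experiments:

```python
def momentum_update(weights, gradients, prev_deltas, eta=1.0, alpha=0.9):
    """One momentum step: delta(k+1) = -eta * grad + alpha * delta(k),
    then w(k+1) = w(k) + delta(k+1).  alpha is a hypothetical coefficient."""
    deltas = [-eta * g + alpha * d for g, d in zip(gradients, prev_deltas)]
    new_weights = [w + d for w, d in zip(weights, deltas)]
    return new_weights, deltas
```

The `alpha * delta(k)` term carries some of the previous step forward, which can smooth oscillations and speed convergence along consistent gradient directions.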

In addition to improving our current set-up, we'd like to explore how our neural network compares to other methods of predicting ratings.  A slight variation, yet equally interesting implementation to explore, would be the Restricted Boltzmann Machine (RBM).  If we were to start collecting beer preferences from individual users, we could create a recommendation engine for beer.  For instance, an RBM may have a hidden unit that corresponds to very bitter beers (high IBU), while another hidden unit may correspond to Belgian-style beers.  New users to the system would enter their ratings for a number of beers that exist within the system.  Once these ratings were given, a certain number of hidden units would be active.  Given these active hidden units, we could then determine other beers that also activate those units.  The challenges of this implementation would be covering all of the possible hidden units that might exist in a beer rating engine, as well as collecting all of the information on the individual beers that feed into each unit.\\

We could also try a clustering algorithm.  Each beer entry could be made into a vector where each entry in the vector would correspond to a classifier or keyword.  We could then run a k-means clustering algorithm on the data.  If a user likes one beer in the cluster, they may like another.  This could be potentially useful for predicting individual preferences because it would not require a large amount of training data.  In fact, as long as clusters have more than one beer, a user would only need one positive rating to find recommendations.\\

Finally, we hope to present our findings to Untappd.com and BreweryDB.com.

\newpage
\setcounter{section}{0}
\section*{Appendix: Description of Included Files}
\renewcommand{\thesubsection}{\Alph{subsection}.}
\renewcommand{\theenumi}{\Alph{subsection}\arabic{enumi}}
\subsection{Data Sets}
\begin{enumerate}
\item {\bf rawRatings\_train.csv:}  Example of a training set we used to train the weights in the neural network.
\item {\bf rawRatings\_test.csv:} Example of the testing set we sent into the neural network.
\item {\bf test\_values.csv:} Example of the output values compared to the actual ratings.
\end{enumerate}

\subsection{Code}
\begin{enumerate}
\item {\bf randomSplit.py:} Script to split a csv of data into a training, testing, and validation set.  Written in Python3, Usage: randomSplit.py full\_file \%\_training \%\_testing \%\_validation
\item {\bf final\_project.py:} Program which creates, trains, and tests the neural network.  Written in Python3, Usage: final\_project.py TrainingCSV Columns TestCSV Columns
\item {\bf getdata.py:} Supporting module to final\_project.py
\end{enumerate}

\subsection{Presentation}
\begin{enumerate}
\item {\bf NN\_Beer\_Ratings.ppt:} Slides from Adobe Connect presentation
\item {\bf The\_Neural\_Network\_of\_Beer.pdf:} This paper
\end{enumerate}


\end{document}
