\documentclass[10pt]{article}
\usepackage[margin=1in]{geometry}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{framed, color}
\usepackage{url}
\definecolor{shadecolor}{rgb}{1,1,1}
\usepackage{graphicx}
%\usepackage{subcaption}
\usepackage{listings}
\usepackage{natbib}
\lstset{ %
language=R,                % choose the language of the code
basicstyle=\footnotesize,       % the size of the fonts that are used for the code
%numbers=left,                   % where to put the line-numbers
numberstyle=\footnotesize,      % the size of the fonts that are used for the line-numbers
stepnumber=1,                   % the step between two line-numbers. If it is 1 each line will be numbered
numbersep=7pt,                  % how far the line-numbers are from the code
backgroundcolor=\color{white},  % choose the background color. You must add \usepackage{color}
showspaces=false,               % show spaces adding particular underscores
showstringspaces=false,         % underline spaces within strings
showtabs=false,                 % show tabs within strings adding particular underscores
frame=single,           % adds a frame around the code
tabsize=2,          % sets default tabsize to 2 spaces
captionpos=b,           % sets the caption-position to bottom
breaklines=true,        % sets automatic line breaking
breakatwhitespace=false,    % sets if automatic breaks should only happen at whitespace
escapeinside={(*@}{@*)},          % if you want to add a comment within your code
xleftmargin=.5in,
xrightmargin=.25in,
keywordstyle=\ttfamily
}



\title{Week 4: Hopfield Network}
\author{Phong Le, Willem Zuidema}

\begin{document}
\lstset{language=R}
\renewcommand{\lstlistingname}{Code}

\maketitle

Last week we studied the multi-layer perceptron, a neural network for supervised
learning in which information is only allowed to flow in one direction (from input
units to output units). This week, we will study another famous network, the
Hopfield net, in which information is also allowed to flow backward.

\paragraph{Required Library} We use the library `png' for reading and writing png files.
Install it by typing \texttt{install.packages("png")}.

\paragraph{Required R Code} At \url{http://www.illc.uva.nl/LaCo/clas/fncm13/assignments/computerlab-week4/} 
you can find the R-files you need for this exercise.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Definition}

A Hopfield network is a recurrent neural network in which every neuron is both an input
and an output unit, and
\begin{itemize}
	\item each neuron $i$ is a perceptron with the binary threshold activation function, 
	\item every pair of neurons $(i,j)$ is connected by two weighted links $w_{ij}$
	and $w_{ji}$.
\end{itemize}
Formally speaking, a neuron is characterized by 
\begin{equation}
	\label{equation yi}
	v_i(t) = \operatorname{sign}(u_i(t)) = \left\{
				\begin{array}{rl}
					1 & \text{if } u_i(t) \geq 0 \\
					-1 & \text{otherwise}
				\end{array}
			\right.
\end{equation}
where $u_i(t)$, the total input at time $t$, is computed by
\[          
	u_i(t) = \sum_{j \neq i} v_j(t-1) w_{ji} + \theta_i
\]	
A neuron $i$ is called \textit{stable} at time $t$ if $v_i(t) = v_i(t-1) = \operatorname{sign}(u_i(t-1))$.
A Hopfield net is called stable if all of its neurons are stable. 
Hereafter, we drop the time $t$ and assume that all biases are 0; 
i.e., we use $u_i$ instead of $u_i(t)$, and $\theta_i = 0$ for all $i$. 
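
The activation rule above can be sketched directly in R. The following helper functions are our own illustration (they are not part of `hopfield.R'):
\begin{verbatim}
# total input to neuron i, given state v and weight matrix w
# (all biases are assumed to be 0)
total.input = function(w, v, i) sum(w[-i, i] * v[-i])

# binary threshold activation, with sign(0) mapped to 1
activate = function(u) if (u >= 0) 1 else -1
\end{verbatim}
For instance, given the weight matrix \texttt{w} of a 3-neuron net and a state vector \texttt{v}, \texttt{activate(total.input(w, v, 2))} gives the new value of neuron 2.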

%\begin{framed}
%Exercise 1 (optional): (Hopfield Net as Computation Unit)
%Because each neuron in a Hopfield net is 
%a perceptron with the binary threshold activation function, we can mimic 
%logic operations with Hopfield nets. 
%	\begin{enumerate}
%		\item Design Hopfield nets (with minimum numbers of neurons) for 
%		OR, AND, and NOT.	
		
%		\item How many neurons do you need for XOR?
%	\end{enumerate}
%end{framed}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Activation}

There are two ways to update a Hopfield net: \textit{asynchronously} -- where a single,
randomly picked neuron is updated at a time -- and \textit{synchronously} -- 
where all neurons are updated at once. It can be proved that, if the net is symmetric, 
i.e. $w_{ij} = w_{ji}$ for all $i,j$, then under asynchronous updating it will converge to a stable point 
(see Theorem 2, page 51, \cite{krose1996introduction}), which is a local minimum 
of the following energy function 
\begin{equation*}
	E = - \frac{1}{2}\sum_{i,j} w_{ij} v_i v_j %- \sum_i y_i \theta_i 
	= - \frac{1}{2}\mathbf{v}^T \mathbf{W} \mathbf{v} %- \mathbf{y}^T \Theta
\end{equation*}
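For a state vector \texttt{v} and weight matrix \texttt{weights}, this energy can be computed in R with a one-line helper (our own sketch, not a function of `hopfield.R'):
\begin{verbatim}
energy = function(weights, v) -0.5 * as.numeric(t(v) %*% weights %*% v)
\end{verbatim}
Recomputing the energy after each update in Exercise 1 lets you observe that it does not increase.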

\begin{framed}
Exercise 1: In this exercise, you will study how a Hopfield net updates its state. 
Let us consider the network given in Figure~\ref{figure 3 nodes}. 

\begin{enumerate}
	\item First of all, you need to load the file `hopfield.R' (typing \texttt{source("hopfield.R")}).
	
	\item Then, create a Hopfield net by creating a weight matrix and 
	putting it in a list as follows
\begin{verbatim}
weights = matrix(c(
                0,  1,  -2,
                1,  0,  1,
                -2, 1,  0),
            3, 3)
hopnet = list(weights = weights)
\end{verbatim}

	\item Set the initial state \texttt{init.y = c(1,-1,1)}.
	
	\item Next, activate the net by typing \\\texttt{run.hopfield(hopnet, init.y, stepbystep=T, topo=c(3,1))}.
	You should see the following
\begin{lstlisting}
weights =
     [,1] [,2] [,3]
[1,]    0    1   -2
[2,]    1    0    1
[3,]   -2    1    0
input pattern =  1 -1 1
\end{lstlisting}
	and a black-white plot illustrating the current state of the network (black represents -1, white 1).

	\item Press Enter; you should see something like this
\begin{lstlisting}
iter  1
pick neuron  2
input to this neuron  2
ouput of this neuron  1
new state  1 1 1
\end{lstlisting}
	This means that, in the first iteration, neuron 2 is picked. The total input to this neuron is 2, 
	and hence its output is 1. 
	
	\item Press Enter three or four times to see what happens next.
	
	\item Now, do all the steps above again with your own weight matrix and initial state. 
	Report what you get (just copy what you see in your R console and paste it into your report).

\end{enumerate}

\end{framed}
\begin{figure}
	\center
	\includegraphics[scale=0.5]{hopfield_3nodes.png}
	\caption{A Hopfield net.}
	\label{figure 3 nodes}
\end{figure}

\begin{framed}
Exercise 2: To see why symmetry is important 
for convergence, let us consider a 2-neuron Hopfield net with the weight matrix 
\[
	\mathbf{W} = 
		\begin{pmatrix}
			0 & 1 \\
			-1 & 0 
		\end{pmatrix}
\]
%and zero biases (i.e. $\Theta = 0$). 
Your task is to examine whether the net converges 
or not if the initial state is $\mathbf{v} = (1,-1)^T$. To do so,

\begin{itemize}
	\item load the R file `hopfield.R' (\texttt{source("hopfield.R")}),

	\item create the corresponding Hopfield net by typing
\begin{verbatim}
hopnet = list(weights = matrix(c(0, 1, -1, 0), 2, 2))
\end{verbatim}

	\item set the initial state \texttt{init.y = c(1,-1)},
	
	\item activate the net by typing
\begin{verbatim}
run.hopfield(hopnet, init.y, maxit = 10, stepbystep=T, topo=c(2,1))
\end{verbatim}
		(note that \texttt{maxit} is the maximum number of times a neuron is 
		picked and activated.)

\end{itemize}

Now, can you conclude anything about the convergence? 

Set the weight matrix $
	\mathbf{W} = 
		\begin{pmatrix}
			0 & 1 \\
			1 & 0 
		\end{pmatrix}
$
and do all the steps above again. Does the net reach a stable state?


\end{framed}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Learning with Hebb's rule}

A Hopfield net is an associative memory in the sense that it can be used 
to store patterns, and a stored pattern can be retrieved from an incomplete 
input. The principle behind this is that the weight matrix is set such that 
the stored patterns correspond to stable points; therefore, 
given an incomplete input pattern, the net iterates until it converges 
to a stable point corresponding to the stored pattern most `similar' 
to the input.

Given $m$ patterns $\mathbf{p}_k = (p_{k,1},...,p_{k,n})^T$, $k = 1,\dots,m$,
the Hebbian learning rule can be used to set the weights as follows
\begin{equation*}
	w_{ij} = \left\{
				\begin{array}{cl}
					\sum_{k=1}^m p_{k,i} p_{k,j} & \text{if } i \neq j \\
					0 & \text{otherwise}
				\end{array}
			\right.
\end{equation*}
which can be written compactly as
\[
	\mathbf{W} = \sum_{k=1}^m \mathbf{p}_k \mathbf{p}_k^T - m\mathbf{I}
\]
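The compact form can be checked against the element-wise rule using R's outer product operator \texttt{\%o\%}; the patterns below are our own small examples:
\begin{verbatim}
# two example patterns of length n = 4, so m = 2
p1 = c(1, -1, 1, -1)
p2 = c(1, 1, -1, -1)
W = p1 %o% p1 + p2 %o% p2 - 2 * diag(4)

diag(W)                                # all zeros
W[1, 2] == p1[1]*p1[2] + p2[1]*p2[2]   # TRUE
\end{verbatim}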
%\begin{framed}
%Exercise 3 (optional): 
%Prove that the Hebbian learning rule above could be written compactly by
%$\mathbf{W} = \sum_{k=1}^m \mathbf{p}_k \mathbf{p}_k^T - m\mathbf{I}$.
%\end{framed}

Unfortunately, the learning rule above is not perfect. In order to decide when to apply
the rule and which net structure to use, let us pick a stored pattern, say $\mathbf{p}_1$, 
and see when the net converges to that pattern. Recall that the net is stable at 
$\mathbf{p}_1$ when all neurons are stable, i.e. 
$\mathbf{v} = \operatorname{sign}(\mathbf{u}) = \operatorname{sign} \big( \mathbf{W} \mathbf{p}_1 \big) = \mathbf{p}_1$.
Analysing $\mathbf{u}$ we have
\begin{align*}
	\mathbf{u}&= \mathbf{W} \mathbf{p}_1 \\
	 &= (\mathbf{p}_1 \mathbf{p}_1^T + ... + \mathbf{p}_m \mathbf{p}_m^T - m\mathbf{I}) \mathbf{p}_1 \\
	 &= \mathbf{p}_1 \mathbf{p}_1^T \mathbf{p}_1 + ... + \mathbf{p}_m \mathbf{p}_m^T \mathbf{p}_1 - m\mathbf{I}\mathbf{p}_1 \\
	 &= (n-m)\mathbf{p}_1 + \sum_{j>1} \alpha_j \mathbf{p}_j
\end{align*}
where $n$ is the number of neurons (using $\mathbf{p}_1^T \mathbf{p}_1 = n$), and 
$\alpha_j = \mathbf{p}_j^T \mathbf{p}_1$ can be seen as the `similarity' between $\mathbf{p}_1$ 
and $\mathbf{p}_j$. From this we can see that, in order to have $\operatorname{sign}(\mathbf{u}) = \mathbf{p}_1$,
we need $n > m$ and the $\alpha_j$ to be small. Intuitively, the number of neurons must be greater than the number of 
patterns we want to store, and those patterns must be distinguishable. 
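
The identity above can be verified numerically; the following R snippet uses our own example patterns:
\begin{verbatim}
n = 4; m = 2
p1 = c(1, -1, 1, -1)
p2 = c(1, 1, 1, -1)
W = p1 %o% p1 + p2 %o% p2 - m * diag(n)

u = W %*% p1
alpha2 = sum(p2 * p1)                  # overlap between p2 and p1
all(u == (n - m) * p1 + alpha2 * p2)   # TRUE
\end{verbatim}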

\begin{framed}
Exercise 3: In this exercise, you will study how to use the Hebbian learning rule to update 
a Hopfield net's weights, and its effect. 

\begin{enumerate}
	\item Let us create two $10 \times 10$ images, namely \texttt{pattern1, pattern2}, and the weight matrix as follows
\begin{verbatim}
pattern1 = rep(1, 10*10)
pattern2 = rep(-1, 10*10)
weights = matrix(rep(0,10*10*10*10),10*10, 10*10)
\end{verbatim}

	\item Then, use the Hebbian learning rule to update the weight matrix
\begin{verbatim}
weights = weights + pattern1 %o% pattern1 + pattern2 %o% pattern2 - 2*diag(10*10)
\end{verbatim}

	\item Examine whether the net with that weight matrix stores those two patterns by executing
\begin{verbatim}
run.hopfield(list(weights=weights), pattern1, 
                    stepbystep=T, topo=c(10,10), replace=F)
\end{verbatim}
	and 
\begin{verbatim}
run.hopfield(list(weights=weights), pattern2, 
                    stepbystep=T, topo=c(10,10), replace=F)
\end{verbatim}

	\item Create two other patterns, which are copies of \texttt{pattern1} with 
	some rows filled with $-1$
\begin{verbatim}
pattern3 = matrix(pattern1, 10, 10)
for (i in 1:3)
      pattern3[i,] = -1
dim(pattern3) = NULL

pattern4 = matrix(pattern1, 10, 10)
for (i in 1:5)
      pattern4[i,] = -1
dim(pattern4) = NULL
\end{verbatim}
	If these patterns are set as the initial state for the net, which patterns will the net retrieve?
	
	\item Now, use the Hebbian learning rule to store \texttt{pattern3}, by updating the weight
	matrix as follows
\begin{verbatim}
weights = weights + pattern3 %o% pattern3 - diag(10*10)
\end{verbatim}
	Does the net now remember this pattern?
	
	\item If \texttt{pattern4} is set as the initial state for the net, which patterns will the net retrieve?

	\item One interesting property of the Hebbian learning rule is that we can use its reverse (i.e., 
	addition becomes subtraction and vice versa) to `erase' 
	a pattern from the memory. To make the net `forget' \texttt{pattern3}, execute
\begin{verbatim}
weights = weights - pattern3 %o% pattern3 + diag(10*10)
\end{verbatim}
	Now, check whether the net still remembers \texttt{pattern3}. 
	Also, if \texttt{pattern4} is set as the initial state for the net, which pattern will the net retrieve?


\end{enumerate}

\end{framed}

\begin{framed}
Exercise 4: 
In this exercise, you will train a Hopfield net to store some digits. The files you need are 
`hopfield.R' for training a Hopfield net, `hopfield\_digit.R' for the whole experiment, 
`analyse\_digit.R' for comparing the digits, and `0.png', 
`1.png',..., `9.png', which are images of the ten digits.

\begin{enumerate}
	\item First of all, execute the file `analyse\_digit.R' to examine the similarities between the 
	digits. Which pair is the most similar? Which is the least? Which digit is the most distinguishable
	from the others? (Note the digit $0$ is indexed 10.)

	\item Open `hopfield\_digit.R' and set \texttt{digits} to a set of digits you want to store 
	in a Hopfield net. First try with all ten digits. Then with all odd digits, and finally with 
	all even digits. After executing the `hopfield\_digit.R' file, you should see a plot, in which 
	the first row contains input images, the second row output images, and the third row expected 
	output images. 
	What do you see? Can you explain?

	\item Find the largest set of digits that the net can store.
	
	\item Now, we consider only the even digits. The parameter \texttt{noise\_rate} determines the probability 
	that a pixel is flipped; e.g., \texttt{noise\_rate = 0.1} means each pixel is flipped with probability 0.1.
	The higher \texttt{noise\_rate} is, the noisier the input. 
	Gradually increase the value of \texttt{noise\_rate} from $0$ to $1$, and report the maximum value 
	at which the network correctly retrieves all inputs.

	\item In some cases, the retrieved digit looks quite good, but not perfect. Can you explain why?

%	\item (For fun) You can observe intermediate states of the net by uncomment (removing \#'s 
%	at the beginning) the six last lines. (However, you need to rotate your screen, or you head, 
%	90 degrees since we haven't found out how to plot images correctly.)
	
\end{enumerate}

\end{framed}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Submission}
You have to submit a file named `your\_name.pdf'. 
The deadline is 15:00 Monday 25 Nov.
If you have any questions, contact Phong Le (p.le@uva.nl).


\bibliographystyle{apalike}
\bibliography{ref}

\end{document}
