\begin{sagesilent}
load('../firings.py')
W=matrix(ZZ, [[0,1,0,0,0],[0,0,1,0,0],[0,0,0,1,0],[0,0,0,0,1],[1,0,1,0,0]])
from numpy.random import seed
seed(int(5000))
\end{sagesilent}
\newpage
\section{The network}
The basic network topology is described in Gibson \& Robinson (1992) and is based on the model of interconnected neurons developed by Marr (1971). It consists of a network of excitatory neurons that are reciprocally connected with a constant probability.

Because every neuron can potentially project to every other neuron, the connectivity matrix $W$ is a square matrix of size $n \times n$, where $n$ is the number of excitatory neurons.

\includegraphics[width=.75\textwidth]{./figures/recurrent_network.pdf}

This can be represented with a matrix of the form:
\begin{equation*}
W = \sage{W} 
\end{equation*}

Every element $W_{ij}$ of the matrix is an independent random variable that takes the value one if neuron $i$ projects to neuron $j$, and zero otherwise. It follows that $\Pr(W_{ij}=1)= c$ and $\Pr(W_{ij}=0)=1-c$, where $c$ is a constant that represents the connection probability. Additionally, it is assumed that neurons do not project to themselves (i.e., $W_{ij}=0$ if $i=j$).

\begin{equation*}
    W_{ij} = \begin{cases}
          1, & \text{with probability} \ \ c, \ \ \text{if} \ \ i \neq j ,\\
          0, & \text{with probability} \ \ 1-c, \ \ \text{or if} \ \ i=j .
        \end{cases}
\end{equation*}
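As a minimal sketch of this sampling rule (assuming NumPy; the function name \verb|connectivity| is illustrative and not part of the \verb|firings.py| module loaded above):

```python
import numpy as np

def connectivity(n, c, rng=None):
    """n x n binary matrix with Pr(W_ij = 1) = c and no self-projections."""
    rng = np.random.default_rng(rng)
    W = (rng.random((n, n)) < c).astype(int)  # compare uniform draws with c
    np.fill_diagonal(W, 0)                    # W_ij = 0 when i = j
    return W

W = connectivity(100, 0.3, rng=0)
```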

\section{Memory patterns}

A memory pattern ($p$) is the representation of the \emph{activity} of $n$ neurons. It is a vector of length $n$ whose $i$-th element is one if the $i$-th neuron is \emph{active}, and zero otherwise. A neuron is said to be \emph{active} in a pattern if it was firing in the original pattern. A total of $m+1$ patterns will be applied to the excitatory network. Thus, the set of applied patterns $Z$ can be represented as:

\begin{equation*}
Z^p, \qquad p \in \{0, 1, \ldots, m\}
\end{equation*}

Every $i$-th element of a pattern, $Z^p_i$, is an independent random variable with $\Pr(Z^p_i=1) = a$ and $\Pr(Z^p_i=0) = 1-a$, where $a$ is the \emph{activity}.

\begin{equation*}
    Z^p_i = \begin{cases}
          1, & \text{with probability} \ \ a ,\\
          0, & \text{with probability} \ \ 1-a .
        \end{cases}
\end{equation*}

$a$ is therefore the probability that the $i$-th neuron in pattern $p$ was firing in the original pattern $Z^0$. All applied patterns are constructed in the same way, assuming that the probability $a$ does not change between patterns. Note that every pattern $Z^p$ will consist of $na$ active neurons and $n(1-a)$ inactive neurons on average; however, the arrangement of zeros and ones along the vector will differ between patterns.

We choose the initial pattern $Z^0$ deterministically, with all active neurons followed by the inactive ones: $na$ active neurons followed by $n(1-a)$ inactive neurons.

\begin{equation*}
    Z^0_i = \begin{cases}
          1, & \text{for} \ \ i \in \{1, \ldots , na\} ,\\
          0, & \text{for} \ \ i \in \{na+1, \ldots , n\}.
        \end{cases}
\end{equation*}
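A minimal sketch of how $Z^0$ and the $m$ further patterns could be generated (assuming NumPy; \verb|initial_pattern| and \verb|random_patterns| are illustrative names, not the \verb|Pattern|/\verb|generate| helpers used in the examples):

```python
import numpy as np

def initial_pattern(n, a):
    """Z^0: the first n*a entries active (ones), the rest inactive (zeros)."""
    z0 = np.zeros(n, dtype=int)
    z0[: int(n * a)] = 1
    return z0

def random_patterns(n, a, m, rng=None):
    """m further patterns; each entry is one independently with probability a."""
    rng = np.random.default_rng(rng)
    return (rng.random((m, n)) < a).astype(int)

Z0 = initial_pattern(10, 0.2)
```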
\subsection{Example patterns}
\begin{sagesilent}
ncells = 10 
activity = 0.20
Z = Pattern(n = ncells, a= activity)
# generate 3 more patterns
Y = generate(Z, m=3)
activity = '%2.1f'%(activity)
\end{sagesilent}

For a small network with $n=\sage{ncells}$ neurons and activity $a=\sage{activity}$ we would obtain the following patterns to store in the network:\\

\begin{math}
Z^0 = \sage{vector(ZZ, Y[0])} 
\end{math}

\begin{math}
Z^1 = \sage{vector(ZZ, Y[1])} 
\end{math}

\begin{math}
Z^2 = \sage{vector(ZZ, Y[2])} 
\end{math}

\begin{math}
Z^3 = \sage{vector(ZZ, Y[3])} 
\end{math}

Note that a one in these vectors does not mean that the neuron fires during recall, but that the neuron was active in the corresponding original pattern.

\section{Learning rule}
To store a pattern $Z^p$ in the network, a Hebbian rule is applied. The matrix of synaptic strengths $J$ contains the synaptic strength between neuron $i$ and neuron $j$; $J_{ij}$ takes the value one if the $i$-th and $j$-th neurons are simultaneously active and connected, and zero otherwise.
\begin{equation*}
J_{ij}= W_{ij}Z^p_iZ^p_j
\end{equation*}

With this rule, plasticity takes place only at locations where connectivity exists (i.e., $W_{ij}=1$).
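A one-line sketch of this learning rule (assuming NumPy; \verb|store| is an illustrative name):

```python
import numpy as np

def store(W, Z):
    """Hebbian rule: J_ij = W_ij * Z_i * Z_j, so a synapse is strengthened
    only where a connection exists and both neurons are active in Z."""
    return W * np.outer(Z, Z)
```

Using \verb|np.outer| makes the pairing term $Z^p_i Z^p_j$ explicit: the outer product is one exactly where both neurons are active.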

\subsection{Spike timing-dependent implementation}
Note that this rule can easily be modified into a bidirectional spike-timing-dependent plasticity rule by simply adding a $\Delta t$ parameter.
\begin{equation*}
    J_{ij} = \begin{cases}
          A_{LTP}\,W_{ij}Z^p_iZ^p_j\, e^{-\Delta t/\tau}, & \text{if} \ \ \Delta t > 0 ,\\
          -A_{LTD}\,W_{ij}Z^p_iZ^p_j\, e^{\Delta t/\tau}, & \text{if} \ \ \Delta t < 0.
        \end{cases}
\end{equation*}

where $\Delta t = t_{j} - t_{i}$ is the time difference between the pre- and postsynaptic spikes.
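A sketch of this STDP variant under the same assumptions (NumPy; the defaults for $A_{LTP}$, $A_{LTD}$ and $\tau$ are illustrative, not values from the text):

```python
import numpy as np

def stdp_strength(W, Z, dt, A_ltp=1.0, A_ltd=1.0, tau=20.0):
    """Scale the Hebbian pairing W_ij * Z_i * Z_j by an exponential of the
    spike-time difference dt = t_j - t_i: LTP for dt > 0, LTD for dt < 0."""
    pairing = W * np.outer(Z, Z)
    if dt > 0:
        return A_ltp * pairing * np.exp(-dt / tau)
    return -A_ltd * pairing * np.exp(dt / tau)
```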
\section{State of the network}
The state of the network is given by a vector $X(t)$ of dimension $n$. The entry $X_i(t)$ takes the value one if the $i$-th neuron fires at time $t$, and zero otherwise. To resemble the stored patterns, the initial vector $X(0)$ is constructed as a random distortion of $Z^0$, and subsequent vectors $X(t)$ are updated according to an activation function $h(t)$.

\subsection{Initial state}
For the initial state $X(0)$ we construct a vector that is a random distortion of $Z^0$ by combining two constants. The first constant, $b_1$, describes the similarity between $X(0)$ and $Z^0$ in the first $na$ elements: if $b_1=1$, the state vector $X(0)$ is exactly the same as the pattern $Z^0$ in the first $na$ elements. A second constant, $b_n$, describes the similarity between $X(0)$ and $Z^0$ in the rest of the elements (those with zero activity): similarly, if $b_n=1$, the remaining elements (from $na+1$ to $n$) will be identical to those in $Z^0$.
\begin{equation*}
    \Pr(X_i(0) = 1) = 
        \begin{cases}
            b_1, & \text{for} \ \ i \in \{1, \ldots, na \}, \\ 
            (1-b_n), & \text{for} \ \ i \in \{na+1, \ldots , n\}.
        \end{cases}
\end{equation*}

or 

\begin{equation*}
    \Pr(X_i(0) = 0) = 
        \begin{cases}
            (1- b_1), & \text{for} \ \ i \in \{1, \ldots, na \}, \\ 
            b_n, & \text{for} \ \ i \in \{na+1, \ldots , n\}.
        \end{cases}
\end{equation*}
The initial input pattern $X(0)$ is simply the pattern $Z^0$ when both $b_1$ and $b_n$ are one.
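A minimal sketch of this distortion procedure (assuming NumPy; \verb|initial_state| is an illustrative name):

```python
import numpy as np

def initial_state(Z0, na, b1, bn, rng=None):
    """Distort Z^0 into X(0): each of the first na (active) entries fires
    with probability b1; each remaining entry fires with probability 1 - bn."""
    rng = np.random.default_rng(rng)
    n = len(Z0)
    p_fire = np.where(np.arange(n) < na, b1, 1.0 - bn)
    return (rng.random(n) < p_fire).astype(int)
```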

\subsubsection{Example states}

\begin{sagesilent}
ncells = 10 
activity = 0.4
Z = Pattern(n = ncells, a= activity)
activity = '%2.1f'%(activity)
\end{sagesilent}
Assume we have a network with $n=\sage{ncells}$ neurons and activity $a=\sage{activity}$. 

As described before, the first stored pattern will be:\\

\begin{math}
Z^0 = \sage{vector(ZZ, Z[0])} 
\end{math}

Let's now assume $b_1 = 0.5$ and $b_n=1$ (that is, the memory $Z^0$ is distorted into $X(0)$ by keeping only 50\% of its active neurons on average). We would obtain:\\

\begin{math}
X(0) = \sage{vector(ZZ, np.concatenate(([1,0,0,1], np.zeros(6, dtype=int))))} 
\end{math}

\subsection{The activation function: update of the network state}
The network starts from an input vector of activities $X(0)$. The state of the network is then updated at discrete time points $t = 1, 2, \ldots$
The state of every neuron in the network at time $t$ is obtained with an activation function $h_i(t)$, which simply computes the excitatory input to the $i$-th neuron at time $t$. The activation is normalized by the number of cells $n$ in the network.

\begin{equation*}
h_i(t+1) = \frac{1}{n} \sum^n_{j=1} J_{ij}X_{j}(t)
\end{equation*}

To decide whether a neuron fires, we compare the value of $h_i$ at time $t$ with a threshold function $T(t)$ given by:

\begin{equation*}
T(t) = g_0 + g_1 {S(t)}
\end{equation*}

where $g_0$ and $g_1$ are constants, and $S(t)$ is simply the average firing of the network:
\begin{equation*}
S(t) = \frac{1}{n}\sum^n_{i=1} X_i(t)
\end{equation*}

so that the threshold function is given by:
\begin{equation*}
T(t) = g_0 + \frac{g_1}{n}\sum^n_{i=1} X_i(t)
\end{equation*}

If $h_i(t)$ exceeds the value of $T(t)$, the neuron fires. The threshold function $T(t)$ is a linear function of the average firing of the network. Its slope represents an inhibitory cell that receives input from the entire network and provides inhibitory input to every cell in it.
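One update step of the network could be sketched as follows (assuming NumPy; \verb|step| is an illustrative name):

```python
import numpy as np

def step(J, X, g0, g1):
    """One discrete update: h_i(t+1) = (1/n) * sum_j J_ij X_j(t); neuron i
    fires when h_i exceeds the threshold T(t) = g0 + g1 * mean(X(t))."""
    n = len(X)
    h = J @ X / n                 # normalized excitatory input to each cell
    T = g0 + g1 * X.mean()        # activity-dependent (inhibitory) threshold
    return (h > T).astype(int)
```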

The function $h_i(t)$ gives the state of neuron $i$ at discrete time steps $t$. If we assume that the target memory is the initial pattern $Z^0$, the number of \emph{valid firings} is given by the following expression:
\begin{equation*}
S_l(t) = \sum_{i=1}^{na} X_i(t)
\end{equation*}
and the number of \emph{spurious firings} is given by:
\begin{equation*}
S_n(t) = \sum_{i=na+1}^{n} X_i(t)
\end{equation*}
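These two counts can be sketched as (NumPy; \verb|firing_counts| is an illustrative name):

```python
import numpy as np

def firing_counts(X, na):
    """Split firings relative to Z^0: S_l counts the first na entries
    (valid firings), S_n counts the remaining entries (spurious firings)."""
    return int(X[:na].sum()), int(X[na:].sum())
```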

\section{Recall}

Recall of the network is simply the vector of activity resulting after computation of the threshold. To see how similar this vector is to the original stored pattern $Z^0$, we calculate the cosine of the angle between the two vectors. This gives a value of 1 when retrieval is complete.
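A sketch of this recall measure (assuming NumPy; \verb|recall_quality| is an illustrative name):

```python
import numpy as np

def recall_quality(X, Z0):
    """Cosine of the angle between the recalled state X and the stored
    pattern Z^0; equals 1 for complete retrieval."""
    return float(X @ Z0 / (np.linalg.norm(X) * np.linalg.norm(Z0)))
```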


