\documentclass{article}
\usepackage{amsmath}
\usepackage{listings}
\usepackage{xcolor}

\lstset{
  basicstyle = \sffamily,
  commentstyle = \color{olive}\ttfamily,
  keywordstyle = \color{blue!70}\bfseries,
  flexiblecolumns,
  numbers = left,
  showspaces = false,
  numberstyle = \ttfamily,
  showstringspaces = false,
  captionpos = top,
  %frame = lrtb,
  frame = shadowbox,
}

\lstdefinestyle{Python}{
  language = Python,
  basicstyle = \ttfamily,
  numberstyle = \ttfamily,
  stringstyle = \color{magenta}\ttfamily,
  keywordstyle = [2] \color{teal},
  breaklines = true,
  columns = fixed,
  basewidth = 0.5em,
}

\begin{document}
\title{Basic Math into Deep Learning}
\author{Dandelight}
%\address{Sichuan University}
\date{2021/01/06}

\maketitle
\begin{center}
  \texttt{guoruiming@stu.scu.edu.cn}
\end{center}
\begin{abstract}
  Further than \texttt{helloworld}, nearer than the $A+B$ problem.
\end{abstract}

Suppose we have a \emph{neuron} that takes two input values, $x_0$ and $x_1$.
Each of the two values would be weighted by a \textbf{factor}, 
$w_0$ and $w_1$, respectively, before being summed together, along with an
optional \emph{bias}, $b$, into the result.

\[ x = \begin{pmatrix}
  x_0 & x_1
\end{pmatrix} 
, w = \begin{pmatrix}
  w_0 \\ w_1
\end{pmatrix} \]

\[
  z = x \cdot w + b
\]

Straightforward, isn't it?
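In code, the pre-activation is just a dot product plus the bias. A minimal Python sketch (the name \texttt{pre\_activation} and the numeric values are ours, chosen only for illustration):

\begin{lstlisting}[style = Python]
def pre_activation(x, w, b):
    # z = x . w + b: weight each input, sum, add the bias.
    return sum(x_i * w_i for x_i, w_i in zip(x, w)) + b

z = pre_activation([1.0, 2.0], [0.5, -0.25], 0.1)
# 1.0*0.5 + 2.0*(-0.25) + 0.1 = 0.1
\end{lstlisting}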

Now we apply an \emph{activation function} to it to get the neuron's
output:

\begin{equation}\label{eq:single_neuron_output}
  y = f(z) = \begin{cases}
    0 & \text{if } z < t \\
    1 & \text{if } z \ge t
  \end{cases}
\end{equation}

where $t$ is a threshold.
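This thresholding can be written as a one-line Python function (a sketch; the default threshold of $0$ is our choice for illustration):

\begin{lstlisting}[style = Python]
def step(z, t=0.0):
    # Fires (outputs 1) once the weighted sum reaches the threshold t.
    return 1 if z >= t else 0
\end{lstlisting}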
The \emph{step function} is a key component of the original perceptron, but more
advanced activation functions have since been introduced, offering:

\begin{itemize}
  \item non-linearity
  \item continuous differentiability
\end{itemize}

The most common activation functions are:

\begin{itemize}
  \item \textsc{Sigmoid}:
    \[
      \sigma (z) = \frac{1}{1+\mathrm{e}^{-z}}
    \]
  \item \textsc{Derivative of the Sigmoid}:
    \[
      \sigma' (z) = \sigma (z) \, (1 - \sigma (z))
    \]
    Proof:
    \[
      \sigma (z) = \frac{1}{1+\mathrm{e}^{-z}} = \frac{\mathrm{e}^z}{\mathrm{e}^z+1}
           = 1 - (\mathrm{e}^z + 1)^{-1}
    \]
    \[
      \sigma' (z) = \frac{\mathrm{e}^z}{(\mathrm{e}^z + 1)^2}
            = \frac{\mathrm{e}^z}{\mathrm{e}^z + 1} \cdot \frac{1}{\mathrm{e}^z + 1}
            = \sigma (z) \, (1 - \sigma (z))
    \]
  \item \textsc{Hyperbolic Tangent}
    \[
      \tanh (z) = \frac{\mathrm{e}^z - \mathrm{e}^{-z}}
                      {\mathrm{e}^z + \mathrm{e}^{-z}}
    \]
  \item \textsc{Rectified Linear Unit (ReLU)}
    \[
      \mathrm{ReLU} (z) = \max (0, z) = \begin{cases}
        0 & \text{if } z < 0 \\
        z & \text{if } z \ge 0
      \end{cases}
    \]
\end{itemize}
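These activations fit in a few lines of Python; \texttt{math.tanh} already provides the hyperbolic tangent, and the finite-difference check at the end confirms the sigmoid-derivative identity proved above (the sample point $z = 0.7$ is arbitrary):

\begin{lstlisting}[style = Python]
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_prime(z):
    # Closed form from the proof: sigma(z) * (1 - sigma(z)).
    s = sigmoid(z)
    return s * (1.0 - s)

def relu(z):
    return max(0.0, z)

# Central finite difference vs. the closed-form derivative.
h = 1e-6
z = 0.7
numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2.0 * h)
assert abs(numeric - sigmoid_prime(z)) < 1e-8
\end{lstlisting}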

We have modeled a simple (yet powerful!) artificial neuron.

It can:

receive a signal $\rightarrow$ process it $\rightarrow$ output a value
that can be forwarded.

Usually, neural networks are organized into \emph{layers}, that is, sets of
neurons that typically receive the same input and apply the same operation.

Our formula needs a small change when it comes to multi-layer NNs.

\[
  z = x \cdot W + b
\]

\[
  W = \begin{pmatrix}
    \vdots & \vdots & \vdots \\
    w_A    & w_B    & w_C    \\
    \vdots & \vdots & \vdots
  \end{pmatrix} = \begin{pmatrix}
    w_{A1} & w_{B1} & w_{C1} \\
    w_{A2} & w_{B2} & w_{C2}
  \end{pmatrix}
\]

\[
  b = \begin{pmatrix}
    b_A    & b_B    & b_C
  \end{pmatrix}
  , z = \begin{pmatrix}
    z_A    & z_B    & z_C
  \end{pmatrix}
\]

\[ y = f(z) = \begin{pmatrix}
  f(z_A)  & f(z_B)  & f(z_C)
\end{pmatrix} \]
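The layer computation above can be sketched in plain Python. A minimal, hand-rolled forward pass for a 2-input layer with three neurons $A$, $B$, $C$; the function name \texttt{layer\_forward} and all numeric values are ours, for illustration only:

\begin{lstlisting}[style = Python]
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer_forward(x, W, b, f=sigmoid):
    # z = x . W + b, then f applied elementwise.
    # W[i][j] is the weight from input i to neuron j,
    # so each column of W holds one neuron's weights.
    z = [sum(x[i] * W[i][j] for i in range(len(x))) + b[j]
         for j in range(len(b))]
    return [f(z_j) for z_j in z]

x = [1.0, 2.0]
W = [[0.1, 0.2, 0.3],   # w_{A1}, w_{B1}, w_{C1}
     [0.4, 0.5, 0.6]]   # w_{A2}, w_{B2}, w_{C2}
b = [0.0, 0.0, 0.0]
y = layer_forward(x, W, b)   # one output per neuron
\end{lstlisting}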


What about training a model? Here is the code!


\lstinputlisting[
  style = Python,
  caption = {\textbf{neuron.py}},
  label = {neuron.py},
]{neuron.py}


\begin{lstlisting}[
  style = Python,
  caption = {main.py},
]
# Now we apply our work to classification.
\end{lstlisting}



\end{document}

