\documentclass{amcs}

\title{Piecewise-Linear Neural Network: Possible Tool for Modeling of the Processes to be Controlled}

\author[ad1][]{Petr DOLEZEL}
\author[ad2][]{Miroslav FIKAR}

\address[ad1]{Faculty of Electrical Engineering and Informatics\\University of Pardubice, Namesti Cs. legii 565, Pardubice, Czech Republic\\ e-mail: \url{petr.dolezel@upce.cz}}
\address[ad2]{Institute of Information Engineering, Automation and Mathematics, Faculty of Chemical and Food Technology\\ Slovak University of Technology in Bratislava, Radlinskeho 9, Bratislava, Slovakia\\ e-mail: \url{miroslav.fikar@stuba.sk}}

\Runauthors{P. Dolezel and M. Fikar}

%Please do not remove these
%\Year{}
%\Vol{}
%\No{}
%\Startpage{}
%\Endpage{}
%\DOI{}
%\Received{10 May 2006}
%\Revised{24 October 2005}
%\Rerevised{15 December 2006}

\bibliographystyle{dcu}

\begin{document}
\begin{abstract}
The article introduces a new technique for process modeling. Since a nonlinear problem, once modeled by a piecewise-linear model, can be solved by many efficient techniques, the result of the introduced technique is a set of linear equations. Each of these equations is valid in some region of the state space and, together, they approximate the whole nonlinear process.
\end{abstract}
%
\begin{keywords}
artificial neural network, modeling, nonlinear systems.
\end{keywords}
\maketitle

\section{Introduction}

Piecewise-linear functions provide a useful tool for dealing with nonlinear problems. Once a nonlinear problem is modeled by some piecewise-linear function, it can be divided into a set of linear subproblems, each of which can be solved by some efficient algorithm. This idea was originally proposed in \cite{Chua77} and was expanded in \cite{Huang12} or \cite{Breiman93}.
Among others, this approach can be used in nonlinear process control design. Although the majority of control loops is still based on PID control \cite{Astrom95}, some special control systems are a challenge to design using PID-like controllers. There are several reasons for this difficulty: the processes to be controlled may be highly nonlinear, very complex or time-varying. Therefore, more sophisticated control strategies (e.g. adaptive control, robust control or predictive control) were presented in the second half of the twentieth century. Thus, a large collection of control techniques which can handle even highly nonlinear and complex processes is available these days. However, most of these techniques require a precise mathematical model of the process to be controlled. In the next paragraphs, a technique is introduced which can efficiently provide a currently valid linear model of the process even if the process is highly nonlinear.

\section{Problem Formulation}

As mentioned in the Introduction, the majority of recent control techniques requires a mathematical model of the process to be controlled. This means that a set of equations which allows us to predict the future behavior of the process is needed.
In linear dynamic system identification, several kinds of models are available, some of them fully stochastic (time series), others stochastic with an exogenous input. A widely accepted standard is established in \cite{Ljung99}. However, in this paper, the ARX model (Auto-Regressive model with eXogenous input) is considered; see Eq. (\ref{eq:ARX}).

\begin{equation}
  y(k)= \frac{B(z^{-1})}{A(z^{-1})}u(k)+\frac{1}{A(z^{-1})}v(k). \label{eq:ARX}
\end{equation}

In the equation above, $u$ is the input to the process, $y$ is the output, $v$ is the stochastic variable, $A$, $B$ are polynomials of the complex variable $z^{-1}$ and $k$ is the discrete time.
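For illustration, the deterministic part of model (\ref{eq:ARX}) can be simulated directly in its difference-equation form. The sketch below is a minimal illustration only, not part of the identification methodology; the function name and the coefficient values used in it are assumptions.

```python
# Minimal sketch: simulating the deterministic part of an ARX model,
#   y(k) = -sum_j a_j y(k-j) + sum_j b_j u(k-j),
# with A = [1, a_1, ..., a_n] and B = [0, b_1, ..., b_m].
def simulate_arx(a, b, u):
    n, m = len(a) - 1, len(b) - 1
    y = [0.0] * len(u)  # zero initial conditions assumed
    for k in range(len(u)):
        acc = 0.0
        for j in range(1, n + 1):
            if k - j >= 0:
                acc -= a[j] * y[k - j]
        for j in range(1, m + 1):
            if k - j >= 0:
                acc += b[j] * u[k - j]
        y[k] = acc
    return y
```

For instance, a first-order model with $a_1=-0.5$, $b_1=1$ responds to a unit impulse with a geometrically decaying sequence.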
The crucial task is to determine the polynomials $B(z^{-1})$, $A(z^{-1})$, which are then used by a controller to define a suitable control action; see, e.g., \cite{Bobal05} for some possibilities of performing it. In addition, if the process is significantly nonlinear, the coefficients of both polynomials shift depending on the operating point.
Therefore, the aim of the article is to introduce a methodology for determining currently valid polynomials $B(z^{-1})$, $A(z^{-1})$. The methodology is supposed to be efficient enough to be used online.
Consider a nonlinear SISO process that is to be controlled (Fig. \ref{fig1}). Then, the methodology should work as shown in Fig. \ref{fig2}.

\begin{figure}[!b]
 \centering
  \includegraphics[width=0.30\textwidth]{Fig1}
  \caption{SISO Process.}
  \label{fig1}
\end{figure}

\begin{figure}[!b]
 \centering
  \includegraphics[width=0.45\textwidth]{Fig2}
  \caption{The behavior of the methodology.}
  \label{fig2}
\end{figure}

The idea depicted above is not new; however, the way of determining the polynomials $B(z^{-1})$, $A(z^{-1})$ is. It uses a special topology of an artificial neural network and, contrary to related techniques, it is computationally simple.

\section{Artificial Neural Network for Universal Approximation}
Hornik \citeyear{Hornik89} proved that a standard multilayer feedforward network (MFN) with one hidden layer is capable of approximating any real measurable function to any desired degree of accuracy. The topology of an MFN with one hidden layer is depicted in Fig. \ref{fig3}, where the input layer brings the external inputs $x_1$, $x_2$, \ldots, $x_P$, the hidden layer contains $S$ neurons which process sums of weighted inputs, and the output neuron processes the sum of the weighted outputs of the hidden neurons. The dataflow between input $i$ and hidden neuron $j$ is gained by the weight $w^1_{~j,i}$. The dataflow between hidden neuron $j$ and the output neuron is gained by the weight $w^2_{~1,j}$. Neurons in the hidden layer contain a squashing activation function, while the output neuron contains a non-squashing activation function; see \cite{Hornik89} for a formal definition.
For practical applications, a continuous, bounded and monotonic activation function is used for the neurons in the hidden layer, and a continuous and monotonic activation function is used in the output neuron; for some examples, see \cite{Haykin99} and \cite{Nguyen03}.

\begin{figure}[!b]
 \centering
  \includegraphics[width=0.5\textwidth]{Fig3}
  \caption{Multilayer feedforward neural network with one hidden layer.}
  \label{fig3}
\end{figure}

The output of the network in Fig. \ref{fig3} can be expressed by the following equations.

\begin{equation}
y^{~1}_{a~j}=\sum_{i=1}^Pw^1_{~j,i}~x_i+w^1_{~j}
\label{eq:NNout1}
\end{equation}

\begin{equation}
y^{1}_{~j}=\phi^1\left(y^{~1}_{a~j}\right)
\label{eq:NNout2}
\end{equation}

\begin{equation}
y^{~2}_{a~1}=\sum_{j=1}^Sw^2_{~1,j}~y^{1}_{~j}+w^2_{~1}
\label{eq:NNout3}
\end{equation}

\begin{equation}
y=\phi^2\left(y^{~2}_{a~1}\right)
\label{eq:NNout4}
\end{equation}

In the equations above, $\phi^1(.)$ denotes the activation function of the hidden neurons and $\phi^2(.)$ denotes the output neuron activation function.
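The forward pass (\ref{eq:NNout1})--(\ref{eq:NNout4}) can be sketched in a few lines of code. This is a minimal illustration whose function and argument names are assumptions, with \texttt{W1[j][i]} standing for $w^1_{~j,i}$, \texttt{b1[j]} for $w^1_{~j}$, \texttt{W2[j]} for $w^2_{~1,j}$ and \texttt{b2} for $w^2_{~1}$:

```python
import math

def mfn_forward(x, W1, b1, W2, b2, phi1=math.tanh, phi2=lambda a: a):
    # Hidden layer: weighted input sum plus bias for each neuron,
    # passed through the squashing activation function phi1.
    hidden = [phi1(sum(W1[j][i] * x[i] for i in range(len(x))) + b1[j])
              for j in range(len(W1))]
    # Output neuron: weighted sum of hidden outputs plus bias through phi2.
    return phi2(sum(W2[j] * hidden[j] for j in range(len(hidden))) + b2)
```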
Apparently, the network has to be well trained to achieve sufficient approximation qualities. In other words, the network has to learn associations between a specified set of input-output pairs (the training set). As many training techniques have been presented, from the simple back-propagation algorithm \cite{Rumelhart86} to specialized hybrid techniques using evolutionary algorithms \cite{Blanco01}, they are not defined here. However, an important note is that analytical derivatives of the activation functions are required for training by any gradient-based technique.

\section{Process Identification by MFN}
Process identification is a statistical procedure which leads to a mathematical model of a dynamical process from measured data. Let us narrow the problem down to identifying the coefficients of the polynomials $B(z^{-1})$, $A(z^{-1})$, where

\begin{equation}
B(z^{-1})=[0 + b_1z^{-1} + b_2z^{-2} + \ldots + b_mz^{-m}],
\label{eq:B}
\end{equation}

\begin{equation}
A(z^{-1})=[1 + a_1z^{-1} + a_2z^{-2} + \ldots + a_nz^{-n}].
\label{eq:A}
\end{equation}

The deterministic part of the linear models with exogenous input can be illustrated as seen in Fig. \ref{fig4}.

\begin{figure}[!b]
 \centering
  \includegraphics[width=0.4\textwidth]{Fig4}
  \caption{The deterministic part of the models with exogenous input.}
  \label{fig4}
\end{figure}

Contrary to Fig. \ref{fig4}, process identification using an ANN (the whole procedure is defined in \cite{Haykin99}) provides differently shaped models. Those models are rarely usable for process control, but they are able to model even highly nonlinear processes, since MFNs are universal approximators. See Fig. \ref{fig5}, where the deterministic part of the NNARX (Neural Network Auto-Regressive model with eXogenous input) model is shown. The NNARX model is a widely used representative of neural models.

\begin{figure}[!b]
 \centering
  \includegraphics[width=0.5\textwidth]{Fig5}
  \caption{The deterministic part of the NNARX model.}
  \label{fig5}
\end{figure}

The output of the MFN in Fig. \ref{fig5} is determined by Eqs. (\ref{eq:NNout1})--(\ref{eq:NNout4}). However, the NNARX model is a black-box-like structure and it cannot be directly used for control action evaluation. Thus, a procedure for transforming the NNARX model into the model described in Fig. \ref{fig4} is proposed in the next paragraphs.

\section{Piecewise-Linear Neural Model}
As mentioned in Section 3, the MFN has to contain a squashing activation function in the hidden layer and a non-squashing activation function in the output neuron. In addition, these activation functions are expected to be differentiable so that some gradient-based training technique can be used. In point of fact, only several types of activation functions are really being applied. In most cases, the hyperbolic tangent or sigmoid activation function is used in the hidden layer and a linear activation function is used in the output neuron; see Fig. \ref{fig6}.

\begin{figure}[!b]
 \centering
  \includegraphics[width=0.5\textwidth]{Fig6}
  \caption{Activation functions.}
  \label{fig6}
\end{figure}

This article offers another approach. It suggests replacing the hyperbolic tangent activation function in the hidden layer with the linear saturated activation function (\ref{eq:LinSat}).

\begin{equation}
y_{~i}= \left\{ \begin{array}{ccc@{\quad}r}
    1 & \text{for} & y_{a~i}>1 \\
    y_{a~i} & \text{for} & -1\leq y_{a~i}\leq1 \\
    -1 & \text{for} & y_{a~i}<-1 \\
    \end{array} \right.
\label{eq:LinSat}
\end{equation}

Although the linear saturated activation function is not differentiable everywhere, the MFN then becomes a piecewise-linear structure. Furthermore, the approximation qualities of the MFN are expected to stay similar, as the courses of both functions resemble each other; see Fig. \ref{fig7}.
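A one-line sketch of the linear saturated activation function (\ref{eq:LinSat}) (the name \texttt{linsat} is an assumption):

```python
def linsat(a):
    # Clip the neuron activation to the interval [-1, 1].
    return max(-1.0, min(1.0, a))
```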

\begin{figure}[!b]
 \centering
  \includegraphics[width=0.5\textwidth]{Fig7}
  \caption{Activation functions comparison.}
  \label{fig7}
\end{figure}

Let us presume the existence of an NNARX model which uses an MFN with linear saturated activation functions in the hidden neurons and a linear (identity) activation function in the output neuron. Apparently, this model acts as a piecewise-linear model, and one linear submodel turns into another when any hidden neuron becomes saturated or leaves saturation.
Although the output of the MFN used in this NNARX model can be evaluated by Eqs. (\ref{eq:NNout1})--(\ref{eq:NNout4}), another way of computing the MFN output is useful. Let us define the saturation vector $\mathbf{v}$ of $S$ elements. This vector indicates the saturation states of the hidden neurons; see (\ref{eq:VektorV}).

\begin{equation}
v_i= \left\{ \begin{array}{ccc@{\quad}r}
    1 & \text{for} & y^{1}_{~i}=1 \\
    0 & \text{for} & -1< y^{1}_{~i}<1 \\
    -1 & \text{for} & y^{1}_{~i}=-1 \\
    \end{array} \right.
    i = 1, 2, \ldots, S.
\label{eq:VektorV}
\end{equation}
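The saturation vector $\mathbf{v}$ of (\ref{eq:VektorV}) can be computed from the hidden-layer outputs, for example as follows (a sketch with an assumed helper name):

```python
def saturation_vector(hidden_outputs):
    # 1 for a neuron saturated high, -1 for saturated low, 0 otherwise.
    return [1 if y >= 1.0 else (-1 if y <= -1.0 else 0)
            for y in hidden_outputs]
```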

Now, the ANN output can be expressed by

\begin{equation}
y(k) = -\sum_{j=1}^n a_jy(k-j)+\sum_{j=1}^mb_ju(k-j)+c,
\label{eq:DifRov}
\end{equation}

where

\begin{equation}
 a_j=-\sum_{i=1}^S w^2_{~1,i}(1-|v_i|)w^1_{~i,j}
\label{eq:Koef1}
\end{equation}

\begin{equation}
 b_j=\sum_{i=1}^S w^2_{~1,i}(1-|v_i|)w^1_{~i,j+n}
\label{eq:Koef2}
\end{equation}

\begin{equation}
 c=w^2_{~1}+\sum_{i=1}^S\left(w^2_{~1,i}v_i+(1-|v_i|)w^2_{~1,i} w^1_{~i}\right)
\label{eq:Koef3}
\end{equation}

Thus, the difference equation (\ref{eq:DifRov}) defines the MFN output and it is linear in some neighbourhood of the current state (in the neighbourhood where the saturation vector $\mathbf{v}$ stays constant).
In other words, if a neural model of any nonlinear process in the form of Fig. \ref{fig5} is designed, then it is simple to determine the parameters of a linear difference equation which approximates the process behaviour in some neighbourhood of the current state.
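Equations (\ref{eq:Koef1})--(\ref{eq:Koef3}) can be combined into one routine that extracts the currently valid linear submodel from the MFN weights and the saturation vector. The sketch below assumes the NNARX input ordering $[y(k-1), \ldots, y(k-n), u(k-1), \ldots, u(k-m)]$; the function name is hypothetical.

```python
def local_linear_model(W1, b1, W2, b2, v, n, m):
    # W1[i][j] ~ w^1_{i,j}, b1[i] ~ w^1_i, W2[i] ~ w^2_{1,i}, b2 ~ w^2_1.
    S = len(v)
    # Effective gain of each hidden neuron: zero when saturated,
    # w^2_{1,i} when operating in the linear part.
    g = [(1 - abs(v[i])) * W2[i] for i in range(S)]
    a = [-sum(g[i] * W1[i][j] for i in range(S)) for j in range(n)]
    b = [sum(g[i] * W1[i][n + j] for i in range(S)) for j in range(m)]
    c = b2 + sum(W2[i] * v[i] + g[i] * b1[i] for i in range(S))
    return a, b, c
```

Note that a saturated neuron contributes only a constant ($w^2_{~1,i}v_i$) to $c$, which is exactly why the model is linear while $\mathbf{v}$ stays constant.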
The last step is the determination of the coefficients of the polynomials $B(z^{-1})$, $A(z^{-1})$ from the parameters of equation (\ref{eq:DifRov}). Let us define

\begin{equation}
 \tilde{u}(k)=u(k)-u_0,
\label{eq:uTrans}
\end{equation}

where $u_0$ is a constant. Then, equation (\ref{eq:DifRov}) turns into

\begin{equation}
y(k) = -\sum_{j=1}^n a_jy(k-j)+\sum_{j=1}^mb_j\tilde{u}(k-j)+c+\sum_{j=1}^mb_ju_0.
\label{eq:DifRovTrans}
\end{equation}

Equation (\ref{eq:DifRovTrans}) becomes free of the constant term if equation (\ref{eq:u0}) is satisfied.

\begin{equation}
u_0=-\frac{c}{\sum_{j=1}^mb_j}.
\label{eq:u0}
\end{equation}

Now, equation (\ref{eq:DifRovTrans}) can be written in the following way:

\begin{equation}
  y(k)= \frac{B(z^{-1})}{A(z^{-1})}\tilde{u}(k), \label{eq:DARX}
\end{equation}

where the coefficients of the polynomials $B(z^{-1})$, $A(z^{-1})$ are determined by equations (\ref{eq:Koef1}) and (\ref{eq:Koef2}), respectively. Equation (\ref{eq:DARX}) corresponds to the model in Fig. \ref{fig4} with respect to equations (\ref{eq:uTrans}) and (\ref{eq:u0}).
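The offset (\ref{eq:u0}) and the cancellation of the constant term can be checked numerically; a minimal sketch with purely illustrative values:

```python
def input_offset(b_coeffs, c):
    # u0 chosen so that c + (sum of b_j) * u0 = 0; the transformed
    # difference equation in u~(k) = u(k) - u0 then has no constant term.
    return -c / sum(b_coeffs)
```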
A comprehensive diagram of the described technique for nonlinear process control is shown in Fig. \ref{fig8} ($r$ is a set point).

\begin{figure}[!b]
 \centering
  \includegraphics[width=0.5\textwidth]{Fig8}
  \caption{The diagram of the described technique for nonlinear process control.}
  \label{fig8}
\end{figure}

\section{Example 1}

As the first example, the following Hammerstein system, consisting of a static nonlinearity in series with a first-order linear system, is considered \cite{Nelles01}.

\begin{equation}
  y(k)= 0.1 \arctan\left(u(k-1)\right) + 0.9 y(k-1). \label{eq:Sys1}
\end{equation}

Step-like and sine-like excitations of the system are shown in Fig. \ref{fig9}.

\begin{figure}[!b]
 \centering
  \includegraphics[width=0.5\textwidth]{Fig9}
  \caption{Responses of the demonstrative system.}
  \label{fig9}
\end{figure}

To divide this system into linear subsystems according to the algorithm described above, it is necessary to design a neural model of the system in the shape of Fig. \ref{fig5}, where the neurons in the hidden layer of the MFN contain the linear saturated activation function and the output neuron contains the linear (identity) activation function. This procedure involves training and testing set acquisition, neural network training and pruning, and neural model validation. As this sequence of processes is illustrated closely in many other publications \cite{Haykin99}, \cite{Nguyen03}, it is not repeated here. See Appendix A of this paper for information about handling the missing derivative of the linear saturated activation function.
Eventually, as the result of the procedure, a neural model with four neurons (three in the hidden layer, one in the output layer) is designed. Besides the number of neurons, the important results are the values of the parameters $w^{1}_{~i,j}$, $w^{1}_{~i}$, $w^{2}_{~1,i}$, $w^{2}_{~1}$, which are necessary for the evaluation of the coefficients $a_j$, $b_j$, and $c$; see (\ref{eq:Koef1}), (\ref{eq:Koef2}) and (\ref{eq:Koef3}). The resulting MFN is shown in Fig. \ref{fig10}.

\begin{figure}[!b]
 \centering
  \includegraphics[width=0.5\textwidth]{Fig10}
  \caption{The weights of resulting multilayer feedforward network.}
  \label{fig10}
\end{figure}

As described above, the structure in Fig. \ref{fig10} can be easily transformed into equation (\ref{eq:DARX}). For this particular case, (\ref{eq:DARX}) becomes (\ref{eq:DARXex}).

\begin{equation}
  y(k)= \frac{\left[0 + b_1 z^{-1} \right]}{\left[1 + a_1 z^{-1} \right]}\tilde{u}(k), \label{eq:DARXex}
\end{equation}

where

\begin{equation}
 a_1=-\sum_{i=1}^3 w^2_{~1,i}(1-|v_i|)w^1_{~i,1},
\label{eq:Koef11}
\end{equation}

\begin{equation}
 b_1=\sum_{i=1}^3 w^2_{~1,i}(1-|v_i|)w^1_{~i,2},
\label{eq:Koef22}
\end{equation}

\begin{equation}
 \tilde{u}(k)=u(k)+\frac{c}{b_1},
\label{eq:uTrans2}
\end{equation}

\begin{equation}
 c=w^2_{~1}+\sum_{i=1}^3\left(w^2_{~1,i}v_i+(1-|v_i|)w^2_{~1,i} w^1_{~i}\right)
\label{eq:Koef33}
\end{equation}

Since there are three neurons in the hidden layer of the used MFN, the vector $\mathbf{v}$ can potentially gather up to 27 states; see (\ref{eq:VektorV}). Thus, there are up to 27 linear models stored in the structure shown in Fig. \ref{fig10}. Transitions between these linear models can be determined by solving the following set of equations.

\begin{equation}
\begin{array}
{l@{\quad}c}
    y_{a~i}^{~1} = -1\\
    y_{a~i}^{~1} = 1\\ \end{array}
    i = 1, 2, \ldots, S.
\label{eq:Trans1}
\end{equation}

Using (\ref{eq:NNout1}) and considering Fig. \ref{fig10}, equations (\ref{eq:Trans1}) turn to

\begin{equation}
\begin{array}
{l@{\quad}c}
    w^1_{~i,1} y(k-1) + w^1_{~i,2} u(k-1) + w^1_{~i} = -1\\
    w^1_{~i,1} y(k-1) + w^1_{~i,2} u(k-1) + w^1_{~i} = 1\\ \end{array}
    i = 1, 2, \ldots, S.
\label{eq:Trans2}
\end{equation}

By solving (\ref{eq:Trans2}), a map of the regions where each linear submodel is valid is obtained. For a reasonable part of the state space, the resulting map is shown in Fig. \ref{fig11}. For better illustration, the step-like response of the demonstrative system and its piecewise-linear model is shown in Fig. \ref{fig12}.
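For the first-order model of this example, each pair of equations in (\ref{eq:Trans2}) describes two straight lines in the $(y(k-1), u(k-1))$ plane. A sketch of solving them for $u(k-1)$, assuming $w^1_{~i,2}\neq0$ (names are hypothetical):

```python
def boundary_lines(w_y, w_u, bias):
    # Solve w_y * y + w_u * u + bias = -1 and = 1 for u as functions of y.
    low = lambda y: (-1.0 - bias - w_y * y) / w_u
    high = lambda y: (1.0 - bias - w_y * y) / w_u
    return low, high
```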

\begin{figure}[!t]
 \centering
  \includegraphics[width=0.5\textwidth]{Fig11}
  \caption{The map of linear regions.}
  \label{fig11}
\end{figure}

\begin{figure}[!t]
 \centering
  \includegraphics[width=0.5\textwidth]{Fig12}
  \caption{The step-like response of the demonstrative system and its piecewise-linear model.}
  \label{fig12}
\end{figure}

In Fig. \ref{fig12}, the regions in which particular linear models are used are marked and numbered. The numbering corresponds with the numbers in Fig. \ref{fig11}. In addition, the used models are summed up in Table \ref{tab1}.

\begin{table}[!b]
\caption{Used linear submodels - see (\ref{eq:DARXex})}
\label{tab1}
\begin{tabular}{p{1.8cm}p{1.8cm}p{1.8cm}p{1cm}}
\hline\noalign{\smallskip}
Number & $a_1$ & $b_1$ & $c$  \\
\noalign{\smallskip}\hline\noalign{\smallskip}
1 & -0.8995  & 0.0125  & -0.0860\\
2 & -0.2213  & 0.0162  & -0.9713\\
3 & -0.2213  & 0.0162  & -0.7975\\
4 & -0.8995  & 0.0125  & 0.0878\\
5 & -0.2213  & 0.0162  & 0.9801\\
6 & -0.9029  & 0.0801  & -0.0010\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\end{table}

Hence, the technique is able to determine a set of linear models of the nonlinear process and it is even possible to compute the regions of validity of these linear models.

\section{Example 2}

As the second example, let us consider a helicopter model. The helicopter model is a twin rotor aerodynamic system (see Fig. \ref{fig13}) which is designed to simulate real helicopter dynamics. As a plant, it is a significantly nonlinear system with two inputs (the power of the main rotor and the power of the tail rotor) and two outputs (vertical elevation and yaw motion). All quantities are normalized to the interval $[-1; 1]$. The point of this section is to design a piecewise-linear model of the vertical elevation part of the helicopter model.

\begin{figure}[!b]
 \centering
  \includegraphics[width=0.5\textwidth]{Fig13}
  \caption{Helicopter model.}
  \label{fig13}
\end{figure}

Step-like excitation of the system to be modeled is shown in Fig. \ref{fig14}.

\begin{figure}[!b]
 \centering
  \includegraphics[width=0.5\textwidth]{Fig14}
  \caption{Response to a sum of step functions.}
  \label{fig14}
\end{figure}

The division into linear subsystems is performed in the same way as in the previous example. In this case, eq. (\ref{eq:DARX}) turns into 

\begin{equation}
  y(k)= \frac{\left[0 + b_1 z^{-1} \right]}{\left[1 + a_1 z^{-1} + a_2 z^{-2} \right]}\tilde{u}(k), \label{eq:DARXex2}
\end{equation}

since the helicopter model is a more complex system than system (\ref{eq:Sys1}). Nevertheless, it is still possible to visualize the state space of the model (\ref{eq:DARXex2}). The resulting MFN is shown in Fig. \ref{fig15} and the state space divided into linear regions is shown in Fig. \ref{fig16}. Eventually, the step-like response of the original system and its piecewise-linear model is shown in Fig. \ref{fig17} and the parameters of the linear models involved in that response are listed in Table \ref{tab2}.

\begin{figure}[!b]
 \centering
  \includegraphics[width=0.5\textwidth]{Fig15}
  \caption{The weights of resulting multilayer feedforward network 2.}
  \label{fig15}
\end{figure}

\begin{figure}[!t]
 \centering
  \includegraphics[width=0.5\textwidth]{Fig16}
  \caption{The map of linear regions 2.}
  \label{fig16}
\end{figure}

\begin{figure}[!t]
 \centering
  \includegraphics[width=0.5\textwidth]{Fig17}
  \caption{The step-like response of the demonstrative system and its piecewise-linear model 2.}
  \label{fig17}
\end{figure}

\begin{table}
\caption{Used linear submodels - see (\ref{eq:DARXex2})}
\label{tab2}
\begin{tabular}{p{1.3cm}p{1.2cm}p{1.2cm}p{1.2cm}p{1.0cm}}
\hline\noalign{\smallskip}
Number & $b_1$ & $a_1$ & $a_2$  & $c$  \\
%\noalign{\smallskip}\svhline\noalign{\smallskip}
\noalign{\smallskip}\hline\noalign{\smallskip}
1 & 0.0807  & -1.6916  & 0.7524  & -0.0969\\
2 & 0.0360  & -1.6204  & 0.7805  & -0.1870\\
3 & 0.1999  & -1.5385  & 0.9174  & -0.3993\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\end{table}

\section{Conclusions}
The aim of this article is to introduce a new technique for process identification using a piecewise-linear neural network. The technique is described in Sections 3-5 and its functionality is demonstrated in Sections 6 and 7. The most interesting feature of this approach is that the resulting model effectively stores a finite number of linear models (each valid in some region of the state space) and these linear models can be used locally, e.g. for controller tuning.
The technique can be used in two ways. It is possible to determine the piecewise-linear model in the shape of Fig. \ref{fig5} and use this structure online for continuous linearization of the process (which is a very effective procedure), or the structure of the piecewise-linear model can be divided offline into a set of linear models and comprehensively analyzed. Both possibilities bring decent advantages, especially for nonlinear process control design.


\begin{acknowledgment}
The authors wish to thank ...
\end{acknowledgment}

\bibliography{DolezelPaper}

\begin{biography}[photo]{Petr Dolezel} received his Ph.D. degree from the University of Pardubice, Czech Republic, in 2009. He is a research assistant and lecturer at the Faculty of Electrical Engineering and Informatics of the same university. His recent work includes evolutionary and neural computation applied to process control.
\end{biography}

\begin{appendices}{Training of MFN Using Gradient Descent Algorithms}
Training means using a set of observations to find optimal (in some sense) values of the weights and biases of the trained neural network.
For the purposes of the described technique, it is required to train an MFN with linear saturated activation functions. Generally, the family of error back-propagation gradient descent (BPG) algorithms is used for MFN training \cite{Haykin99}. These algorithms try to minimize the error function (\ref{eq:Err}) by iteratively changing the weights and biases $w^k_{~j,i}$ along the negative gradient direction; see (\ref{eq:Grad}).

\begin{equation}
  E=\frac{1}{2}\sum_{p}\left(o_p-y_p\right)^{2},
\label{eq:Err}
\end{equation}

where $y$ is the actual output of the network, $o$ is its desired value and $p$ is the index of the actual pattern of the training set.

\begin{equation}
  w^{k}_{~j,i}\left(new\right)=w^{k}_{~j,i}\left(old\right) - f\left(\frac{\partial{E}}{\partial{ w^{k}_{~j,i}}}\right).
\label{eq:Grad}
\end{equation}

The gradient is determined analytically. Thus, every component of the network to be trained has to be differentiable, which is not met in this case, since the linear saturated activation function has two points with an undefined derivative.
There are two possible solutions: either to replace BPG with some non-gradient search technique or to find a suitable approximation for the derivative of the activation function. The second approach is discussed and tested in \cite{Dolezel13}; in simple words, it suggests approximating the derivative of the linear saturated activation function (\ref{eq:LinSat}) either with its sequential (piecewise) derivative or with the derivative of the hyperbolic tangent activation function (Fig. \ref{fig7}). The tests in the mentioned source showed that both approaches can be efficiently used and provide similar results.
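A sketch of the two derivative approximations mentioned above: the piecewise ("sequential") derivative of (\ref{eq:LinSat}) and the derivative of the hyperbolic tangent as a smooth surrogate (the function and mode names are assumptions):

```python
import math

def linsat_derivative(a, mode="sequential"):
    if mode == "sequential":
        # Piecewise derivative: 1 inside the linear part, 0 when saturated.
        return 1.0 if -1.0 <= a <= 1.0 else 0.0
    # Smooth surrogate: derivative of tanh.
    return 1.0 - math.tanh(a) ** 2
```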


\end{appendices}

\makeinfo

\end{document}
