<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<!-- saved from url=(0045)http://www.philbrierley.com/code/bpproof.html -->
<html><head><meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
		
		<link rel="stylesheet" type="text/css" href="stylesheet1.css">
		<script language="JavaScript" src="emails.js"></script><style type="text/css"></style>
		<title>BackProp Algorithm Proof</title>
		<script language="JAVASCRIPT" type="TEXT/JAVASCRIPT">
		<!-- Hide script from old browsers
	
		if (top.location == self.location) {
		top.location.href = "../main.html?code/bpproof.html&code/codeleft.html"
		}
	
		// End hiding script from old browsers -->
		</script>

	</head>

<body class="cornerblue">
<table border="0"><tbody><tr><td>



<p>
<font class="normal">
The algorithm derivation below can be found in Brierley [<a href="http://www.philbrierley.com/code/bpproof.html#ref1">1</a>] and Brierley and Batty [<a href="http://www.philbrierley.com/code/bpproof.html#ref2">2</a>]. Please refer to these for a hard copy.  
</font>
</p>

<hr>

<p align="center">
<font class="large">Back Propagation Weight Update Rule</font><br>
</p>

<p>
<font class="normal">
This idea was first described by Werbos [<a href="http://www.philbrierley.com/code/bpproof.html#ref3">3</a>] and popularised by Rumelhart <i>et al.</i> [<a href="http://www.philbrierley.com/code/bpproof.html#ref4">4</a>].
</font></p><font class="normal">

<p align="center">
<img src="image1.gif"><br>
<font class="small"><i>Fig 1 A multilayer perceptron</i></font>
</p>

<p>
<font class="normal">
Consider the network above, with one layer of hidden neurons and one output neuron. When an input vector is propagated through the network, for the current set of weights there is an output <i>Pred</i>. The objective of supervised training is to adjust the weights so that the difference between the network output <i>Pred</i> and the required output <i>Req</i> is reduced. This calls for an algorithm that reduces the absolute error; since the squared error increases monotonically with the absolute error, this is equivalent to reducing the squared error, where:<br><br>
</font>
</p>

<p>
</p><table align="center">
<tbody><tr>
<td>Network Error</td><td>=</td><td><i>Pred</i>&nbsp;-&nbsp;<i>Req</i></td>
</tr><tr>
<td></td><td>=</td><td><i>E</i></td>
</tr>
</tbody></table>
<p></p>

<p align="right">
<font class="normal">
(1)<br>
</font>
</p>
	

<p>
<font class="normal">				 					
The algorithm should adjust the weights such that <i>E</i><sup><font class="small">2</font></sup> is minimised. Back-propagation is one such algorithm: it performs a gradient descent minimisation of <i>E</i><sup><font class="small">2</font></sup>.<br><br>
In order to minimise <i>E</i><sup><font class="small">2</font></sup>, its sensitivity to each of the weights must be calculated. In other words, we need to know what effect changing each of the weights will have on <i>E</i><sup><font class="small">2</font></sup>. If this is known, then the weights can be adjusted in the direction that reduces the absolute error.<br><br>
The notation for the following description of the back-propagation rule is based on the diagram below.
</font>
</p>

<p align="center">
<img src="image2.gif">
<br>
<font class="small">
<i>Fig 2 Notation used</i><br>
</font>
</p>

<p>
<font class="normal">
The dashed line represents a neuron <i>B</i>, which can be either a hidden or the output neuron. The outputs of <i>n</i> neurons (<i>O</i></font>
<font class="small"><i><sub>1</sub></i></font>
<font class="normal">...<i>O</i></font>
<font class="small"><i><sub>n</sub></i></font>
<font class="normal">) in the preceding layer provide the inputs to neuron <i>B</i>. If neuron <i>B</i> is in the hidden layer then these outputs are simply the elements of the input vector.<br><br>
These outputs are multiplied by the respective weights (<i>W<font class="small"><sub>1B</sub></font>...W<font class="small"><sub>nB</sub></font></i>), where <i>W<font class="small"><sub>nB</sub></font></i> is the weight connecting neuron <i>n</i> to neuron <i>B</i>. The summation function adds together all these products to provide the input, <i>I<font class="small"><sub>B</sub></font></i>, that is processed by the activation function&nbsp;&nbsp;<font class="curly"><i>f </i></font>(<b>.</b>) of neuron <i>B</i>.&nbsp;&nbsp;<font class="curly"><i>f </i></font>(<i>I<font class="small"><sub>B</sub></font></i>) is the output, <i>O<font class="small"><sub>B</sub></font></i>, of neuron <i>B</i>.<br><br>
For the purpose of this illustration, let neuron 1 be called neuron <i>A</i> and then consider the weight <i>W<font class="small"><sub>AB</sub></font></i> connecting the two neurons.<br><br>
The approximation used for the weight change is given by the delta rule:
</font>
</p>
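<p><font class="normal">
As a sketch of the computation just described, neuron <i>B</i>'s summation and activation can be written in a few lines of Python. The function and variable names (and the tanh choice and sample numbers) are illustrative, not part of the original derivation:
</font></p>

```python
import math

# Illustrative sketch of the computation in Fig 2: neuron B sums its
# weighted inputs and applies the activation function f(.) to the result.

def neuron_forward(outputs, weights, f=math.tanh):
    """O_B = f(I_B), where I_B = sum over n of W_nB * O_n."""
    i_b = sum(w * o for w, o in zip(weights, outputs))  # summation function
    return f(i_b)                                       # activation f(I_B)

# Two preceding-layer outputs feeding neuron B through a tanh activation
o_b = neuron_forward([0.5, -0.2], [0.8, 0.3])
```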

<p align="center">
<img src="image3.gif"><br>
</p>

<p align="right">
<font class="normal">
(2)<br>
</font>
</p>

<p align="left">
<font class="normal">
where <img src="image4.gif"> is the learning rate parameter, which controls the size of each weight adjustment, and<br>
</font>
</p>

<p align="center">
<img src="image5.gif"><br>
</p>
			
<p>
<font class="normal">
is the sensitivity of the error, <i>E</i><sup><font class="small">2</font></sup>, to the weight <i>W<font class="small"><sub>AB</sub></font></i> and determines the direction of search in weight space for the new weight <i>W<font class="small"><sub>AB(new)</sub></font></i>  as illustrated in the figure below.
</font>
</p>

<p align="center">
<img src="image6.gif">
<br>
<font class="small">
<i>Fig 3 In order to minimise E<font size="-1"><sup>2</sup></font> the delta rule gives the direction of weight change required</i><br>
</font>
</p>
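<p><font class="normal">
A one-weight toy example may make the delta rule of Fig 3 concrete: repeatedly stepping a single weight against the gradient of <i>E</i><sup><font class="small">2</font></sup> drives the error towards zero. The model and numbers below are illustrative, not from the text:
</font></p>

```python
# One-dimensional illustration of the delta rule: step a single weight w
# against the gradient of E^2 = (Pred - Req)^2 for the toy model Pred = w * x.

x, req = 2.0, 1.0        # one input and its required output
w, lr = -0.5, 0.05       # initial weight and learning rate

for _ in range(50):
    error = w * x - req          # E = Pred - Req, as in eqn (1)
    grad = 2.0 * error * x       # dE^2/dw by the chain rule
    w -= lr * grad               # the delta rule step

# w now approaches the error-free solution req / x = 0.5
```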

<p>
<font class="normal">
From the chain rule,<br>
</font>
</p>

<p align="center">
<img src="image7.gif"><br>
</p>

<p align="right">
<font class="normal">
(3)<br>
</font>
</p>

<p>
<font class="normal">
and
</font>
</p>
	
<p align="center">
<img src="image8.gif"><br>
</p>


<p align="right">
<font class="normal">
(4)<br>
</font>
</p>


<p>
<font class="normal">
since the rest of the inputs to neuron <i>B</i> have no dependency on the weight <i>W<font class="small"><sub>AB</sub></font></i>.<br><br>
Thus from eqns. (3) and (4), eqn. (2) becomes,
</font>
</p>

<p align="center">
<img src="image9.gif"><br>
</p>


<p align="right">
<font class="normal">
(5)<br>
</font>
</p>			

<p>
<font class="normal">
and the weight change of <i>W<font class="small"><sub>AB</sub></font></i> depends on the sensitivity of the squared error, <i>E</i><sup><font class="small">2</font></sup>, to the input, <i>I<sub><font class="small">B</font></sub></i>, of unit <i>B</i> and on the input signal <i>O<font class="normal"><sub>A</sub></font></i>.<br><br>
There are two possible situations:<br><br>
	1. <i>B</i> is the output neuron;<br>
	2. <i>B</i> is a hidden neuron.<br><br>

<i><font color="blue">Considering the first case:</font></i><br><br>

Since <i>B</i> is the output neuron, the change in the squared error due to an adjustment of <i>W<font class="small"><sub>AB</sub></font></i> is simply the change in the squared error of the output of <i>B</i>:
</font>
</p>

<p align="center">
<img src="image10.gif"><br>
</p>


<p align="right">
<font class="normal">
(6)<br>
</font>
</p>		

<p>
<font class="normal">	
Combining eqn. (5) with (6) gives,
</font>
</p>

<p align="center">
<img src="image11.gif"><br>
</p>


<p align="right">
<font class="normal">
(7)<br>
</font>
</p>			           

<p>
<font class="normal">
which is the rule for modifying the weights when neuron <i>B</i> is an output neuron.<br><br>
If the output activation function, <font class="curly"><i>f</i></font> (<b>.</b>), is the logistic function then:
</font>
</p>

<p align="center">
<img src="image12.gif"><br>
</p>

<p align="right">
<font class="normal">
(8)<br>
</font>
</p>		

<p>
<font class="normal">
Differentiating (8) with respect to its argument <i>x</i> gives:
</font>
</p>

<p align="center">
<img src="image13.gif"><br>
</p>

<p align="right">
<font class="normal">
(9)<br>
</font>
</p>		

<p>
<font class="normal">
But,
</font>
</p>

<p align="center">
<img src="image14.gif"><br>
</p>

<p align="right">
<font class="normal">
(11)<br>
</font>
</p>		
	
<p>
<font class="normal">
Inserting (11) into (9) gives:
</font>
</p>

<p align="center">
<img src="image15.gif"><br>
</p>

<p align="right">
<font class="normal">
(12)<br>
</font>
</p>

<p>
<font class="normal">
similarly for the tanh function,
</font>
</p>

<p align="center">
<img src="image16.gif"><br>
</p>
	
<p>
<font class="normal">
or for the linear (identity) function,
</font>
</p>

<p align="center">
<img src="image17.gif"><br>
</p>

<p>
<font class="normal">
This gives:
</font>
</p>

<p align="center">
<img src="image18.gif"><br>
</p>	
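<p><font class="normal">
The three derivative forms above can be checked numerically. A minimal pure-Python sketch (all names illustrative), comparing each analytic derivative against a central finite difference:
</font></p>

```python
import math

# Each derivative is expressed through the activation value itself, then
# verified against a numerical (central finite difference) derivative.

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def d_logistic(x):
    fx = logistic(x)
    return fx * (1.0 - fx)          # logistic case, eqn (12)

def d_tanh(x):
    return 1.0 - math.tanh(x) ** 2  # tanh case

def d_linear(x):
    return 1.0                      # linear (identity) case

h = 1e-6
for f, df in [(logistic, d_logistic), (math.tanh, d_tanh), (lambda t: t, d_linear)]:
    numeric = (f(0.7 + h) - f(0.7 - h)) / (2 * h)
    assert abs(numeric - df(0.7)) < 1e-6
```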


<p>
<font class="normal">
<i><font color="blue">Considering the second case:</font></i>
<br><br>
<i>B</i> is a hidden neuron.
</font>
</p>

<p align="center">
<img src="image19.gif"><br>
</p>

<p align="right">
<font class="normal">
(13)<br>
</font>
</p>							

<p>
<font class="normal">
where the subscript, <i>o</i>, represents the output neuron.
</font>
</p>

<p align="center">
<img src="image20.gif"><br>
</p>

<p align="right">
<font class="normal">
(15)<br>
</font>
</p>	

<p>
<font class="normal">
where <i>p</i> is an index ranging over all the neurons (including neuron <i>B</i>) that provide input signals to the output neuron. Expanding the right hand side of equation (15),
</font>
</p>	

<p align="center">
<img src="image21.gif"><br>
</p>

<p align="right">
<font class="normal">
(16)<br>
</font>
</p>	

<p>
<font class="normal">	
since the weights of the other neurons, <i>W<font class="small"><sub>pO</sub></font></i> (<i>p</i>&nbsp;&ne;&nbsp;<i>B</i>), have no dependency on <i>O<font class="small"><sub>B</sub></font></i>.<br><br>
Inserting (14) and (16) into (13),
</font>
</p>

<p align="center">
<img src="image22.gif"><br>
</p>

<p align="right">
<font class="normal">
(17)<br>
</font>
</p>	

<p>
<font class="normal">
Thus <img src="image23.gif"> is now expressed as a function of <img src="image24.gif"> , calculated as in (6).<br><br>
The complete rule for modifying the weight <i>W<sub><font class="small">AB</font></sub></i> between a neuron <i>A</i> sending a signal to a neuron <i>B</i> is,
</font>
</p>

<p align="center">
<img src="image25.gif"><br>
</p>

<p align="right">
<font class="normal">
(18)<br>
</font>
</p>	

<p>
<font class="normal">
where,
</font>
</p>

<p align="center">
<img src="image26.gif"><br>
</p>

		
<p>
<font class="normal">
where <font class="curly"><i>f</i></font><font class="small"><i><sub>o</sub></i></font>(<b>.</b>) and <font class="curly"><i>f</i></font><font class="small"><i><sub>h</sub></i></font>(<b>.</b>) are the output and hidden activation functions respectively.
</font>
</p>
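<p><font class="normal">
The complete rule can be sketched in scalar form in Python. This is a hedged illustration, assuming the constant factor of 2 from differentiating <i>E</i><sup><font class="small">2</font></sup> is absorbed into the learning rate; all names (<i>i_b</i>, <i>w_bo</i>, <i>o_a</i>, ...) are invented for the example:
</font></p>

```python
import math

# Scalar sketch of eqn (18) with the two delta cases above.

def delta_output(error, i_b, d_f_out):
    # Output neuron: sensitivity of the squared error to its input I_B
    return error * d_f_out(i_b)

def delta_hidden(i_b, w_bo, delta_o, d_f_hid):
    # Hidden neuron: the output neuron's delta propagated back through
    # the connecting weight W_BO, as in eqn (17)
    return d_f_hid(i_b) * w_bo * delta_o

def update_weight(w_ab, lr, delta_b, o_a):
    # Eqn (18): W_AB(new) = W_AB - LR * delta_B * O_A
    return w_ab - lr * delta_b * o_a

# Example: a linear output neuron (f' = 1) with error 0.5 and input O_A = 2
d_o = delta_output(0.5, 0.2, lambda x: 1.0)
w_new = update_weight(1.0, 0.1, d_o, 2.0)
```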

<hr>
<p>
<font class="normal">
<font color="red"><i>Example</i></font>
</font>
</p>

<p align="center">
<img src="image27.gif"><br>
</p>


<p>
<font class="normal">
Network Output = [tanh(I<sup><font class="small">T</font></sup>.WI)] . WO
<br><br>
let
<br><br>
HID = [tanh(I<sup><font class="small">T</font></sup>.WI)]<sup><font class="small">T</font></sup>	- the outputs of the hidden neurons
<br><br>
ERROR = (Network Output - Required Output)
<br><br>
LR = learning rate
<br><br>
The weight updates become,
<br><br>
<i>linear output neuron</i>
<br><br>
WO = WO - ( LR x <font color="red">ERROR</font> x <font color="orange">HID</font> )
</font>
</p>

<p align="right">
<font class="normal">
(21)<br>
</font>
</p>

<p>
<font class="normal">
<i>tanh hidden neuron</i>
<br><br>
WI  = WI - { LR x [<font color="red">ERROR x WO x (1- HID<font class="small"><font color="red"><sup>2</sup></font></font>)</font>] . <font color="orange">I<font class="small"><font color="orange"><sup>T</sup></font></font></font>  }<font class="small"><sup>T</sup></font>
</font>
</p>

<p align="right">
<font class="normal">
(22)<br>
</font>
</p>
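<p><font class="normal">
Equations (21) and (22) can be sketched directly in pure Python. This is an illustrative single training step, assuming both gradients are evaluated at the current weights and omitting bias terms; every name and number below is made up for the example:
</font></p>

```python
import math

# Single-step sketch of eqns (21) and (22): one tanh hidden layer,
# one linear output neuron.

LR = 0.1
inp = [0.5, -0.3]                # input vector I
WI = [[0.2, -0.4], [0.7, 0.1]]   # input-to-hidden weights
WO = [0.3, -0.6]                 # hidden-to-output weights

def forward(inp, WI, WO):
    # HID = tanh(I^T . WI); Network Output = HID . WO
    hid = [math.tanh(sum(inp[i] * WI[i][j] for i in range(len(inp))))
           for j in range(len(WO))]
    return hid, sum(h * w for h, w in zip(hid, WO))

hid, pred = forward(inp, WI, WO)
error = pred - 0.8               # ERROR = Network Output - Required Output

# eqn (22): tanh hidden neurons (using WO before it is updated)
for i in range(len(inp)):
    for j in range(len(hid)):
        WI[i][j] -= LR * error * WO[j] * (1 - hid[j] ** 2) * inp[i]

# eqn (21): linear output neuron
WO = [w - LR * error * h for w, h in zip(WO, hid)]
```

A second forward pass after these updates gives a smaller error, as the gradient step intends.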

<p align="">
<font class="normal">
Equations 21 and 22 show that the weight change is an <font color="orange">input signal</font> multiplied by a <font color="red">local gradient</font>. This gives a direction whose magnitude also depends on the magnitude of the error. If only the direction were used, all changes would be of equal size, determined solely by the learning rate.
<br><br>
The algorithm above is a simplified version in that there is only one output neuron. In the original algorithm more than one output is allowed, and the gradient descent minimises the total squared error of all the outputs. With only one output this reduces to minimising the squared error of that output.
<br><br> 
Many algorithms have evolved from the original with the aim of increasing the learning speed. These are summarised in [5].
</font>
</p>
<hr>
<p align="">
<font class="normal">
<i><font color="red">References</font></i>
<br><br>
[<a name="ref1">1</a>] P. Brierley, Appendix A in "Some Practical Applications of Neural Networks in the Electricity Industry", Eng.D. Thesis, 1998, Cranfield University, UK.
<br><br>
[<a name="ref2">2</a>] P. Brierley and B. Batty, "Data mining with neural networks - an applied example in understanding electricity consumption patterns", in "Knowledge Discovery and Data Mining" (ed. Max Bramer), 1999, chapter 12, pp. 240-303, IEE, ISBN 0 85296 767 5.
<br><br>
[<a name="ref3">3</a>] P.J. Werbos, "Beyond regression: New tools for prediction and analysis in the behavioural sciences", Ph.D. Thesis, 1974, Harvard University, Cambridge, MA.
<br><br>
[<a name="ref4">4</a>] D.E. Rumelhart, G.E. Hinton and R.J. Williams, "Learning internal representations by error propagation", in Parallel Distributed Processing: Explorations in the Microstructure of Cognition (D.E. Rumelhart and J.L. McClelland, eds.), 1986, vol. 1, chapter 8, Cambridge, MA, MIT Press.
<br><br>
[5] "Back Propagation family album", Technical report C/TR96-05, Department of Computing, Macquarie University, NSW, Australia. www.comp.mq.edu.au/research.html

</font>
</p>




</font></td></tr></tbody></table>



</body></html>