<html xmlns:wicket="http://wicket.apache.org">
<head>
</head>
<body>
<wicket:extend>
						<div class="es-detailedProfile es-text">
                        	<!-- Publications attribute -->
                        	<div class="es-profileGroup">
                            	<!-- heading for Publications -->
                            	<div class="es-profileSubHead"><h2>Publications</h2></div>
                                <!-- repeating element: one publication entry -->
                                <div wicket:id="jitem" class="es-attrbDetail">
                                	<div wicket:id="jtitle" class="es-attrbTitle">A self-organising network that grows when required.</div>
                                    <div wicket:id="jauthor" class="es-attrbProp1">Stephen Marsland, Jonathan Shapiro, and Ulrich Nehmzow</div>
                                    <div wicket:id="jjournal" class="es-attrbProp2" style="font-style:italic;">Neural Networks, 15(8-9):1041-1058, 2002. </div>
                                    <div class="es-attrbProp2"><a wicket:id="jlink" href="#">Link to full paper</a></div>
                                    <div wicket:id="jabs" class="es-attrbText">
                                    	<!-- user filled HTML details -->
                                        <p>
                                        	The ability to grow extra nodes is a potentially useful facility for a self-organising neural network. A network that can add nodes into its map space can approximate the input space more accurately, and often more parsimoniously, than a network with predefined structure and size, such as the Self-Organising Map. In addition, a growing network can deal with dynamic input distributions. Most of the growing networks that have been proposed in the literature add new nodes to support the node that has accumulated the highest error during previous iterations, or to support topological structures. This usually means that new nodes are added only when the number of iterations is an integer multiple of some predefined constant. This paper suggests a way in which the learning algorithm can add nodes whenever the network in its current state does not sufficiently match the input. In this way the network grows very quickly when new data is presented, but stops growing once the network has matched the data. This is particularly important when we consider dynamic data sets, where the distribution of inputs can change to a new regime after some time. We also demonstrate the preservation of neighbourhood relations in the data by the network. The new network is compared to an existing growing network, the Growing Neural Gas (GNG), on an artificial dataset, showing how the network deals with a change in input distribution after some time. Finally, the new network is applied to several novelty detection tasks and is compared with both the GNG and an unsupervised form of the Reduced Coulomb Energy network on a robotic inspection task, and with a Support Vector Machine on two benchmark novelty detection tasks.
                                        </p>
                                        <p>
                                        	Author Keywords: Unsupervised learning; Self-organisation; Growing networks; Topology preservation; Novelty detection; Dimensionality reduction; Mobile robotics; Inspection
                                        </p>
                                        <!-- user filled HTML details -->
                                    </div>
                                </div>
                            </div>
                        </div>        

</wicket:extend>
</body>
</html>