<?php
/**
 * <https://y.st./>
 * Copyright © 2019 Alex Yst <mailto:copyright@y.st>
 * 
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 * 
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 * 
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org./licenses/>.
**/

$xhtml = array(
	'<{title}>' => 'Failed neural network: evaluation',
	'<{subtitle}>' => 'Written in <span title="Data Mining and Machine Learning">CS 4407</span> by <a href="https://y.st./en/">Alexand(er|ra) Yst</a>, finalised and <a href="https://y.st./en/coursework/CS4407/Failed_neural_network~_evaluation.xhtml">archived</a> on 2019-03-20',
	'<{copyright year}>' => '2019',
	'takedown' => '2017-11-01',
	'<{body}>' => <<<END
<p>
	We were supposed to get our neural networks to reach a point in which they had no more than <code>0.05</code> error.
	I wasn&apos;t able to figure out how to do that last week.
	I ended up with about five times the allowed error rate, and couldn&apos;t get it any lower.
	I had an insufficient understanding of how to use Basic Prop, and Web searches on the parameters didn&apos;t turn up anything useful, as I didn&apos;t know exactly what I was looking for.
	This week, our reading assignment actually covered what the learning rate and momentum are, but we didn&apos;t have that information last week.
	We were given a link to the proprietary Basic Prop application, then told to use it without any explanation.
	The equations used internally by a neural network were presented to us that week, but none of those equations are even exposed in the interface of Basic Prop; they&apos;re applied under the hood, where we can&apos;t see them.
	In other words, we had no knowledge and were expected to just go in blind.
	Having graded the work of other students from last week and having asked for help from a student outside of class, I was also able to get further information on where I went wrong, so if this exercise were repeated, I could do better.
	However, this write-up is on the failed project I submitted last week.
</p>
<h2>Disclaimer</h2>
<p>
	This paper is on my experiences using Basic Prop, which we were required to use for class.
	It should in no way be taken as an endorsement of Basic Prop.
	Instead, I&apos;d recommend neural network software under a better license, such as any of the following:
</p>
<ul>
	<li>
		<a href="https://briansimulator.org/">Brian</a>
	</li>
	<li>
		<a href="https://genn-team.github.io/genn/">$a[GPU]-Enhanced Neuronal Network</a>
	</li>
	<li>
		<a href="https://grey.colorado.edu/emergent/index.php/Main_Page">emergent</a>
	</li>
	<li>
		<a href="https://ioam.github.io/topographica/">Topographica</a>
	</li>
	<li>
		<a href="https://leenissen.dk/fann/wp/">Fast Artificial Neural Network Library</a>
	</li>
	<li>
		<a href="https://moose.ncbs.res.in/">Multiscale Object-Oriented Simulation Environment</a>
	</li>
	<li>
		<a href="http://nest-initiative.org/">Neural Simulation Technology Initiative</a>
	</li>
	<li>
		<a href="https://neuron.yale.edu/neuron/">Neuron</a>
	</li>
	<li>
		<a href="https://retina.anatomy.upenn.edu/~rob/neuronc.html">NeuronC</a>
	</li>
	<li>
		<a href="https://sccn.ucsd.edu/~arno/spikenet/">SpikeNET</a>
	</li>
	<li>
		<a href="https://simbrain.net/">Simbrain</a>
	</li>
	<li>
		<a href="https://sourceforge.net/projects/xnbc/">X Neuro Bio Cluster</a>
	</li>
	<li>
		<a href="https://tedlab.mit.edu/~dr/Lens/">light, efficient network simulator</a>
	</li>
	<li>
		<a href="https://web.stanford.edu/group/pdplab/resources.html">PDPTool</a>
	</li>
	<li>
		<a href="https://www.cs.cmu.edu/~dst/HHsim/">Graphical Hodgkin-Huxley Simulator</a>
	</li>
	<li>
		<a href="https://www.heatonresearch.com/encog/">Encog Machine Learning Framework</a>
	</li>
	<li>
		<a href="https://www.ini.uzh.ch/~amw/seco/cx3d/">Cortex simulation in {$a['3D']}</a>
	</li>
	<li>
		<a href="http://cbcl.mit.edu/jmutch/cns/">Cortical Network Simulator</a>
	</li>
	<li>
		<a href="http://genesis-sim.org/">General Neural Simulation System</a>
	</li>
	<li>
		<a href="http://ilab.usc.edu/toolkit/">iLab Neuromorphic Vision C++ Toolkit</a>
	</li>
	<li>
		<a href="http://psics.org/">Parallel Stochastic Ion Channel Simulator</a>
	</li>
	<li>
		<a href="http://torch5.sourceforge.net/">Torch 5</a>
	</li>
	<li>
		<a href="http://www.lsm.tugraz.at/pcsim/">Parallel Neural Circuit Simulator</a>
	</li>
	<li>
		... and many more.
	</li>
</ul>
<p>
	There&apos;s no shortage of freely-licensed solutions available.
</p>
<h2>Learning steps</h2>
<p>
	I worked on tuning the parameters before dealing with different numbers of learning steps, but there were issues with the learning steps that are helpful to address before I move on to anything else.
</p>
<p>
	At no point during this course have we been given any information whatsoever on using Basic Prop.
	We were handed this proprietary software that we - or at least I - had never encountered.
	Then we were asked to dive right in without even basic instruction.
	One thing that threw off my results was that when you change the step count and re-run the learning process - or even just run the same learning process a second time - the old learning from the previous run is kept and added to.
	You can change the parameters, you can change the number of learning steps, et cetera, but it all builds iteratively onto the same model.
	This is highly counter-intuitive to someone who knows nothing about this software: changing the number of learning steps doesn&apos;t change the total number used, only how many will be added to the current model the next time you run the process.
	You have to deliberately clear the model to get a clean workspace each time.
	Because of that, most of my initial models all ran together into one, and no usable information was gathered.
	I only noticed the problem because certain parameter settings ruined the model, and even going back to the defaults wouldn&apos;t yield a usable model after that.
	More on that later.
</p>
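<p>
	The accumulating behaviour described above can be sketched as follows; this is a hypothetical illustration in Python, not Basic Prop&apos;s actual code, and the <code>Trainer</code> class and its methods are invented for the example.
</p>

```python
# Hypothetical sketch of the behaviour described above: each training run
# builds on whatever the model has already learned, unless it is reset.
# The Trainer class and its methods are invented for illustration only.

class Trainer:
    def __init__(self):
        self.total_steps = 0  # training already absorbed by the model

    def run(self, steps):
        # Re-running does not start over; it adds to the existing model.
        self.total_steps += steps

    def reset(self):
        # Only an explicit reset gives a clean workspace.
        self.total_steps = 0

t = Trainer()
t.run(5000)
t.run(5000)           # the model has now seen 10000 steps, not 5000
assert t.total_steps == 10000
t.reset()
t.run(5000)           # after a reset, training really does start fresh
assert t.total_steps == 5000
```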
<p>
	Once I found out the models were running together, and how to prevent it, I got much cleaner and more-repeatable results.
	Because the assignment asked for a screenshot of the error-reduction graph, and because Basic Prop has no way to display more than the last five thousand learning steps on its error graph, I needed to run the learning process for exactly five thousand steps at first.
	That allowed the beginning of the graph, which was the most visually-useful part of the graph, to be captured.
	Later parts of the graph stay pretty straight and don&apos;t change much.
	Some further training could be done, so I ran the simulation for an additional one hundred thousand learning steps, as that was the highest number of steps that could be completed in a single training session.
	Further training beyond that point was possible by running the training process more times, but didn&apos;t seem to have a positive effect on my numbers, so I stuck with the data captured at that point, which was after a total of one hundred five thousand learning steps had taken place.
</p>
<h2>Parameters</h2>
<p>
	As mentioned above, we weren&apos;t presented with information on what the learning rate and the momentum are.
	My Web-searching skills were also inadequate for finding this information.
	As such, finding semi-working results was entirely a process of trial and error.
	I tried every possible combination of learning rate and momentum, but as mentioned above, my tests ended up all smudged together due to my inadequate knowledge of this piece of software, which I&apos;d never seen before and which we were given zero instructions on.
	The default numbers seemed to work okay, though not nearly well enough.
	However, other numbers tended to produce models in which the error rate stabilised at very high numbers.
	The error progress graph developed odd bands at this point, similar to the following:
</p>
<img src="/img/CC_BY-SA_4.0/y.st./coursework/CS4407/error_progress_bands.png" alt="Bands in the error progress graph" class="framed-centred-image" width="381" height="146"/>
<p>
	Once that had happened though, even the default parameter values wouldn&apos;t function.
	The bands stayed and the error rate remained high.
	I eventually figured out that my models were tainted, as mentioned in the previous section, and that none of the information I&apos;d gathered was of any use whatsoever.
	Once a band-generating set of parameters was hit, the model became ruined, and any further parameters I used would continue to display those stable bands.
	Even if there had been a way to salvage such models, there wasn&apos;t a way for me to salvage the data about how I&apos;d reached that state.
	Every test I&apos;d ever run got lumped together, so they all would have contributed to the state I&apos;d reached.
	The only choice I had was to start over.
</p>
<p>
	I lost a lot of time to debugging the issue, and I lost even more to performing all these experiments without knowing the data would be lost.
	If I&apos;d had more time, I could have tried every combination of learning rate and momentum again, but instead, I fiddled with them one at a time, leaving the other at its default as I did so.
	I didn&apos;t have time to go as in-depth as I had the first time.
</p>
<h3>Learning rate</h3>
<p>
	I found that when the learning rate was too high, my error graph was all over the place.
	It pretty much just looked like a field of static.
	However, if my learning rate was too low, answers weren&apos;t found as quickly.
	There was no real issue with having a low learning rate as long as I threw more training at the model to compensate.
	The default value, <code>0.3</code>, seemed like a good compromise.
	If it was much higher, past knowledge wouldn&apos;t be taken into account as much as it should, meaning that signs of convergence wouldn&apos;t emerge.
	I seemed to get slightly better convergence with the <code>0.3</code> learning rate than I did for smaller values, though I wasn&apos;t sure why.
	At this point, I&apos;m thinking it was more just chance, than anything, or perhaps the fact that I tried the default values more than any other values, so there were more opportunities for a good result to present itself.
</p>
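<p>
	The standard gradient-descent weight update illustrates the behaviour I saw; the following is a generic Python sketch minimising a one-weight function, not Basic Prop&apos;s internals, which I never got to see.
</p>

```python
# Generic gradient descent on f(w) = w**2 (gradient 2*w), showing how the
# learning rate changes behaviour: too low converges slowly, too high
# overshoots the minimum at 0 on every step and diverges.

def descend(lr, steps=50, w=1.0):
    for _ in range(steps):
        w -= lr * 2 * w  # update: w <- w - learning_rate * gradient
    return abs(w)        # distance from the minimum at w = 0

print(descend(0.01))  # too low: still noticeably far from the minimum
print(descend(0.3))   # the default-like value: converges quickly
print(descend(1.1))   # too high: blows up, like my static-filled graphs
```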
<h3>Momentum</h3>
<p>
	Again, I tried the default momentum more times than any other while I was still trying to figure out how the parameters worked.
	With more attempts, my best results ended up using the default momentum rather than a lower one.
	As for higher momentums, they turned out to be the culprit when it came to the banding I mentioned before.
	If momentum was too high, it seemed to irreversibly damage my model.
</p>
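<p>
	The classical momentum update - again a generic Python sketch, not Basic Prop&apos;s actual code - shows how a too-high momentum keeps old velocity alive so the weights swing back and forth instead of settling, consistent with the persistent bands I saw.
</p>

```python
# Classical momentum: v <- m*v - lr*gradient ; w <- w + v.
# On f(w) = w**2, high momentum keeps the weight oscillating around the
# minimum at 0 long after a moderate momentum would have settled.

def descend(momentum, lr=0.3, steps=200, w=1.0):
    v = 0.0
    tail = []
    for _ in range(steps):
        v = momentum * v - lr * 2 * w  # velocity accumulates past updates
        w += v
        tail.append(abs(w))
    return max(tail[-20:])  # worst distance from the minimum near the end

print(descend(0.5))   # moderate momentum: settled long ago
print(descend(0.99))  # very high momentum: still swinging at step 200
```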
<h2>Hidden layers</h2>
<p>
	Having spent such a large chunk of my time debugging the strange bands that had showed up in my model, as well as on wasted tests that were pulled into the banded models I&apos;d worked with, I didn&apos;t have time to try out each network configuration combined with each learning rate and each momentum value.
	That would have been 11 * 8 * 10 * 11 (9680) models to try out, and that doesn&apos;t include models with bias nodes and networks with no hidden layers, nor does it include the then-mysterious &quot;SRN&quot; option.
	Again, we didn&apos;t learn about simple recurrent networks until the week after we had this exercise, so we didn&apos;t really know what this option was for.
	I tried out the SRN option, but didn&apos;t get good results, so I focussed on the feed-forward networks with hidden layers.
	Without time to try every possible combination of values, I still tried every network configuration, but I used only one set of parameter values.
	As mentioned above, and for the reasons mentioned above, these were the default values.
</p>
<p>
	As expected, using no hidden layers at all led to pretty bad results.
	Using one hidden layer didn&apos;t produce great results, no matter the size of the hidden layer, but they were much better than those with no hidden layers.
	Using two hidden layers, which was the maximum allowed in Basic Prop, provided the best results.
</p>
<p>
	I ran through each combination of hidden layer sizes, from having one node in each hidden layer to having ten nodes, the maximum, in each layer.
	That&apos;s a total of one hundred combinations.
	As I said, I had no idea what I was doing, so I had to try as much as I could and see what results I could get.
	My understanding at this point is that random chance, in the form of the starting weights of the network, plays a large role in the outcome.
	It seems a network with ten nodes in each layer has a higher chance of success than other configurations.
	However, when I tried each combination once, the test using nine nodes in the first hidden layer and eight in the second yielded the best results.
	From there, I tried to refine the results by continually resetting the simulation and running it again.
	The random starting weights changed each time, so I presented an instance in which I&apos;d gotten better results than in most other runs.
</p>
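<p>
	Repeatedly resetting and re-running the simulation amounts to random restarts: the same training procedure, started from different random weights, reaches a different final error each time, and the best run is kept. A generic Python sketch, with the objective function chosen purely for illustration:
</p>

```python
import random

# Random restarts: gradient descent on f(w) = (w*w - 1)**2, started from
# random weights; each run ends with a different error, and the best run
# is the one that would have been presented.

def train(w, lr=0.05, steps=100):
    for _ in range(steps):
        grad = 4 * w * (w * w - 1)  # derivative of (w^2 - 1)^2
        w -= lr * grad
    return (w * w - 1) ** 2  # final error of this run

random.seed(0)  # fixed seed so the sketch is repeatable
errors = [train(random.uniform(-2, 2)) for _ in range(10)]
print(min(errors))  # the best of ten restarts
```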
<h2>Final results</h2>
<p>
	I managed to get the average error level down under <code>0.25</code>, but that was five times the allowed error of <code>0.05</code>.
	I was really beating myself up over it, but I&apos;d spent a lot of hours on my simulation, and had no more time to give.
	I started compiling my results, including the error rates of each individual input/output sequence.
	What I found really surprised me.
	Almost all outputs were within <code>0.2</code> of where they should be; well within the tolerable range.
	I looked through the data, point by point, and found what was throwing my average error off so badly.
	Two specific output sequences had two outputs each that were off by <code>0.49</code> each.
	It was a mystery at the time how only four outputs could throw off the average so badly, but by looking through the data, I found that when rounded to <code>0</code> or <code>1</code>, all the guessed outputs were perfect.
	The only problem was that these four outputs were very close to the halfway point instead of near their respective integers.
	I had no idea how to fix this though.
</p>
<p>
	After this week, I understand why the average error was so far off.
	The average error is actually based on the average <strong>*squared*</strong> error.
	That means that the larger the error, the more it counts against you.
	An error of <code>0.49</code> isn&apos;t a little over twenty-four times as hard on your average error as an error of <code>0.02</code>; it&apos;s a little over <strong>*six hundred times*</strong> as hard on it!
	Squaring is a powerful thing.
</p>
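<p>
	The arithmetic is easy to check:
</p>

```python
# Squaring the errors: an error 24.5 times larger contributes roughly
# 600 times as much to the average squared error, since 24.5**2 = 600.25.
big, small = 0.49, 0.02

print(big / small)            # ratio of the raw errors: about 24.5
print(big ** 2 / small ** 2)  # ratio of the squared errors: about 600.25
```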
<h2>Conclusion</h2>
<p>
	To sum it up, my results were terrible because I had no idea what I was doing.
	I did my best using trial and error, but we were never given any instruction on how to use Basic Prop successfully.
	I also had no knowledge of what the parameters were supposed to mean, as my Web-searching skills are sub-par when I don&apos;t have at least some idea of what I&apos;m looking for.
	And to top it off, we were finally given the information on what the parameters did in class, but only the week <strong>*after*</strong> our work on our neural networks was due, when such information could no longer be of help on our projects.
	Pretty much everything that could go wrong did go wrong, due to my own lack of skill and experience in this area.
	My failure to get a satisfactory simulation was entirely on me.
	However, with a better course structure - or rather, a presentation of the information we got this week last week instead - even <strong>*I*</strong> might have stood some chance of success.
</p>
END
);
