<?php
/**
 * <https://y.st./>
 * Copyright © 2019 Alex Yst <mailto:copyright@y.st>
 * 
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 * 
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 * 
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org./licenses/>.
**/

$xhtml = array(
	'<{title}>' => 'Failed neural network',
	'<{subtitle}>' => 'Written in <span title="Data Mining and Machine Learning">CS 4407</span> by <a href="https://y.st./">Alexand(er|ra) Yst</a>, finalised on 2019-03-13',
	'<{copyright year}>' => '2019',
	'takedown' => '2017-11-01',
	'<{body}>' => <<<END
<p>
	We were supposed to get our neural networks to a point at which they had no more than <code>0.05</code> error.
	I wasn&apos;t able to figure out how to do that this week.
	I ended up with about five times the allowed error rate, and couldn&apos;t get it any lower.
</p>
<h2>Part 0: the pattern file</h2>
<p>
	We were asked to take letters and numbers on a seven-segment digital display and translate them to their seven-bit $a[ASCII] counterparts.
	Regardless of the case used on the display, we were to use the upper-case $a[ASCII] code, and only seven distinct letters (along with all the digits) were to be accounted for.
	That sounds easy enough, right?
	We were given a specific encoding to use to identify the lit $a[LED]s in the display.
	Intuitively, they should be numbered in some easy-to-deduce order: top to bottom, bottom to top, clockwise, or counter-clockwise.
	Instead, the $a[LED]s were assigned indices according to a slightly obtuse pattern: clockwise along the top, the right, and the bottom; then counter-clockwise down the left; and finally the centre $a[LED].
	So if the input patterns seem a bit odd, that&apos;s why.
	To make it easier to understand, see this diagram of the $a[LED] positions:
</p>
<pre> -0- 
|   |
4   1
|   |
 -6- 
|   |
5   2
|   |
 -3- </pre>
<p>
	With those requirements in mind, I constructed the following pattern file:
</p>
<blockquote>
<pre><code>Number of patterns = 17
Number of inputs = 7
Number of outputs = 7
[Patterns]
1 1 1 1 1 1 0	0 1 1 0 0 0 0
0 1 1 0 0 0 0	0 1 1 0 0 0 1
1 1 0 1 0 1 1	0 1 1 0 0 1 0
1 1 1 1 0 0 1	0 1 1 0 0 1 1
0 1 1 0 1 0 1	0 1 1 0 1 0 0
1 0 1 1 1 0 1	0 1 1 0 1 0 1
1 0 1 1 1 1 1	0 1 1 0 1 1 0
1 1 1 0 0 0 0	0 1 1 0 1 1 1
1 1 1 1 1 1 1	0 1 1 1 0 0 0
1 1 1 0 1 0 1	0 1 1 1 0 0 1
1 1 1 0 1 1 1	1 0 0 0 0 0 1
0 0 1 1 1 1 1	1 0 0 0 0 1 0
1 0 0 1 1 1 1	1 0 0 0 0 1 1
0 1 1 1 0 1 1	1 0 0 0 1 0 0
1 0 0 1 1 1 1	1 0 0 0 1 0 1
1 0 0 0 1 1 1	1 0 0 0 1 1 0
0 1 1 0 1 1 1	1 0 0 1 0 0 0</code></pre>
</blockquote>
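<p>
	For reference, the target columns in each row are simply the bits of the seven-bit $a[ASCII] code, most significant bit first.
	A minimal Python sketch of decoding one row (the row shown is the first pattern above):
</p>

```python
# Decode one pattern-file row: the first seven fields are the segment
# states (indexed as in the diagram above) and the last seven are the
# bits of the ASCII code, most significant bit first.
row = '1 1 1 1 1 1 0   0 1 1 0 0 0 0'  # the first pattern in the file
fields = row.split()
segments = fields[:7]               # which segments are lit
code = int(''.join(fields[7:]), 2)  # 0110000 in binary is 48
print(chr(code))  # prints '0': all segments but the centre lit is a zero
```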
<h2>Part 1: building the model</h2>
<p>
	As I said above, I wasn&apos;t able to get my model working correctly.
	The best model I could find, though, used two hidden layers: nine nodes in the first and eight in the second:
</p>
<img src="/img/CC_BY-SA_4.0/y.st./coursework/CS4407/network_configuration.png" alt="Network Configuration" class="framed-centred-image" width="300" height="400"/>
<p>
	I used the default learning rate (<code>0.3</code>) and default momentum (<code>0.8</code>).
	I found that whenever I strayed from these defaults, my results got even worse than they already were.
	Weight range didn&apos;t seem to have a consistent effect on my results, so I left that at the default too (<code>-1</code> to <code>1</code>).
	The one thing that did seem to help was boosting the number of learning steps.
	Basic Prop can&apos;t show the full graph if you use more than the default number of learning steps, so my graph below shows only the first five thousand; after taking the screenshot, I ran the simulation for another one hundred thousand learning steps.
</p>
<img src="/img/CC_BY-SA_4.0/y.st./coursework/CS4407/error_progress.png" alt="Error progress" class="framed-centred-image" width="381" height="146"/>
<p>
	When I was done, my weights file looked like this:
</p>
<blockquote>
<pre><code>[Weights]
Number of layers = 3
[Layer0]
Number to = 9
Number from = 8
0  -2.63  3.666  3.158  0.021  -3.593  -3.463  -0.46
0  5.169  0.031  1.214  1.366  3.308  -5.093  -2.685
0  6.463  -1.905  3.15  6.508  1.76  -5.595  -5.61
0  -4.857  -1.31  1.582  0.195  2.399  1.109  -0.941
0  1.897  4.137  -2.611  1.446  -3.786  -3.948  -1.679
0  -4.78  4.814  -4.538  -3.504  7.47  1.082  2.301
0  -2.941  -2.706  -0.477  -1.438  2.568  6.363  -1.411
0  -0.373  -6.367  -4.049  1.612  1.446  3.031  1.099
0  3.318  -3.201  4.098  -0.792  4.775  -5.739  -4.534
[Layer1]
Number to = 8
Number from = 10
0  1.88  2.615  2.497  0.995  3.42  -1.275  -3.31  -5.343  3.314
0  -1.549  -3.12  -5.432  0.351  1.31  0.994  -1.862  4.808  -3.294
0  0.358  -1.603  -7.758  -0.075  -2.234  3.275  4.483  1.855  -5.098
0  1.307  -3.102  2.071  0.753  -3.539  5.184  -2.295  -7.416  -3.157
0  -3.691  -4.347  -5.998  4.027  3.124  0.702  0.643  2.411  -2.497
0  1.852  -0.846  -1.22  -4.043  5.649  -3.032  -3.681  0.08  -1.864
0  6.081  1.298  -6.714  -0.098  -2.684  -7.312  1.447  4.976  4.807
0  2.146  1.828  2.221  -3.778  0.415  -2.425  -6.96  -1.344  0.942
[Layer2]
Number to = 7
Number from = 9
0  -6.596  1.432  8.25  1.325  1.758  -0.507  -2.03  -2.216
0  6.455  -2.355  -8.169  -1.252  -2.036  1.402  2.125  2.211
0  6.615  -1.289  -8.148  -1.246  -2.749  1.08  2.067  2.263
0  -5.169  -1.385  -8.871  9.843  4.691  -5.698  -8.01  2.566
0  -4.326  2.663  -0.124  -4.285  -6.248  -1.402  15.08  -3.322
0  -4.625  4.968  0.039  -9.84  2.218  11.92  6.015  -4.523
0  -4.937  -6.194  8.195  -1.979  -9.889  0.447  -3.034  12.446</code></pre>
</blockquote>
<p>
	My console output from testing each input, then all inputs, was as follows:
</p>
<blockquote>
<pre><samp>Pattern: &quot; 1,  1,  1,  1,  1,  1,  0   &gt;&gt;&gt;   0,  1,  1,  0,  0,  0,  0 &quot;
Result: &quot; 0, 1, 1, 0.01, 0.01, 0, 0.01 &quot;
Pattern: &quot; 0,  1,  1,  0,  0,  0,  0   &gt;&gt;&gt;   0,  1,  1,  0,  0,  0,  1 &quot;
Result: &quot; 0, 1, 1, 0.01, 0.01, 0.01, 1 &quot;
Pattern: &quot; 1,  1,  0,  1,  0,  1,  1   &gt;&gt;&gt;   0,  1,  1,  0,  0,  1,  0 &quot;
Result: &quot; 0, 1, 1, 0, 0, 1, 0 &quot;
Pattern: &quot; 1,  1,  1,  1,  0,  0,  1   &gt;&gt;&gt;   0,  1,  1,  0,  0,  1,  1 &quot;
Result: &quot; 0, 1, 1, 0, 0.01, 0.99, 1 &quot;
Pattern: &quot; 0,  1,  1,  0,  1,  0,  1   &gt;&gt;&gt;   0,  1,  1,  0,  1,  0,  0 &quot;
Result: &quot; 0, 1, 1, 0.01, 1, 0, 0 &quot;
Pattern: &quot; 1,  0,  1,  1,  1,  0,  1   &gt;&gt;&gt;   0,  1,  1,  0,  1,  0,  1 &quot;
Result: &quot; 0, 1, 1, 0, 0.99, 0.02, 0.99 &quot;
Pattern: &quot; 1,  0,  1,  1,  1,  1,  1   &gt;&gt;&gt;   0,  1,  1,  0,  1,  1,  0 &quot;
Result: &quot; 0.01, 0.99, 0.99, 0, 1, 0.98, 0.01 &quot;
Pattern: &quot; 1,  1,  1,  0,  0,  0,  0   &gt;&gt;&gt;   0,  1,  1,  0,  1,  1,  1 &quot;
Result: &quot; 0, 1, 1, 0, 0.99, 1, 0.99 &quot;
Pattern: &quot; 1,  1,  1,  1,  1,  1,  1   &gt;&gt;&gt;   0,  1,  1,  1,  0,  0,  0 &quot;
Result: &quot; 0.01, 0.99, 0.99, 0.99, 0, 0, 0.01 &quot;
Pattern: &quot; 1,  1,  1,  0,  1,  0,  1   &gt;&gt;&gt;   0,  1,  1,  1,  0,  0,  1 &quot;
Result: &quot; 0, 1, 1, 0.99, 0, 0, 1 &quot;
Pattern: &quot; 1,  1,  1,  0,  1,  1,  1   &gt;&gt;&gt;   1,  0,  0,  0,  0,  0,  1 &quot;
Result: &quot; 1, 0, 0, 0.01, 0.01, 0, 0.99 &quot;
Pattern: &quot; 0,  0,  1,  1,  1,  1,  1   &gt;&gt;&gt;   1,  0,  0,  0,  0,  1,  0 &quot;
Result: &quot; 1, 0, 0, 0.01, 0.01, 0.99, 0.01 &quot;
Pattern: &quot; 1,  0,  0,  1,  1,  1,  1   &gt;&gt;&gt;   1,  0,  0,  0,  0,  1,  1 &quot;
Result: &quot; 0.98, 0.02, 0.02, 0.01, 0.49, 0.51, 0.98 &quot;
Pattern: &quot; 0,  1,  1,  1,  0,  1,  1   &gt;&gt;&gt;   1,  0,  0,  0,  1,  0,  0 &quot;
Result: &quot; 0.99, 0.01, 0, 0, 0.99, 0.01, 0.01 &quot;
Pattern: &quot; 1,  0,  0,  1,  1,  1,  1   &gt;&gt;&gt;   1,  0,  0,  0,  1,  0,  1 &quot;
Result: &quot; 0.98, 0.02, 0.02, 0.01, 0.49, 0.51, 0.98 &quot;
Pattern: &quot; 1,  0,  0,  0,  1,  1,  1   &gt;&gt;&gt;   1,  0,  0,  0,  1,  1,  0 &quot;
Result: &quot; 1, 0, 0, 0, 1, 1, 0.01 &quot;
Pattern: &quot; 0,  1,  1,  0,  1,  1,  1   &gt;&gt;&gt;   1,  0,  0,  1,  0,  0,  0 &quot;
Result: &quot; 1, 0, 0, 0.99, 0, 0, 0.01 &quot;
Test All: Average per pattern error: 0.24343323678954018</samp></pre>
</blockquote>
<p>
	As you can see, each input was matched to the correct output.
	Mostly.
	The outputs were supposed to be binary, but the network&apos;s guesses were frequently off by up to <code>0.02</code>.
	If rounded, these answers would be perfect, but the network doesn&apos;t allow for any sort of rounding.
	Additionally, at the bottom, you can see that the average per-pattern error was <code>0.24343323678954018</code>.
	Looking more closely at the data, you can see that there were two patterns with anomalous results.
	Both of them made a guess of <code>0.49</code> where a guess of <code>0</code> should have been made and a guess of <code>0.51</code> where a guess of <code>1</code> should have been made.
	These are the only two cases in which guesses were off by more than <code>0.02</code>, but they were <strong>so</strong> far off that they threw off the statistics of the model as a whole.
	Though I tried repeatedly throughout the week, I couldn&apos;t get my model to behave and produce more-reasonable numbers.
	I&apos;m not sure what I was doing wrong, but I&apos;m hoping to learn how to do it better when I get to grade your work in the coming week.
	I&apos;m hoping you have some insight that I missed when I tried this myself.
</p>
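<p>
	One sanity check that I didn&apos;t think to run on my pattern file: look for identical input vectors mapped to different target vectors, since no network can drive the error on such a pair to zero no matter what topology or parameters you pick.
	A minimal Python sketch of that check (the three rows shown are consecutive rows copied from the pattern file above; in practice, read all seventeen rows from the file itself):
</p>

```python
# Flag any input vector that appears with more than one target vector;
# a network cannot satisfy both targets at once.  The rows here are a
# three-row excerpt of the pattern file; swap in the full file as needed.
rows = [
    '1 0 0 1 1 1 1   1 0 0 0 0 1 1',
    '0 1 1 1 0 1 1   1 0 0 0 1 0 0',
    '1 0 0 1 1 1 1   1 0 0 0 1 0 1',
]
targets_by_input = {}
for row in rows:
    fields = row.split()
    inputs, target = tuple(fields[:7]), tuple(fields[7:])
    targets_by_input.setdefault(inputs, set()).add(target)
conflicts = sorted(inputs for inputs, targets in targets_by_input.items()
                   if len(targets) != 1)
for inputs in conflicts:
    print('conflicting targets for input:', ' '.join(inputs))
    # prints: conflicting targets for input: 1 0 0 1 1 1 1
```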
END
);
