<?php
/**
 * <https://y.st./>
 * Copyright © 2019 Alex Yst <mailto:copyright@y.st>
 * 
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 * 
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 * 
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org./licenses/>.
**/

$xhtml = array(
	'<{title}>' => 'class::knn()',
	'<{subtitle}>' => 'Written in <span title="Data Mining and Machine Learning">CS 4407</span> by <a href="https://y.st./">Alexand(er|ra) Yst</a>, finalised on 2019-02-27',
	'<{copyright year}>' => '2019',
	'takedown' => '2017-11-01',
	'<{body}>' => <<<END
<p>
	We&apos;re given six data points.
	First, group A contains the points (0, 0), (1, 1), and (2, 2).
	Second, group B contains the points (5.5, 5.5), (5, 6), and (6, 5).
</p>
<h2>Arrangement</h2>
<p>
	First, we&apos;re asked to arrange the points on a graph.
	We get the following:
</p>
<img src="/img/CC_BY-SA_4.0/y.st./coursework/CS4407/figure_4-0.png" alt="Figure 4-0" class="framed-centred-image" width="677" height="699"/>
<p>
	As we can see, group A forms a short path near the origin, in which <var>y</var> is equal to <var>x</var>.
	Group B forms a shorter, more concentrated path perpendicular to that, in which <var>y</var> is equal to eleven minus <var>x</var>.
	Actually creating this graph is a bit quicker if we first define a couple of variables that are introduced in the next section, so we&apos;ll cover the code for the graph in just a bit.
</p>
<h2>The <var>cl</var> parameter</h2>
<p>
	Next, we&apos;re asked to construct the <var>cl</var> argument to pass to the <code>knn()</code> function.
	It only makes sense to define the <var>cl</var> argument within the context of how we&apos;ve constructed the <var>train</var> argument though, so I&apos;ll define both below.
	Assuming we&apos;re passing in the training points in the same order shown above, which is the three A points followed by the three B points, the <var>cl</var> argument is simply a vector containing three &quot;A&quot;s followed by three &quot;B&quot;s:
</p>
<blockquote>
<pre><kbd>train = data.frame(x = c(0, 1, 2, 5.5, 5, 6), y = c(0, 1, 2, 5.5, 6, 5))
cl = factor(c(&quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;B&quot;, &quot;B&quot;, &quot;B&quot;))</kbd></pre>
</blockquote>
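<p>
	As an aside of my own (not part of the assignment), the same frame can be built row by row if that feels more natural, by stacking the coordinate pairs with <code>rbind()</code> and naming the columns afterwards:
</p>
<blockquote>
<pre><kbd>train = data.frame(rbind(c(0, 0), c(1, 1), c(2, 2), c(5.5, 5.5), c(5, 6), c(6, 5)))
names(train) = c(&quot;x&quot;, &quot;y&quot;)</kbd></pre>
</blockquote>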
<p>
	You may notice that all the <var>x</var> values had to be put into one vector, and all the <var>y</var> values in another.
	We couldn&apos;t instead use a vector for each data point, pairing <var>x</var> and <var>y</var> values based on what vector they&apos;re in.
	This is because the <code>data.frame()</code> function interprets its arguments as columns, while coordinate pairs are defined in rows.
	With the <var>train</var> and <var>cl</var> variables defined, we can not only use them in the <code>class::knn()</code> function, but also in the <code>plot()</code> and <code>text()</code> functions:
</p>
<blockquote>
	<kbd>
		plot(train, type = &quot;n&quot;); text(train, col = c(&quot;green&quot;, &quot;green&quot;, &quot;green&quot;, &quot;blue&quot;, &quot;blue&quot;, &quot;blue&quot;), labels = cl)
	</kbd>
</blockquote>
<p>
	I wasn&apos;t sure how to derive the colour data from the label data in an automated way, as an advanced R user no doubt could, but I still thought the colours made the data stand out better visually, so I just entered the colours by hand.
	Setting the <var>type</var> to the string <code>&quot;n&quot;</code> causes no data points to be displayed, so the space is open for the <code>text()</code> function to put letters onto the graph in place of uniform-looking points, making the groups more visually distinct from one another.
</p>
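<p>
	For what it&apos;s worth, one way to automate the colours (a sketch of my own, not something from the course) is to index a vector of colour names with the factor itself, since a factor used as a subscript is treated as its underlying level numbers:
</p>
<blockquote>
	<kbd>plot(train, type = &quot;n&quot;); text(train, col = c(&quot;green&quot;, &quot;blue&quot;)[cl], labels = cl)</kbd>
</blockquote>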
<h2>About using <code>knn()</code> and <code>summary()</code></h2>
<p>
	We were led to believe this week that the <code>knn()</code> function needs to be imported by including the <code>class</code> library.
	This simply isn&apos;t true.
	If you want to call <code>knn()</code> without using a qualifier, then yes, <code>class</code> needs to be imported.
	However, a better (in my opinion) way to call the function is to reference the namespace it&apos;s in.
	Instead of importing <code>class</code> and then calling <code>knn()</code>, you can instead simply call <code>class::knn()</code> directly.
	I prefer this method, as it makes it clear every time you call the function that this function belongs to an external library and even makes it clear which library it&apos;s a part of.
	Installation of the <code>class</code> library may or may not still be necessary.
	On my system, it seemed to have already been installed when I installed R.
	That appears not to be specific to Debian: <code>class</code> is one of R&apos;s &quot;recommended&quot; packages, so it ships with most standard R distributions.
	Once installed though, it can be accessed using the namespace qualifier and does not need to be imported.
	It doesn&apos;t particularly matter which method you use, but I wanted to explain my use of the <code>class::knn()</code> method of calling the function so my usage of it below could be better understood.
	When I use <code>class::knn()</code>, I am in fact calling the <code>knn()</code> function.
	I&apos;m just not importing it.
</p>
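<p>
	To make the equivalence concrete, the following two calls should behave identically (assuming <var>train</var> and <var>cl</var> are defined as above):
</p>
<blockquote>
<pre><kbd>library(class)                     # import the library, then call knn() unqualified
knn(train, c(4, 4), cl, 1)

class::knn(train, c(4, 4), cl, 1)  # or skip the import and qualify the call</kbd></pre>
</blockquote>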
<p>
	Next, I&apos;d like to point out that the <code>summary()</code> function isn&apos;t needed to interpret the output of <code>knn()</code>.
	We were asked to use <code>summary()</code>, so I will, but for educational purposes, I&apos;d first like to demonstrate that it doesn&apos;t help.
	Let&apos;s look at our first test point.
	To test our data point, we use this code:
</p>
<blockquote>
	<kbd>class::knn(train, c(4, 4), cl, 1)</kbd>
</blockquote>
<p>
	It results in this output:
</p>
<blockquote>
<pre><samp>[1] B
Levels: A B</samp></pre>
</blockquote>
<p>
	First, the output tells us that the answer is &quot;B&quot;.
	That&apos;s what we&apos;re looking for.
	Next, it tells us what the options were to begin with: &quot;A&quot; and &quot;B&quot;.
	That could be useful as well.
	Now let&apos;s try again, but using the <code>summary()</code> function to parse the result as we were asked:
</p>
<blockquote>
	<kbd>summary(class::knn(train, c(4, 4), cl, 1))</kbd>
</blockquote>
<blockquote>
<pre><samp>A B
0 1</samp></pre>
</blockquote>
<p>
	So what does this tell us?
	While the original result was intuitive, this one took some trial and error to understand.
	I thought it was the number of &quot;votes&quot; for each group.
	In other words, the number of neighbours of each type.
	But try raising the number of neighbours.
	There&apos;s always a one in one category and a zero in all other categories (in this case, there&apos;s only one other category).
	That&apos;s because <code>summary()</code> doesn&apos;t count votes; it counts how many of the returned classifications fall into each category.
	With a single test point, there is only one classification to count, so the chosen category gets a one and every other category gets a zero.
	While <code>summary()</code> can be helpful in some cases, it provides no benefit for the single-point cases presented in this assignment.
	It provides the exact same information, but in a slightly obfuscated way.
	It only provides some benefit once we get to the part where we try testing multiple (in our case, four) data points at once.
	Once we have multiple test cases, these numbers will reflect how many of the data points tested fall into each case by adding up the results.
</p>
<h2>Test cases</h2>
<h3>Test 0: (4, 4) - one neighbour</h3>
<p>
	Above, we showed the results of the attempted classification of this data point using only one neighbour, but let&apos;s do it again:
</p>
<blockquote>
	<kbd>summary(class::knn(train, c(4, 4), cl, 1))</kbd>
</blockquote>
<blockquote>
<pre><samp>A B
0 1</samp></pre>
</blockquote>
<p>
	As we can see, this test case belongs to the B class.
	This is because the closest known point, the one at (5.5, 5.5), is in the B class.
</p>
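<p>
	If you want to double-check that claim by hand, the Euclidean distances from (4, 4) to each training point can be computed directly:
</p>
<blockquote>
	<kbd>sqrt((train$x - 4)^2 + (train$y - 4)^2)</kbd>
</blockquote>
<p>
	The distances come out to roughly 5.66, 4.24, 2.83, 2.12, 2.24, and 2.24, so the fourth training point, (5.5, 5.5), is indeed the nearest.
</p>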
<h3>Test 1: (3.5, 3.5) - one neighbour</h3>
<p>
	Next, we repeat the experiment using (3.5, 3.5) as our test point:
</p>
<blockquote>
	<kbd>summary(class::knn(train, c(3.5, 3.5), cl, 1))</kbd>
</blockquote>
<blockquote>
<pre><samp>A B
1 0</samp></pre>
</blockquote>
<p>
	This data point is closer to the A-class data point at (2, 2), so it gets classified as an A-class point as well.
</p>
<h3>Test 2: (3.5, 3.5) - three neighbours</h3>
<p>
	Now, we&apos;ll use the same data point as last time, but increase the number of neighbours to three.
	It&apos;s worth noting that we should try to use numbers of neighbours that are unlikely to cause ties when we can.
	As we have only two options, A and B, any positive odd integer will work perfectly.
	With more than two options, no number of neighbours above one can rule out ties entirely, but as long as we have only two options, odd numbers of neighbours are preferable.
</p>
<blockquote>
	<kbd>summary(class::knn(train, c(3.5, 3.5), cl, 3))</kbd>
</blockquote>
<blockquote>
<pre><samp>A B
0 1</samp></pre>
</blockquote>
<p>
	Now, the data point moves over to the B class, as all three B-class neighbours are closer than the second-closest A neighbour.
	So we have one &quot;vote&quot; for A and two for B.
</p>
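<p>
	To see the tie problem in action (my own experiment, not part of the assignment), try an even number of neighbours on this same point.
	With two neighbours, (3.5, 3.5) gets one A neighbour, at (2, 2), and one B neighbour, at (5.5, 5.5), and the <code>class::knn()</code> documentation says tied votes are broken at random:
</p>
<blockquote>
	<kbd>class::knn(train, c(3.5, 3.5), cl, 2)</kbd>
</blockquote>
<p>
	Run that a few times and the answer should flip between A and B.
</p>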
<h3>Test 3: several data points - three neighbours</h3>
<p>
	Next, we&apos;re asked to put the following four data points into a matrix and test them: (4, 4), (3, 3), (5, 6), and (7, 7).
	Like before, we need to put this in as a vector of <var>x</var> values followed by a vector of <var>y</var> values, not four vectors of <var>x</var>/<var>y</var> pairs:
</p>
<blockquote>
	<kbd>test = data.frame(x = c(4, 3, 5, 7), y = c(4, 3, 6, 7))</kbd>
</blockquote>
<p>
	Now, we can plug that data frame into <code>class::knn()</code> and <code>summary()</code>:
</p>
<blockquote>
	<kbd>summary(class::knn(train, test, cl, 3))</kbd>
</blockquote>
<blockquote>
<pre><samp>A B
1 3</samp></pre>
</blockquote>
<p>
	As we can see, there are three data points that get classified as belonging to the B group, but only one A-group data point.
</p>
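<p>
	If you&apos;d rather see which class each individual point landed in, drop the <code>summary()</code> wrapper and read the raw factor that <code>class::knn()</code> returns; the four classifications come back in the same order as the test points:
</p>
<blockquote>
	<kbd>class::knn(train, test, cl, 3)</kbd>
</blockquote>
<p>
	On my reading of the distances, that should print B, A, B, and B, which is where the lone A-group point, (3, 3), comes from.
</p>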
END
);
