<?php
/**
 * <https://y.st./>
 * Copyright © 2019 Alex Yst <mailto:copyright@y.st>
 * 
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 * 
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 * 
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org./licenses/>.
**/

$xhtml = array(
	'<{title}>' => 'Ionosphere',
	'<{subtitle}>' => 'Written in <span title="Data Mining and Machine Learning">CS 4407</span> by <a href="https://y.st./">Alexand(er|ra) Yst</a>, finalised on 2019-03-06',
	'<{copyright year}>' => '2019',
	'takedown' => '2017-11-01',
	'<{body}>' => <<<END
<p>
	We&apos;ve been given a large dataset to work with, which is not included in this submission.
	We&apos;ve then been asked to run the following commands to get our environment set up for the assignment:
</p>
<blockquote>
<pre><kbd>setwd(&quot;/home/yst/Desktop/CS4407&quot;)
library(rpart)
library(mlbench)
data(Ionosphere)</kbd></pre>
</blockquote>
<p>
	For me, getting the <code>mlbench</code> library working required first installing <a href="apt:mlbench"><code>mlbench</code></a> by running <kbd>sudo aptitude install r-cran-mlbench</kbd>, as it isn&apos;t part of the base R installation.
</p>
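<p>
	If the Debian package isn&apos;t available on your system, the library can most likely also be installed from within R itself, though I haven&apos;t tested this route in this environment:
</p>
<blockquote>
<pre><kbd>install.packages(&quot;mlbench&quot;)
library(mlbench)</kbd></pre>
</blockquote>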
<h2>Part 0: Decision tree</h2>
<p>
	We can see the decision tree in text form by running <kbd>rpart(Class~.,Ionosphere)</kbd>:
</p>
<blockquote>
<pre><samp>n= 351 

node), split, n, loss, yval, (yprob)
      * denotes terminal node

 1) root 351 126 good (0.35897436 0.64102564)
   2) V5&lt; 0.23154 77   4 bad (0.94805195 0.05194805) *
   3) V5&gt;=0.23154 274  53 good (0.19343066 0.80656934)
     6) V27&gt;=0.999945 52  13 bad (0.75000000 0.25000000)
      12) V1=0 19   0 bad (1.00000000 0.00000000) *
      13) V1=1 33  13 bad (0.60606061 0.39393939)
        26) V3&lt; 0.73004 8   0 bad (1.00000000 0.00000000) *
        27) V3&gt;=0.73004 25  12 good (0.48000000 0.52000000)
          54) V22&gt;=0.47714 9   1 bad (0.88888889 0.11111111) *
          55) V22&lt; 0.47714 16   4 good (0.25000000 0.75000000) *
     7) V27&lt; 0.999945 222  14 good (0.06306306 0.93693694) *</samp></pre>
</blockquote>
<p>
	For a better visual, we can run <kbd>plot(rpart(Class~.,Ionosphere))</kbd> followed by <kbd>text(rpart(Class~.,Ionosphere))</kbd>:
</p>
<img src="/img/CC_BY-SA_4.0/y.st./coursework/CS4407/figure_5-0.png" alt="Figure 5-0" class="framed-centred-image" width="677" height="699"/>
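<p>
	If you want to capture the figure to a file rather than view it interactively, something like the following should work; the filename and dimensions here are just examples:
</p>
<blockquote>
<pre><kbd>png(&quot;figure_5-0.png&quot;, width=677, height=699)
plot(rpart(Class~.,Ionosphere))
text(rpart(Class~.,Ionosphere))
dev.off()</kbd></pre>
</blockquote>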
<h2>Estimating accuracy</h2>
<p>
	The decision tree above was generated using all the data at our disposal.
	As such, we have no data left over to use in estimating the tree&apos;s accuracy.
	Let&apos;s split our data into a training set and a testing set, then start our experiment anew.
	To get the indexes of our training set, we run <kbd>train = sample(1:nrow(Ionosphere), nrow(Ionosphere)/2)</kbd>.
	We calculate our decision tree by running <kbd>tree = rpart(Class~.,Ionosphere,subset=train)</kbd>, then display the tree in text form by entering <kbd>tree</kbd>:
</p>
<blockquote>
<pre><samp>n= 175 

node), split, n, loss, yval, (yprob)
      * denotes terminal node

 1) root 175 59 good (0.33714286 0.66285714)
   2) V5&lt; 0.10743 32  0 bad (1.00000000 0.00000000) *
   3) V5&gt;=0.10743 143 27 good (0.18881119 0.81118881)
     6) V27&gt;=0.9941 25  7 bad (0.72000000 0.28000000) *
     7) V27&lt; 0.9941 118  9 good (0.07627119 0.92372881)
      14) V10&lt; -0.030315 28  7 good (0.25000000 0.75000000)
        28) V15&lt; 0.696705 7  2 bad (0.71428571 0.28571429) *
        29) V15&gt;=0.696705 21  2 good (0.09523810 0.90476190) *
      15) V10&gt;=-0.030315 90  2 good (0.02222222 0.97777778) *</samp></pre>
</blockquote>
<p>
	To see the tree in graphical form, we instead enter <kbd>plot(tree);text(tree)</kbd>:
</p>
<img src="/img/CC_BY-SA_4.0/y.st./coursework/CS4407/figure_5-1.png" alt="Figure 5-1" class="framed-centred-image" width="677" height="699"/>
<p>
	Your tree may be different, as my sample was randomly chosen.
</p>
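<p>
	If you want your split, and therefore your tree, to be reproducible, you can seed the random number generator before sampling; the seed value here is arbitrary:
</p>
<blockquote>
<pre><kbd>set.seed(42)
train = sample(1:nrow(Ionosphere), nrow(Ionosphere)/2)</kbd></pre>
</blockquote>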
<p>
	Next, we need to test our tree.
	We could run <kbd>predict(tree, Ionosphere[-train,])</kbd>, which would output the probability of each test case fitting into a particular class alongside the inverse probabilities.
	The problem with those probabilities is that they aren&apos;t hard predictions.
	If we&apos;re going to compare the predictions to the actual answers, we need actual guesses as to what groups the test cases fit into.
	An easy way to get these guesses is to throw out one of the columns, and round the other.
	R stupidly doesn&apos;t index from zero, so the columns start at one; column one holds the probabilities for our bad case.
	We want to look at our good case, which sits in column two, so we use <kbd>prediction = round(predict(tree, Ionosphere[-train,])[, 2]) == 1</kbd>.
	I&apos;m not sure how to convert booleans back into group names, but we don&apos;t have to.
	To compare against our known data, we can convert the known data to booleans as well: <kbd>known = Ionosphere[-train, length(Ionosphere)] == &quot;good&quot;</kbd>.
	From there, we compare the two vectors using <kbd>table(prediction == known)</kbd>.
	This returns a table showing how many test cases had correct predictions and how many didn&apos;t:
</p>
<blockquote>
<pre><samp>FALSE  TRUE
   24   152</samp></pre>
</blockquote>
<p>
	Here, <code>TRUE</code> represents the cases in which the test case was predicted accurately, while <code>FALSE</code> represents the cases in which the predictive tree guessed incorrectly.
	If we divide the number of correctly-guessed cases by the total of the two categories combined, we get <code>0.8636364</code>, or a little over an 86% success rate.
</p>
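<p>
	Because R treats <code>TRUE</code> as <code>1</code> and <code>FALSE</code> as <code>0</code>, the same success rate can presumably be computed in one step with <kbd>mean()</kbd>, and calling <kbd>table()</kbd> with both vectors gives a full confusion matrix rather than a simple right/wrong count:
</p>
<blockquote>
<pre><kbd>mean(prediction == known)
table(prediction, known)</kbd></pre>
</blockquote>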
END
);
