<?php
/**
 * <https://y.st./>
 * Copyright © 2019 Alex Yst <mailto:copyright@y.st>
 * 
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 * 
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 * 
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org./licenses/>.
**/

$xhtml = array(
	'<{title}>' => 'Linear regression',
	'<{subtitle}>' => 'Written in <span title="Data Mining and Machine Learning">CS 4407</span> by <a href="https://y.st./">Alexand(er|ra) Yst</a>, finalised on 2019-02-20',
	'<{copyright year}>' => '2019',
	'takedown' => '2017-11-01',
	'<{body}>' => <<<END
<h2>Multiple Linear Regression</h2>
<p>
	We&apos;ll be using data imported into R with the following code:
</p>
<blockquote>
<pre><kbd><var>D</var> &lt;- data.frame(
    x1=c(0.58, 0.86, 0.29, 0.20, 0.56, 0.28, 0.08, 0.41, 0.22, 0.35,
             0.59, 0.22, 0.26, 0.12, 0.65, 0.70, 0.30, 0.70, 0.39, 0.72,
             0.45, 0.81, 0.04, 0.20, 0.95),
    x2=c(0.71, 0.13, 0.79, 0.20, 0.56, 0.92, 0.01, 0.60, 0.70, 0.73,
             0.13, 0.96, 0.27, 0.21, 0.88, 0.30, 0.15, 0.09, 0.17, 0.25,
             0.30, 0.32, 0.82, 0.98, 0.00),
      y=c(1.45, 1.93, 0.81, 0.61, 1.55, 0.95, 0.45, 1.14, 0.74, 0.98,
             1.41, 0.81, 0.89, 0.68, 1.39, 1.53, 0.91, 1.49, 1.38, 1.73,
             1.11, 1.68, 0.66, 0.69, 1.98)
)</kbd></pre>
</blockquote>
<h3>Parameter estimates</h3>
<p>
	To find the parameter estimates, we use the following code:
</p>
<blockquote>
	<pre><kbd>lm(y~x1+x2, data=<var>D</var>)</kbd></pre>
</blockquote>
<p>
	The output is:
</p>
<blockquote>
	<pre><samp>Call:
lm(formula = y ~ x1 + x2, data = D)

Coefficients:
(Intercept)           x1           x2
   0.433547     1.652993     0.003945</samp></pre>
</blockquote>
<p>
	This tells us that <var>β̂<sub>0</sub></var> is 0.433547, <var>β̂<sub>1</sub></var> is 1.652993, and <var>β̂<sub>2</sub></var> is 0.003945.
	We can find <var>σ̂<sup>2</sup></var> with the following code:
</p>
<blockquote>
	<pre><kbd>sigma(lm(y~x1+x2, data=<var>D</var>))^2</kbd></pre>
</blockquote>
<p>
	This gives us the following answer:
</p>
<blockquote>
	<pre><samp>[1] 0.01270523</samp></pre>
</blockquote>
<p>
	<var>σ̂<sup>2</sup></var> is 0.01270523.
	The confidence interval is calculated like so:
</p>
<blockquote>
	<pre><kbd>confint(lm(y~x1+x2, data=<var>D</var>))</kbd></pre>
</blockquote>
<blockquote>
	<pre><samp>                 2.5 %    97.5 %
(Intercept)  0.2967067 0.5703875
x1           1.4554666 1.8505203
x2          -0.1512924 0.1591822</samp></pre>
</blockquote>
<p>
	As we can see, the confidence interval for <var>β̂<sub>0</sub></var> is 0.2967067 to 0.5703875, for <var>β̂<sub>1</sub></var> it is 1.4554666 to 1.8505203, and for <var>β̂<sub>2</sub></var> it is -0.1512924 to 0.1591822.
</p>
<h3>Reducing the error</h3>
<p>
	The assignment instructions said to reduce the model if appropriate, but I couldn&apos;t find any indication in the textbook of what it means to reduce a model.
	So I assumed what was meant was to reduce the error of the model.
</p>
<p>
	There are two statistics we can use to measure the accuracy of our model.
	First, there&apos;s the residual standard error.
	Second, there&apos;s the R<sup>2</sup> statistic.
	Using the residual standard error, we can see roughly how far off our predictions will be.
	Using the R<sup>2</sup> statistic, we instead see how closely our model fits the data.
	I would have expected these statistics to correlate.
	That is, if we optimise for one, the other should become more favourable as well.
	That isn&apos;t what I saw when trying to reduce the error, though.
	As I added more exponential components, the R<sup>2</sup> statistic moved closer to 1 (meaning the model fits the data more closely), but the residual standard error went up as well (meaning the predictions are less accurate).
	Even basic interaction terms increased both statistics.
	In hindsight, this makes sense: the residual standard error divides the residual sum of squares by the residual degrees of freedom, so every term we add shrinks that denominator and can push the error up even while the fit improves.
	So what do we optimise for?
</p>
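<p>
	Both statistics can be pulled from the fitted model directly, which makes the comparison easy to repeat; a minimal sketch, using the full model from above (<code>summary()</code> and <code>sigma()</code> are standard R here, and <var>fit</var> is just a name chosen for illustration):
</p>
<blockquote>
<pre><kbd><var>fit</var> &lt;- lm(y~x1+x2, data=<var>D</var>)
summary(<var>fit</var>)\$r.squared  # the R-squared statistic
sigma(<var>fit</var>)             # the residual standard error</kbd></pre>
</blockquote>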
<p>
	Following that trend backwards, as well as looking at the chart of all three variables as they relate to each other, I found that removing <var>x2</var> from the model reduced the residual standard error further, at the cost of lowering the R<sup>2</sup> statistic as well.
	The predictions barely change either, showing that <var>x2</var> has little effect on <var>y</var>.
	Perhaps removing the unnecessary variable is what was intended when we were told to reduce the model.
	Moving forward with the rest of the problem, this is the reduction I decided to stick with.
</p>
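<p>
	The comparison behind that choice can be made directly, by computing the residual standard error of each model side by side:
</p>
<blockquote>
<pre><kbd>sigma(lm(y~x1+x2, data=<var>D</var>))  # full model
sigma(lm(y~x1, data=<var>D</var>))     # reduced model</kbd></pre>
</blockquote>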
<blockquote>
	<pre><kbd>confint(lm(y~x1, data=<var>D</var>))</kbd></pre>
</blockquote>
<blockquote>
	<pre><samp>                2.5 %    97.5 %
(Intercept) 0.3450849 0.5270978
x1          1.4710842 1.8313341</samp></pre>
</blockquote>
<p>
	Our confidence intervals narrow a bit, but not by much, when we remove <var>x2</var> from the model.
</p>
<h3>Residual analysis</h3>
<p>
	Residual analysis after we&apos;ve built our model is typically carried out using a graph.
	Our new model assumes that <var>y</var> and <var>x1</var> correlate in a mostly-linear way, so if our graph shows this, it means the data fits our model&apos;s assumption.
	As we can see, it does:
</p>
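<p>
	A residual plot of this kind can be produced with code along these lines, plotting the residuals against the fitted values and marking the zero line (the exact labels are an arbitrary choice):
</p>
<blockquote>
<pre><kbd><var>fit</var> &lt;- lm(y~x1, data=<var>D</var>)
plot(fitted(<var>fit</var>), resid(<var>fit</var>), xlab="Fitted values", ylab="Residuals")
abline(h=0)</kbd></pre>
</blockquote>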
<img src="/img/CC_BY-SA_4.0/y.st./coursework/CS4407/figure_3-0.png" alt="Figure 3-0" class="framed-centred-image" width="677" height="699"/>
<h3>Plot</h3>
<p>
	If we plot the regression line and the confidence intervals, we see our model isn&apos;t perfect, but it&apos;s pretty good:
</p>
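<p>
	One way to draw such a plot is to fit the reduced model, then overlay the regression line and the confidence band returned by <code>predict()</code>; the grid of <var>x1</var> values below is an arbitrary choice for illustration:
</p>
<blockquote>
<pre><kbd><var>fit</var> &lt;- lm(y~x1, data=<var>D</var>)
<var>grid</var> &lt;- data.frame(x1=seq(0, 1, by=0.01))
<var>band</var> &lt;- predict(<var>fit</var>, newdata=<var>grid</var>, interval="confidence")
plot(<var>D</var>\$x1, <var>D</var>\$y, xlab="x1", ylab="y")
abline(<var>fit</var>)
lines(<var>grid</var>\$x1, <var>band</var>[,"lwr"], lty=2)
lines(<var>grid</var>\$x1, <var>band</var>[,"upr"], lty=2)</kbd></pre>
</blockquote>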
<img src="/img/CC_BY-SA_4.0/y.st./coursework/CS4407/figure_3-1.png" alt="Figure 3-1" class="framed-centred-image" width="677" height="699"/>
<h2>Multiple Linear Regression Simulation</h2>
<p>
	For this simulation, we import values using the following code:
</p>
<blockquote>
<pre><kbd><var>D</var> &lt;- data.frame(
     y=c(9.29,12.67,12.42,0.38,20.77,9.52,2.38,7.46),
     x1=c(1.00,2.00,3.00,4.00,5.00,6.00,7.00,8.00),
     x2=c(4.00,12.00,16.00,8.00,32.00,24.00,20.00,28.00)
)</kbd></pre>
</blockquote>
<h3>Plot</h3>
<p>
	We&apos;re asked to plot <var>y</var> in terms of <var>x1</var> and <var>x2</var>, to see if <var>y</var> can be reasonably explained by the other two.
	Given our two-dimensional workspace, we can&apos;t plot a dependent variable in terms of two independent variables, so we&apos;ll need to settle for plotting each of the three variables in terms of each of the others individually, for a total of six graphs.
	We could simply plot <var>y</var> in terms of the other two, but R makes it easy to plot everything in terms of everything else, if we want to see multiple comparisons at once.
	If we plot things one at a time, each graph overwrites the previous one, so we can&apos;t see the earlier plots any more.
</p>
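<p>
	The scatterplot matrix below comes from handing R the whole data frame at once; something as short as this will do it:
</p>
<blockquote>
	<pre><kbd>plot(<var>D</var>)  # equivalently, pairs(<var>D</var>)</kbd></pre>
</blockquote>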
<img src="/img/CC_BY-SA_4.0/y.st./coursework/CS4407/figure_3-2.png" alt="Figure 3-2" class="framed-centred-image" width="677" height="699"/>
<p>
	It&apos;s a good thing we chose to plot all the relationships, too.
	As we can see, <var>y</var> isn&apos;t really explained by the other two, but either <var>x1</var> or <var>x2</var> might explain the other, a trend we wouldn&apos;t have seen if we&apos;d plotted the two only in comparison to <var>y</var>.
	These independent variables are collinear.
</p>
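<p>
	The collinearity can also be checked numerically with the sample correlation, which for these values comes out around 0.76:
</p>
<blockquote>
	<pre><kbd>cor(<var>D</var>\$x1, <var>D</var>\$x2)</kbd></pre>
</blockquote>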
<h2>Estimates</h2>
<p>
	Next, we&apos;re asked to estimate <var>β̂<sub>0</sub></var> and <var>β̂<sub>1</sub></var> for the model using <var>x1</var>, and <var>β̂<sub>0</sub></var> and <var>β̂<sub>2</sub></var> for the model instead using <var>x2</var>.
	I&apos;ve already shown how to do that above, so for brevity, I&apos;ll combine the commands and outputs:
</p>
<blockquote>
	<pre><kbd>lm(y~x1, data=D);lm(y~x2, data=D)</kbd></pre>
</blockquote>
<blockquote>
<pre><samp>Call:
lm(formula = y ~ x1, data = D)

Coefficients:
(Intercept)           x1
    12.1775      -0.6258


Call:
lm(formula = y ~ x2, data = D)

Coefficients:
(Intercept)           x2
     4.2039       0.2865</samp></pre>
</blockquote>
<p>
	In one model, we get that <var>β̂<sub>0</sub></var> is 12.1775, while in the other, we get that it&apos;s 4.2039.
	That&apos;s quite a difference!
	For <var>β̂<sub>1</sub></var>, we get -0.6258, while for <var>β̂<sub>2</sub></var>, we get 0.2865.
	We see that <var>x1</var> has more impact on <var>y</var> than originally thought, though <var>x2</var> still doesn&apos;t seem to have much to do with <var>y</var>.
	Let&apos;s check our confidence intervals:
</p>
<blockquote>
	<pre><kbd>confint(lm(y~x1, data=D));confint(lm(y~x2, data=D))</kbd></pre>
</blockquote>
<blockquote>
<pre><samp>                 2.5 %    97.5 %
(Intercept) -0.5426374 24.897637
x1          -3.1447959  1.893129
                 2.5 %     97.5 %
(Intercept) -7.5580921 15.9659492
x2          -0.2957889  0.8688246</samp></pre>
</blockquote>
<p>
	Now that we know the confidence intervals, we see that with 95% confidence, both <var>β̂<sub>1</sub></var> and <var>β̂<sub>2</sub></var> could have either positive or negative values.
	Anything within those ranges is well within the realm of probability, so both of these coefficients could be zero!
	We have to conclude that our initial assessment, and not the second assessment, was correct: <var>x1</var> and <var>x2</var> very likely have little to nothing to do with <var>y</var>.
</p>
END
);
