<?php
/**
 * <https://y.st./>
 * Copyright © 2018 Alex Yst <mailto:copyright@y.st>
 * 
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 * 
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 * 
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org./licenses/>.
**/

$xhtml = array(
	'<{title}>' => 'Data, a random variable, and a population',
	'<{subtitle}>' => 'Written in <span title="Introduction to Statistics">MATH 1280</span> by <a href="https://y.st./">Alexand(er|ra) Yst</a>, finalised on 2018-10-03',
	'<{copyright year}>' => '2018',
	'takedown' => '2017-11-01',
	'<{body}>' => <<<END
<h2>Data</h2>
<p>
	We&apos;re asked to look at the following set of numbers: <code>7.2</code>, <code>1.2</code>, <code>1.8</code>, <code>2.8</code>, <code>18</code>, <code>-1.9</code>, <code>-0.1</code>, <code>-1.5</code>, <code>13</code>, <code>3.2</code>, <code>-1.1</code>, <code>7</code>, <code>0.5</code>, <code>3.9</code>, <code>2.1</code>, <code>4.1</code>, <code>6.5</code>.
</p>
<h3>Average</h3>
<p>
	The average of these numbers is just their sum divided by their quantity.
	The assignment tells us there are seventeen numbers, but we can also double-check that number.
	Doing so, we find there to indeed be seventeen numbers.
	That leaves us with the following maths problem: <code>(7.2 + 1.2 + 1.8 + 2.8 + 18 + -1.9 + -0.1 + -1.5 + 13 + 3.2 + -1.1 + 7 + 0.5 + 3.9 + 2.1 + 4.1 + 6.5) / 17</code>.
	Simplifying this, we have <code>66.7 / 17</code>.
	Attempting to perform this division, we get a decimal expansion with no sign of terminating.
	We can still simplify further, though: <code>3 + 15.7 / 17</code>.
	I don&apos;t like decimals in my fractions, so let&apos;s continue further.
	We can multiply the numerator and denominator by ten, giving us our final answer: <code>3 + 157 / 170</code>.
	This fraction cannot be reduced any further, though if we wanted to express our answer as a decimal, we could approximate it as <code>3.923529411765</code>.
	The fraction expression is more accurate and precise though.
</p>
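<p>
	As a sanity check on the arithmetic above, here&apos;s a quick sketch in Python (used purely for illustration) that redoes the sum with exact fractions, so the non-terminating decimal never gets in the way:
</p>

```python
from fractions import Fraction

# The seventeen values from the assignment, as exact decimal strings.
data = ['7.2', '1.2', '1.8', '2.8', '18', '-1.9', '-0.1', '-1.5', '13',
        '3.2', '-1.1', '7', '0.5', '3.9', '2.1', '4.1', '6.5']

values = [Fraction(x) for x in data]
mean = sum(values) / len(values)

print(mean)          # 667/170, i.e. 3 + 157/170
print(float(mean))   # roughly 3.9235
```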
<h3>Median</h3>
<p>
	The next part of the assignment asks a true or false question: when searching for the median, do we have to arrange our values in numeric order?
	The simple answer is yes.
	Arranging our values, either from least to greatest or from greatest to least, would allow us to find the middle value, known as the median.
</p>
<p>
	The more complicated answer, though, is no.
	Technically, you could find the median without arranging the values in numeric order; it would just be a major pain in the neck.
	You could, for example, go through your data, find the largest and smallest values, and eliminate them from your set.
	You&apos;d end up with a set containing two fewer values than before.
	Repeating this process, you could get your set down to one or two values.
	If it&apos;s one value, as would be the case in our seventeen-value set, you&apos;d have your median.
	If you had two values left, your median would be the mean of these two values.
	No data rearranging is needed for this method; only deduction through removal of values.
	I can think of another method of finding the median without arranging the numbers in numeric order as well, but it&apos;s more complicated to explain.
	The first method is enough to prove that numeric arrangement isn&apos;t technically needed, though it should be used because it&apos;s substantially quicker and easier.
</p>
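<p>
	To make the elimination method concrete, here&apos;s a short Python sketch (Python rather than R, just for illustration) that finds the median without ever sorting the data:
</p>

```python
from statistics import median  # only used to double-check the result

def median_by_elimination(values):
    """Repeatedly strip the current largest and smallest values until
    only one or two remain; the data is never arranged in order."""
    pool = list(values)
    while len(pool) > 2:
        pool.remove(max(pool))
        pool.remove(min(pool))
    if len(pool) == 1:
        return pool[0]                    # odd-sized set: the survivor
    return (pool[0] + pool[1]) / 2        # even-sized set: mean of the two

data = [7.2, 1.2, 1.8, 2.8, 18, -1.9, -0.1, -1.5, 13,
        3.2, -1.1, 7, 0.5, 3.9, 2.1, 4.1, 6.5]
print(median_by_elimination(data))        # 2.8, matching median(data)
```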
<h3>Interquartile range</h3>
<p>
	We&apos;re asked to find the interquartile range for the data we&apos;ve been presented.
	To do that, we first need to find the first and third quartiles.
	Basically, the median, also known as the second quartile, breaks our data set into two smaller data sets, and the first and third quartiles are the medians of those smaller sets.
	This means that, as is discussed above, the easiest way to find the values we need is to sort our dataset.
	That gives us this equivalent dataset: <code>-1.9</code>, <code>-1.5</code>, <code>-1.1</code>, <code>-0.1</code>, <code>0.5</code>, <code>1.2</code>, <code>1.8</code>, <code>2.1</code>, <code>2.8</code>, <code>3.2</code>, <code>3.9</code>, <code>4.1</code>, <code>6.5</code>, <code>7</code>, <code>7.2</code>, <code>13</code>, <code>18</code>.
	First, we find the median so as to know where to split our data and find the other two quartiles.
	<code>2.8</code> is our middle value, making it our median, so our smaller data sets are <code>-1.9</code>, <code>-1.5</code>, <code>-1.1</code>, <code>-0.1</code>, <code>0.5</code>, <code>1.2</code>, <code>1.8</code>, <code>2.1</code>, and <code>3.2</code>, <code>3.9</code>, <code>4.1</code>, <code>6.5</code>, <code>7</code>, <code>7.2</code>, <code>13</code>, <code>18</code>.
	Each of the smaller sets has an even number of elements, so we&apos;ll need to find the middle two values of each and find the mean of them.
	That gives us <code>(-0.1 + 0.5) / 2 == 0.2</code> as our first quartile and <code>(6.5 + 7) / 2 == 6.75</code> as our third quartile.
	Subtracting the first quartile from the third, we get the interquartile range: <code>6.75 - 0.2 == 6.55</code>.
	I&apos;m not sure why we were asked to round; no rounding was even necessary.
</p>
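<p>
	The quartile arithmetic is easy to check mechanically; here&apos;s a Python sketch of the median-split method described in this section:
</p>

```python
from statistics import median

def split_quartiles(values):
    """Median-split quartiles: sort, leave the median out of an
    odd-sized set, and take the medians of the two halves."""
    s = sorted(values)
    n = len(s)
    lower = s[:n // 2]          # everything below the median
    upper = s[(n + 1) // 2:]    # everything above the median
    return median(lower), median(s), median(upper)

data = [7.2, 1.2, 1.8, 2.8, 18, -1.9, -0.1, -1.5, 13,
        3.2, -1.1, 7, 0.5, 3.9, 2.1, 4.1, 6.5]
q1, q2, q3 = split_quartiles(data)
print(q1, q2, q3)    # first quartile, median, third quartile
print(q3 - q1)       # the interquartile range
```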
<h3>Formula</h3>
<p>
	The interquartile range is found via this formula (Yakir, 2011):
</p>
<p>
	{interquartile range} == {third quartile} - {first quartile}
</p>
<h3>Outliers</h3>
<p>
	Outliers are said to be data points that lie more than one and a half times the interquartile range away from the interval delimited by the first and third quartiles.
	That makes the cut-off points for the given data set <code>0.2 - 1.5 * 6.55 == -9.625</code> and <code>6.75 + 1.5 * 6.55 == 16.575</code>.
</p>
<h3><code>summary()</code></h3>
<p>
	The next question asks us if it&apos;s true that the <code>summary()</code> function displays a list of outliers.
	It does not, so this is false.
	Instead, it shows us the minimum value in the data set, the first quartile, the median, the mean, the third quartile, and the maximum value present in the data set.
</p>
<h3>List of outliers</h3>
<p>
	One value from our dataset falls outside the range we discussed as containing the non-outlier values: <code>18</code> lies above the upper cut-off point.
	Therefore, <code>18</code> is the sole outlier in our dataset.
</p>
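<p>
	Putting the fence rule and the outlier check together in a Python sketch:
</p>

```python
data = [7.2, 1.2, 1.8, 2.8, 18, -1.9, -0.1, -1.5, 13,
        3.2, -1.1, 7, 0.5, 3.9, 2.1, 4.1, 6.5]

# Quartiles from the median-split method used earlier in this post.
q1, q3 = 0.2, 6.75
iqr = q3 - q1

# Anything more than one and a half interquartile ranges outside the
# quartiles counts as an outlier.
low = q1 - 1.5 * iqr
high = q3 + 1.5 * iqr
outliers = [x for x in data if x < low or x > high]
print(low, high, outliers)
```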
<h3>Standard deviation</h3>
<p>
	The process of finding the standard deviation of a data set is so bizarre that I can&apos;t figure out what standard deviations could possibly be used for.
	First, you subtract the mean from each data point.
	That seems like a reasonable start.
	But then you square each result.
	... Why?
	I&apos;d understand taking the absolute value of each result, but squaring them instead seems odd.
	Next, you add all the squared differences together, which makes it seem like we&apos;re being rational again.
	But now, instead of dividing by the number of values to find a mean, we divide by the number of values minus one.
	What do we even gain from that?
	It also produces strange results in some corner cases, such as data sets with a single data point.
	You&apos;d think with one data point, you&apos;d have zero standard deviation, as all the data is identical.
	However, you instead get a division by zero error.
	Likewise, a dataset with zero data points shouldn&apos;t have a defined standard deviation, so you&apos;d expect a division by zero error here.
	However, you get zero divided by negative one, which is just zero.
	So data sets with one data point have no defined standard deviation, but data sets with no data at all have a clearly-defined standard deviation of zero, which we&apos;ll find after the final step.
	I can&apos;t be the only one that sees how broken this formula is.
	Finally, we square root the found value to get the standard deviation.
	The square root of zero is zero, which is why an empty dataset has a standard deviation of zero.
</p>
<p>
	Anyway, now that we&apos;ve discussed how standard deviation works, let&apos;s walk through the steps.
	First, we subtract the mean from all our values.
	Our mean is <code>3 + 157 / 170</code>.
	Our data set now looks like this: <code>-5 - 14 / 17</code>, <code>-5 - 36 / 85</code>, <code>-5 - 2 / 85</code>, <code>-4 - 2 / 85</code>, <code>-3 - 36 / 85</code>, <code>-2 - 123 / 170</code>, <code>-2 - 21 / 170</code>, <code>-1 - 14 / 17</code>, <code>-1 - 21 / 170</code>, <code>-123 / 170</code>, <code>-2 / 85</code>, <code>3 / 17</code>, <code>2 + 49 / 85</code>, <code>3 + 13 / 170</code>, <code>3 + 47 / 170</code>, <code>9 + 13 / 170</code>, <code>14 + 13 / 170</code>.
	Now we square the data.
	At this point, working with fractions becomes way too laborious, so I&apos;ll use a calculator and estimate in decimals.
	We end up with <code>33.91349481</code>, <code>29.4146712803</code>, <code>25.2358477509</code>, <code>16.1887889273</code>, <code>11.7205536332</code>, <code>7.41761245675</code>, <code>4.50937716263</code>, <code>3.32525951557</code>, <code>1.2623183391</code>, <code>0.523494809689</code>, <code>0.000553633217993</code>, <code>0.0311418685121</code>, <code>6.638200692</code>, <code>9.46467128028</code>, <code>10.7352595156</code>, <code>82.3823183391</code>, <code>198.147024221</code>.
	Adding these numbers, we get about <code>440.910588235</code>, which we then divide by <code>16</code> to get about <code>27.5569117647</code>.
	The square root of that number is about <code>5.24946776</code>.
	We were asked to round to three decimal places, which gives us a final answer of <code>5.249</code>.
</p>
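<p>
	The whole computation condenses to a few lines of Python, which also lets us confirm the rounded answer against a library routine:
</p>

```python
from math import sqrt
from statistics import stdev   # library routine, used as a cross-check

data = [7.2, 1.2, 1.8, 2.8, 18, -1.9, -0.1, -1.5, 13,
        3.2, -1.1, 7, 0.5, 3.9, 2.1, 4.1, 6.5]

n = len(data)
mean = sum(data) / n
squared_devs = [(x - mean) ** 2 for x in data]
variance = sum(squared_devs) / (n - 1)   # n - 1, not n: sample variance
sd = sqrt(variance)

print(round(sd, 3))          # 5.249
print(round(stdev(data), 3)) # agrees with the hand computation
```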
<h2>A random variable</h2>
<h3>Missing value</h3>
<p>
	The table&apos;s probabilities add up to <code>0.85</code>, so the missing probability in the table is <code>0.15</code>.
</p>
<h3>Probability</h3>
<p>
	The second row in a table such as this represents the probability of each outcome.
	As such, the values in that row always add up to <code>1</code>.
</p>
<h3>Data sample</h3>
<p>
	The table could very well represent a data sample, though in some cases, it could represent an actual probability distribution if the distribution of the population is somehow known, such as if the population were defined by a program.
</p>
<h3>Not cumulative</h3>
<p>
	A previous question asked what the values in the table have to add up to every time.
	When working cumulatively, the final table entry is <code>1</code>, but the added values of all table entries aren&apos;t required to sum up to a certain value.
	Therefore, there is no way the second row of the table could be cumulative.
	Furthermore, the final entry in that table isn&apos;t <code>1</code> and the value goes goes down from entry to entry in some cases.
	Both of these traits also prove beyond any doubt that the second row in the specified table cannot be cumulative.
</p>
<h3>p(x ≤ <code>3</code>)</h3>
<p>
	The probability that a random number chosen using this table&apos;s specifications is at most <code>3</code> is <code>0.6</code>.
</p>
<h3>p(x = <code>1.5</code>)</h3>
<p>
	<code>1.5</code> doesn&apos;t show up in the table.
	Therefore, there is zero chance of <code>1.5</code> being chosen.
</p>
<h3>Expectation</h3>
<p>
	The expectation of a random variable is simply a weighted mean of its possibilities.
	Therefore, using the table provided, the expectation is <code>2.95</code>.
</p>
<h3>Averaging the first row</h3>
<p>
	The statement that you can just average the top row to find the expectation of a random variable is false.
	As I said, you need the <strong>weighted</strong> average, not the true average.
</p>
<h3>Weighted standard deviation</h3>
<p>
	To find the standard deviation when dealing with differing probabilities, we first find the squared deviation of each point from the mean.
	We have our mean already; that&apos;s our expectation mentioned above.
	So now we just subtract that from each value and square the result.
	That gives us a data set with values of about: <code>8.7025</code>, <code>3.8025</code>, <code>0.9025</code>, <code>0.0025</code>, <code>1.1025</code>, <code>4.2025</code>, <code>9.3025</code>.
	Next, we multiply each value by its weight to get: <code>0.87025</code>, <code>0.570375</code>, <code>0.225625</code>, <code>0.00025</code>, <code>0.165375</code>, <code>0.42025</code>, <code>1.395375</code>.
	Adding all those values together gives the variance, <code>3.6475</code>, not yet the standard deviation.
	Taking the square root and rounding to three decimal places as we were asked to do, we get a standard deviation of <code>1.910</code>.
</p>
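<p>
	The table itself isn&apos;t reproduced in this post, so the values and probabilities in the Python sketch below are reconstructed from the worked numbers above and should be read as an assumption; the steps, though, are exactly the ones just described:
</p>

```python
from math import sqrt

# Assumed table, inferred from the squared deviations and weights above.
values = [0, 1, 2, 3, 4, 5, 6]
probs = [0.1, 0.15, 0.25, 0.1, 0.15, 0.1, 0.15]

# Expectation: the probability-weighted mean of the possible values.
expectation = sum(v * p for v, p in zip(values, probs))

# Variance: the probability-weighted mean of the squared deviations.
variance = sum(p * (v - expectation) ** 2 for v, p in zip(values, probs))
sd = sqrt(variance)

print(round(expectation, 2))   # 2.95
print(round(variance, 4))      # 3.6475
print(round(sd, 3))
```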
<h3>Variance</h3>
<p>
	The variance is simply the square of the standard deviation, so squaring the standard deviation found above gives us the variance.
</p>
<h2>A population</h2>
<h3>Type &quot;a&quot;</h3>
<p>
	<code>49949</code> of the samples are of type &quot;a&quot;.
</p>
<h3>Median</h3>
<p>
	The median reported by R is <code>4.472685</code>, which, rounded to three decimal places, is <code>4.473</code>.
</p>
<h3>Variance</h3>
<p>
	R reports a variance of <code>54.91635</code>, which, rounded to three decimal places, is <code>54.916</code>.
</p>
<div class="APA_references">
	<h2>References:</h2>
	<p>
		Yakir, B. (2011, March). Introduction to Statistical Thinking (With R, Without Calculus). Retrieved from <a href="https://my.uopeople.edu/mod/resource/view.php?id=155119"><code>https://my.uopeople.edu/mod/resource/view.php?id=155119</code></a>
	</p>
</div>
END
);
