<?php
/**
 * <https://y.st./>
 * Copyright © 2018 Alex Yst <mailto:copyright@y.st>
 * 
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 * 
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 * 
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org./licenses/>.
**/

$xhtml = array(
	'<{title}>' => 'Distributions',
	'<{subtitle}>' => 'Written in <span title="Introduction to Statistics">MATH 1280</span> by <a href="https://y.st./">Alexand(er|ra) Yst</a>, finalised on 2018-10-17',
	'<{copyright year}>' => '2018',
	'takedown' => '2017-11-01',
	'<{body}>' => <<<END
<h2>General questions</h2>
<h3>Definitions</h3>
<p>
	Quartiles are the numbers that separate the dataset into quarters.
	They may or may not be actual numbers within the dataset, depending on how many items the dataset contains (Yakir, 2011).
	The second quartile is also known as the median, and is the number that divides the data in half.
</p>
<p>
	The book doesn&apos;t tell us what quantiles are.
	It only tells us about the <code>quantile()</code> function.
	However, Wikipedia is there to fill in the gap.
	Quantiles are the dividing points for separating a distribution into parts with equal probabilities (Wikipedia, 2018).
</p>
<p>
	The <code>pbinom(q, size, prob, lower.tail = TRUE, log.p = FALSE)</code> function has five arguments.
	<code>q</code> is a vector of the quantiles of your distribution.
	<code>size</code> is the number of trials, which cannot be negative.
	<code>prob</code> is the probability of each trial landing a success.
	<code>lower.tail</code>, which defaults to <code>TRUE</code>, determines whether measurements are taken in terms of the random point being less than or equal to a given point (<code>TRUE</code>), or greater than that point (<code>FALSE</code>).
	<code>log.p</code>, which defaults to <code>FALSE</code>, determines whether returned probabilities are given as logarithms instead of as raw values.
</p>
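<p>
	As a rough sketch of what <code>pbinom()</code> computes (ignoring the <code>log.p</code> argument), here is a Python analogue built only on the standard library; the function names here are my own, not part of any statistics package.
</p>

```python
from math import comb

def binom_pmf(k, size, prob):
    # Probability of exactly k successes in `size` trials (a dbinom() analogue).
    return comb(size, k) * prob**k * (1 - prob)**(size - k)

def binom_cdf(q, size, prob, lower_tail=True):
    # P(X <= q) when lower_tail is True, else P(X > q) (a pbinom() analogue).
    p = sum(binom_pmf(k, size, prob) for k in range(int(q) + 1))
    return p if lower_tail else 1 - p

print(binom_cdf(2, size=4, prob=0.5))                    # 0.6875
print(binom_cdf(2, size=4, prob=0.5, lower_tail=False))  # 0.3125
```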
<h3>Probabilities</h3>
<p>
	If you flip four coins, the probability of getting each number of heads is as follows:
</p>
<table>
	<tr>
		<th>
			0 heads
		</th>
		<th>
			1 head
		</th>
		<th>
			2 heads
		</th>
		<th>
			3 heads
		</th>
		<th>
			4 heads
		</th>
	</tr>
	<tr>
		<td>
			0.0625
		</td>
		<td>
			0.25
		</td>
		<td>
			0.375
		</td>
		<td>
			0.25
		</td>
		<td>
			0.0625
		</td>
	</tr>
</table>
<h3>Cumulative probability</h3>
<p>
	The cumulative probabilities for the above scenario are as follows:
</p>
<table>
	<tr>
		<th>
			0 heads
		</th>
		<th>
			1 head
		</th>
		<th>
			2 heads
		</th>
		<th>
			3 heads
		</th>
		<th>
			4 heads
		</th>
	</tr>
	<tr>
		<td>
			0.0625
		</td>
		<td>
			0.3125
		</td>
		<td>
			0.6875
		</td>
		<td>
			0.9375
		</td>
		<td>
			1
		</td>
	</tr>
</table>
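<p>
	Both tables can be reproduced with nothing beyond Python&apos;s standard library; this is only a sanity check of the arithmetic above.
</p>

```python
from math import comb
from itertools import accumulate

# P(exactly k heads) in four flips of a fair coin, for k = 0..4.
pmf = [comb(4, k) * 0.5**4 for k in range(5)]
# Running totals give the cumulative probabilities.
cdf = list(accumulate(pmf))

print(pmf)  # [0.0625, 0.25, 0.375, 0.25, 0.0625]
print(cdf)  # [0.0625, 0.3125, 0.6875, 0.9375, 1.0]
```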
<h3>True or false</h3>
<p>
	It is true that <code>dbinom(6, size=10, prob=0.5)</code> shows us the probability of having six successes in a scenario with ten trials and a chance of success each time of <code>0.5</code>.
</p>
<h3>Less than two out of three</h3>
<p>
	The probability of getting fewer than two heads when flipping a fair coin three times is <code>0.5</code>.
</p>
<h3>No more than three out of five</h3>
<p>
	The probability of getting no more than three heads when flipping a fair coin five times is <code>0.8125</code> (rounded to two decimal places, this would be about <code>0.81</code>).
</p>
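<p>
	Both coin-flip answers can be checked the same way; the <code>binom_pmf</code> name below is my own, standing in for <code>dbinom()</code>.
</p>

```python
from math import comb

def binom_pmf(k, size, prob):
    # Probability of exactly k successes in `size` trials (a dbinom() analogue).
    return comb(size, k) * prob**k * (1 - prob)**(size - k)

# Fewer than two heads in three flips: P(0) + P(1).
fewer_than_two = sum(binom_pmf(k, 3, 0.5) for k in range(2))
# No more than three heads in five flips: P(0) + P(1) + P(2) + P(3).
no_more_than_three = sum(binom_pmf(k, 5, 0.5) for k in range(4))

print(fewer_than_two)      # 0.5
print(no_more_than_three)  # 0.8125
```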
<h2>Four customers per minute</h2>
<h3>Expectation</h3>
<p>
	We&apos;re told that we&apos;re at a store that sees about four customers per minute, then asked what the expectation (the mean) of the time interval between customer arrivals in our model would be.
	The expectation would be a quarter minute (<code>0.25</code> minutes), which can also be stated to be fifteen seconds (<code>15</code> seconds).
</p>
<h3>Variance</h3>
<p>
	The book tells us that the variance of an exponential distribution is one divided by the square of the rate, the number of successes per unit of time (Yakir, 2011).
	We weren&apos;t really told how or why that is, but we can easily use it to determine the variance in our customer situation.
	If measured in minutes, we find the variance to be <code>0.0625</code> minutes squared.
	If measured in seconds, we find the variance to be <code>225</code> seconds squared.
	These answers are equivalent to one another, though to correctly see that, we have to remember that a square minute has <code>60^2</code> square seconds in it, not just <code>60</code> square seconds.
</p>
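<p>
	Assuming the <code>1/rate</code> and <code>1/rate^2</code> formulas above, a quick unit check in Python confirms that the two variances agree:
</p>

```python
rate_per_minute = 4                      # about four customers per minute
mean_minutes = 1 / rate_per_minute       # expectation of an exponential is 1/rate
var_minutes = 1 / rate_per_minute**2     # variance is 1/rate^2

rate_per_second = rate_per_minute / 60   # the same arrival rate, per second
var_seconds = 1 / rate_per_second**2

print(mean_minutes)               # 0.25 minutes, i.e. fifteen seconds
print(var_minutes)                # 0.0625 square minutes
print(var_seconds)                # ~225 square seconds
print(var_seconds / var_minutes)  # ~3600, the 60^2 square seconds per square minute
```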
<h3>Less than <code>15.5</code> seconds</h3>
<p>
	The probability of the interval between two consecutive customers being less than <code>15.5</code> seconds is <code>0.6441811</code>.
	(Rounded, this would be about <code>0.644</code>.)
</p>
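<p>
	Assuming the exponential CDF <code>1 - exp(-rate * t)</code>, the same number falls out of a few lines of Python:
</p>

```python
from math import exp

rate = 4 / 60           # four customers per minute, expressed per second
t = 15.5                # seconds
p = 1 - exp(-rate * t)  # exponential CDF, a pexp() analogue
print(p)                # about 0.6441811
```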
<h3>Between <code>10.7</code> seconds and <code>40.2</code> seconds</h3>
<p>
	I initially questioned the boundaries used in the method presented by the book for finding the solution to this, as the method seems to include values equal to the upper limit (equal to is <strong>not</strong> between) while not counting values equal to the lower limit.
	However, because the exponential distribution is continuous, the probability of the interval equalling exactly <code>10.7</code> or <code>40.2</code> seconds is zero, so including or excluding the endpoints does not change the answer.
	According to the book&apos;s method, the solution is <code>0.421445</code>.
</p>
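<p>
	The book&apos;s method, subtracting one CDF value from another, looks like this in a standard-library Python sketch:
</p>

```python
from math import exp

def pexp(q, rate):
    # Exponential CDF, P(X <= q): a sketch of what R's pexp() returns.
    return 1 - exp(-rate * q)

rate = 4 / 60  # four customers per minute, expressed per second
p = pexp(40.2, rate) - pexp(10.7, rate)
print(p)       # about 0.421445
```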
<h3>Top five percent</h3>
<p>
	The cutoff for the top five percent of wait times between customers is about <code>44.93598</code> seconds (rounded further, <code>44.936</code> seconds), which is about <code>0.7489331</code> minutes (rounded further, <code>0.7489</code> minutes).
</p>
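<p>
	Inverting the exponential CDF gives the quantile directly, so this cutoff (what <code>qexp(0.95, rate=4)</code> would return) can be computed by hand:
</p>

```python
from math import log

rate = 4                              # customers per minute
# Solve 1 - exp(-rate * t) = 0.95 for t: the 95th-percentile wait.
cutoff_minutes = -log(0.05) / rate
print(cutoff_minutes)                 # about 0.7489331 minutes
print(cutoff_minutes * 60)            # about 44.93598 seconds
```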
<h3><code>pexp(1.2, rate=3)</code></h3>
<p>
	We&apos;re asked to interpret what the output of this function call means: <code>pexp(1.2, rate=3)</code>.
	Simply put, this function call returns the probability that a value observed in an exponential distribution with a rate of <code>3</code> (and thus a mean of <code>1/3</code>) would be less than or equal to <code>1.2</code>.
</p>
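<p>
	Assuming the same exponential CDF as above, the call can be mimicked directly:
</p>

```python
from math import exp

# pexp(1.2, rate=3) analogue: P(X <= 1.2) for an exponential with rate 3.
p = 1 - exp(-3 * 1.2)
print(p)  # about 0.9727
```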
<h2>Normal distribution</h2>
<h3>Greater than nine</h3>
<p>
	If a random variable has a normal distribution with a mean of <code>7</code> and a standard deviation of <code>3</code>, the probability of a value from that variable being chosen to be greater than <code>9</code> is about <code>0.2524925</code>.
</p>
<h3>Lowest 4%</h3>
<p>
	The cutoff point for the lowest 4% of observed values for that same random variable would be at about <code>1.747942</code>.
</p>
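<p>
	The tail probability and quantile for a normal distribution with mean <code>7</code> and standard deviation <code>3</code> can also be computed with Python&apos;s standard-library <code>statistics.NormalDist</code>, a rough stand-in for <code>pnorm()</code> and <code>qnorm()</code>:
</p>

```python
from statistics import NormalDist

X = NormalDist(mu=7, sigma=3)
p_greater = 1 - X.cdf(9)   # like pnorm(9, 7, 3, lower.tail=FALSE)
cutoff = X.inv_cdf(0.04)   # like qnorm(0.04, 7, 3)

print(p_greater)           # about 0.2524925
print(cutoff)              # about 1.747942
```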
<h3>More than one standard deviation</h3>
<p>
	There is about a <code>0.6827</code> chance that any observed value lands <strong>within</strong> one standard deviation of the mean of any given normal distribution (Wikipedia, 2018).
	What that means for us is that there is about a <code>0.3173</code> chance of an observed value falling <strong>further</strong> than one standard deviation away from our mean.
</p>
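<p>
	The one-standard-deviation figure from the 68-95-99.7 rule can likewise be recovered from the standard normal CDF, again with <code>statistics.NormalDist</code>:
</p>

```python
from statistics import NormalDist

Z = NormalDist()               # standard normal: mean 0, standard deviation 1
within = Z.cdf(1) - Z.cdf(-1)  # probability of landing within one standard deviation
print(within)                  # about 0.6827
print(1 - within)              # about 0.3173
```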
<div class="APA_references">
	<h2>References:</h2>
	<p>
		Wikipedia (2018, October 16). 68-95-99.7 rule - Wikipedia. Retrieved from <a href="https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule"><code>https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule</code></a>
	</p>
	<p>
		Wikipedia (2018, September 6). Quantile - Wikipedia. Retrieved from <a href="https://en.wikipedia.org/wiki/Quantile"><code>https://en.wikipedia.org/wiki/Quantile</code></a>
	</p>
	<p>
		Yakir, B. (2011, March). Introduction to Statistical Thinking (With R, Without Calculus). Retrieved from <a href="https://my.uopeople.edu/mod/resource/view.php?id=155119"><code>https://my.uopeople.edu/mod/resource/view.php?id=155119</code></a>
	</p>
</div>
END
);
