<?php
/**
 * <https://y.st./>
 * Copyright © 2019 Alex Yst <mailto:copyright@y.st>
 * 
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 * 
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 * 
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org./licenses/>.
**/

$xhtml = array(
	'<{title}>' => 'Learning Journal',
	'<{subtitle}>' => 'CS 4407: Data Mining and Machine Learning',
	'<{copyright year}>' => '2019',
	'takedown' => '2017-11-01',
	'<{body}>' => <<<END
<section id="Unit1">
	<h2>Unit 1</h2>
	<p>
		This week&apos;s reading assignment was the following:
	</p>
	<ul>
		<li>
			Chapters one through seven of <a href="https://cran.r-project.org/doc/manuals/R-intro.html">R-intro.pdf</a>
		</li>
		<li>
			<a href="https://www-bcf.usc.edu/~gareth/ISL/ISLR%20First%20Printing.pdf">Driver.dvi - ISLR First Printing.pdf</a>
		</li>
		<li>
			<a href="http://faculty.wiu.edu/C-Amaravadi/is524/res/dm_c_ov.pdf">Forbidden</a>
		</li>
		<li>
			<a href="http://twocrows.com/intro-dm.pdf">3rd ed Intro to DM - body - intro-dm.pdf</a>
		</li>
	</ul>
	<p>
		It looks like we&apos;re using R this term.
		My first task for the week would be to install it, but I still have R installed from a past term, when I took <a href="/en/coursework/MATH1280/" title="Introduction to Statistics">MATH 1280</a>.
		So that&apos;s one step I can skip this term.
		We&apos;re also asked to install Basic Prop.
		Due to licensing issues, I&apos;d rather avoid installing that if at all possible, so I&apos;ll put that off for now.
		If it&apos;s legitimately needed in a future week, I&apos;ll install it at that point, and uninstall it at the end of the term.
	</p>
	<p>
		Next, I glanced over the reading assignment.
		That second $a[PDF] we&apos;re supposed to read is 441 pages long!
		How are we supposed to fit <strong>*that*</strong> into our week!?
		We&apos;ve got two courses going at once, that $a[PDF] isn&apos;t the only reading material for the week even in just this course, and of course most of us have our day jobs.
		That is just ridiculous.
		I ended up even skipping my $a[LUG] meeting this week to try to get as much time for reading as I could, even though it&apos;s pretty much the only social interaction I manage to get in each week.
		Next week will be easier though, as I have the week off from work to recover from surgery.
		I&apos;ll be stuck at home, mainly only working on coursework and researching my next procedure.
	</p>
	<p>
		The document on R mentions that R includes the programming language S, by which it means it includes an interpreter for said language.
		That&apos;s interesting.
		I was told R was a replacement for S, so I assumed R&apos;s language was only similar to S.
		I didn&apos;t realise R is (or more accurately, contains) an actual S interpreter.
	</p>
	<p>
		Next, the book covers how to start R from the command line.
		Previously, I was starting R from the applications menu.
		Oddly, the command for starting R is <code>R</code>.
		I just tested that too, and it works.
		Command names are of course case-sensitive, and I&apos;ve never run into any that use upper-case letters.
		I also tried running <code>r</code>, which predictably resulted in the command not being found.
	</p>
	<p>
		The <code>help()</code> function is nice.
		Up to now, I&apos;ve only known the shorter <code>?</code> notation, which to be honest, looks pretty awkward.
	</p>
	<p>
		R&apos;s decision to allow different characters in symbol names based on system locale seems idiotic.
		It prevents compatibility between a script written in one region and a system running in another.
		You can say it allows language support, but it really doesn&apos;t.
		For example, let&apos;s say you wanted a French user to be able to use accented characters in variable names.
		The proper way to support that is to allow the accented characters used in French to be used in symbol names regardless of locale.
		French users then get their accented characters and users outside France are able to run the French R scripts.
		I can&apos;t say $a[PHP] is a very clean language, or that it should be used as an example in most contexts, but it does get this particular feature right.
		It allows only certain characters from the $a[ASCII] range of Unicode to appear in symbol names, but it allows <strong>*all*</strong> characters outside the $a[ASCII] range.
		All characters used for basic language syntax are within the $a[ASCII] range, so there&apos;s no reason to go through the non-$a[ASCII] sections of Unicode to decide which characters should and should not be allowed in symbol names.
		Of course, going through those sections and creating such a list would still be fine, provided the list of valid symbol characters applied to everyone regardless of system settings.
	</p>
	<p>
		I didn&apos;t know that R offers the option to remove objects from the current environment.
		That sounds especially useful if you&apos;re planning to save the environment for later use.
		You might generate several intermediate variables as you calculate the variables you actually care about, and you might not want to waste storage space on the intermediate variables.
	</p>
	<p>
		The book says that when working on a UNIX system, files with names starting with a full stop are hidden, but claims that on Windows and OS X, these files are only hidden by default.
		They&apos;re only hidden by default on UNIX and UNIX-like systems as well though.
		For example, I have my Debian system configured to display these files, as I don&apos;t like files being hidden from me.
		I like to see the truth about what&apos;s in my directories.
	</p>
	<p>
		The <code>assign()</code> function is interesting.
		I&apos;m not sure what value it has over the normal assignment operator though.
		It was also interesting to see that there are two arrow-looking assignment operators, one for assigning to a variable on the left and the other for assigning to one on the right.
		I encountered the <code>=</code> assignment operator in <a href="/en/coursework/MATH1280/" title="Introduction to Statistics">MATH 1280</a>, though it was never explained to us, and just sort of came up in some of the later assignments.
		Apparently it doesn&apos;t work in all contexts that <code>&lt;-</code> works though, according to what I read this week, so I wonder in what cases it doesn&apos;t work.
	</p>
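To make the difference concrete, here&apos;s a small sketch I put together myself (the variable names are my own, not from the book):

```r
# assign() takes the variable name as a string, which is handy when
# the name is computed at run time; otherwise it's equivalent to <-.
assign("total", 42)
total                 # 42

# `=` assigns at the top level...
count = 10

# ...but inside a function call, `=` names an argument rather than
# creating a variable. This passes 1:5 as mean()'s argument x without
# creating a variable called x in the workspace:
mean(x = 1:5)         # 3

# The right-pointing arrow assigns to a variable on the right:
1:5 -> values
```

That function-call case seems to be one of the contexts where <code>=</code> doesn&apos;t behave as an assignment the way <code>&lt;-</code> would.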
	<p>
		The recycling system for vector-based calculations is new to me.
		I&apos;ve only worked with either vectors of the same length or vectors along with constants.
		Different-length vectors were beyond the scope of the statistical work I did before.
	</p>
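A quick example of recycling, with numbers of my own choosing:

```r
# The shorter vector is recycled to match the longer one's length:
x <- c(1, 2, 3, 4, 5, 6)
y <- c(10, 20)   # behaves as if it were c(10, 20, 10, 20, 10, 20)
x + y            # 11 22 13 24 15 26
```

If the longer length isn&apos;t a whole multiple of the shorter one, R still recycles, but it issues a warning.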
	<p>
		Based on some of the examples given in the book, we can deduce that functions in R return by reference, at least in many cases.
		You can actually assign a new value to the output of the <code>length()</code> and <code>attr()</code> functions, for example.
		It&apos;s an unintuitive way to get the job done, but it looks like no other option is provided in some cases.
		It&apos;s just something you&apos;ve got to get used to.
	</p>
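For example, with a toy vector of my own devising:

```r
x <- 1:5
length(x) <- 3                # assigning to length() truncates x to 1 2 3
attr(x, "note") <- "trimmed"  # assigning to attr() attaches an attribute
x
```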
	<p>
		R&apos;s concept of an array is very different than the arrays we know from other languages.
		The arrays from other languages are more comparable to R&apos;s vectors.
		An array in R is sort of a cross between a one-dimensional vector and an <var>x</var>-dimensional vector.
		It seems to store values in one long dimension, but certain functions treat the array as having <var>x</var> dimensions, where <var>x</var> is a positive integer (perhaps even just one).
		Arrays can also be indexed using their <var>x</var>-dimensional positions.
		Remember to index from one though, as R counter-intuitively does not index from zero as a normal language does.
	</p>
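Here&apos;s what I mean, using a toy array of my own:

```r
a <- array(1:24, dim = c(2, 3, 4))  # one long vector, viewed as 2x3x4
a[1, 2, 3]   # indexed by its three-dimensional position: 15
a[15]        # the same element by its position in the underlying vector
```

Both index from one, not zero.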
	<p>
		The fact that values in a list have both numeric and string keys is a bit odd.
		I mean, I&apos;ve seen mixed array key types before, but in this case, the string keys refer to the same values as the integer keys.
		Two keys refer to the same value.
		That said, the use of string keys is optional, so only some or even none of the values might have a string key associated with them, though all values in the list have a numeric key.
		Again, remember that R doesn&apos;t index from zero like normal languages do.
		The abbreviated names option of lists was surprising to read about as well.
		It seems useful for when you&apos;re working with R objects in an interactive session.
		However, it seems like a terribly confusing thing to put in a script.
		I&apos;d recommend that any code saved for later use spell out the full names of the keys.
	</p>
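A small example of my own showing the double keys and the abbreviated names (the names are just placeholders):

```r
person <- list(name = "Fred", wife = "Mary", no.children = 3)
person[[1]]    # "Fred" -- the numeric key (counting from one!)
person$name    # "Fred" -- the same value via its string key
person$no.c    # 3 -- `$` matches the abbreviation no.c to no.children
```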
	<p>
		The next document on my list was the one that was 441 pages long.
		I ended up not being able to finish reading that one in time to include my notes on it in this learning journal submission.
		I&apos;ll be stuck at home for a week though starting tomorrow, as I&apos;ll be recovering from surgery, so I&apos;ll finish reading it then and include my thoughts on it in next week&apos;s learning journal submission.
	</p>
	<p>
		The next page on the reading list blocked me.
		I wasn&apos;t able to access it at all.
	</p>
	<img src="/img/CC_BY-SA_4.0/y.st./coursework/CS4407/forbidden.png" alt="Forbidden" class="framed-centred-image" width="476" height="222"/>
	<p>
		Finally, the last of the assigned reading material was on data mining.
		Data mining can be used in positive ways, but when I think of the term, I tend to think of how tech giants mine users for data to sell to their customers, the advertisers.
		We&apos;re just the companies&apos; product, and that&apos;s not okay.
	</p>
	<p>
		The first thing to really stand out to me was that the document talked about how you must build a predictive model using the data.
		This seems like the perfect sort of task for machine learning.
		Then again, maybe I&apos;m only noticing it now because I&apos;m in a course on machine learning, so machine learning is on my mind more than it&apos;s ever been.
	</p>
	<p>
		Next, it discussed how patterns don&apos;t always depict a direct cause-and-effect relationship, but merely a correlation.
		Two behaviours might both be effects of some other, unseen cause.
		You&apos;ve also got to weigh costs and benefits of different models.
		Some models are faster, but others are more accurate.
		Do you need quick answers, or do you need to act with precision?
	</p>
	<p>
		It&apos;s interesting to see how On-Line Analytical Processing can be used to confirm or deny relationships between variables.
		Queries can be run against the database to test a hypothesis, and if it&apos;s wrong, a new hypothesis can be searched for.
		This isn&apos;t like data mining though, as data mining instead uncovers patterns for you.
	</p>
	<p>
		One major area data mining seems to be good for is acquiring and retaining customers.
		Unlike the data mining I traditionally think about, this use of data mining doesn&apos;t sell your information to someone else, but instead uses it in-house.
		You&apos;re not the product, but the customer, as you should be.
		It&apos;s still creepy to think about this data being kept about you, don&apos;t get me wrong.
		But at least the people holding the data about you are the ones you gave the data to.
		When you buy something, you know the company you bought from knows who you are, when you bought it, and what you bought.
		You give them that info.
		It&apos;s only if and when they turn around and sell that information to someone else that people you didn&apos;t even know existed suddenly seem to know everything about you.
		Detecting fraudulent purchases on payment cards is another creepy, yet useful application of data mining.
		Again, you know your credit/debit card servicer knows all your purchases, and you&apos;ve chosen to give them that information.
		It&apos;s creepy to think they&apos;re keeping all those logs on you and building a profile about you, but the value such a profile provides you as the customer is very likely worth it.
	</p>
	<p>
		Some medical uses for data mining are brought up, but if these are used in the United States, I don&apos;t trust their validity.
		The $a[US] medical industry is known to shoot down working products in favour of products that make more money.
		You expect that from a department store or factory, but in the medical industry, you assume the goal is to aid in your health.
		Sadly, this isn&apos;t the case, as the corruption in that industry&apos;s gotten pretty bad.
		It&apos;s especially bad with pharmaceuticals.
		Don&apos;t get me wrong, I go to the doctor too when I need something.
		In fact, I have a surgery scheduled for the beginning of next week.
		I shy away from their drugs though, if I can at all get away with it.
	</p>
	<p>
		Neural networks are much less complex than I imagined.
		They&apos;ve only got three layers, and it&apos;s easy to see the relationship each node has with the others.
		I was expecting something closer to brain-level complexity, with a name like that.
		Or at least, the simpler neural networks don&apos;t seem too complex.
		A neural network can have more than just the three layers if there&apos;s more than one hidden layer, which raises the level of complexity.
		The concept of over-fitting the network to the data is an issue I wouldn&apos;t have thought of.
		It makes perfect sense that the network would eventually be a perfect fit for the specific data set, it&apos;s just not something that would have come to mind on its own.
	</p>
	<p>
		Decision trees are simple to follow and understand, though I&apos;m not clear on how they can be generated via data mining.
		Aside from the obvious solution of brute force, of course.
		Like the article says though, they&apos;re good at explaining what they&apos;re doing, unlike neural networks.
		Like neural networks, there&apos;s the possibility of over-fitting the tree to a particular data set, so you need to either prune the tree or limit its growth.
	</p>
	<p>
		Rule induction seems promising.
		I&apos;m not sure how it can be performed in an automated way, but I think it should be possible.
		Again, I guess brute force can be used to find the weights after the basic rules have been established, and rule-establishment is simply a matter of pattern recognition.
		k-nearest neighbour seems reasonable, but like the article says, you can see why it&apos;d take a lot of memory and computation to use.
		I also worry that as new data points are added, the data might lose some accuracy.
		Basically, you&apos;re inferring the data of new data points based on known-valid data points, then inferring even newer data points based on a mix of known-valid data points and inferred data points.
	</p>
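To convince myself I understood the idea, I sketched a toy k-nearest-neighbour classifier in R (this is my own naive version, not a real implementation like the one in the <code>class</code> package):

```r
# Classify `point` by majority vote among its k nearest training points.
knn_predict <- function(train, labels, point, k = 3) {
  # Euclidean distance from `point` to every row of `train`:
  dists <- sqrt(rowSums((train - matrix(point, nrow(train), ncol(train),
                                        byrow = TRUE))^2))
  nearest <- order(dists)[1:k]             # indices of the k closest rows
  names(which.max(table(labels[nearest]))) # most common label among them
}

train  <- rbind(c(0, 0), c(0, 1), c(1, 0), c(5, 5), c(5, 6), c(6, 5))
labels <- c("low", "low", "low", "high", "high", "high")
knn_predict(train, labels, c(0.5, 0.5))  # "low"
knn_predict(train, labels, c(5.5, 5.5))  # "high"
```

Even in this tiny sketch you can see the cost the article mentions: the whole training set has to be kept around and scanned in full for every single prediction.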
	<p>
		The assignment for the week was pretty time-consuming.
		I had to use nearly an entire day off from work to prepare my submission for it.
		I spent so much time compiling the list of commands I needed to run, taking screenshots, correcting for human error (both my own and that of the textbook&apos;s author), and formatting my submission that I didn&apos;t really get to analyse what I was actually telling the computer to do.
		I learned some new things about R, but not as much as I could have if I wasn&apos;t so busy this week.
		I should get a chance to review what I&apos;ve done next week though, when I have nothing but time.
	</p>
	<h3>Discussion post drafts</h3>
	<p>
		The learning journal instructions say to include the drafts of the discussion posts for the week, so here they are:
	</p>
	<blockquote>
		<p>
			Supervised learning is a style of machine learning in which the answer for each data point is known.
			The algorithm can collect data and learn, but the predictions made by the machine afterwards can actually be tested for accuracy using the known answers (Edelstein, 1998).
			Often, this is done by having the machine learn using a small data set, then running it on a large data set to see how accurately it handles the new data points.
		</p>
		<p>
			On the other hand, with unsupervised learning, there are no known answers.
			The machine is given a data set to work with, but the tester doesn&apos;t have an answer sheet to compare the machine&apos;s results against (Edelstein, 1998).
			Unsupervised learning can be used to cluster data points, for example.
			We might not know the answer before running the algorithm, but we can still kind of see if the computer got close to the right answer.
		</p>
		<p>
			By the way, I&apos;m unclear on what to do for the assignment this week.
			It says to follow the instructions given Section 2.3 of <a href="https://www-bcf.usc.edu/~gareth/ISL/ISLR%20First%20Printing.pdf">Driver.dvi - ISLR First Printing.pdf</a>.
			However, that section doesn&apos;t seem to give any instructions.
			It has some example commands, but it also has their output.
			If we submit our results of running those commands, it&apos;d pretty much just be a copy and paste of what the book already contains.
			That doesn&apos;t seem like much of an assignment, so I can&apos;t help but feel there&apos;s a mistake and we&apos;re supposed to do something besides run those commands and provide their output.
		</p>
		<div class="APA_references">
			<h3>References:</h3>
			<p>
				Edelstein, H. A. (1998). Introduction to data mining and knowledge discovery. Retrieved from <a href="http://twocrows.com/intro-dm.pdf"><code>http://twocrows.com/intro-dm.pdf</code></a>
			</p>
		</div>
	</blockquote>
	<blockquote>
		<p>
			I like your example with character recognition.
			Like you said, character recognition is learned through supervised learning.
			The character is shown to the computer as an image, and the character as a byte sequence is given to the computer as well.
			Each image given to the computer will be slightly different, and it&apos;s up to the machine to decide what attributes of the image determine that a given character is, for example, a zero.
			This is exactly the sort of task that would be impossible for unsupervised learning.
		</p>
		<p>
			It sort of reminds me of how humans perform the same task.
			As children, we&apos;re subjected to symbols we&apos;re told are the same set of 52 letters (sometimes more, if English isn&apos;t your native tongue), as well as several other useful characters such as digits and punctuation.
			These characters come in all sorts of fonts though, and can be seen from all sorts of angles.
			In school, the teachers probably try to keep the letters mostly consistent, but as we walked through stores, saw signs outside such as street signs and billboards, and even saw products within our own homes, all these fonts were everywhere.
			And then there was handwriting, which differs from person to person.
			We had to learn what attributes distinguished each character, and we use that our entire lives.
		</p>
		<p>
			The assistant manager at my workplace has really elegant handwriting, and their upper-case &quot;E&quot;s are by far the fanciest I&apos;ve ever seen.
			I&apos;m working on imitating them and adding them to my own handwriting, actually.
			They look hardly at all like the &quot;E&quot;s I&apos;ve seen in any font ever, yet we all recognise them as &quot;E&quot;s without a second thought.
			Despite the loopiness of the letter, there&apos;s something about it that distinctly looks like an &quot;E&quot; and not a backwards three.
			Yet if you were to look at a backwards three, you&apos;d recognise it as such and not think it was an &quot;E&quot;.
			Their letter &quot;E&quot; retains whatever points we learned that distinguish it as an &quot;E&quot;.
		</p>
		<p>
			Your data-clustering example helped me understand unsupervised learning much better.
			I was thinking that with unsupervised learning, there was something we could set the computer to learn without yet knowing the answer ourselves, and that this actually trained the computer for something.
			I think I understand now that unsupervised learning isn&apos;t really learning at all, or at least not the way we traditionally think about learning.
			With supervised learning, we&apos;re basically training the computer.
			We&apos;re teaching it.
			The computer learns.
			With unsupervised learning, the computer instead tries to make sense of the data we give it, but doesn&apos;t learn anything that it can apply later.
			Unsupervised learning is less like learning and more like analysis.
		</p>
	</blockquote>
	<blockquote>
		<p>
			It&apos;s worth noting too that in supervised learning, the known right answers aren&apos;t used exclusively to check the computer&apos;s results, but also to build the knowledge base used to predict values.
			The computer is basically shown a number of data points and associated right answers.
			Using that, the computer builds up information on what it means for an answer to be correct, which is what it uses for later data points it encounters.
			In other words, the known right answers are used both to train the computer and to check the results of the computer&apos;s training.
		</p>
		<p>
			You make a good point that unsupervised learning is good for finding patterns.
			That may be the key difference in the types of problems supervised and unsupervised learning deal with.
			In supervised learning, the pattern is known and we&apos;re trying to teach that pattern to the computer so it can solve later problems.
			In unsupervised learning, we don&apos;t know the pattern and we&apos;re asking the computer to try to find the pattern for us.
		</p>
	</blockquote>
	<blockquote>
		<p>
			I like your example.
			It illustrates well how supervised learning and unsupervised learning can sometimes be applied to the same problem.
			I&apos;d thought the two applied to mutually-exclusive types of problems, but with some clever thinking, you can often find alternative solutions using alternative tools.
		</p>
	</blockquote>
</section>
<section id="Unit2">
	<h2>Unit 2</h2>
	<p>
		The reading assignment for the week is as follows:
	</p>
	<ul>
		<li>
			<a href="https://christof-strauch.de/nosqldbs.pdf">NoSQL Databases - nosqldbs.pdf</a>
		</li>
		<li>
			Chapters eight through fourteen of <a href="https://cran.r-project.org/doc/manuals/R-intro.html">An Introduction to R</a>
		</li>
		<li>
			<a href="https://my.uopeople.edu/brokenfile.php#/290113/user/draft/702716931/An%20Overview%20of%20Data%20Warehousing%20and%20OLAP%20Technology.pdf"><em>(No file data)</em></a>
		</li>
		<li>
			<a href="https://my.uopeople.edu/pluginfile.php/389189/mod_book/chapter/179438/Data%20mining%20tools.pdf">WIDM-24_LR - Data mining tools.pdf</a>
		</li>
		<li>
			<a href="https://thesai.org/Downloads/SpecialIssueNo3/Paper%204-A%20Comparison%20Study%20between%20Data%20Mining%20Tools%20over%20some%20Classification%20Methods.pdf">A Comparison Study between Data Mining Tools over some Classification Methods - Paper 4-A Comparison Study between Data Mining Tools over some Classification Methods.pdf</a>
		</li>
		<li>
			<a href="http://gifi.stat.ucla.edu/janspubs/2009/reports/deleeuw_R_09a.pdf">deleeuw_R_09a.pdf</a>
		</li>
		<li>
			<a href="http://leavcom.com/pdf/NoSQL.pdf">NoSQL.pdf</a>
		</li>
		<li>
			<a href="http://melekirmak.com/trepon/images/260220121521451.pdf">Trepon Images 260220121521451 Pdf için bir şey bulunamadı</a>
		</li>
		<li>
			<a href="http://web.ist.utl.pt/~ist13085/fcsh/sio/casos/BI-Tech2.pdf">Técnico Lisboa - Página Não Encontrada / Page Not Found</a>
		</li>
	</ul>
	<p>
		One of these is a link to the university&apos;s own <code>/brokenfile.php</code> file.
		It&apos;s not even a redirect from an actual broken file either, I checked.
		The $a[URI] provided in the hyperlink is that of the broken file error page.
		An error page, by the way, that doesn&apos;t actually display any sort of error message.
		It just refuses to send any data.
		To add to that, two of the links are outdated and lead to <code>404</code> pages.
	</p>
	<p>
		I&apos;d thought that NoSQL was a particular database, such as MySQL or SQLite.
		I was guessing that with a name like that, it probably didn&apos;t use a structured query language, but at the same time, the fact that it even mentions $a[SQL] made me think it likely tries to emulate databases that do use $a[SQL] or something.
		It turns out I was way off.
		NoSQL isn&apos;t a product at all, but instead a term coined to refer to non-relational databases.
		Most if not all relational database software options use a form of $a[SQL] to allow users to tell it what to store and retrieve, so NoSQL refers to the fact that a given database doesn&apos;t have that.
		It&apos;s a misleading term, as while $a[SQL] and relational databases are often used together, they&apos;re actually very separate concepts.
		&quot;Relational&quot; refers to the fact that the database tables may relate their data to data on other tables, while $a[SQL] is the language (which admittedly is usually not implemented according to the standard) used to speak with the database software.
		I could build relational database software that doesn&apos;t use any form of $a[SQL], yet it somehow wouldn&apos;t be considered a NoSQL database.
		In other words, it&apos;s a complete misnomer.
	</p>
	<p>
		Non-relational databases are useful because they handle very efficiently the types of data relational databases are particularly bad at handling.
		Relational databases need rigid structure.
		Otherwise, you can&apos;t figure out what data relates to what other data in another table.
		Without relations getting in the way, you can store whatever you want in a non-relational database.
		Examples given in the text include emails, word-processing documents, and media.
		Some non-relational databases can also be distributed across multiple machines, something that relational databases don&apos;t seem to handle well.
		Non-relational databases also run faster, due to less happening when queries are run.
		Relational databases have to check to see if an update to the data would break the database&apos;s rules, and if it won&apos;t, they sometimes have to make further changes to other parts of the data that are implied by the change made by the query.
		A non-relational database has neither of these added side-effects of data update.
	</p>
	<p>
		Non-relational databases do have their drawbacks though.
		Without the $a[SQL] provided by relational databases, non-relational databases rely on directly programming queries through the database&apos;s $a[API].
		Supposedly, this is more difficult, though I have difficulty with $a[SQL].
		I think I might do better with direct $a[API] calls.
		You could claim that with $a[SQL], you have a unified language with which to communicate with all relational databases, but that would be a lie.
		No one follows the $a[SQL] standard, so each database vendor provides their own non-standard dialect of $a[SQL].
		Whether you&apos;re learning a new $a[SQL] dialect or new $a[API] functions, you&apos;re not able to jump right from one database software to another.
		Due to the lack of rule-enforcement, non-relational databases are also easier to enter inconsistent or otherwise invalid data into, which can be problematic.
		Likewise, with non-relational databases only gaining popularity recently, there&apos;s a lack of tools for administering them at this time.
	</p>
	<p>
		The next article I read was about which statistical tools replaced which other statistical tools and when.
		It was more of a history lesson than anything useful.
		It did bring up that the field of science is moving toward open source as a means of reproducibility.
		That&apos;s good to hear.
		With open source tools, we can better learn how the tools themselves function, and thus how the data we use the tools on behaves.
		And isn&apos;t that what science is all about?
		Really, closed source tools have no place in science or any sort of school.
		They just hide away their implementations so you can&apos;t learn from them, making them counterproductive in such environments.
	</p>
	<p>
		I didn&apos;t really understand the paper on the study comparing data-mining tools.
		It didn&apos;t help that the table data wasn&apos;t displaying correctly.
		I think it&apos;s a bug in Firefox; I&apos;ve had that issue with other $a[PDF]s on the Web as well.
	</p>
	<p>
		The next paper I read both reiterated the history of data-mining software and broke down what to look for when choosing data-mining software from the multitude of options.
		Again, I don&apos;t think such history lessons are useful, but the point-by-point discussion of what features to look for had value.
		Of course, if you can&apos;t trust your tools, you can&apos;t trust their results, so licensing and source code availability will always be the most important thing to consider when I choose my software.
		The wrong license is a deal-breaker for me.
		The paper did discuss licensing and its importance, which was good to see.
		Once you rule out the badly-licensed software, the other points can be used to figure out which option best suits your specific project.
		Different data-mining programs cater to different groups of users and different use cases.
		Sparsity of the data, how many dimensions it has, and whether or not you&apos;re trying to cluster it are all things to think about.
		You also need to decide whether your task is better suited to supervised or unsupervised learning.
		Do you have a training set to work with that already has an answer key?
		You also need to decide whether you want a command line interface or a graphical interface.
		Less techy people will want a $a[GUI], but people who know how to work a computer well will appreciate the ease of automation that comes with a command line interface.
	</p>
	<p>
		It seems a number of open and widely-implemented data exchange formats are available for importing and exporting data.
		These should of course be used ahead of proprietary formats.
		The major problem with proprietary formats is that you can&apos;t switch tools and keep your data.
		You might want to replace your software with better software from another vendor at some point, but even if you don&apos;t, different tools work better for different use cases.
		You might therefore want to use the same data with multiple tools.
		Open formats make this possible.
		You also need to look at what system you plan to use your tools with when deciding which tools to choose.
		That said, if you choose the open source tools as I recommended, it&apos;s very likely your platform will be supported.
		People have a tendency to port open source tools to their platform, then make the necessary code changes available to everyone, making support for that project on that operating system publicly available.
		This is different from proprietary tools, in which you have to depend on the vendor themselves to provide support for a given platform (and often, proprietary vendors don&apos;t bother to support very many platforms).
	</p>
	<p>
		The 149-page paper begins by telling us that the NoSQL term originally referred to relational databases that lacked $a[SQL].
		That&apos;s a much more intuitive and reasonable usage of the term.
		However, the term later degraded to instead refer to non-relational databases, which is a confusing use of the term, as I discussed above.
		Seriously, they should have coined their own term instead of hijacking a term already in use.
	</p>
	<p>
		Most of the benefits of non-relational databases discussed by this paper were already covered by the 3-page paper, but the longer paper also brought up that servers can be brought offline or even crash without taking the whole database down with them.
		Obviously, the data stored on that particular server becomes unavailable, but the rest of the data on the other servers remains accessible.
	</p>
	<p>
		It also talks about how one-size-fits-all database solutions are wrong.
		In general, one-size-fits-all solutions for <strong>*anything*</strong> are wrong.
		There are notable exceptions to this, but if you think your solution fits every use case, you&apos;ve probably forgotten to consider several use cases.
		It&apos;s also brought up that one-size-fits-all solutions tend not to perform as quickly or as efficiently as solutions tailored to the actual use case you&apos;re working with.
	</p>
	<p>
		A great point is made too about abstracting away a database&apos;s distributed nature.
		If applications don&apos;t realise the database is distributed, they can&apos;t make decisions that take that fact into account.
		Network latency and downed servers cannot be abstracted away, so both can appear as missing data to the application if the application isn&apos;t made aware that it&apos;s not speaking with a single database instance on a single server.
	</p>
	<p>
		One issue I had with this paper is that it says we should do everything in $a[RAM] and have zero {$a['I/O']}.
		Keeping everything in memory is one thing.
		Assuming you have enough $a[RAM], it&apos;ll speed things up considerably.
		However, you still need disk writes.
		If the power goes out or your system crashes, you&apos;ll want a semi-recent copy of the data to read back into memory once you&apos;re back online.
		You might not want to write to disk with every change, but you will want to write changes to disk periodically.
	</p>
	<p>
		The paper also discusses that while non-relational databases may be developer-friendly, they&apos;re not as administrator-friendly.
		They lack the sort of query language that makes ad-hoc access of data easy.
		With only the $a[API] to update or view data, administrators have to write a program each time they want to make a change to the data or even just check on existing data.
		Of course, this could easily be accounted for if developers wrote some sort of administration panel for the administrators to manage the database with.
		However, the paper goes on to explain that access of data in a distributed system is more difficult too.
		I don&apos;t really have a solution to that problem.
		Exporting data from a distributed database also comes with difficulties.
	</p>
	<p>
		I ended up not being able to get back to that 441-page document from last week.
		I thought life would just sort of stop for me during the week of the surgery, and I&apos;d have all the time in the world for coursework.
		That didn&apos;t happen.
		I still had things I needed to get done, and while I did have the week off from work, I also slept a lot more.
		Part of that was because I needed more rest to heal properly, but another part is that I usually don&apos;t get enough sleep because I&apos;m far too busy.
		For the first time in a long while, I actually got the sleep I needed.
		I also found myself too tired to get any coursework done unless I kept my body moving.
		I had to get out and take a walk each day, or I didn&apos;t feel well enough to even attempt any reading assignments.
		And those walks took time too.
	</p>
	<p>
		Research for the paper this week also took longer than expected.
		Finding open-access journals is difficult, and we needed peer-reviewed journals for our papers this week.
		The assignment instructions recommended checking one of the libraries that lends their resources to this school, but those sites don&apos;t allow outsiders to view their contents.
		This presents two major problems.
		First of all, search engines can&apos;t get in there.
		These libraries have internal search features, but they rather suck.
		I can spend hours trying to find something in there, and turn up empty-handed.
		Without open access, regular search engines can&apos;t get in, and without using an actual search engine, you can&apos;t reasonably find what you need.
		Secondly, without open access, it doesn&apos;t matter what I cite anyway, because no one reading my paper will be able to check my sources for validity or further information.
		Citing pages that exist only in walled gardens doesn&apos;t do a lick of good.
		That makes the university&apos;s partner libraries utterly and completely useless.
		So I had to look elsewhere.
		I never did find any published journal sources to describe the specific analytical packages we were told to research for our papers, either.
		I ended up having to use the main websites for these projects to get a feel for them.
		In my defence though, we were told to use at least one journal resource, but not required to make all our sources come from journals.
	</p>
	<p>
		It&apos;s also worth noting that such journal sources themselves are often incredibly biased.
		I&apos;m not sure how bad it is in the tech industry, but the medical industry in the United States is completely corrupt.
		In the medical industry, good medical science often gets swept under the rug if it&apos;s not profitable.
		If it&apos;s not going to make some corporation rich, big pharmaceutical companies will often make sure word doesn&apos;t get out, even to the point of threatening doctors and scientists that would otherwise try to bring it to light.
		And many drugs aren&apos;t actually anywhere near as effective as they&apos;re claimed to be; they&apos;re just in use because they make the pharmaceutical companies money.
		These money-makers are what gets published, not the things that actually work best.
		It wouldn&apos;t surprise me if the tech industry suffered similar issues.
		The fact that information comes from published journals doesn&apos;t actually make it any more valid than information from other sources, and in fact often makes the information <strong>*less*</strong> valid.
		There&apos;s a reason I don&apos;t often cite published journals except for assignments that specifically request it.
	</p>
	<p>
		Anyway, I found the reading material didn&apos;t even mention much of what was on the essay assignment and absolutely none of what was on the discussion assignment was mentioned.
		Normally, I focus on trying to complete the reading material first, or at least a good chunk of it, before moving on to the other assignments.
		That way, I&apos;m better prepared for them.
		I see that&apos;s not going to work in this course.
		I need to work on the discussion assignment and written assignment first.
		Then, in the time remaining, I need to read as much as I can from the reading assignment.
		Starting next week, that&apos;s the approach I&apos;ll take.
	</p>
</section>
<section id="Unit3">
	<h2>Unit 3</h2>
	<p>
		The reading assignment for this week is that 441-page document again: <a href="https://www-bcf.usc.edu/~gareth/ISL/ISLR%20First%20Printing.pdf">Driver.dvi - ISLR First Printing.pdf</a>.
		Except this time, we&apos;re only supposed to read chapter 3 and section 4.3 instead of the entire thing.
		I went back and took another look at <a href="#Unit1">Unit 1</a>&apos;s reading assignment, and it seems we were actually only supposed to read chapter 2.
		Somehow, I missed the chapter designation, so I thought the whole book&apos;d been assigned to us to read.
		Wow.
		Now I feel foolish.
		It&apos;s no wonder the reading assignment seemed like far too much for me to handle that week.
	</p>
	<p>
		The first equation presented to us this week was <var>Y</var> ≈ <var>β<sub>0</sub></var> + <var>β<sub>1</sub></var><var>X</var>.
		I immediately recognised that as the slope/intercept equation (<var>Y</var> = <var>m</var><var>x</var> + <var>b</var>), but with different variable names, a different arrangement, and an approximate equality instead of a true equality.
		So I figured this week shouldn&apos;t be too difficult.
	</p>
	<p>
		One thing that struck me though was that the book next discussed estimating the unknown variables by taking samples from various markets.
		If you use different markets, there are many more variables at play that you&apos;re not accounting for.
		For example, the average income level of the population and how gullible they are.
		With high income and high gullibility, people are more likely to buy into the messages presented by your advertisements.
		You obviously can&apos;t account for everything, but if you&apos;re going to take samples from multiple markets and build an equation for dealing with all those markets, you should make sure your markets are fairly similar.
		For example, they at the very least need to have similar average income levels.
	</p>
	<p>
		The &quot;∑&quot; symbol always throws me off.
		I never did formally learn what it means, though I think it involves summing variables denoted with subscripts.
		At first, I thought the equations for minimising the residual sum of squares calculated something for some value that we were trying to minimise, and that you had to guess and check repeatedly until you couldn&apos;t make the number go down any further.
		This would not only be time-consuming, but also not very precise.
		At what precision level do you give up trying to get closer?
		An exact formula for the values we need would be much better.
		After working through the meanings of the variables though, I think that&apos;s exactly what these equations are.
		They give us the values we need directly.
		I think the first of the two equations just sums up the multiplied differences between the average <var>x</var> and specific <var>x</var>es and the average <var>y</var> and specific <var>y</var>s, then divides all that by the sum of squared differences between the average <var>x</var> and specific <var>x</var>es.
		That&apos;s all hard to follow when written out like that, but it&apos;s usable in the equation.
		Then the second equation just takes the output of the first equation and uses it in a straightforward way.
	</p>
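	<p>
		To convince myself I was reading those equations right, I tried them out in R on a made-up data set (this is my own sketch, not code from the book):
	</p>
	<pre><code># A small made-up sample.
x = c(1, 2, 3, 4, 5)
y = c(2.1, 3.9, 6.2, 7.8, 10.1)

# Slope: the sum of (x - mean(x)) * (y - mean(y)), divided by
# the sum of squared differences from mean(x).
beta1 = sum((x - mean(x)) * (y - mean(y))) / sum((x - mean(x))^2)

# Intercept: the second equation just plugs the slope back in.
beta0 = mean(y) - beta1 * mean(x)

# These should match what lm() computes.
coef(lm(y ~ x))</code></pre>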
	<p>
		The book talks about how to calculate the standard error of the population mean: we divide the variance (which is equal to the square of the standard deviation) by the number of data points, then square root the result.
		It says this standard error figure is how far off our least squares line is from the population regression line.
		However, it&apos;s worth noting that, again, this is an <strong>*estimate*</strong>.
		We don&apos;t actually know how far off our line is.
		Also, hypothetically, we could end up with a data sample in which all our data points are above the population regression line or all below.
		These data points might even be in a perfectly-straight line.
		That would give us an exceedingly low standard error, yet our least squares line may still be way off.
		In practice, this isn&apos;t going to happen very often, but it&apos;s important to remember that it could, which means that we must treat this standard error figure as exactly what it is: only an estimate.
		We&apos;re given more-complex formulas for dealing with the standard error of the other estimated numbers, but the idea is the same: we&apos;re only estimating how far off we likely are, not making an absolute determination.
	</p>
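	<p>
		As I read it, the calculation the book describes can be written out in R like this (again, my own sketch):
	</p>
	<pre><code>x = c(2.1, 3.9, 6.2, 7.8, 10.1)
n = length(x)

# Variance is the square of the standard deviation, so divide
# it by the number of data points and square root the result.
se = sqrt(var(x) / n)

# This is the same as the more familiar form.
all.equal(se, sd(x) / sqrt(n))</code></pre>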
	<p>
		The R<sup>2</sup> statistic seems to be very useful in determining how well regression even works for the problem at hand.
		Either due to too high of a variance or due to the data not being all that linear, R<sup>2</sup> may be too low, and that problem can be caught.
		I mean, we can sort of gauge how well our least squares line fits the data when we graph it, but there&apos;s no objectivity in that.
		What one person says is an okay fit, another may say is no fit.
		R<sup>2</sup> allows us to measure the fit precisely, giving us a more-objective look at it.
		We still have to determine how much of a fit we require, which is still pretty subjective, but we can objectively compare the fit of one least squares line to its data against another least squares line and that other line&apos;s data.
	</p>
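	<p>
		Conveniently, R reports this statistic as part of a fitted model&apos;s summary, so comparing fits is easy (a sketch with made-up data):
	</p>
	<pre><code>x = c(1, 2, 3, 4, 5)
y = c(2.0, 4.1, 5.9, 8.2, 9.8)
fit = lm(y ~ x)

# The R-squared value is one element of the model summary.
summary(fit)$r.squared</code></pre>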
	<p>
		When the book started talking about using multiple predictor variables in a single equation, I was morbidly curious as to how it planned to pull that off.
		When they showed us the equation for solving such a system, a simple <var>ŷ</var> = <var>β̂<sub>0</sub></var> + <var>β̂<sub>1</sub></var><var>x<sub>1</sub></var> + <var>β̂<sub>2</sub></var><var>x<sub>2</sub></var> + ... + <var>β̂<sub><var>p</var></sub></var><var>x<sub><var>p</var></sub></var>, I knew this was likely going to be over my head.
		If the equation for solving for the prediction was to just multiply the estimations by the predictor values and add them together, I could see no manageable way to derive those estimations.
		I&apos;m decently good at maths, but I&apos;m not that good.
		Such a formula would be over my head.
		Thankfully, the author seemed to agree with me, and simply skipped over the logistics of how to actually find such values by hand.
		We&apos;re not expected to learn that at this time.
	</p>
	<p>
		The book then uses the example of binary gender as a predictor of credit worthiness.
		First of all, gender isn&apos;t binary.
		Second, they claim the options to be male and female.
		Male and female are sexes, not genders.
		(Sex also isn&apos;t binary, as there are the occasional intersex people, though it&apos;s more often binary than gender is.)
		As someone of non-binary gender, I find it annoying when people try to shove me into boxes I don&apos;t fit into.
		There&apos;s a reason I use the name Alex: it&apos;s a name available to people of any gender as a shortened version of gendered names.
		Alexander is usually for men.
		Alexandra, Alexandria, Alexa, and Alexia are usually for women.
		There&apos;s even Alexis though, a gender-neutral name given to men and women alike.
	</p>
	<p>
		The use of an interaction term was something I wouldn&apos;t&apos;ve thought of.
		I didn&apos;t think linear regression could account for interaction between predictor variables.
		I thought a different type of model would be needed to get a better fit.
		Of course, as finding the values for even just a simple multiple linear regression is above us at this point, we certainly didn&apos;t discuss how to find the proper value for the interaction term, either.
	</p>
	<p>
		I don&apos;t understand the syntax used for the <code>lm()</code> function.
		I&apos;m not sure what the <code>~</code> operator is in R, but ignoring that, the bigger problem is that we use a plus sign in the first argument to combine the variables.
		This should add the variables together before passing the result to the function.
		Yet somehow, it seems R is keeping these as separate data sets for it to operate on within the function.
		That makes no sense to me whatsoever.
		It&apos;s like the expression, and not the value of the expression, is being passed into the <code>lm()</code> function.
		Actually ...
		Is that actually what&apos;s going on?
		Is the <strong>*expression*</strong> being passed and not the value?
		I mean, part of the output of the <code>lm()</code> function is my expression repeated back to me.
		Maybe R passes expressions and not values as arguments.
	</p>
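	<p>
		A quick experiment in R seems to back up my suspicion: the first argument to <code>lm()</code> is a formula, and R stores a formula as an unevaluated expression rather than computing the sum (my own test, not something from the reading):
	</p>
	<pre><code>f = y ~ x1 + x2

# The expression is captured, not evaluated; y, x1, and x2
# need not even exist yet.
class(f)      # "formula"
all.vars(f)   # the variable names, kept separate</code></pre>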
	<p>
		During the quiz this week, my Internet connection got really spotty.
		The connection stayed up long enough to load the page, but <strong>*not*</strong> the images on it.
		As a result, I couldn&apos;t see the graphs I needed, and because the equations we were supposed to use were also included in image form instead of basic text, I couldn&apos;t see those either.
		As the page had already loaded, my timer had started, but I couldn&apos;t actually work on the quiz!
		It took me about half an hour to get the connection back up long enough to load everything I needed.
		I had no idea of the precise time I&apos;d started the quiz, so I had to just rush through it as quickly as I could, for fear I&apos;d miss my deadline.
		Finally, after I&apos;d submitted my answers, I was told I had about thirteen more minutes to finalise them.
		I could have gone back over my work a little bit, but I had no way to know if my connection would die again and I&apos;d lose my chance to finalise any answers, ending up with a grade of zero.
		So, I submitted my rushed answers.
		It&apos;s a good thing I did, too.
		Within a couple minutes, my connection died again, and I couldn&apos;t get it back up in time.
		Needless to say, I didn&apos;t do well on the quiz at all.
		At least I got above a 75% though, and I believe that&apos;s passing.
		I&apos;ll be glad once I can afford a better Internet connection, though I won&apos;t be able to spare money for that until I get a couple other things taken care of first.
	</p>
	<p>
		I&apos;m not the worst off this week though.
		One student submitted the wrong paper last week, so for the grading this week, they&apos;re not going to do well.
		I think what happened is they submitted the paper for this course to their other course, and the paper for their other course to this course.
		It&apos;s either that or they submitted the same paper to both courses, but that seems unlikely.
		If the papers were simply reversed as I think they were, they&apos;re not going to do well in <strong>*either*</strong> of their courses this week.
	</p>
</section>
<section id="Unit4">
	<h2>Unit 4</h2>
	<p>
		I struggled with the unit assignment last week.
		I just didn&apos;t understand what it was asking for at times.
		And when I did understand what was asked of us, I sometimes didn&apos;t know what the best way to accomplish the goal was.
		So it was helpful to get to read the answer key when grading other students&apos; work this week.
		One of the things I tend to do is compare my own work to the answer key, as if to grade that as well.
		I don&apos;t do it every time, but I do it when either I struggled with the assignment or I see something in the answer key that I don&apos;t remember doing in my own work.
	</p>
	<p>
		The reading assignment for this week is the following:
	</p>
	<ul>
		<li>
			<a href="https://rstudio-pubs-static.s3.amazonaws.com/316172_a857ca788d1441f8be1bcd1e31f0e875.html">kNN(k-Nearest Neighbour) Algorithm in R</a>
		</li>
		<li>
			<a href="https://ww2.coastal.edu/kingw/statistics/R-tutorials/logistic.html">R Tutorials--Logistic Regression</a>
		</li>
		<li>
			Chapter Four of <a href="https://www-bcf.usc.edu/~gareth/ISL/ISLR%20First%20Printing.pdf">Driver.dvi - ISLR First Printing.pdf</a>
		</li>
	</ul>
	<p>
		Ugh.
		My power and Internet connection went out on me on the day I finally had time to get to the reading assignment.
		It&apos;s been a busy week.
		Thankfully, I&apos;d already loaded the above pages in my Web browser on day zero, so I was able to read them even without an Internet connection, as my laptop&apos;s battery slowly drained.
		I feared I might not be able to complete my unit assignment though, even having three days left, due to the lack of power and connection.
		Thankfully, I&apos;d gotten everything in both my courses done on day zero, save for the reading assignment in this course, the discussion assignments in both courses, and the unit assignment for this course.
		Both discussions were well under way by this point in the week, so really all I needed to do was wait and make my daily posts.
		All I had left to really focus on were the unit assignment for this course and the reading assignment that would make the unit assignment understandable.
		I normally keep my lights off during the day, but I turned one on just so I&apos;d notice when the power came back on.
		It really didn&apos;t take as long as I&apos;d feared.
	</p>
	<p>
		The k-nearest neighbour algorithm seems pretty simple.
		I mean, if you were to implement it, it might get complex because you&apos;d need to work with programming circles and whatnot, which would require use of complex formulas and pi, then detecting what points fell into the circle, what size your circle needed to be, and a bunch more.
		Or maybe not.
		If you cycled through every single data point, you could simply calculate its distance from the prospective data point using the Pythagorean theorem.
		Then just pick the data points with the lowest resulting numbers, as they&apos;d be the nearest neighbours.
		The article calls the distance calculated this way the &quot;Euclidean distance&quot;, but the formula given is clearly the Pythagorean theorem, so you only really need middle school maths to figure out the distance.
		Instead of the Euclidean distance, the article says the Hamming distance is used in some cases, but no explanation of what that is is given.
		But anyway, the concept is rather simple.
		It&apos;s a classification problem, but it&apos;s clearly a case of supervised learning, not unsupervised learning.
		You&apos;ve got your training set, and you use it to get the computer to make educated guesses about unclassified data points.
		I would assume the new data points get added to the training set, but that&apos;d taint the training data with guesses, so maybe not.
	</p>
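	<p>
		To check my understanding, I sketched that loop-over-everything approach in R (my own sketch, assuming two numeric features and a class label):
	</p>
	<pre><code># Tiny made-up training set: two features and a class label.
train = data.frame(x = c(1, 2, 8, 9), y = c(1, 2, 8, 9),
                   label = c("A", "A", "B", "B"))
new_point = c(2, 1)
k = 3

# Pythagorean theorem, i.e. the Euclidean distance to every point.
d = sqrt((train$x - new_point[1])^2 + (train$y - new_point[2])^2)

# Pick the k nearest neighbours and let them vote.
nearest = train$label[order(d)][1:k]
names(which.max(table(nearest)))   # the majority class</code></pre>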
	<p>
		To use the k-nearest neighbour algorithm, you obviously need to decide on a value for <var>k</var>.
		Apparently, this value is normally set to the square root of the number of available data points in the training set.
		It&apos;s not really explained why this is, but I guess it offers a good balance between taking too little into account and taking too much into account.
		If too few neighbours are used, trends in the data won&apos;t really be considered.
		If too many are used, it&apos;s basically just a popularity contest between data point types.
		Another thing mentioned is that to prevent one property of the data points from being weighted more heavily than the others, some sort of scaling should be used on all the data points&apos; properties.
		That would be where the min-max scaling and z-scores from the discussion assignment come in.
	</p>
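	<p>
		Both of those scaling methods are one-liners in R (my own sketch, not from the reading):
	</p>
	<pre><code>x = c(10, 20, 35, 50, 90)

# Min-max scaling squeezes everything into the range 0 to 1.
minmax = (x - min(x)) / (max(x) - min(x))

# Z-scores centre on the mean and divide by the standard
# deviation; the built-in scale() function does the same thing.
z = (x - mean(x)) / sd(x)</code></pre>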
	<p>
		Speaking of the discussion assignment, for whatever reason, we were limited in how long our responses could be this week.
		On the one hand, writing less meant I couldn&apos;t go into details really.
		I felt like I hardly wrote anything, yet I only came out four words under the limit.
		On the other hand, it made my job easier.
		I&apos;ve got a lot that needs to get done each week, so keeping the discussion post in this course down to a minimum saved me time and effort.
	</p>
	<p>
		The article also says the k-nearest neighbour algorithm is slow and doesn&apos;t always give much insight.
		I guess I can see that.
		If we&apos;re looping over all the training data each time, that&apos;d really slow down our implementation.
		As for lack of insight, guesses at the classification of new data points don&apos;t seem very helpful in many cases.
		I can see it having potential for some tasks though.
		This is a course on data mining, and the phrase &quot;data mining&quot; has a very negative connotation for me, so please excuse my negative example.
		But if you&apos;ve creepily recorded data on past customers and want to classify who might buy a different product you offer, you might classify customers as having bought or not bought your item, then use it to figure out if it&apos;s worth it to try to advertise that item to a pending customer before they finalise their order.
		And if not, try that same thing with another product, and so on, until you figure out which product you should try to obnoxiously up-sell to them.
	</p>
	<p>
		The unit assignment was much easier than last week&apos;s.
		There was a bit of trial and error involved, but the intent of what was wanted was clear, and I built my solutions to meet the goals.
		Last week, I didn&apos;t really understand what the assignment was getting at, most likely meaning I hadn&apos;t absorbed the material for that week even close to as well as I should have.
		I learned some useful techniques while trying to format my assignment, too.
		I found data points can be made invisible on a graph in R, and that labels can be added right where the data points would be if visible.
		This allows, for example, for different symbols to be used to represent different types of data points.
	</p>
</section>
<section id="Unit5">
	<h2>Unit 5</h2>
	<p>
		The reading assignment for the week is as follows:
	</p>
	<ul>
		<li>
			Chapter Eight of <a href="https://www-bcf.usc.edu/~gareth/ISL/ISLR%20First%20Printing.pdf">Driver.dvi - ISLR First Printing.pdf</a>
		</li>
		<li>
			<a href="https://www.math.unipd.it/~aiolli/corsi/0708/IR/Lez12.pdf">Microsoft PowerPoint - TextCategorization.ppt - Lez12.pdf</a>
		</li>
	</ul>
	<p>
		The discussion board assignment says it requires significant research, but then goes and limits our word count, including references, to five hundred words.
		Like the five-hundred-word limit last week, this actually significantly cuts the amount of research that can go into the topic.
		I can&apos;t use too many references, as they each cut into my word limit, so little research is actually the name of the game.
		Normally, I don&apos;t like word limits for this reason, but the discussion posts are always a pain because of how little time there is to complete them, so it works in my favour.
		We need to reply quickly, so as to allow other students to respond, yet we don&apos;t have the topic until the start of the unit.
		So I usually try to get the initial discussion posts for both of my courses completed within the first couple days, if I can.
		I tried to stay brief, but found I still was over budget on words, and had to cut out some things.
		And my post was already based only on the reading assignment.
		Had I done <strong>*any*</strong> research outside the reading assignment, I wouldn&apos;t have been able to fit my work into the required word count.
	</p>
	<p>
		The concept of information gain as a measure of entropy reduction is pretty useful.
		If you can find the right classifications, you can better understand your data and draw conclusions from it.
		Measuring the information gain allows you to compare classifications to see which one is a better fit for branching your decision tree with.
	</p>
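	<p>
		As I understand it, the entropy and information-gain calculations look something like this in R (my own sketch of the idea, not code from the reading):
	</p>
	<pre><code># Entropy of a vector of class labels.
entropy = function(labels) {
  p = table(labels) / length(labels)
  -sum(p * log2(p))
}

# Information gain of a split: parent entropy minus the
# weighted average entropy of the child nodes.
info_gain = function(parent, children) {
  weights = sapply(children, length) / length(parent)
  entropy(parent) - sum(weights * sapply(children, entropy))
}

parent = c("yes", "yes", "no", "no")
# A perfect split gains a full bit of information.
info_gain(parent, list(c("yes", "yes"), c("no", "no")))</code></pre>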
	<p>
		The presentation this week mentioned that using an odd number of neighbours prevents ties when using the k-nearest neighbours algorithm, but this actually only makes a tie less probable.
		I actually discussed this last week in my unit assignment submission.
		If there are only two groups, an odd number of neighbours will prevent ties.
		However, for any other number of neighbours used, a tie is still possible.
		For example, one neighbour may &quot;vote&quot; for a classification in group A, while the remaining number of &quot;votes&quot; may be split evenly between group B and group C.
	</p>
	<p>
		Decision trees really aren&apos;t anything new to me.
		The measure of information gain I mentioned above is something I hadn&apos;t heard of, but aside from that, decision trees are basically like flow charts, but without the possibility of travelling backwards.
		You can only go in one direction, so your path always terminates.
		They&apos;re good for classifying things, but there&apos;s nothing they can do that a flowchart can&apos;t.
		I might even go so far as to say that the graphical representation of a decision tree is a type of flowchart.
	</p>
	<p>
		I&apos;m unclear on how tree-pruning works though.
		From the looks of it, we cut off the subtrees, keeping those and discarding the part with the root of the main tree.
		But without the root to connect them, I&apos;m not sure how the subtrees are supposed to relate to one another in any meaningful way.
	</p>
	<p>
		Bagging and random forests are unintuitive.
		In bagging, you take samples from your full sample, and use these samples to build trees.
		Because each tree isn&apos;t working with the full data set, it&apos;s less accurate than a tree using the full data set would be.
		However, you&apos;re able to get many trees that predict things differently that way.
		Averaging the results together, you get a more-accurate result than had you used the single, better tree.
		Random forests take this a step further, and not only limit which data points a tree has access to, but what branching criteria are available.
		Again, this reduces the accuracy of individual trees.
		However, when tree results are averaged together, it has the result of making sure all (or at least most) criteria were taken into account in the result.
	</p>
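	<p>
		The bagging idea can be sketched in base R without any special packages: resample the data, fit a model to each resample, and average the predictions (my own toy sketch, using straight-line fits rather than trees):
	</p>
	<pre><code>set.seed(1)
x = runif(30, 0, 10)
y = 2 * x + rnorm(30)
new_x = data.frame(x = 5)

# Fit a model to each bootstrap sample of the rows,
# then average the individual predictions.
predictions = replicate(100, {
  rows = sample(length(x), replace = TRUE)
  fit = lm(y ~ x, data = data.frame(x = x[rows], y = y[rows]))
  predict(fit, newdata = new_x)
})
mean(predictions)   # the bagged prediction</code></pre>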
	<p>
		When I looked at the unit assignment at the beginning of the week, I was quite intimidated and thought the assignment would be pretty difficult.
		When I sat down to work on it after reading the material for the week though, it was actually a breeze.
		To start with, the entire first half of the assignment practically did itself.
		The instructions told me exactly what commands to enter, and all I had to do was copy text output into my submission, take a screenshot, and troubleshoot a single missing library; I found <code>mlbench</code> wasn&apos;t available on my system.
		It was in the package manager though, so I just ran <kbd>sudo aptitude install <a href="apt:r-cran-mlbench">r-cran-mlbench</a></kbd> on the command line.
		I assumed I&apos;d have to restart R for R to find it, but much to my surprise, just rerunning <kbd>library(mlbench)</kbd> worked.
		I also expected <kbd>data(Ionosphere)</kbd> to fail, as our data file was called <code>Ionosphere.txt</code>, and we didn&apos;t include that <code>.txt</code> extension.
		It worked anyway though.
		It was like clockwork that&apos;d been missing a cog, and that cog had been put in place, so everything was fine now.
	</p>
	<p>
		The second half of the assignment wasn&apos;t intimidating either once I&apos;d read the material.
		In fact, the instructions even told us which functions to use.
		We just needed to know what arguments to pass in.
		I admit I struggled a bit in figuring out the code to use between the functions mentioned in the instructions, and my result probably isn&apos;t as clean as it could be, but I arrived at a reasonable answer to submit.
	</p>
	<p>
		When I went to grade last week&apos;s work, I got a nasty surprise.
		The first thing we were to grade was whether the student had included a detailed log of their R session.
		The assignment instructions never asked for that though!
		I only included parts I thought were relevant, and didn&apos;t include the part that generated the graph I&apos;d included.
		It&apos;d be nice if the assignments told us everything we were expected to include in our submission, but this isn&apos;t the first time at this school I&apos;ve been blind-sided like this.
	</p>
	<p>
		I&apos;d missed something when I went through the work myself, too.
		We were shown in the instructions that we could define each coordinate pair as a two-item vector, but that didn&apos;t seem like a useful thing to do, given that the functions we were working with would interpret the intended rows as columns and intended columns as rows if we then tried to group those variables into a data frame.
		I ended up not doing that, and instead creating one vector of <var>x</var> values and another vector of <var>y</var> values.
		The first student whose work I graded this week though showed me the missing piece: the <code>rbind()</code> function.
		I&apos;m not sure if I missed that in the reading assignment or if this was something we were expected to find outside of class, but it allows the data points to be defined the way the assignment instructions grouped them and still get usable data.
		That said, they got the rest of the assignment wrong because they failed to assign groups to any of the data points and instead used the point names as classes, resulting in six classes instead of two.
		It yielded &quot;correct&quot; results when only one neighbour was used to determine a point&apos;s class, but didn&apos;t allow for using multiple neighbours, as no two points in the training set had the same class name.
		The second student pretty much had the same solution that I&apos;d had, so they&apos;d defined an <var>x</var> vector and a <var>y</var> vector, except their graph-generation code was cleaner than mine, and only needed one line instead of my two.
		Unlike my graph, theirs wasn&apos;t colour-coded, but I&apos;d be interested to see if I can get the colours onto the graph using their method.
		If so, their method would be a more-correct way to achieve the effect.
		The third student used the same method as the first student, but actually did the work correctly, showing that the <code>rbind()</code> function does do what we needed it to do for last week&apos;s assignment.
	</p>
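The two data layouts described above are easy to sketch outside of R. Here&apos;s a quick Python illustration (hypothetical points, with <code>zip()</code> standing in for the effect of building vectors, and a plain list of pairs standing in for R&apos;s <code>rbind()</code>) of why both arrangements end up as the same row-per-point table a k-nearest-neighbours routine needs:

```python
# Illustrative sketch with made-up points, not the assignment's data.

# Approach 1: separate x and y vectors, like c() in R, zipped into rows.
x = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0]
rows_from_vectors = list(zip(x, y))   # one (x, y) row per point

# Approach 2: define each coordinate pair directly, then "row-bind" them,
# which is what R's rbind(p1, p2, p3) does for vectors.
p1 = (1.0, 4.0)
p2 = (2.0, 5.0)
p3 = (3.0, 6.0)
rows_from_pairs = [p1, p2, p3]

# Both arrangements yield the same rows, so either feeds a kNN routine.
assert rows_from_vectors == rows_from_pairs
```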
</section>
<section id="Unit6">
	<h2>Unit 6</h2>
	<p>
		The reading assignment for the week was as follows:
	</p>
	<ul>
		<li>
			<a href="https://arxiv.org/ftp/cs/papers/0308/0308031.pdf">0308031.pdf</a>
		</li>
		<li>
			<a href="https://my.uopeople.edu/pluginfile.php/389237/mod_book/chapter/179471/neural-networks-ebook.pdf">neural-networks-ebook.pdf</a>
		</li>
		<li>
			<a href="http://home.thep.lu.se/pub/Preprints/91/lu_tp_91_23.pdf">lu_tp_91_23.pdf</a>
		</li>
	</ul>
	<p>
		I&apos;ve found a way to squeeze just a little more time for coursework out of my week.
		I really hate to skip my weekly $a[LUG] meetings, though I did have to skip one this term to fit in a heavy study session.
		They&apos;re the only socialising I&apos;ve managed to fit into my week.
		I&apos;m just so busy.
		I can&apos;t get much studying done at the meetings either, as I don&apos;t study well with people around.
		However, I can get the grading for the week done during those meetings.
		That way, I don&apos;t have to grade when I&apos;m at home, leaving me more time to study.
		I think I&apos;m going to do that during future terms, too.
		There&apos;s no real reason not to, and it allows me to be productive without having to give up my meetings.
	</p>
	<p>
		It seems that in my own solution for last week, I jumped through unnecessary hoops.
		The return value for the prediction function listed probabilities instead of actual guesses, so I used the <code>round()</code> function to round the probabilities into hard predictions.
		It seems the <code>table()</code> function would have done something with the probabilities to make them usable on its own.
		One student didn&apos;t answer the final question, which was one of the main ones we were grading on, and another submitted a partial R session that didn&apos;t accomplish anything.
	</p>
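For what my <code>round()</code> detour did, here&apos;s a rough Python sketch (made-up probabilities, with <code>Counter</code> standing in for R&apos;s <code>table()</code>):

```python
from collections import Counter

# Hypothetical predicted probabilities and true labels.
probs  = [0.12, 0.91, 0.48, 0.77, 0.05]
actual = [0, 1, 0, 1, 0]

# round() turns the probabilities into hard 0/1 guesses...
guesses = [round(p) for p in probs]

# ...and a contingency table (like R's table(guesses, actual)) counts
# how often the guesses agree with the true labels.
table = Counter(zip(guesses, actual))
# table[(1, 1)] counts true positives, table[(0, 0)] true negatives, etc.
```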
	<p>
		As I was loading the pages for the discussion assignment and unit assignment to see what my tasks were for the week, the first thing I noticed was a link to a Basic Prop example below the main unit links.
		My heart sank.
		You may remember that in <a href="#Unit1">Unit 1</a>, I&apos;d hoped to avoid downloading and running this nasty bit of software due to licensing issues.
		Reading over the unit assignment confirmed my suspicion though.
		We&apos;ve got to use this noxious $a[JAR] to get our work done.
		There doesn&apos;t seem to be a way around it.
	</p>
	<p>
		This is the third week in a row in which our initial discussion post has had a word count cap placed on it.
		Last week, I put my post into LibreOffice Writer and found my word count to be below the maximum, but when I posted it, the university&apos;s word counter counted it to be above.
		It was counting words that weren&apos;t there, so I had to delete the post, rework it, and repost it.
		I had my suspicions as to what it was doing, and I tested that this week.
		It&apos;s counting the markup used for formatting.
		For example, that means that each paragraph costs me two words.
		If I take three paragraphs and combine them into one, that saves me four words, because there are two fewer paragraphs.
		Hilariously, I could use this to artificially inflate my word count.
		I could make a post that&apos;s clearly only got a single word get counted as having thousands of words.
		However, that&apos;s not at all useful.
		It&apos;d be much nicer to be able to get words dropped from the count, but I don&apos;t yet have a way to do that arbitrarily.
		There are a few hacks I could do to drop the word count slightly, but it wouldn&apos;t be enough to really help me cover topics with the level of detail they deserve.
		While the word counter likes to cheat me, I don&apos;t have a good way to cheat the word counter back.
		That means that again, doing extra research, which would add more references and thus eat into my word count, does more harm than good.
	</p>
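I can&apos;t see the university&apos;s counter, but here&apos;s a guess, sketched in Python, at the kind of naive counting that would produce exactly these numbers: splitting the raw markup on runs of letters, so that tags and the <code>&amp;apos;</code> entity each contribute &quot;words&quot;:

```python
import re

# Hypothetical reconstruction of the counter's behaviour: count runs of
# letters in the raw markup rather than words in the rendered text.
def naive_count(markup):
    return len(re.findall(r"[A-Za-z]+", markup))

# A one-word post wrapped in a paragraph costs three "words": p, Hello, p,
# i.e. the paragraph markup itself costs two.
assert naive_count("<p>Hello</p>") == 3

# A contraction stored as an entity costs three: don, apos, t.
assert naive_count("don&apos;t") == 3
```

If this guess is right, the &quot;apostrophe counted as a word&quot; is really the <code>apos</code> inside the entity being counted.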
	<p>
		It was interesting to read that a node in a neural network may itself contain a neural network.
		I&apos;ve got to wonder how such recursion occurs.
		Does the programmer set up the nodes to contain neural networks from the beginning, to make each node better at learning?
		Or does the learning process sometimes involve switching the algorithm used by a given node into a neural network?
		The reading material doesn&apos;t really cover this.
		Bidirectional node connections are also something I&apos;d love to see some elaboration upon.
		If the flow of the data isn&apos;t unidirectional, how does it terminate instead of looping indefinitely?
	</p>
	<p>
		Later, the material talked about a network using only zeros and ones as data being passed between neurons and only boolean logic within the neurons.
		That seems like a bad use of a neural network.
		Why not use logic gates instead, which are designed exactly for that purpose?
	</p>
	<p>
		I&apos;m unclear as to what the activation function is.
		On the surface, it looks like the activation determines whether a given neuron outputs a value at all.
		That makes no sense though.
		The output is fed into other neurons alongside the output of other neurons, which means there <strong>*always*</strong> has to be output.
		So if the neuron is &quot;inactive&quot;, does it just output a zero, regardless of the value that would otherwise be provided by the output function?
	</p>
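My best guess at how this resolves, as a Python sketch (an assumption on my part, not anything Basic Prop documents): the activation function maps the weighted input sum to the neuron&apos;s output, so an &quot;inactive&quot; neuron literally outputs a zero rather than outputting nothing:

```python
import math

# Two common activation functions (toy versions, my own naming).
def step(weighted_sum):
    return 1 if weighted_sum > 0 else 0           # "inactive" -> outputs 0

def sigmoid(weighted_sum):
    return 1 / (1 + math.exp(-weighted_sum))      # smooth version, in (0, 1)

def neuron(inputs, weights, bias, activation):
    s = sum(i * w for i, w in zip(inputs, weights)) + bias
    return activation(s)

# The neuron always produces *some* number for downstream neurons to consume.
out = neuron([1, 0], [0.5, -0.3], -0.6, step)     # weighted sum -0.1 -> 0
```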
	<p>
		I&apos;m also not clear on how the back-propagation algorithm works.
		The material talked about how to determine how much error is caused by the weights assigned each input, but not how to use that information to calculate better weights.
		I didn&apos;t understand the mathematics presented in the other reading material either, and as that document was almost entirely formulas and explanations I didn&apos;t get about the formulas, I didn&apos;t understand what that document was talking about.
		It would have really helped to see some programming code instead of equations using unfamiliar symbols.
	</p>
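As far as I can tell, the missing step looks something like gradient descent. A minimal Python sketch for a single linear neuron (my own symbols and toy numbers, not the paper&apos;s):

```python
# One gradient-descent step: measure the error, then move each weight
# against its share of it (dE/dw_i is proportional to error * x_i).
def train_step(weights, inputs, target, lr):
    prediction = sum(w * x for w, x in zip(weights, inputs))
    error = prediction - target
    return [w - lr * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(50):
    weights = train_step(weights, [1.0, 2.0], target=5.0, lr=0.1)
# After repeated steps the prediction approaches the target of 5.0.
```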
	<p>
		The idea of normalising data once again came up.
		Last time, the issue was that data with a small scale would dominate data with a large scale when searching for neighbours in a k-nearest neighbours search.
		Put simply, if you&apos;ve got neighbours placed centimetres apart in the north/south direction but kilometres apart in the east/west direction, you&apos;re going to have bands of neighbours at the same latitude, while longitude will be effectively ignored.
		This time, data with large scales has a similar effect over data with smaller scales in a learning context, though I&apos;m having trouble actually visualising why.
		I imagine it&apos;s likely for similar reasons as with a k-nearest neighbours search, but I&apos;m not sure.
		Actually, on second thought, it probably has more to do with attempts to minimise error.
		Data on a larger scale produces a larger absolute error even when it&apos;s off by a smaller percentage of its predicted value.
	</p>
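A small Python illustration of the scale effect (hypothetical numbers), plus the min-max normalisation that puts features on an even footing:

```python
# A 1% miss on a large-scale feature versus a 50% miss on a small one.
big_true, big_pred = 100_000.0, 99_000.0
small_true, small_pred = 1.0, 0.5

assert abs(big_pred - big_true) == 1000.0    # squared error: 1_000_000
assert abs(small_pred - small_true) == 0.5   # squared error: 0.25
# Any error-minimising learner will pour its effort into the big feature.

# Min-max normalisation maps each feature onto a common [0, 1] scale first.
def normalise(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]
```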
	<p>
		Another key point was the use of the regularisation parameter to avoid over- or under-fitting the model to the data.
		I&apos;ve seen no way to make use of this in Basic Prop though, which is really too bad.
		The assignment has us working with very specific data under the context that we&apos;ll never see data that isn&apos;t in the set.
		This is <strong>*exactly*</strong> the sort of situation in which intentionally over-fitting the data would really come in handy.
		The book says the problem with over-fitting the model is that the network will learn the imperfections in the data set as well as the actual trends, but for the unit assignment, there <strong>*are no*</strong> imperfections in the data set, because the data set isn&apos;t a collection of observations, but rather, the very definition of what we&apos;re looking for.
		We&apos;re not looking at handwriting, in which there are infinite examples of a representation of each digit.
		We&apos;re looking at a seven-segment digital display in which only seventeen of the possible one hundred twenty-eight combinations will ever be seen.
	</p>
	<p>
		Cross-validation sounded like it might help when I started attempting to complete the unit assignment, but after reading about what it actually is, I saw how detrimental it would be to our use case.
		It splits the data, using only part of it for training while reserving the other part for error estimation.
		That works when your data consists of samples, but not so much when your data is a complete definition of the problem.
		You need the full set for training if you&apos;ve got the definition of the problem right there.
		Otherwise, you&apos;re not using part of the rules of the situation in the training phase, so those rules don&apos;t get learnt.
		That must be why when I used cross-validation, it actually increased my error rate.
	</p>
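The point is easy to see with a toy truth table in Python (purely illustrative, nothing to do with Basic Prop&apos;s format):

```python
# When the data *is* the complete definition of the problem, every row is a
# rule. Here the "data set" is an exhaustive XOR truth table.
full_definition = {
    (0, 0): 0,
    (0, 1): 1,
    (1, 0): 1,
    (1, 1): 0,
}

# A cross-validation-style split: train on three rows, hold one out.
held_out = (1, 1)
training = {k: v for k, v in full_definition.items() if k != held_out}

# A learner trained only on `training` never sees the held-out rule at all,
# so that part of the problem's definition simply doesn't get learnt.
assert held_out not in training
```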
	<p>
		From what the third paper talks about, it sounds like knowledge and memories are encoded as neuron strengths, which I guess makes about as much sense as anything else.
		I&apos;m not sure how learning takes place though.
		Neurons that transmit more often become stronger and ones that transmit less frequently become weaker, and we can easily see a parallel to muscle strength, but what even determines which neurons fire in the first place?
	</p>
	<p>
		I ended up going into the assignment pretty much blind.
		The first part of the assignment was simply formatting the case data in a way the software would understand it.
		It was busy work with no actual challenge.
		I&apos;d say that the data file should have been just presented as a part of the assignment, but there were actually a couple hidden reasons why we needed to do this ourselves.
		First, it shows we understand the data format.
		It&apos;s not a complicated format, but formatting it ourselves lets us show that we&apos;re not completely incompetent.
		Second, we were given $a[ASCII] values, but we were given them in decimal.
		And we needed them in binary.
		Again, translating them by hand is incredibly simple, though tedious.
		But doing it lets us prove we even understand how base translations work.
		That said, I just pulled out my calculator.
		<a href="apt:galculator">Galculator</a> has the option to display answers in binary, octal, decimal, and hexadecimal.
		So all you have to do is put it in binary output mode, then ask it, for example, what <code>72</code> is equal to.
		No equation needed, just ask it to operate on a single number.
	</p>
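The same base translation can be done in code rather than with Galculator; in Python, for instance, <code>format()</code> has a binary presentation type built in:

```python
# Decimal ASCII value to a fixed-width binary string.
code = 72                      # decimal ASCII value for 'H'
bits = format(code, '07b')     # seven bits, matching 7-bit ASCII

assert bits == '1001000'
assert chr(code) == 'H'
```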
	<p>
		Basic Prop is obnoxious to work with.
		Every time you open the dialogue to edit the network layout, the number of outputs gets reset to zero.
		So if you don&apos;t remember to set it back to seven <strong>*every single time*</strong>, you have to go back and fix that afterwards.
		Also, you can&apos;t leave the network layout dialogue open and run the simulation, even though that dialogue is in a separate window.
		Until you close it, the main window stops responding.
		So not only are you not able to leave it open to tweak things little by little, but you also aren&apos;t able to leave it open to stop the output layer from resetting to zero.
		And finally, every time you modify the network layout, the pattern file unloads, even if you remember to fix the output so the pattern is compatible with both your old and new layouts.
		So you&apos;ve got to open the file load dialogue time after time and reload the pattern.
		There doesn&apos;t seem to be a way around that.
	</p>
	<p>
		I couldn&apos;t seem to get my error down to zero, which was frustrating, as I know I could do it if I were using logic gates instead of a neural network.
		I might even be able to do it with a neural network if I could cut some of the connections and/or adjust the weights by hand, though no guarantees on that.
		I didn&apos;t even have to work with any of the equations that confused me, yet I couldn&apos;t get my network to produce accurate enough results.
		Eventually, I found that if I used two hidden layers instead of one, the output stopped resetting, so I ended up using two hidden layers in all my experiments, not knowing if that was helpful or harmful to my solution, just to make progress more feasible.
		I went back later and tried all one-layer solutions, but no solution I could find yielded an error rate under <code>0.05</code> like the assignment wanted.
		The best I managed to do was get a solution with an error rate under <code>0.25</code>, which is honestly quite terrible, and even then, my solution only sometimes produced an error rate that low.
	</p>
	<p>
		Unable to produce an even somewhat-acceptable solution using the required software, I started looking at how I&apos;d produce my desired result using logic gates.
		I came to the conclusion that I&apos;d need a lot more than just two internal layers to get the proper solution from logic gates.
		A logic gate solution is very much possible.
		I have no doubt whatsoever about that.
		However, it might only be possible as basically a series of &quot;if this exact bit pattern OR if this exact bit pattern OR ...&quot; statements.
		That type of solution would be incredibly messy, and I can&apos;t even translate it into a working set of weights for this neural network that has a maximum of four layers, including the input and output layer.
		I was hoping that if I could find a working set of weights, I could use it to try to reverse engineer a neural network setup that&apos;d reproduce my weights.
		No such luck.
		I tried to find exploitable patterns in the bit sequences too, but to no avail.
		Even if I could find such bit patterns, I wouldn&apos;t know how to use them.
		The simple fact is that while the neural network is fed bits, it&apos;s not outputting pure bits.
		It&apos;s outputting numbers near <code>0</code> or <code>1</code>, but because these numbers are slightly off, they&apos;re going to produce error no matter what I do, unless I can find a way to set the weights such that the network always outputs integers.
		And I&apos;m not even sure that&apos;s possible.
		Eventually, I had to give up and settle for submitting my janky solution.
		I&apos;ll just have to accept the bad grade, then take a look at the answer key when it&apos;s made available next week.
	</p>
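That &quot;series of exact bit pattern ORs&quot; is really just a lookup table. A hypothetical Python fragment (made-up encodings, not the assignment&apos;s actual pattern file) shows why it sidesteps the near-0/near-1 problem entirely:

```python
# Hypothetical mapping from an input bit string straight to the seven
# segment outputs. Exact match in, exact bits out: zero error by
# construction, unlike a network's approximate outputs.
SEGMENTS = {
    '0110001': '0000110',   # made-up input pattern -> segments for "1"
    '0110010': '1011011',   # made-up input pattern -> segments for "2"
}

def decode(bits):
    return SEGMENTS[bits]

assert decode('0110001') == '0000110'
```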
	<p>
		I was feeling pretty down on myself for being unable to get my solution into a reasonable error range.
		As I went back through the steps to get the screenshots and console data I needed for my write-up though, I noticed that my network&apos;s guesses were within <code>0.2</code> of the correct solution.
		So why was my average error so far off?
		Well, it turned out that two of the guesses had two very bad data points each: a <code>0.49</code> and a <code>0.51</code>.
		Those two outliers really messed up my stats.
		Overall, my network performed rather decently, but I just couldn&apos;t get it to get <strong>*everything*</strong> perfect, and that was the problem.
		Each and every answer though, if rounded to a <code>0</code> or <code>1</code>, was exactly correct.
		That made me feel a lot better.
		I&apos;ll still get a poor grade for this, but at least my network isn&apos;t nearly as far off as it looked at first.
	</p>
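Reconstructed with made-up numbers in Python, the situation looked roughly like this:

```python
# Hypothetical network outputs versus targets: every guess rounds to the
# right bit, yet a couple of near-0.5 values inflate the average error.
outputs = [0.02, 0.97, 0.49, 0.51, 0.10, 0.93]
targets = [0,    1,    0,    1,    0,    1   ]

rounded = [round(o) for o in outputs]
assert rounded == targets                  # 100% correct after rounding

errors = [abs(o - t) for o, t in zip(outputs, targets)]
mean_error = sum(errors) / len(errors)     # dragged up by the 0.49 and 0.51
```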
</section>
<section id="Unit7">
	<h2>Unit 7</h2>
	<p>
		The reading assignment for the week was as follows:
	</p>
	<ul>
		<li>
			<a href="https://neuralnetworksanddeeplearning.com/chap2.html">Neural networks and deep learning</a>
		</li>
		<li>
			<a href="https://pdfs.semanticscholar.org/f7de/25eb027083d4e7144c1ef6831baff35d6d06.pdf">metalearning.dvi - 25eb027083d4e7144c1ef6831baff35d6d06.pdf</a>
		</li>
		<li>
			<a href="https://web.stanford.edu/group/pdplab/pdphandbook/handbookch8.html">7 The Simple Recurrent Network: A Simple Model that Captures the Structure in Sequences</a>
		</li>
		<li>
			<a href="https://www.iis.sinica.edu.tw/page/jise/2005/200511_09.pdf">Microsoft Word - 92-21-edited.doc - 200511_09.pdf</a>
		</li>
	</ul>
	<p>
		The first student whose work I graded this week had some useful insights as to how I should have completed the assignment.
		However, the second student did even worse than I did as far as error level, and also had no write-up whatsoever.
		The third student used an alternative program instead of Basic Prop.
		I would be all in favour of that, and I would have used an alternative program myself if I thought that was an option.
		I really didn&apos;t want to run Basic Prop on my machine.
		However, they again had no write-up to explain their screenshots, and those screenshots had none of the information the grading instructions were asking for.
	</p>
	<p>
		Something occurred to me as I was reading this week.
		In the past, when I&apos;d heard about machine learning, I&apos;d always thought of it as terribly inefficient.
		I thought all the training data had to be stored with the program that learned from it.
		Without those &quot;memories&quot;, it wouldn&apos;t have learned anything.
		This meant any copies of the program had to store these huge data sets, which not only took up quite a bit of disk space compared to human-tuned algorithms, but also meant keeping all that somewhere accessible and frequently accessed, meaning probably in $a[RAM].
		The frequent access itself would take a toll on efficiency as well.
		That&apos;s not how machine learning works at all though.
		It builds a model, then throws out the data.
		At least with the neural networks we&apos;ve been looking at, what&apos;s important is these weight values, which cost next to nothing to store alongside the main model.
		Learning from even terabytes of data doesn&apos;t mean keeping terabytes of data stored with every copy of the program.
	</p>
	<p>
		There was also quite a bit on the training parameters, which weren&apos;t explained at all last week.
		It would have really been nice to have this information <strong>*before*</strong> completing last week&apos;s unit project.
		I tried some Web searching on my own, but couldn&apos;t find anything helpful due to not knowing exactly what to look for.
	</p>
	<p>
		The page on multi-layer perceptrons was a good read.
		I&apos;m unclear on how they&apos;re not just another name for the networks we&apos;ve been looking at already though.
		The paper talked about the need to scale handwriting images before classifying them.
		I wouldn&apos;t have thought of that, myself.
		People write in different sizes though.
		That would really throw off the results if scaling weren&apos;t used to normalise the images, especially given the data the model takes into account.
		Only some parts of the image seem to even be used, and they look like cross-sections.
	</p>
	<p>
		I really found the idea of using matrix manipulation to simulate a neural network&apos;s calculations to be interesting.
		I have to admit, I&apos;m not good at manipulating matrices.
		However, this method still seemed less complex than trying to figure out weight changes individually.
		The maths were still above my head though.
		When the page started talking about finding partial derivatives, I knew it&apos;d lost me.
		I&apos;ve never studied partial derivatives.
		I&apos;ve heard the term, but that&apos;s all.
		We&apos;re expected to already know how to use partial derivatives, and there&apos;s no time to dive into that topic in addition to everything else by the end of the week.
		I plan to take <span title="Calculus">MATH 1211</span> and <span title="Discrete Mathematics">MATH 1302</span> starting on 2019-11-14 (four terms from now), so maybe one of those will help.
		Further down the line, in late 2020 (the second term of the 2020-2021 school year, but no date for that term is known yet), I&apos;m hoping to take <span title="Statistical Inference">MATH 1281</span> as well.
		That course is on statistics though, so it probably won&apos;t explain partial derivatives.
		The courses on discrete mathematics and calculus seem more promising in that regard.
	</p>
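The matrix idea itself can be shown without any partial derivatives. A Python sketch with toy weights: one matrix-vector product per layer instead of per-weight bookkeeping:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Each row of the weight matrix holds one neuron's weights, so the whole
# layer's outputs come from a single matrix-vector multiplication.
def layer(weight_matrix, inputs):
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)))
            for row in weight_matrix]

W = [[0.5, -1.0],
     [2.0,  0.0]]
outputs = layer(W, [1.0, 1.0])   # two neuron outputs, each in (0, 1)
```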
	<p>
		The concept of metalearning caught me by surprise.
		Normally, you program a machine to perform a task.
		With machine learning, you instead program a machine to learn how to perform the task on its own using examples.
		With metalearning though, you instead program the machine to learn how to learn how to perform the task.
		It&apos;s another step back from the problem, and it&apos;s even more instruction to the computer to figure it out and not make the programmer fine-tune the details.
		Using machine learning to set the learning parameters was much less impressive than the phrase &quot;learning to learn&quot; implied, but it&apos;s still an impressive idea that I wouldn&apos;t have come up with on my own.
		Using machine learning to come up with a learning algorithm, which was also talked about by the material, is more of what the phrase led me to consider when I read it.
		However, generating the learning algorithm through machine learning appears to be much less useful than I originally thought, due to the necessary components of learning such an algorithm.
		Namely, you need examples of the algorithm, and the machine learns to imitate said algorithm.
		In other words, it doesn&apos;t come up with a novel learning algorithm.
		Instead, it imitates the algorithm you already have.
		But if you already have the learning algorithm, what good does it do to have the computer approximate the algorithm?
		You might as well just use the algorithm outright.
	</p>
	<p>
		Confusingly, it seems a trained back-propagation network can outperform a programmed back-propagation network.
		I&apos;m not sure how this could be possible.
		That improved performance comes with the risk of over-fitting though, which of course degrades performance.
		Additionally, a programmed back-propagation algorithm is faster than a learned one.
	</p>
	<p>
		Due to the obnoxious word limit, I ended up having to delete about half my discussion post before I posted it.
		I also ended up removing some helpful links, as they added to my word count.
		The same links were in the references section, and I left those links intact, but the duplicate links up where they were relevant had to be scrubbed.
		They didn&apos;t add to the actual word count whatsoever, but they did add to the word count as calculated by the university&apos;s word counter, and I couldn&apos;t afford the word cost.
		I think my resulting post turned out a bit scattered-feeling, too.
		Most of the context for what I had to say had to be removed because explaining where I was coming from was prohibitively expensive.
		As I deleted words, and even entire sentences, to approach the word limit, I encountered another quirk of the word counter: contractions count as three words, not one.
		You&apos;ve got the part before the apostrophe, the part after, and the apostrophe itself.
		Yes, the university&apos;s word counter is counting my apostrophes as words.
		I&apos;d thought I was saving words by using contractions, but I was actually making it harder to fit in what I wanted to say.
		Instead, if I write with no contractions, my post&apos;ll look really clunky, but it&apos;ll be less expensive.
		I&apos;ll have to keep that in mind for next week, if we&apos;ve got another word-limited discussion assignment that week.
	</p>
	<p>
		The essay this week was pretty therapeutic for me.
		We were given no instructions on how to use Basic Prop last week, and just thrown right into using it.
		Also as mentioned above, the meanings of learning rate and momentum weren&apos;t covered until this week.
		I really had no chance at success.
		But this week, we were to write up our process and our conclusions about the project.
		It was an awesome chance to vent about everything that went wrong.
		I included almost none of that in my submission last week, as it wasn&apos;t relevant there, but it&apos;s the only thing that <strong>*is*</strong> relevant this time.
		We were supposed to explain how we&apos;d reached our successful models and what we&apos;d tried along the way.
		But I had no successful model.
		What I&apos;d tried was obviously part of that, but explaining how my models came up short was exactly what was needed to get across why no successful model was ever found.
		Given more time, or any actual information during the period we performed this exercise, I could no doubt have found a working solution.
		But that wasn&apos;t the case.
	</p>
</section>
<section id="Unit8">
	<h2>Unit 8</h2>
	<p>
		The reading assignment for the week was as follows:
	</p>
	<ul>
		<li>
			Chapter Ten of <a href="https://www-bcf.usc.edu/~gareth/ISL/ISLR%20First%20Printing.pdf">Driver.dvi - ISLR First Printing.pdf</a>
		</li>
		<li>
			<a href="https://www.ee.columbia.edu/~dpwe/papers/PhamDN05-kmeans.pdf">PhamDN05-kmeans.pdf</a>
		</li>
	</ul>
	<p>
		I had ... four essays to grade this week?
		That&apos;s odd.
		Normally, we each grade three students and are in turn graded by three students.
		I certainly don&apos;t at all mind, but it was an unexpected twist this week.
		I assumed the system was set up such that it could only allow students to grade three peers and be graded by three peers.
		It turns out the number&apos;s actually variable.
	</p>
	<p>
		The first student essay I graded looked like a write-up for the previous unit assignment, not this one.
		I guess I should have given them a zero for not submitting the correct assignment, but because the two assignments were on the same project, they had some similarities that allowed me to give them credit for three of the grading aspects.
		The first was the aspect about explaining the results of the project.
		They told how the project went, so I could give them points for that.
		The next was grammar.
		We were asked to grade their use of proper grammar, and the paper they submitted had sentences and thus grammar to grade.
		And finally, the paper met the length requirement.
		The second student <strong>*definitely*</strong> submitted the wrong assignment.
		Their submission included all the data, including the weights file, which the first student had omitted.
		The third student submitted a paper on mobile ad-hoc networks.
		It must&apos;ve been meant for their other class.
	</p>
	<p>
		The K-means clustering algorithm is interesting.
		It&apos;s a bit of a drawback that you need to know the number of clusters before you even try to apply the algorithm, but for several types of problems, that&apos;s not an issue.
		If you know your data classes, you know how many data classes there are, so plugging that number in isn&apos;t hard.
		The article on the K-means algorithm states that usually, a human tries several values of K and visually compares the results, but the fact that that needs to be done seems like exactly the reason K-means clustering shouldn&apos;t be used on those types of problems.
		The article suggests that it&apos;s rather difficult for a human to visually analyse the results when applied to multi-dimensional datasets.
		Two dimensions would be a bit difficult to analyse visually.
		Anything beyond that would be nearly impossible.
		Again, this points to the possibility that perhaps the K-means clustering algorithm is the wrong tool for such jobs.
	</p>
	<p>
		As I read on, I saw that the K-means algorithm depends on randomness.
		I guess randomness is useful at times.
		However, I don&apos;t think it&apos;s great for classification or sorting problems.
		At this point, I questioned the usefulness of K-means clustering even for problems in which the correct value of K is known.
	</p>
	<p>
		As I read through the material for the week, I had the usual problem of not understanding the complex equations and the parts of the material linked to them.
		At least I think I have enough understanding to implement both hierarchical clustering and k-means clustering.
		However, I did have an epiphany about the roles of supervised and unsupervised machine learning.
		In supervised learning, we&apos;re trying to teach the computer to perform a task.
		I already understood that from previous weeks.
		In unsupervised learning, the computer learns nothing, and the processing done has no effect on future processing.
		Again, I already knew that too.
		However, I think I have a better idea of the usefulness of unsupervised learning.
		Instead of trying to teach the computer, we&apos;re trying to get <strong>*the computer*</strong> to teach <strong>*us*</strong>.
		As the material puts it, it&apos;s an exploratory process, used so we can better understand where to go from there.
		We&apos;re not looking for the machine to learn something it can use to solve future problems for us.
		We&apos;re looking for the computer to make some sense of the data so we will know what to tell the computer to do next, or even so we can apply outside the computer what <strong>*we*</strong> learn from the computer.
	</p>
	<p>
		The textbook explained how K-means clustering works better than the other article we read.
		It&apos;s attempting to reduce the amount of variance within each cluster in the answer it provides.
		I guess I sort of guessed something similar, but not quite in those words.
		The goal is to segment the data into clusters in which the data points are grouped very closely to the cluster centres.
		The measured variance gets squared though, which is something that&apos;s often done in these sorts of problems, but also something I almost never think to do.
		It&apos;s possible this is done for ease.
		I mean, to find the distance, we have to square the deviation in the X direction and add it to the square of the deviation in the Y direction.
		The square root of the sum is the distance from the cluster centre, if we&apos;re working in two dimensions.
		It&apos;s a bit more complex when adding even more dimensions.
		But what if we just don&apos;t take the square root of the sum, using the sum itself?
		In that case, we&apos;d be using the squares of the distances, but it&apos;d take less processing power.
		More likely though, I think the squaring is used to give much higher weight to outliers, increasing the tendency to try to eliminate them by either moving the cluster centres or changing which cluster the point belongs to.
	</p>
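The square-root-skipping idea does work for comparisons, at least: squared distance preserves the ordering of distances, so the nearest centre comes out the same either way. A Python sketch with hypothetical points:

```python
# Squared Euclidean distance: the square root is only needed for the true
# distance, not for deciding which centre is closest.
def squared_distance(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))   # any dimension

point = (2.0, 3.0)
centres = [(0.0, 0.0), (2.5, 2.5), (9.0, 9.0)]

nearest = min(centres, key=lambda c: squared_distance(point, c))
assert nearest == (2.5, 2.5)   # same answer the true distance would give
```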
	<p>
		Another thing the textbook explained better is where the randomness mentioned by the article comes in.
		To find the optimal solution, all solutions would need to be tried.
		This isn&apos;t feasible.
		Unfortunately, there&apos;s no other way to get the best answer though.
		So instead, the K-means clustering algorithm settles for a pretty good solution that most likely isn&apos;t the absolute best.
		When trying solutions though, there&apos;s got to be a place to start, so the points are placed into random clusters to begin with, and the cluster choices are improved from there.
		Different starting clusters will result in slightly-different ending clusters.
		To put it simply, a locally optimal solution is found, not an absolutely optimal solution.
	</p>
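The whole loop can be sketched in a few lines of Python (a bare-bones one-dimensional K-means on my own toy data): random starting clusters, then alternate between recomputing centres and reassigning points until nothing changes:

```python
import random

def kmeans(points, k, seed):
    rng = random.Random(seed)
    assignment = [rng.randrange(k) for _ in points]   # random starting clusters
    while True:
        # Recompute each cluster's centre as the mean of its points.
        centres = []
        for c in range(k):
            members = [p for p, a in zip(points, assignment) if a == c]
            centres.append(sum(members) / len(members) if members
                           else rng.choice(points))   # re-seed empty clusters
        # Reassign every point to its nearest centre (squared distance).
        new_assignment = [min(range(k), key=lambda c: (p - centres[c]) ** 2)
                          for p in points]
        if new_assignment == assignment:
            return assignment        # converged: a *locally* optimal solution
        assignment = new_assignment

labels = kmeans([1.0, 1.1, 0.9, 8.0, 8.2, 7.9], k=2, seed=0)
```

With well-separated data like this, any starting assignment settles into the two natural groups; on messier data, different seeds can settle into different local optima.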
	<p>
		Hierarchical clustering seems more useful than K-means clustering when you don&apos;t know the number of clusters you need.
		Sometimes you know this information, but other times, you don&apos;t.
		Instead of committing to a cluster count up front, a tree-like dendrogram is created, which can then be cut at whatever level yields a useful number of clusters.
		I&apos;d also argue that the lack of randomness makes hierarchical clustering a better solution for some problems.
		The resulting dendrograms remind me very much of the evolutionary tree seen in taxonomy.
		I went over that in my discussion post for the week though, so there&apos;s no need for me to repeat that here.
	</p>
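	<p>
		The bottom-up merging is simple enough to sketch too.
		Here&apos;s a toy single-linkage version in Python (not course code; the four labelled points are made up): every point starts as its own cluster, and the two closest clusters merge until only one remains.
		The merge history is exactly the structure of the dendrogram.
	</p>

```python
# Toy single-linkage agglomerative clustering over four labelled points.
points = {"a": (0.0, 0.0), "b": (0.0, 1.0), "c": (5.0, 5.0), "d": (5.0, 6.0)}

def squared_distance(p, q):
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

def cluster_distance(c1, c2):
    # Single linkage: the distance between two clusters is the distance
    # between their closest pair of points.
    return min(squared_distance(points[i], points[j]) for i in c1 for j in c2)

clusters = [frozenset([name]) for name in points]
merges = []  # the merge history is the structure of the dendrogram
while len(clusters) > 1:
    pair = min(
        ((c1, c2) for i, c1 in enumerate(clusters) for c2 in clusters[i + 1:]),
        key=lambda pq: cluster_distance(*pq),
    )
    merges.append(pair)
    clusters = [c for c in clusters if c not in pair] + [pair[0] | pair[1]]

# The closest leaves merge first: a and b join before either joins c or d.
assert merges[0] == (frozenset({"a"}), frozenset({"b"}))
```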
	<p>
		The section on clustering by different types of observations was interesting.
		It mentioned splitting groups of people by gender or by nationality, saying that one isn&apos;t a subset of the other.
		However, I&apos;m not sure how a dendrogram would be used to put these people into a tree anyway.
		Creating a dendrogram requires numerical values so we can compare how similar the leaves are.
		This doesn&apos;t seem like a valid example.
		To take it a step further, this was compared to how K-means clustering might behave given the same people.
		Again though, K-means clustering requires data points with numeric characteristics.
		You can&apos;t cluster by gender or by nationality using that algorithm either.
		It would have been much better to use a valid example, so we could understand why a dendrogram might not perform as well as K-means clustering in some cases.
		As it stands, I can&apos;t be sure that the claim that hierarchical clustering isn&apos;t always the better option even holds water.
	</p>
	<p>
		It&apos;s also worth noting that the book labels the genders as male and female.
		Male and female are sexes, not genders.
		Masculine and feminine are genders.
		Gender describes the brain and the mind, while sex describes the sex organs.
		There&apos;s also a third option for both though.
		Someone who is intersex has genitals that don&apos;t match the classification for either male or female.
		Someone who is non-binary, such as myself, isn&apos;t fully masculine nor fully feminine, so we&apos;re not men or women, but something in between.
		It&apos;s why I use the name &quot;Alex&quot;.
		It&apos;s short for both &quot;Alexander&quot; and &quot;Alexandra&quot;, not to mention &quot;Alexis&quot;, which is already a gender-neutral name.
		I also tend to stylize my name as &quot;Alexand(er|ra)&quot;, using the same notation as a regular expression.
	</p>
	<h3>Epilogue: Unit 9</h3>
	<p>
		The instructions that came with the final exam told my proctor I could only use a four-function calculator.
		These calculators lack the capacity to work with logarithms and they don&apos;t have the e constant, Euler&apos;s number.
		Without those options, three of the questions on the exam were impossible to solve.
		Two dealt with logarithms, and one asked us to take Euler&apos;s number to some power.
		It&apos;s not as though I have Euler&apos;s number memorised.
		I didn&apos;t know we&apos;d need it.
		I had to just skip those three, even though the equations were laid out in the final exam question.
		It should have been as easy as plugging in the numbers, but the only calculator the proctor allowed me to use simply couldn&apos;t do what needed to be done.
		There was another question I had to skip too, but that one was all on me.
	</p>
	<p>
		One question in particular stood out to me.
		It was asking us about the structure of a neuron.
		We were being tested on biology in a computer science course!
		Why do we even need to know about a neuron&apos;s structure?
		It wasn&apos;t a big deal, as I did remember reading about three of the parts we were tested on, and the fourth, the nucleus, is the same in every cell.
		If you&apos;ve ever studied cells in your life, you should know where the nucleus is.
	</p>
	<p>
		I forget what the question was, but there was one easy question that appeared twice on the exam.
		I think the school tries to randomise which questions appear on the tests for each student, but the randomisation algorithm doesn&apos;t ensure that each question appears at most once.
		This isn&apos;t the first time that&apos;s happened to me.
	</p>
	<p>
		And finally, one question asked about R2 instead of R<sup>2</sup>.
		I didn&apos;t know what it was talking about at first.
		If the testing platform is unable to handle superscript, which I highly doubt, I&apos;d recommend calling it &quot;R squared&quot; instead of &quot;R2&quot;.
		It would make it much easier for future students taking the course to understand what was being asked of them.
	</p>
</section>
END
);
