<?php
/**
 * <https://y.st./>
 * Copyright © 2018-2019 Alex Yst <mailto:copyright@y.st>
 * 
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 * 
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 * 
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org./licenses/>.
**/

$xhtml = array(
	'<{title}>' => 'Learning Journal',
	'<{subtitle}>' => 'CS 3303: Data Structures',
	'<{copyright year}>' => '2018-2019',
	'takedown' => '2017-11-01',
	'<{body}>' => <<<END
<section id="Unit1">
	<h2>Unit 1</h2>
	<p>
		According to the book, this course will be about how to structure data so it can be used efficiently.
		This should be very informative.
		A few of the complaints I&apos;ve heard about certain languages are about inefficient data structures.
		It&apos;d be nice to understand what articles about that sort of thing are even talking about.
		And then there are $a[PHP] arrays.
		$a[PHP] arrays are a very nice feature to work with that are absent from most languages.
		Most languages provide a construct with ordered key/value pairs, and maybe a second structure to allow named keys, but no structure that allows ordering of named keys.
		I&apos;ve always thought there must be a downside to $a[PHP]&apos;s method ever since learning how hash maps work.
		Perhaps something I learn in this course would help me understand how $a[PHP] arrays work, and thus understand why other languages don&apos;t have them.
	</p>
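	<p>
		Incidentally, Java&apos;s <code>LinkedHashMap</code> comes close to this: it maps named keys to values while preserving insertion order. A minimal sketch (the class and key names here are my own invention):
	</p>

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A LinkedHashMap stores named keys yet remembers insertion order,
// much like a PHP array's ordered string keys.
public class OrderedKeys {
    public static String joinKeys(Map<String, Integer> map) {
        StringBuilder sb = new StringBuilder();
        for (String key : map.keySet()) {
            if (sb.length() > 0) sb.append(",");
            sb.append(key);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, Integer> scores = new LinkedHashMap<>();
        scores.put("charlie", 3);
        scores.put("alpha", 1);
        scores.put("bravo", 2);
        // Iteration follows insertion order, not key order.
        System.out.println(joinKeys(scores)); // prints "charlie,alpha,bravo"
    }
}
```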
	<p>
		The book also covers how the costs of a computation are not limited to something such as storage space or processing power.
		Time costs are important as well.
		I&apos;ve had to abandon ideas before due to time costs being too high.
		The latest such task was a feature for a script I was writing that would allow you to tell if a string could exist that matches multiple, specific regular expressions.
		In the end, I had to remove that feature instead of completing it, as the computations needed to make it work, even if/when I could figure out how, would have taken far too much time to perform.
		Even a minute&apos;s worth of computation would be pushing it, given the context, and the process would likely require several minutes, if not hours.
	</p>
	<p>
		The book&apos;s bank example really drives home the point that, depending on context, we may accept some operations of our data structure being slower than we would tolerate in other contexts.
		We&apos;re willing to wait quite a while for our accounts to be created or closed, but we insist that updates to the account be fast because of how often they occur.
		Another example would be changing one&apos;s legal name.
		To update my name in the legal system, I had to wait about a month.
		If I had to wait a month to update information on my credit union account, I&apos;d be furious!
		At a minimum, I update my account there twice a month, so a month&apos;s wait time would be ridiculous.
		For a name change hearing, it was perfectly reasonable though, as I&apos;ll hopefully only have to deal with such an update once in my life.
	</p>
	<p>
		The section that discusses how different systems can implement integers differently got me thinking.
		There&apos;s an integer implementation called one&apos;s complement that wastes one of its possible values because it uses a whole bit to store the sign.
		Basically, it results in one possible value being <code>0</code> and another being <code>-0</code>, which are the same number.
		Additionally, this implementation is bad for computations, as positive and negative numbers have to be detected and treated differently in operations.
		Two&apos;s complement, which sees much more use in today&apos;s world, has neither problem.
		Integers can also be implemented not only as a primitive, but also as a more-complex type based on some primitive type.
		For example, I was writing some code in Lua and needed integer support.
		The version of Lua used by the program I work with doesn&apos;t have integers, only floats.
		The loss of integer precision was intolerable though, while basic integer wrapping would be fine.
		I used the program&apos;s own 32-bit integer implementation, complete with wrapping, which is built on top of 64-bit floats.
		Later though, I built my own implementation, which gave me a wider range of integers to work with than 32-bit integers could support.
		Two main operations needed to be supported by the system.
		The only one to modify the data was incrementation by one, so I just needed to check to see if the value was equal to the maximum pre-precision-loss value, then either set the value to the minimum pre-precision-loss value or add one to the current value, depending on the result.
		The other was just value retrieval, so I set that to return infinity if the value was negative (in other words, if it had wrapped) and its true value otherwise.
		Come to think of it though, as I only need the precision during incrementation and storage, I should probably modify that code to instead return the best float approximation of the true value instead of infinity when the value&apos;s too large to be precisely represented.
	</p>
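	<p>
		That wrapping counter can be sketched roughly as follows. This is my own loose reconstruction in Java, not the actual Lua code, using doubles (which represent integers exactly up to 2<sup>53</sup>):
	</p>

```java
// Loose sketch of the wrapping counter described above: the value lives in
// a double, which represents integers exactly up to 2^53; on reaching that
// limit the counter wraps to -2^53 instead of silently losing precision.
public class WrapCounter {
    public static final double MAX_EXACT = 9007199254740992.0;  // 2^53
    public static final double MIN_EXACT = -9007199254740992.0; // -2^53

    private double value;

    public WrapCounter(double start) { value = start; }

    public void increment() {
        // Wrap rather than enter the imprecise range above 2^53.
        if (value == MAX_EXACT) {
            value = MIN_EXACT;
        } else {
            value = value + 1.0;
        }
    }

    public double get() {
        // A negative value means the counter wrapped; report it as infinity.
        return value < 0.0 ? Double.POSITIVE_INFINITY : value;
    }

    public static void main(String[] args) {
        WrapCounter c = new WrapCounter(MAX_EXACT);
        c.increment(); // wraps
        System.out.println(c.get()); // prints "Infinity"
    }
}
```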
	<p>
		The book discusses the use of a visitor subroutine to avoid writing a separate subroutine for each depth layer in a data structure.
		This is indeed a powerful tool.
		However, the book misses one of its greatest strengths.
		Admittedly, that strength only applies to a sort of corner case, but it&apos;s a corner case that comes up often enough: we may not know the depth of the tree.
		Imagine, for example, a file system that we need to traverse.
		The user may have nested any number of directories in any position.
		Some leaf nodes will be much deeper than others.
		If we need a separate subroutine for each depth, we&apos;ll need to hard-code a depth limit beyond which our program no longer functions.
		Using a single subroutine though, we can handle any node depth we encounter.
	</p>
	<p>
		The discussion on problem definition really hit home.
		For some projects, I don&apos;t adequately define the problem before I start work on a solution, and I end up having to rework things later when I do that.
		I&apos;m getting better about that though.
		In my current pet project, I&apos;m making sure to define the problems I have long before I try coding solutions.
		Once I have the problem defined, I still need to work through the solution though before I can tell the computer how to do what I need it to.
	</p>
	<p>
		On the topic of domain and range, it&apos;s worth noting that while the domain of a function may or may not be infinite, the range cannot be infinite unless the domain is also infinite.
		Multiple inputs may map to the same output, but multiple outputs may not be mapped to the same input.
	</p>
	<p>
		Most of the second chapter was review as intended, but I did learn a couple new things.
		For example, I&apos;d never heard the term &quot;powerset&quot; before.
		Proof by contradiction was new to me as well.
	</p>
</section>
<section id="Unit2">
	<h2>Unit 2</h2>
	<p>
		When I read &quot;Should I buy a new payroll program from vendor X or vendor Y?&quot;, the first thing that popped into my head was &quot;It depends. Which one is open source?&quot;.
		Seriously, I don&apos;t trust software from anyone if the source code is hidden from the public.
		I don&apos;t mind paying, but if the source code is hidden, you don&apos;t know what malicious things the software may or may not be doing behind your back.
	</p>
	<p>
		The section on types of growth rates was all review for me.
		I don&apos;t recall where I learned about algorithm growth rates, but it&apos;s something I&apos;ve known about for a while.
		Some algorithms remain linear in their resource consumption as the input size grows, but others grow exponentially, leading resource usage to get out of control for larger inputs.
	</p>
	<p>
		I hadn&apos;t considered data distribution as a factor.
		It makes a difference though when trying to understand the average case, as well as when considering how likely the best and worst cases are.
		I also hadn&apos;t considered comparing the switching of algorithms to the switching of computers.
		Making that comparison really shows you whether it&apos;s the algorithm or the computer&apos;s speed that&apos;s holding you back.
	</p>
	<p>
		The discussion assignment told us to use big-Oh notation, so I read the textbook up to that point, then made my initial post.
		I often don&apos;t have time to read the entire assigned material in one or two days, but I need to get my initial discussion post in by then to keep my coursework on schedule.
		Reading further though, it looks like there&apos;s also a big-Omega and big-Theta notation, both of which could apply to the discussion assignment well.
		In particular, big-Theta would be the most appropriate.
		However, we were asked to use big-Oh, so that&apos;s what my submission uses.
	</p>
	<p>
		I mean, aside from the part that I was dead wrong about problem 0.
		My answer was that there was no solution.
		I learned from other students that I&apos;d done it wrong though, and that there is in fact a valid solution to the problem.
	</p>
	<p>
		Some of the simplification methods for cost growth formulas make use of basic mathematical properties.
		Others though use engineers&apos; logic of &quot;well, it&apos;s close enough&quot;.
		Normally, I&apos;d disagree and say that &quot;close enough&quot; isn&apos;t actually close enough.
		However, cost growth formulas are themselves an estimation.
		They&apos;re not used in anything you&apos;d actually code into the machine.
		Instead, they help you determine which algorithms are most likely not to be worth the time it takes to implement them.
		In such cases, &quot;close enough&quot; often is really close enough.
	</p>
	<p>
		The book discusses not only time constraints, but disk space constraints and memory constraints.
		I faced such constraints when working on a Web spider I built.
		I&apos;ve had to shelve the spider due to time constraints keeping me from continuing development.
		While I was working on and testing it though, I found such disk/memory constraints to be problematic.
		I don&apos;t recall whether it was disk space or $a[RAM], but I found I was using too much.
		I had to drastically cut what the spider kept track of.
		Instead of storing every $a[URI] it encountered, I had to change it to only temporarily store $a[URI]s of the pages it found on a given site that pointed to other pages on that same site.
		For external links, everything but the port, domain, and scheme was just dropped.
		And once it finished crawling a site, I had to have it drop all full $a[URI]s within the site, keeping only one reference to the home page of the site.
		It made the indexing less complete than I wanted it to be, but the spider was still pretty good at discovering new websites.
	</p>
	<p>
		Lookup tables are a good thing to learn about, though this was all review for me.
		I actually use lookup tables in my code all the time to make my code run faster.
		It does cost more $a[RAM], and I&apos;m aware of that, but when I use them, I use them because I feel the trade-off is not only worth it, but sometimes even necessary.
		For example, there&apos;s a game I play when I need to de-stress called Minetest.
		It&apos;s got a great $a[API] to build content against.
		One of my main plugins operates on certain types of in-game items.
		So when the game starts, it runs through the list of items and creates a look-up table of the ones with qualities it cares about.
		Out of probably a couple hundred items, it only needs data on maybe about three dozen.
		It then checks that look-up table when certain events occur, and if the item triggering the event isn&apos;t in the table, the code knows it can safely ignore the event instead of looking up the information on the item and recomputing which events relating to that item are even relevant.
		And within each table element is a subtable recording which events related to that item matter, so even when encountering an item with known-relevant events, the relevant events don&apos;t need to be recalculated.
		A player can trigger an event that needs to be checked for importance a few times per second, so especially in multiplayer mode, this time-saving is vital.
	</p>
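	<p>
		The pattern can be sketched like this. All the names and the exact structure here are invented for illustration, not Minetest&apos;s real API:
	</p>

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the lookup-table pattern above: scan the full item list once at
// startup, keep only the relevant entries, then answer per-event queries
// with a single hash lookup. All names here are invented.
public class ItemFilter {
    private final Map<String, Boolean> relevant = new HashMap<>();

    public ItemFilter(Map<String, Boolean> allItems) {
        // Precompute once: copy only the items flagged as interesting.
        for (Map.Entry<String, Boolean> e : allItems.entrySet()) {
            if (e.getValue()) {
                relevant.put(e.getKey(), true);
            }
        }
    }

    public boolean matters(String itemName) {
        // O(1) membership test instead of recomputing relevance per event.
        return relevant.containsKey(itemName);
    }

    public static void main(String[] args) {
        Map<String, Boolean> all = new HashMap<>();
        all.put("pick", true);
        all.put("dirt", false);
        ItemFilter filter = new ItemFilter(all);
        System.out.println(filter.matters("pick")); // prints "true"
        System.out.println(filter.matters("dirt")); // prints "false"
    }
}
```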
	<p>
		The book makes a good point about not wasting time trying to optimise code blindly.
		Some spots will need the optimisation much more drastically.
		Once the worst of it&apos;s fixed, there comes a point when the cost of human labour exceeds the savings on time and computer labour.
		The cost of conditionals is something I hadn&apos;t given much thought to as well.
		If the savings of skipping some part of the work doesn&apos;t outweigh the cost of the conditional, attempting to skip pointless work will actually lead to more work for the machine.
	</p>
	<p>
		In the discussion assignment, I was reminded of the importance of graphing.
		I failed to find the solution to part zero, and I would&apos;ve found it easily had I pulled out a graphing calculator.
		A few students did in fact use graphing calculators, and posted their graphs as a part of their discussion write-up.
		It seems though that I&apos;m the only student that figured out that the solution to part one was O(n!).
		Or am I the only one to <strong>*mistakenly think*</strong> that the answer is O(n!)?
		I got the first answer wrong, so it certainly wouldn&apos;t be out of the realm of probability that I got the second answer wrong too.
		It&apos;d be nice if a definitive answer to the problem was provided after the week&apos;s end, though I doubt it will be.
	</p>
</section>
<section id="Unit3">
	<h2>Unit 3</h2>
	<p>
		It turns out the solution to last week&apos;s discussion exercise <strong>*was*</strong> in fact provided.
		Also, it seems I was the one in the wrong.
		At first, I still thoroughly disagreed with the solution provided.
		Simply put, we know how many times the inner loop will run, and it isn&apos;t n<sup>2</sup> times.
		However, I got something majorly wrong as well.
		In fact, how badly I got this wrong greatly exceeds how far the provided solution is from reality.
		I messed up the definition of a factorial.
		I was using addition instead of multiplication, and factorials are about multiplication.
		I guess now that I think on it though, the provided solution is exactly right.
		Basically, the solution should be O(n<sup>2</sup>/2).
		However, we drop constant factors such as the divide-by-two, and that leaves us with the answer nearly everyone in the class besides me got.
		My bad.
	</p>
	<p>
		This week was all about lists, queues, and stacks.
		All three of these constructs are things I&apos;m already very familiar with.
		I use lists all the time, and I&apos;ve used queues on numerous occasions.
		As for stacks ...
		I can&apos;t remember any time I&apos;ve ever used a stack, aside from the calling stack.
		However, stacks aren&apos;t exactly a difficult concept to understand, and you usually learn about them when you learn about queues, so I&apos;ve known about them for years.
		They&apos;re just not something that tends to be useful in the type of code I write.
		Sometimes, I need to process things in the order they arrive, and other times, order simply doesn&apos;t matter.
		When order doesn&apos;t matter, I use queues because they&apos;re what I think to use first, and things have to be dealt with in <strong>*some*</strong> order.
	</p>
	<p>
		The book&apos;s choice to implement the concept of a current position in a list implementation seems rational, until you see what they use it for.
		It&apos;s the type of thing you could use for iteration over a list, so it seems like a valid feature.
		However, they instead use it for rearranging lists.
		For example, the method that moves something to the beginning of the list takes the item in the current position and moves it.
		That means you need to set the current position, then perform the move.
		Why not just have the moving method take the current position, as represented by an integer, as a parameter?
		That&apos;d be a lot easier to use in practice.
		The book&apos;s implementation also uses the current position for insertion and deletion.
		Again, pass the position into those methods as an argument.
		You really only need the current position for the <code>next()</code> and <code>prev()</code> methods.
	</p>
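	<p>
		Here&apos;s a sketch of the interface I&apos;m arguing for, with the position passed as an explicit argument. This is my hypothetical API, not the book&apos;s:
	</p>

```java
import java.util.ArrayList;

// Hypothetical sketch: every mutating method takes the position as an
// explicit argument, so no stored "current position" is needed.
public class PositionalList<E> {
    private final ArrayList<E> items = new ArrayList<>();

    public void append(E item) { items.add(item); }
    public void insert(int pos, E item) { items.add(pos, item); }
    public E remove(int pos) { return items.remove(pos); }
    public E get(int pos) { return items.get(pos); }

    public void moveToFront(int pos) {
        // One call replaces the set-position-then-move sequence.
        items.add(0, items.remove(pos));
    }

    public static void main(String[] args) {
        PositionalList<String> list = new PositionalList<>();
        list.append("a");
        list.append("b");
        list.append("c");
        list.moveToFront(2);
        System.out.println(list.get(0)); // prints "c"
    }
}
```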
	<p>
		The current position concept for use in additions, deletions, and access begins to make more sense when looking at linked lists, as opposed to array-powered lists, until you realise that this is still an $a[API] failure; a problem in the interface.
		As the book discussed, it&apos;s important to be able to access <strong>*any*</strong> element of the list.
		This includes items we&apos;re not currently pointing at.
		Without random access, a list doesn&apos;t function as a proper list.
		This makes linked lists unsuitable for use in implementing lists.
		They can still be used for list implementations if you abstract away the inability to access random elements, though they&apos;re pretty slow for that kind of use.
		Linked lists are much more suited to things such as queues and stacks, where random access is neither needed nor permitted.
	</p>
	<p>
		The idea of a free list is interesting.
		It&apos;s not something I would have thought of, myself.
		It trades space for time, by not actually deleting old nodes, as it&apos;s computationally expensive to generate new nodes.
		So old nodes get saved and recycled.
		The drawback is, of course, that the size needed to store the list never shrinks.
		It can only grow.
		Well, the total space used by all the linked lists of a specific type can only grow.
		The linked lists are able to share a free list, after all.
	</p>
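	<p>
		A free list can be sketched roughly like this, with released nodes pushed onto a shared stack and reused by the next allocation. This is my own sketch, not the book&apos;s code:
	</p>

```java
// Sketch of a free list: removed nodes are pushed onto a shared stack and
// recycled by the next allocation, trading space for allocation time.
public class Node {
    public int value;
    public Node next;

    // One free list shared by every linked list that uses this node type.
    private static Node freeList = null;

    public static Node acquire(int value, Node next) {
        Node n;
        if (freeList != null) {
            n = freeList;            // recycle a spare node
            freeList = freeList.next;
        } else {
            n = new Node();          // allocate only when no spare exists
        }
        n.value = value;
        n.next = next;
        return n;
    }

    public static void release(Node n) {
        // Instead of letting the node become garbage, save it for reuse.
        n.next = freeList;
        freeList = n;
    }

    public static void main(String[] args) {
        Node a = Node.acquire(1, null);
        Node.release(a);
        Node b = Node.acquire(2, null);
        System.out.println(a == b); // prints "true": the node was recycled
    }
}
```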
	<p>
		The book started out saying an array-based stack could use either end of the array as the top.
		I immediately thought that would be a bad idea.
		You shouldn&apos;t use index zero as the top, as it&apos;d mean always copying all the other elements for both pushes and pops.
		What do you gain from using that end?
		But then the book covered how bad of an idea that is, so the book and I were on the same page again.
	</p>
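	<p>
		For reference, here&apos;s a rough sketch of a stack using the high end of the array as the top, where neither push nor pop ever shifts elements:
	</p>

```java
// Array-based stack using the high end as the top: push and pop touch only
// the last occupied slot, so neither operation shifts elements.
public class ArrayStack {
    private int[] data = new int[4];
    private int top = 0; // number of elements; top element is data[top - 1]

    public void push(int v) {
        if (top == data.length) { // grow when full
            int[] bigger = new int[data.length * 2];
            System.arraycopy(data, 0, bigger, 0, data.length);
            data = bigger;
        }
        data[top++] = v;
    }

    public int pop() {
        if (top == 0) throw new IllegalStateException("stack is empty");
        return data[--top];
    }

    public static void main(String[] args) {
        ArrayStack s = new ArrayStack();
        for (int i = 1; i <= 5; i++) s.push(i); // forces one resize
        System.out.println(s.pop()); // prints "5"
        System.out.println(s.pop()); // prints "4"
    }
}
```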
	<p>
		As a side note, I liked how the book mentioned that some problems require recursion &quot;or some approximation of recursion&quot;.
		I used to think some problems couldn&apos;t be solved without recursion, but the thing is, you can always approximate recursion.
		Computers can&apos;t actually handle recursion by default.
		Instead, operating system designers approximate recursion, which the system and other programs then use.
		If recursion can be built from non-recursion, recursion can&apos;t be a basic building block.
		It can in fact be approximated and imitated.
	</p>
	<p>
		Just like I don&apos;t think linked lists make good list implementations, I didn&apos;t think arrays made good queue implementations.
		You need to keep shifting elements, which is inefficient.
		A linked list is pretty much the only good way to build a queue.
		The concept of not keeping the enqueued items on one end of the array is interesting though, and certainly would help to lessen the problem.
		At least that way, only some enqueue operations would require moving any items.
		But then, the book continued, and explained circular queue implementations.
		That&apos;s actually rather clever!
		I believe a circular queue implementation removes all arguments I have against using an array as a queue.
	</p>
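	<p>
		A minimal sketch of the circular idea: the front index wraps around with modular arithmetic, so no enqueue or dequeue ever shifts elements. My own code, not the book&apos;s implementation:
	</p>

```java
// Minimal circular queue: the front index wraps around with modular
// arithmetic, so no enqueue or dequeue ever shifts elements.
public class CircularQueue {
    private final int[] data;
    private int head = 0; // index of the front element
    private int size = 0;

    public CircularQueue(int capacity) { data = new int[capacity]; }

    public boolean enqueue(int value) {
        if (size == data.length) return false; // full
        data[(head + size) % data.length] = value;
        size++;
        return true;
    }

    public int dequeue() {
        if (size == 0) throw new IllegalStateException("queue is empty");
        int value = data[head];
        head = (head + 1) % data.length; // advance the front; no shifting
        size--;
        return value;
    }

    public static void main(String[] args) {
        CircularQueue q = new CircularQueue(3);
        q.enqueue(1);
        q.enqueue(2);
        q.enqueue(3);
        System.out.println(q.dequeue()); // prints "1"
        q.enqueue(4); // wraps around into the freed slot
        System.out.println(q.dequeue()); // prints "2"
    }
}
```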
	<p>
		The book&apos;s take on dictionaries is that a key from a key/value pair duplicates a part of the value&apos;s record.
		But why does this necessarily need to be the case?
		Personally, when I implement dictionaries, I omit the key from the record in the value.
		For example, if I was storing employee records and the keys were the employee $a[ID] numbers, the records in the values would <strong>*not*</strong> include these $a[ID] numbers.
	</p>
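	<p>
		In other words, something like this, where the ID lives only in the map key (all names invented for illustration):
	</p>

```java
import java.util.HashMap;
import java.util.Map;

// The scheme described above: the employee ID exists only as the map key,
// and the record deliberately does not repeat it. Names are invented.
public class EmployeeDirectory {
    public static class Record {
        public final String name; // no ID field on purpose

        public Record(String name) { this.name = name; }
    }

    private final Map<Integer, Record> byId = new HashMap<>();

    public void add(int id, Record record) { byId.put(id, record); }
    public Record lookup(int id) { return byId.get(id); }

    public static void main(String[] args) {
        EmployeeDirectory dir = new EmployeeDirectory();
        dir.add(42, new Record("Ada"));
        System.out.println(dir.lookup(42).name); // prints "Ada"
    }
}
```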
	<p>
		It seems we&apos;re using Jeliot to develop Java code in this course.
		This will be the third Java course I&apos;ve taken at this school, and in each course, we&apos;ve used an entirely different $a[IDE], for no apparent reason.
		It&apos;d be nice if the school was more consistent between courses.
		At least Jeliot is under a free license.
		I don&apos;t run proprietary software on my computer, so if Jeliot was under a proprietary license, I&apos;d be stuck on a different $a[IDE] than the rest of the class, and that might interfere with communication and assignment completion.
		Anyway, Jeliot didn&apos;t seem to want to run from the <code>.jnlp</code> file the assignment instructions linked to.
		What even <strong>*is*</strong> a <code>.jnlp</code> file?
		The software website also offered a basic $a[JAR] file though, and that runs just fine.
	</p>
	<p>
		The first thing I noticed about Jeliot was the large, blank section and the tiny code section.
		After that, I noticed you can&apos;t see the white space by default, and that Jeliot uses spaces for indention by default.
		Ew.
		After replacing the spaces with proper tab characters, I looked for a setting to make white space visible, but found no such option.
		In fact, Jeliot doesn&apos;t seem to have many options at all.
		This certainly isn&apos;t an editor I&apos;d ever choose, but it&apos;s ... usable.
		Or so I thought.
		I wrote up most of the program we need for the week&apos;s assignment, then saved and closed it.
		When I reopened the program and loaded the code, I found every last one of my tab characters had been converted into spaces.
		Using spaces for indention is not proper coding style, and I will not write ugly code.
		I&apos;ve switched to my usual code editor, Geany, which doesn&apos;t have Java-specific functionality.
		(It&apos;s a text and code editor, and not an $a[IDE].)
		I could of course use Eclipse, but that $a[IDE] requires setup of entire project directories just to write one Java file.
		From this point forward, I&apos;ll be writing my code in Geany, then pasting it into Jeliot for error-checking.
		I&apos;ll be submitting my Geany version though, which uses proper indentation.
	</p>
	<p>
		Building linked lists really isn&apos;t hard, so the assignment this week was a breeze.
		I think I&apos;ve done it before in another course, though I can&apos;t say for sure.
		I&apos;d check my personal records, but the university is censoring my main method of accessing them right now.
		Trying to search through my old work without the main access method is unnecessarily cumbersome, so I tend not to do it.
	</p>
	<p>
		For the most part, the discussion this week was just a retelling of the week&apos;s material.
		That&apos;s how it tends to be in the discussion forum.
		Sometimes, someone will come up with a gem we can all learn from, but usually, the discussion boards are just a thing that forces me to complete the reading material quicker than I&apos;d like to, to make sure I keep my posts released on schedule.
	</p>
</section>
<section id="Unit4">
	<h2>Unit 4</h2>
	<p>
		Most of the section explaining the basics of binary trees was review, though I didn&apos;t know the links between nodes in a tree are called &quot;edges&quot;.
		Full and complete binary trees are easy enough to understand.
		It&apos;s worth noting that a tree can be both full and complete, but that one does not imply the other.
		For example, a full tree may have leaves sticking out in odd spots, making it not complete.
		A complete tree is nearly perfectly balanced (<strong>*maybe*</strong> that fact warrants a special name, but we already have a name for trees with that property: balanced), but there&apos;s potentially a branch with only one leaf node attached in the final layer, making it not full.
	</p>
	<p>
		The book tells us that the number of leaves in a full binary tree is the same for all binary trees with the same number of branches.
		The wording was a bit odd though, so I thought it was saying that the number of leaves was equal to the number of branches.
		I knew that couldn&apos;t be right.
		In a full binary tree consisting of the branches making up one big chain, if there are n branches, there are <strong>*not*</strong> n leaves.
		There are n+1 leaves!
		I&apos;d just misunderstood though.
		All binary trees with n branches have the same number of leaves as other binary trees with n branches.
		As it&apos;s incredibly easy to prove that a continuous chain of branches will always have n+1 leaves in a full tree, we can therefore deduce that in a full tree of any configuration, there is one more leaf than there are branches.
	</p>
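	<p>
		The claim is easy to check in code. This little counter is my own sketch, not the book&apos;s:
	</p>

```java
// Quick check of the claim above: in a full binary tree (every node has
// zero or two children), the leaves always outnumber the branches by one.
public class FullTreeCount {
    public static class Node {
        Node left, right;
        Node(Node left, Node right) { this.left = left; this.right = right; }
    }

    public static int leaves(Node n) {
        if (n == null) return 0;
        if (n.left == null && n.right == null) return 1;
        return leaves(n.left) + leaves(n.right);
    }

    public static int branches(Node n) {
        if (n == null || (n.left == null && n.right == null)) return 0;
        return 1 + branches(n.left) + branches(n.right);
    }

    public static void main(String[] args) {
        // A chain-shaped full tree: two branches, so three leaves.
        Node chain = new Node(new Node(new Node(null, null), new Node(null, null)),
                              new Node(null, null));
        System.out.println(branches(chain)); // prints "2"
        System.out.println(leaves(chain));   // prints "3"
    }
}
```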
	<p>
		When the book moved on to discuss tree traversal, I immediately thought of inorder traversal.
		It seems like the most logical way to traverse the tree in the context of trying to preserve a relationship between the data points held in the tree.
		I&apos;d forgotten about preorder traversal and postorder traversal though.
		Both have their usefulness in preserving the relationship between nodes instead of the relationship between the data points, which can be useful as well.
		The section on where to put null tree checks was informative as well.
		It made it very clear why the checks need to be done on the incoming value, not the values about to be used to recursively call the traversal function.
		At least in Java, a variable&apos;s type doesn&apos;t protect against null values, so the function could easily be called on an empty tree even before recursion.
	</p>
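	<p>
		A sketch of that placement: the single null check guards the incoming node, so even an empty tree is handled, and the recursive calls need no pre-checking of the children. My own code, not the book&apos;s:
	</p>

```java
// Traversal sketch: the one null check guards the incoming node, so calling
// either function on an empty tree is safe, and the recursive calls need no
// pre-checking of the children.
public class TreeWalk {
    public static class Node {
        int value;
        Node left, right;
        Node(int value, Node left, Node right) {
            this.value = value;
            this.left = left;
            this.right = right;
        }
    }

    public static String inorder(Node root) {
        if (root == null) return ""; // check the value we were handed
        return inorder(root.left) + root.value + inorder(root.right);
    }

    public static String preorder(Node root) {
        if (root == null) return "";
        return root.value + preorder(root.left) + preorder(root.right);
    }

    public static void main(String[] args) {
        Node root = new Node(2, new Node(1, null, null), new Node(3, null, null));
        System.out.println(inorder(root));  // prints "123"
        System.out.println(preorder(root)); // prints "213"
        System.out.println(inorder(null).isEmpty()); // prints "true": empty tree is fine
    }
}
```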
	<p>
		I find the idea of using a different node type for leaves than for branches to be bizarre.
		A leaf is defined by the fact that it has no children, not the fact that it&apos;s using a different node type.
		This separate node type prevents the adding of children later, meaning that if a node needs to no longer be a leaf node, it must be swapped out for a new branch-type node.
		Likewise, if a branch node has its leaves removed, while there won&apos;t be a technical reason why the node needs to be swapped out, consistency dictates that we should probably swap it out for a leaf-type node.
		It&apos;s foreign to me too to think that the leaves might store different types of data than the branches.
		And in the context of Java, we&apos;d need to know ahead of time the data types associated with a node&apos;s methods.
		They&apos;d need to have the same parent class, meaning that they&apos;d need to return the same data types, regardless of what data type is stored internally.
	</p>
	<p>
		The book claims a binary tree can be implemented as an array.
		This removes all the overhead, or at least most of it.
		Unless the array is the perfect size for the data, which would prevent the adding of more data, there will be a little bit of overhead.
		However, there&apos;s something very important to note here: there&apos;s no tree structure.
		We can <strong>*emulate*</strong> a tree structure by performing a binary search on the array, but the array is in no way actually a binary tree.
		After all, a binary tree is defined by its structure.
		If there is no tree structure, there is no binary tree.
		That said, the book is trying to claim this is somehow a tree, and the assignment instructions specifically say we can build our implementation using an array, so I almost took advantage of that option.
		Using an array, we know exactly how many &quot;levels&quot; our &quot;tree&quot; has, and can easily see where in the &quot;tree&quot; each data point should belong to avoid adding another &quot;level&quot; to the &quot;tree&quot;.
		The easiest way to keep track of the unused positions in the array is to keep all the used positions clumped together on one end, and the unused positions clumped together on the other.
		<del>This doesn&apos;t make for a balanced &quot;tree&quot;, but as long as the array is always of the size necessary to implement the fewest number of &quot;levels&quot; the &quot;tree&quot; could have while still retaining all the data, the worst case in binary search is only equal to the worst case for a balanced tree.
		This worst case just comes up a little more often than it should.
		The main problem, instead, is insertion.
		Insertion becomes very inefficient.
		In the worst case, not only is the array already full, but the value inserted is at the beginning.
		The values from the old array must then be copied to a new array of size 2n-1, then all the values shifted by one in order to make room for the new value.
		In a &quot;tree&quot; implemented like this, just after an insertion that bumps up the array size, one of the &quot;root&quot;&apos;s &quot;children&quot; is &quot;empty&quot;.
		The &quot;root&quot; is the worst place to have &quot;empty&quot; &quot;children&quot;.</del>
		Still, since the topic for the week is binary trees, I decided to use a real binary tree in my assignment submission.
		<ins>A few hours after deciding not to use an array, I realised my array implementation wouldn&apos;t work anyway.
		It&apos;d result in &quot;nodes&quot; other than the &quot;root&quot; not having &quot;parents&quot; in some cases.
		All sorting efficiency I&apos;d been squeezing out of the array would actually break the &quot;tree structure&quot;.</ins>
	</p>
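	<p>
		For reference, here&apos;s the index arithmetic behind one common zero-based array layout, where node i&apos;s children sit at fixed offsets. This is a sketch of the general technique, not necessarily the book&apos;s exact numbering:
	</p>

```java
// Index arithmetic for one common zero-based array layout: node i's
// children and parent sit at fixed offsets, so the "tree" needs no node
// objects at all.
public class ArrayTree {
    public static int leftChild(int i) { return 2 * i + 1; }
    public static int rightChild(int i) { return 2 * i + 2; }
    public static int parent(int i) { return (i - 1) / 2; }

    public static void main(String[] args) {
        System.out.println(leftChild(0));  // prints "1"
        System.out.println(rightChild(0)); // prints "2"
        System.out.println(parent(2));     // prints "0"
    }
}
```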
	<p>
		It looks like we&apos;re not going to be covering how to keep trees balanced as items are added to them this week, unfortunately.
		So the only real option is to deal with the random insertion order by having a naturally-growing tree, with branches and leaves in unpredictable places.
	</p>
	<p>
		Max-heaps and min-heaps are types of binary trees I&apos;d never heard of.
		Normally, the value of each child of a binary tree has a different relationship with the parent.
		One child is greater, while the other is lesser.
		With a min- or max-heap though, both children have the <strong>*same*</strong> relationship with the parent.
	</p>
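	<p>
		As it happens, Java ships a ready-made min-heap in <code>PriorityQueue</code>: every child is greater than or equal to its parent, so polling repeatedly yields elements in sorted order. A small sketch:
	</p>

```java
import java.util.PriorityQueue;

// Java's PriorityQueue is a min-heap: every child is greater than or equal
// to its parent, so polling repeatedly yields elements in sorted order.
public class HeapDemo {
    public static int[] drain(int... values) {
        PriorityQueue<Integer> heap = new PriorityQueue<>();
        for (int v : values) heap.add(v);
        int[] out = new int[values.length];
        for (int i = 0; i < out.length; i++) out[i] = heap.poll();
        return out;
    }

    public static void main(String[] args) {
        int[] sorted = drain(3, 1, 2);
        System.out.println(sorted[0] + "," + sorted[1] + "," + sorted[2]); // prints "1,2,3"
    }
}
```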
	<p>
		Huffman coding trees are interesting.
		I&apos;d thought that schemes such as Unicode were inefficient because not all bit sequences are valid and the bit encodings of characters are of differing lengths.
		However, I now see it&apos;s a matter of optimising for use with certain characters.
		I guess $a[ASCII] characters do tend to get used in a lot of languages, so having them represented by shorter byte sequences is useful.
	</p>
	<p>
		When building my code for the assignment, I ended up refactoring some stuff to make it more efficient before the first run.
		I didn&apos;t realise it, but I&apos;d left some code in place that was no longer valid.
		Jeliot complained about something being not implemented, but wouldn&apos;t tell me what, or even what line the problem was on.
		Worst of all, Jeliot kept highlighting a valid function call, making it look like that function was potentially not implemented.
		However, there was a compilation error whenever that function was renamed, meaning that Jeliot was in fact finding the function and did know it to be implemented.
		I finally broke down and started an Eclipse project to use Eclipse&apos;s debugger instead.
		Eclipse complained about having all the classes in the same file, but after sifting out those errors, I found the real problem reported: I was trying to examine an attribute of an integer, but integers aren&apos;t part of a class and don&apos;t have methods.
		Jeliot can&apos;t even report such a simple error with anything more specific than &quot;not implemented&quot;, nor can it even point to the correct line so I could find the problem myself.
		There was another error as well, where I accidentally tried to use <code>self</code> instead of <code>this</code> to get an object to reference itself.
		Eclipse too complained about that.
		When I tried to use Jeliot&apos;s search feature to find where I&apos;d done that, Jeliot couldn&apos;t even find the word <code>self</code>.
		Eclipse, of course, could, and I was able to use that to figure out where to look in the copy open in Jeliot.
		Jeliot couldn&apos;t even find a simple string in the source file.
		I swear, Jeliot is not a usable tool for development.
		The fact that you need to wait for Jeliot&apos;s curtain close animation before it&apos;ll let you touch the source code is also just plain idiotic.
		There&apos;s no need for that.
		I understand locking up during the opening animation, honestly.
		It buys time for compilation to complete.
		Locking up during the closing animation only serves to annoy though.
	</p>
	<p>
		Jeliot likewise isn&apos;t handling the scanner in the way I&apos;d expect.
		I set my code up to catch the exception I expected it to throw if the user entered non-numeric data.
		In fact, that exception was to be the signal that the user was done with data-entry.
		However, Jeliot simply didn&apos;t allow such data to be entered in the first place, so no exception could ever be thrown.
		I was going to test in Eclipse to see if this was Jeliot being idiotic again or if I just don&apos;t understand how to use the <code>nextInt()</code> method properly.
		However, transferring between Jeliot and Eclipse was too much of a bother at this point.
		Jeliot wants all the classes in a single file, while Eclipse won&apos;t run the code unless they&apos;re in separate files.
		Another odd quirk of Jeliot is that it doesn&apos;t seem to support the <code>close()</code> method of scanner objects, so you have to just leave them open.
		It also couldn&apos;t recognise the output of <code>next()</code> to be anything other than a generic object, so I couldn&apos;t pass it directly to another method.
		Instead, I had to declare a <code>String</code>-type variable, store the value there, then pass the variable into the method I needed the value passed into.
	</p>
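	<p>
		For reference, this is a stripped-down sketch of the pattern I was attempting, with the scanner reading from a <code>String</code> so the exception can actually fire (the actual assignment code read from standard input):
	</p>

```java
import java.util.InputMismatchException;
import java.util.Scanner;

public class ScannerDemo {
    // Keep calling nextInt() until a non-numeric token appears, and
    // treat the resulting InputMismatchException as the signal that
    // the user is done with data-entry.
    public static int sumUntilNonNumeric(String data) {
        Scanner input = new Scanner(data);
        int sum = 0;
        try {
            while (true) {
                sum += input.nextInt();
            }
        } catch (InputMismatchException endOfData) {
            // Non-numeric token: data-entry is finished.
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumUntilNonNumeric("4 8 15 done")); // 27
    }
}
```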
	<p>
		Besides me, no student posted to the discussion board until the second-to-last day.
		I was starting to worry everyone had abandoned ship and I was the last student in the class!
		I mean, not literally, but it was a bit annoying to have to wait to get my own replies in.
		I typically make one reply per day in the combination of both my courses, and when other students wait until the last minute to post, I&apos;m stuck waiting until the last minute to reply.
	</p>
	<p>
		I took longer to complete the binary sorting tree assignment than I should have.
		Maybe stress had something to do with it, but I feel like I should have been able to complete it a few hours quicker than I did.
		As for grading last week&apos;s assignment, one of the other students mentioned in their write-up that they couldn&apos;t get Jeliot running.
		I wonder if they have a similar computer set-up as I do.
		For me, the provided instructions didn&apos;t work.
		I had to combine some ingenuity with some prior knowledge to get it running.
		Just in case, I left instructions to repeat what worked for me in my feedback notes, so hopefully they&apos;ll read and try that.
	</p>
</section>
<section id="Unit5">
	<h2>Unit 5</h2>
	<p>
		Like last week, it took a while for students to reply in the discussion forum, though unlike last week, it didn&apos;t mess with my posting schedule because students in my other course responded in a timely manner.
	</p>
	<p>
		The book quickly got into the idea that K-ary trees could be implemented with the children in linked lists instead of arrays.
		I suppose this works, though it seems like a highly unintuitive choice to me.
		I guess what this option brings to the table though is that children can be added and removed without the need to construct an entirely new array of a differing size each time.
		It&apos;s a good option for a changing, mutable tree, though for something more stable, I still think an array would be the better option.
	</p>
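	<p>
		A minimal node sketch of my own for that linked-children layout: each node points down to its first child and across to its next sibling, so adding a child is a pointer update rather than an array rebuild.
	</p>

```java
public class KaryNode {
    int value;
    KaryNode firstChild;   // points down to the first child
    KaryNode nextSibling;  // points across to the next child of the parent

    KaryNode(int value) { this.value = value; }

    // Prepend a child in O(1); no new array of a different size needed.
    void addChild(KaryNode child) {
        child.nextSibling = firstChild;
        firstChild = child;
    }

    int countChildren() {
        int count = 0;
        for (KaryNode c = firstChild; c != null; c = c.nextSibling) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        KaryNode root = new KaryNode(1);
        root.addChild(new KaryNode(2));
        root.addChild(new KaryNode(3));
        root.addChild(new KaryNode(4));
        System.out.println(root.countChildren()); // 3
    }
}
```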
	<p>
		The lack of an in-order traversal option is also something I wouldn&apos;t have thought of right away, but the book mentions it.
		I&apos;m not sure how one would use a K-ary tree for sorting anyway, but in-order traversal is simply not an option.
		I mean, between what two children would you process the parent?
		Only pre-order and post-order traversal even seem to have a discernible meaning in non-binary trees.
		The book does mention that a definition could be made up, but such a definition typically wouldn&apos;t be useful.
		I think if you make a definition up like that though, it wouldn&apos;t be in-order traversal.
		It&apos;d be something else, and should be referred to by some other name.
	</p>
	<p>
		The parent pointer method of holding a tree together caught me off-guard.
		It&apos;s pretty much the reverse of how one would normally bind the tree&apos;s nodes into a unified structure.
		Usually, from the parent, you can find all the descendants.
		This method though takes the child, and allows the finding of all the ancestors.
		It sounds like it works for some use cases, but I&apos;m guessing not many.
		It certainly takes care of the issue of a parent node only having a fixed number of pointers for children though.
		Parents don&apos;t even need pointers for their children in this implementation.
		Instead, children have the pointers, which works because the children have a fixed number of parents: one.
	</p>
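	<p>
		A tiny sketch (my own example tree, not the book&apos;s) of the parent-pointer representation: each node stores only its parent&apos;s index, so walking up to the ancestors is trivial while finding children would mean scanning every node.
	</p>

```java
public class ParentPointerTree {
    // parent[i] is the parent of node i; -1 marks the root.
    // Tree: 0 is the root; 1 and 2 are its children; 3 and 4 are children of 1.
    static int[] parent = {-1, 0, 0, 1, 1};

    // Count a node's ancestors by following the pointers up to the root.
    public static int depth(int node) {
        int depth = 0;
        while (parent[node] != -1) {
            node = parent[node];
            depth++;
        }
        return depth;
    }

    public static void main(String[] args) {
        System.out.println(depth(4)); // 2: node 4 -> node 1 -> node 0
    }
}
```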
	<p>
		The section on compact serialisation of trees was informative.
		It all made sense, but it&apos;s still difficult to remember it all.
		The main takeaway seems to be to try to omit as much recoverable data as possible and squeeze information into unused bits when feasible.
	</p>
	<p>
		It looks like we were supposed to include a sample of our program&apos;s output with last week&apos;s code submission.
		The instructions were a bit unclear, and I ended up leaving that out.
		Oops.
		Maybe I was the only one to think them unclear though; all three students I graded work for did in fact include such output.
		It seems all four of us decided to use a linked node implementation.
		I wonder if anyone used an array instead.
		I would think not.
		Given the nature of the assignment, the linked node implementation was the intuitive choice.
		For the most part, the other students did well, though one seemed to get the iteration count output wrong.
		The first student output the <strong>*running*</strong> iteration count instead of the total iteration count.
		I marked it as correct because technically, the correct total count did get output in the end, but only after an incorrect total had been output several times.
	</p>
</section>
<section id="Unit6">
	<h2>Unit 6</h2>
	<p>
		The instructions for this project say that we&apos;ll learn about what brute force is in CS 1304.
		No such course seems to exist at this school though.
		I&apos;m guessing it&apos;s a typo, and is supposed to say <span title="Analysis of Algorithms">CS 3304</span>, a course that has this course as a prerequisite.
		Either way though, I already know what brute force is.
		It&apos;s what I use when I have no clue how to solve a problem the correct way.
		It&apos;s a strategy of having the computer compute all possible answers, and check each one for correctness.
		It&apos;s about the least-efficient solution possible, but it gets results, if you&apos;re willing and able to wait for it to complete.
		When I code these types of solutions, I often leave them running when I go to work, and I get home to find the answer still hasn&apos;t been found.
		Thankfully, I&apos;ve usually come up with a better way to find the answer by then, so I just tear apart the brute force solution and replace it with something much more efficient, and get my answer quite quickly.
		For me, the brute force solution is often a starting point to get me thinking about the problem, and doesn&apos;t actually get me the answers I&apos;m looking for because it&apos;s too slow.
	</p>
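	<p>
		In code, the strategy looks like this minimal sketch; the &quot;problem&quot; here (finding an integer whose square is a given target) is made up purely for illustration:
	</p>

```java
public class BruteForce {
    // Generate every candidate answer and check each one for correctness.
    public static int findSquareRoot(int target) {
        for (int candidate = 0; candidate <= target; candidate++) {
            if (candidate * candidate == target) {
                return candidate; // correctness check passed
            }
        }
        return -1; // no candidate worked
    }

    public static void main(String[] args) {
        System.out.println(findSquareRoot(3249)); // 57
    }
}
```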
	<p>
		Brute force doesn&apos;t only apply in the realm of computers, either.
		For example, sometimes, I find old combination locks.
		If they&apos;re the type that allows you to flip one digit and try again without entering all the digits again, I sometimes use brute force to find their combinations when I&apos;m stuck away from my computer and unable to get anything actually useful done.
		I just keep one in my bag, and when stuck in waiting rooms or the like, pull it out and start flipping digits.
		The trick is to set the combo to <code>0000</code>, and try each combination, incrementing by one each time.
		You don&apos;t need to keep track of a list of what combinations you&apos;ve tried, as you&apos;re trying them in order, and you&apos;ll eventually hit the right one.
		Given how easy it is to brute force this sort of lock, I really have no faith in them.
		A combo lock that does require re-entry of the entire combo each time would be more secure, I would think.
		Personally though, I prefer locks that use physical keys.
		If me, someone that has no lock-picking skills or training whatsoever, can figure out how to easily (though admittedly not quickly) find the combo, it&apos;s not a very secure option.
	</p>
	<p>
		It&apos;s easy to see why insertion sort and bubble sort aren&apos;t very efficient.
		They move records around without regard to whether each move actually puts any record into the place it actually belongs.
		True, each move is guaranteed to get the order <strong>*closer*</strong> to correct, which allows the process to eventually terminate, but many more moves are performed than are actually needed.
		Selection sort is what I&apos;d do intuitively, with no proper training as to how to sort things.
		I&apos;d look ahead, and find the record I actually needed to move into place instead of just blindly moving records closer to where they needed to go, not knowing if that move was correct.
		Of course, with what we&apos;re learning this week, we can do better than selection sort.
	</p>
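	<p>
		That look-ahead intuition is exactly selection sort; here&apos;s a small sketch of it, including a check to skip swapping an item with itself:
	</p>

```java
import java.util.Arrays;

public class SelectionSort {
    public static void sort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            // Look ahead for the record that actually belongs at position i.
            int smallest = i;
            for (int j = i + 1; j < a.length; j++) {
                if (a[j] < a[smallest]) {
                    smallest = j;
                }
            }
            if (smallest != i) { // never swap an item with itself
                int tmp = a[i];
                a[i] = a[smallest];
                a[smallest] = tmp;
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {42, 7, 19, 3, 25};
        sort(data);
        System.out.println(Arrays.toString(data)); // [3, 7, 19, 25, 42]
    }
}
```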
	<p>
		The book calls it unfortunate that bubble sort is taught to beginning programmers, when that sorting method is particularly terrible.
		It&apos;s not the only thing stupidly taught to beginners though.
		Many times at this school, I&apos;ve noticed bad practices taught to us.
		And that&apos;s only with the subjects I&apos;m <strong>*familiar*</strong> with.
		How many bad practices have I learned from the school, thinking they&apos;re the right way to do things?
		I have no way to know.
		One course that particularly comes to mind for me is <span title="Web Programming 1">CS 2205</span>.
		We were taught so many poor Web-development behaviours that it&apos;s really no wonder the Web is a mess these days, cluttered with poorly-designed pages with invalid markup.
		The schools aren&apos;t teaching students the importance of page accessibility or markup validation.
	</p>
	<p>
		Shell sort is difficult to grasp.
		I mean, the process itself is relatively simple, and it&apos;s certainly straightforward.
		It&apos;s just hard to see how it improves on insertion sort.
		I think what it&apos;s basically doing though is just leaping elements around further.
		The theory, I think, is that you can skip over a bunch of unnecessary moves, and get the element much closer to where it needs to be, even if you don&apos;t get it quite right.
		The next iteration fine-tunes the result a bit, and so on, eventually moving things on the one-item scale to finish the job.
	</p>
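	<p>
		A sketch of that leaping behaviour, using the simple halving gap sequence: each pass is an insertion sort over elements a &quot;gap&quot; apart, and the final gap of one is a plain insertion sort over nearly-sorted data.
	</p>

```java
import java.util.Arrays;

public class ShellSort {
    public static void sort(int[] a) {
        for (int gap = a.length / 2; gap > 0; gap /= 2) {
            // Insertion sort over elements 'gap' positions apart.
            for (int i = gap; i < a.length; i++) {
                int value = a[i];
                int j = i;
                while (j >= gap && a[j - gap] > value) {
                    a[j] = a[j - gap]; // leap the element a whole gap at once
                    j -= gap;
                }
                a[j] = value;
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {9, 1, 8, 2, 7, 3};
        sort(data);
        System.out.println(Arrays.toString(data)); // [1, 2, 3, 7, 8, 9]
    }
}
```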
	<p>
		Merge sort seems really inefficient to me.
		It must not be as bad as it seems though, given that it&apos;s not listed as one of the three slow algorithms of the ten presented this week.
		Quick sort seems to have even more unnecessary moving of items though.
		Furthermore, quick sort uses recursion, which is also known to typically be inefficient.
		However, it&apos;s still the best sorting option for general purposes.
		It just goes to show that unintuitive solutions can sometimes be the best.
		That said, quick sort&apos;s great efficiency is based in luck.
		With a bad roll of the die, it actually is as bad as insertion sort, bubble sort, and selection sort.
	</p>
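	<p>
		A small quick sort sketch, using a simple last-element pivot: the recursion is visible in the two calls, and the worst case (an already-sorted input with this pivot choice) is exactly that bad roll of the die.
	</p>

```java
import java.util.Arrays;

public class QuickSort {
    public static void sort(int[] a, int low, int high) {
        if (low >= high) return;
        int pivot = a[high];
        int boundary = low; // first slot of the ">= pivot" region
        for (int i = low; i < high; i++) {
            if (a[i] < pivot) {
                swap(a, i, boundary++);
            }
        }
        swap(a, boundary, high); // pivot lands in its final position
        sort(a, low, boundary - 1);
        sort(a, boundary + 1, high);
    }

    private static void swap(int[] a, int i, int j) {
        if (i == j) return; // avoid swapping an item with itself
        int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
    }

    public static void main(String[] args) {
        int[] data = {5, 2, 9, 1, 7};
        sort(data, 0, data.length - 1);
        System.out.println(Arrays.toString(data)); // [1, 2, 5, 7, 9]
    }
}
```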
	<p>
		Heap sort seems to have a large amount of overhead.
		It&apos;s probably good for special uses, but not for most sorting tasks.
		The book seems to try to explain why a max-heap is used instead of a min-heap, but doesn&apos;t actually explain anything.
		It claims that the reason is because the values are inserted into the array starting at the end, which would require a max-heap to get the right sorting order.
		However, you could just as easily insert values starting at the beginning, which would require a min-heap.
		No actual explanation is given as to why we should insert from the end or why a max-heap-based implementation is better.
	</p>
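	<p>
		My own attempt at a sketch of the max-heap version shows at least one concrete advantage of inserting from the end: the largest value is repeatedly swapped to the back, so the sorted region grows from the end of the same array and no extra storage is needed.
	</p>

```java
import java.util.Arrays;

public class HeapSort {
    public static void sort(int[] a) {
        // Build the max-heap in place.
        for (int i = a.length / 2 - 1; i >= 0; i--) {
            siftDown(a, i, a.length);
        }
        // Repeatedly move the maximum to the end of the shrinking heap.
        for (int end = a.length - 1; end > 0; end--) {
            int tmp = a[0]; a[0] = a[end]; a[end] = tmp;
            siftDown(a, 0, end);
        }
    }

    private static void siftDown(int[] a, int root, int size) {
        while (2 * root + 1 < size) {
            int child = 2 * root + 1;
            if (child + 1 < size && a[child + 1] > a[child]) child++;
            if (a[root] >= a[child]) return;
            int tmp = a[root]; a[root] = a[child]; a[child] = tmp;
            root = child;
        }
    }

    public static void main(String[] args) {
        int[] data = {4, 10, 3, 5, 1};
        sort(data);
        System.out.println(Arrays.toString(data)); // [1, 3, 4, 5, 10]
    }
}
```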
	<p>
		Bin sort is a bit confusing.
		From what I think I&apos;m getting, bare bones bin sort expects no more than one value to have each key.
		The number of keys it expects though is one for each bucket, and these keys must be directly sequential.
		In other words, if your keys are 0 through n-1, yet your records are somehow out of order, bin sort can arrange them for you.
		A modified bin sort can handle larger and/or duplicate keys, but the cost is great overhead.
		Bin sort doesn&apos;t seem all that useful for most cases.
		Bucket sort seems to have less overhead, but at the cost that it can&apos;t fully sort anything on its own.
		It only breaks down the problem so some other sorting algorithm can finish the job.
		Radix sort seems to combine bin sort and bucket sort; it uses bucket sort to break down the problem, then uses bin sort to sort the partly-sorted values.
	</p>
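	<p>
		To make the bare-bones case concrete, here&apos;s a sketch assuming exactly what I described: keys 0 through n-1 with no duplicates, so each record drops straight into the bin its key names.
	</p>

```java
import java.util.Arrays;

public class BinSort {
    // Works only when the keys are exactly 0..n-1 with no duplicates:
    // one bin per key, so no collisions are possible.
    public static int[] sort(int[] keys) {
        int[] bins = new int[keys.length];
        for (int key : keys) {
            bins[key] = key;
        }
        return bins;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sort(new int[] {3, 0, 4, 1, 2})));
        // [0, 1, 2, 3, 4]
    }
}
```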
	<p>
		Jeliot is becoming increasingly annoying.
		When I run code, I&apos;m usually interested in the output, so I want it to run quickly.
		All I need to know is if the output is what I expect it to be.
		Jeliot slows the process down considerably with its animation of the process.
		Additionally, that output isn&apos;t even useful, as whenever Jeliot encounters an issue, the animation disappears.
		If it left the animation up, it&apos;d be useful for debugging.
		I could see the state the program was in when things went wrong.
		However, the one case in which the animation would be useful is the one time in which it isn&apos;t displayed.
		If the animation even had the option to take steps backward, it could be of use.
		When I notice a value is wrong, I can&apos;t go back and see <strong>*why*</strong> it&apos;s wrong.
		The only option is to reset from the beginning, which isn&apos;t even remotely helpful, so I end up just debugging by hand and ignoring the animation altogether.
		Jeliot also has this annoying bug where if you&apos;re not staring at it, some of the output doesn&apos;t show up in the console correctly.
		I thought my code wasn&apos;t outputting the array contents at the end like it was supposed to, which baffled me, as it did output the swap count, which was the code line directly before the one outputting the array contents.
		There was nothing between to cause branching or other unexpected behaviour.
		It turns out the output is actually invisible sometimes if Jeliot isn&apos;t on the active desktop.
		(I was working on my learning journal entry on desktop 0 where I tend to keep my code editor, while Jeliot ran on desktop 1.
		The assignment instructions were open on desktop 3, where I typically keep my Web browser and email client.)
		If you select the text with the cursor after returning to the desktop Jeliot is running on, it gains visibility.
	</p>
	<p>
		With some slight modification, I think quick sort would be even easier to implement.
		If we didn&apos;t make sure the pivot point ended up in the correct position, we could instead just sort it into one of the two sub-arrays (namely, the one holding values greater than or equal to the pivot point).
		I&apos;m not sure if this would be more or less efficient though.
	</p>
	<p>
		The assignment instructions say we&apos;re measuring the efficiency of our algorithms based on the number of swaps performed.
		Using that metric alone seems misleading to me, but I did my best to optimise in that regard.
		If that&apos;s what the goal of the assignment is, that&apos;s what I&apos;ll do.
		Strangely, by this metric alone, the optimal sorting mechanism would be one that looks at the entire array, finds the correct value for a given index, and performs only the needed swap.
		At most, you&apos;d have n-1 swaps at that point.
		But that wouldn&apos;t be one of the three sorting options we&apos;re allowed to implement this week.
		So basically, I just added checks to avoid unnecessary swaps.
		Things already in the correct sub-array get skipped over and no item ever gets swapped with itself.
	</p>
	<p>
		I got my number of swaps down to twenty-two, and I figured that was good enough.
		But then I started thinking about how I&apos;d write up my description of my implementation, and my mind wandered to what the optimal solution would be.
		Using half-swaps and creating a lot of overhead with iterations where the algorithm plans its moves, we could really get the amount of swapping down.
		(&quot;Half-swaps&quot; being where we move a value into its correct slot, then, instead of putting the displaced value where the first value used to be, we move the displaced value directly into the spot it belongs, displacing another value, and so on.
		In most cases, no two values would directly change places.)
		Using full swaps though, the number of necessary swaps is n-1, assuming no value starts in its correct location.
		Given that context, my swap count is fantastic!
		There are twenty-one array items, so my algorithm made only two unnecessary swaps.
		Given that my algorithm doesn&apos;t have any of the necessary look-ahead loops to implement any sort of planning mechanics, that&apos;s pretty darn good.
		The goal of this assignment is to prove we understand how to implement sorting mechanics, so I&apos;d say I&apos;ve done that well enough to be done tweaking the code.
	</p>
	<p>
		The discussion this week was uneventful.
		We pretty much just discussed quick sort.
		Maybe it was a bad idea for me to choose quick sort as the algorithm I implemented for the unit assignment.
		I learned about quick sort inside and out by implementing it, but I got nothing out of the discussion.
		The discussion simply didn&apos;t cover anything I hadn&apos;t already seen and worked with myself.
		If I&apos;d chosen a different sorting algorithm to implement, I&apos;d&apos;ve learned about that algorithm instead, and maybe I&apos;d&apos;ve gotten something out of the discussion.
	</p>
</section>
<section id="Unit7">
	<h2>Unit 7</h2>
	<p>
		I don&apos;t know what the first student I graded work for this week was thinking.
		They included a screenshot of their code running, but there was nothing of note in the shot.
		The code hadn&apos;t completed, so they weren&apos;t showing the output or the sorted array.
		In fact, it looked like it&apos;d been taken within seconds of the program being started.
		So to get their output so I could grade, I ran their code.
		It couldn&apos;t even complete!
		An error came up very quickly, revealing an uninstantiated variable, causing execution to halt.
		Clearly, they didn&apos;t even fully run their code to see if it could complete, let alone see if it had the right output.
		And to top it off, no description or analysis was provided.
	</p>
	<p>
		The next submission I graded looked like it was probably well-written, but they didn&apos;t include their output.
		So I tried to run their code myself so I could check the output for correctness.
		A syntax error prevented the code from compiling.
		My guess is that they developed this code outside of Jeliot.
		I don&apos;t blame them; Jeliot majorly slowed me down in developing my own solution.
		However, they should have at least tested in Jeliot before submitting, so they would know it would run for other students.
		Alternatively, they could have provided their output and I would have graded based on that.
	</p>
	<p>
		The third person again didn&apos;t provide their own output, but their code actually ran.
		It didn&apos;t output the number of swaps performed though.
	</p>
	<p>
		I was surprised to read this week that $a[RAM] is referred to as primary memory, while disk storage is only secondary.
		As the book says, disk memory is persistent.
		In my mind, that makes it the more important memory, so it seems more worthy of being labelled as primary.
		$a[RAM], on the other hand, is volatile.
		It loses its contents soon after the power is cut.
		This process can be delayed (for example, by freezing the $a[RAM]), but the data is still lost rather quickly compared to disk memory.
		$a[RAM] is merely working memory.
		It&apos;s not the file you put in your file cabinet.
		It&apos;s the scratch paper you throw out after using it to work out your calculation.
		$a[RAM] may be orders of magnitude faster than disk memory, but I would by no means consider it to be the main storage of the machine.
	</p>
	<p>
		The book discusses Windows versus UNIX in terms of disk layout, but I think it&apos;s missing a major detail.
		It mentions how Windows uses a file allocation table.
		That&apos;s not a property of Windows though.
		That&apos;s a property of certain disk formats, such as the File Allocation Table ($a[FAT]) format.
		I don&apos;t know what format a UNIX system uses by default; I&apos;m on Linux, not actual UNIX.
		But if a UNIX system was instead using a $a[FAT]-formatted disk, it too would be using a file allocation table.
	</p>
	<p>
		I knew about hard drive buffering, though it&apos;s been a while since I studied it, so it was nice to have a refresher on that.
		I don&apos;t think I knew about input and output having separate buffers though.
		It makes perfect sense why it&apos;s done that way, but isn&apos;t something I&apos;ve thought about before.
		I&apos;ve definitely not heard of having two input and two output buffers before.
		I can see why that would speed things up quite a bit.
		You can&apos;t read a buffer while it&apos;s being written to and expect consistency, so having one buffer to read from and one to write to can at times let both operations run in parallel instead of one waiting on the other.
	</p>
	<p>
		I was curious about how items could be sorted outside $a[RAM], which this week&apos;s reading assignment teased could be done.
		It turns out it can&apos;t though.
		Instead, the tactic usually used is to read part of the data needed, sort that, then write it back to disk and read the next part.
		Obviously, one item might end up moved on disk many times before it ends up in the right spot, but then again, things tend to get moved around quite a bit in $a[RAM] during the sorting process too.
		It&apos;s not much different, aside from the fact that reading from and writing to the disk is so much slower.
		One sorting method mentioned doesn&apos;t actually sort the data at all.
		Instead, it creates a sorted index so data can be found quickly later, but the data itself is still out of order.
		However, that sorted index file is sometimes only the beginning, used to plan out how the data should be arranged.
		When this is the case, the data does in fact get sorted.
	</p>
	<p>
		The idea of using your own buffer pool system to handle large files was interesting.
		The main buffer pool may be too small, but you can definitely create your own buffer pool system external to the main buffer pool and use that to pretend you have more $a[RAM] than you do.
		You&apos;re pretty much reinventing the wheel at that point, as the main buffer pool is already trying to make it appear like you have more $a[RAM] than you do, but if you understand how pages and buffer pools work well enough, this is an easy solution to implement.
		Even with paging, something such as quick sort shouldn&apos;t be too hard to implement.
		Like the book said too, if you implement your own buffer system, you can better tune it to your problem as well.
		It seems the biggest time-savers though are to focus on reducing disk reads/writes and to perform activities using different components ($a[RAM], hard drives, processor) running in parallel.
	</p>
	<p>
		The discussion assignment for the week was a bit odd.
		We were to run code that deals with buffer pool policies, but the pools generated were so small that I&apos;m not sure they really got the point across.
		With only two pages in the buffer pool, no page-replacement policy is going to be effective without carefully-crafted inputs.
		Even having just three would be better than two, though I think you&apos;d need at least four or five for a good demonstration.
	</p>
</section>
<section id="Unit8">
	<h2>Unit 8</h2>
	<p>
		When the textbook was talking about checking items in a sorted list to rule out many possibilities in a single comparison, I was sure it was going to discuss how we should use a binary search on sorted lists, not a basic sequential search.
		Instead, it goes on into a discussion of jump searching.
		I&apos;m not sure what advantage a jump search has over a binary search though.
		In a binary search, you can rule out about half the remaining options in a single comparison.
		Jump search doesn&apos;t have nearly that kind of effectiveness.
		Jump searching certainly does provide better efficiency than basic sequential searching though.
	</p>
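	<p>
		To convince myself, I sketched a comparison-counting test of my own over the same sorted array; binary search rules out half the range per probe, while jump search strides ahead and then scans the final block sequentially.
	</p>

```java
public class SearchCompare {
    public static int binarySearchProbes(int[] a, int key) {
        int low = 0, high = a.length - 1, probes = 0;
        while (low <= high) {
            int mid = (low + high) / 2;
            probes++;
            if (a[mid] == key) return probes;
            if (a[mid] < key) low = mid + 1; else high = mid - 1;
        }
        return probes;
    }

    public static int jumpSearchProbes(int[] a, int key, int stride) {
        int probes = 0, block = 0;
        // Jump a block at a time while the block's last key is too small.
        while (block + stride < a.length) {
            probes++;
            if (a[block + stride - 1] >= key) break;
            block += stride;
        }
        // Sequentially scan the block that must contain the key.
        for (int i = block; i < a.length; i++) {
            probes++;
            if (a[i] >= key) break;
        }
        return probes;
    }

    public static void main(String[] args) {
        int[] sorted = new int[100];
        for (int i = 0; i < 100; i++) sorted[i] = i;
        System.out.println(binarySearchProbes(sorted, 73));
        System.out.println(jumpSearchProbes(sorted, 73, 10));
    }
}
```

	<p>
		On a hundred sorted keys, the binary search consistently needs fewer probes than the jump search does.
	</p>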
	<p>
		The idea of arranging records by access probability is interesting.
		It seems a bit like a cop-out, rather than organising the records properly, but I can see why it would be beneficial for certain use cases.
		You do have to know the access probabilities to make it work though.
		The count and move-to-front methods of automated sorting have exactly the problems you&apos;d intuitively expect from them, so they&apos;re not very interesting.
		With counts, access pattern changes aren&apos;t handled well due to built-up counts from past access, and the move-to-front method pops even rarely-used records to the front that one time they actually do get used.
		For linked list implementations, the problem stops there, but for arrays, that causes a lot of value shifts.
		The transpose method caught my attention though.
		Like the book said, there&apos;s a corner case in which it performs poorly: accessing two adjacent records repeatedly will cause them to jump back and forth instead of migrating to the front.
		Other than that though, it seems like a rather decent algorithm that balances short- and long-term access patterns well, and doesn&apos;t even require the extra overhead of storing access counts.
	</p>
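	<p>
		A little sketch of the transpose heuristic over a plain array, which also demonstrates that corner case: each successful search swaps the found record one position toward the front.
	</p>

```java
import java.util.Arrays;

public class TransposeList {
    // Search for the key; on a hit, swap it with its predecessor and
    // return its new index. Returns -1 when the key is absent.
    public static int access(int[] list, int key) {
        for (int i = 0; i < list.length; i++) {
            if (list[i] == key) {
                if (i > 0) {
                    list[i] = list[i - 1];
                    list[i - 1] = key;
                    return i - 1;
                }
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] list = {10, 20, 30, 40};
        access(list, 30);
        System.out.println(Arrays.toString(list)); // [10, 30, 20, 40]
        // The corner case: two adjacent records accessed alternately
        // just swap back and forth without migrating forward.
        access(list, 20);
        access(list, 30);
        System.out.println(Arrays.toString(list)); // [10, 30, 20, 40]
    }
}
```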
	<p>
		The sections on bitmaps and hash maps were mostly review for me.
		The book&apos;s take on hash maps was a little different than what I&apos;d previously learned though.
		The version of hash mapping I&apos;d previously learned used an array of linked lists.
		First, a hash value from zero to n-1 (where n is the number of array slots) was computed to determine where the item would fall.
		Then, the item would be added to that linked list.
		Looking up a value by key then involved computing the hash again to find the right linked list, then performing a sequential search on that linked list.
		If the array size was close to the right size for the amount of data, the linked list dealt with would have somewhere from zero to two items, so the sequential search would be quick.
		This book&apos;s hash maps are different though.
		At first, I thought it was saying that a hash value is calculated and the records arranged in an array by hash value, smallest to largest.
		This allows for a binary search based on hash value to be performed.
		However, as the book continued, it turns out the book&apos;s method instead involves remapping keys to different slots when hash collisions occur, so multiple attempts still need to be made to find values in some cases.
		As expected though, either method of hash mapping does not make searches based on key ranges or partial key matches very efficient.
		All items in the hash map have to be checked for these types of searches.
		Items have an order, but no <strong>*meaningful*</strong> order, so stepping through the records in key order requires checking all items to see which one should come next, and the same applies to finding the record with the smallest or largest key.
		Hashing works great when you just need an unordered key/value store though.
	</p>
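	<p>
		The array-of-linked-lists scheme I&apos;d previously learned (separate chaining) can be sketched roughly like this; the key and value types here are my own simplification:
	</p>

```java
import java.util.LinkedList;

public class ChainedHash {
    private final LinkedList<int[]>[] slots; // each entry is a {key, value} pair

    @SuppressWarnings("unchecked")
    public ChainedHash(int size) {
        slots = new LinkedList[size];
        for (int i = 0; i < size; i++) slots[i] = new LinkedList<>();
    }

    // Hash from 0 to n-1 to pick the linked list the item falls into.
    private int home(int key) { return Math.floorMod(key, slots.length); }

    public void put(int key, int value) {
        slots[home(key)].add(new int[] {key, value});
    }

    public Integer get(int key) {
        // Recompute the hash, then sequentially search one short chain.
        for (int[] pair : slots[home(key)]) {
            if (pair[0] == key) return pair[1];
        }
        return null;
    }

    public static void main(String[] args) {
        ChainedHash map = new ChainedHash(8);
        map.put(5, 50);
        map.put(13, 130); // 13 % 8 == 5: collides with key 5, shares its chain
        System.out.println(map.get(5));  // 50
        System.out.println(map.get(13)); // 130
    }
}
```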
	<p>
		Much to my surprise, once the book got done explaining hashing, it then moved on to cover both types of hashing.
		The form I was familiar with is known as open hashing, while the type the book introduced is known as closed hashing.
	</p>
	<p>
		When I read about dealing with hash collisions by choosing a new slot to put one of the items in, I just sort of assumed the new slot would be chosen similarly to how the first slot had been chosen: via some sort of hash.
		This would be pseudorandom probing.
		However, there&apos;s also another option it seems: linear probing.
		Linear probing chooses a new slot based on a very predictable sequence by adding a certain number to the home position of the item repeatedly until an empty slot is found.
		As long as the number of slots in the array cannot be evenly divided by the number chosen, just about any number can be chosen.
		A step size of one also makes a fine choice.
		Either way, each slot of the array will be tested (after wrapping around one or more times) and lead back to the home position if all slots are full.
		However, linear probing isn&apos;t a very &quot;hashy&quot; method of operation; it&apos;s basically sequential access, and not hash-calculated access.
		Hashing is useful to begin with for a reason, so intuitively, adding components to our algorithm that aren&apos;t very &quot;hashy&quot; will take away some of the benefit of hashing.
		As it turns out, linear probing causes the problem of primary clustering, which results in many values having to get checked when performing a lookup or setting a new key.
		It builds a linear search into the hash table.
		In other words, hash collisions need to be dealt with as we&apos;d expect: via pseudorandom probing.
	</p>
	<p>
		It&apos;s worth noting the possibility of quadratic probing, but it&apos;s not really a good idea because it doesn&apos;t put all slots on the probe sequence, so even with an empty slot available, it&apos;s possible no empty slot will be encountered.
		For certain table sizes though, quadratic probing using the right equation will in fact visit every slot, and with a reduced cost over pseudorandom probing.
	</p>
	<p>
		As discussed above, long chains of full slots can occur, resulting in many data comparisons before finding the right data or a free space.
		In the case of linear probing, one of these chains causes several hash values to chain together so that the next value added on any of these hash values would all set their sights on the same empty slot.
		This is what&apos;s known as primary clustering.
		Using a better probing method eliminates that, but still has the problem of secondary clustering.
		Secondary clustering is caused when values with the same hash value follow the same probing sequence.
		This can be avoided by taking the original key value into account in the probing sequence, and re-hashing it.
		This is known as double hashing.
		Again, this is just something I sort of assumed needed to be done.
		Intuitively, I wouldn&apos;t think to ignore the original key and assign the same probing sequence to every key with the same home position.
	</p>
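	<p>
		A small sketch of what I take double hashing to mean (the two hash functions here are simple ones of my own choosing; a prime table size keeps any nonzero step from skipping slots):
	</p>

```python
# Double hashing: the probe step size comes from a second hash of the
# key, so two keys that share a home slot still follow different probe
# sequences, avoiding secondary clustering.

SIZE = 11  # prime, so any nonzero step visits every slot


def h1(key):
    return key % SIZE  # home position


def h2(key):
    return 1 + (key % (SIZE - 1))  # step size; never zero, so probing always advances


def probe_sequence(key):
    return [(h1(key) + i * h2(key)) % SIZE for i in range(SIZE)]


# Keys 11 and 22 share home slot 0, but their sequences diverge immediately:
print(probe_sequence(11))
print(probe_sequence(22))
```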
	<p>
		The book discussed the cost of various operations, but when it mentioned deletion, I immediately thought of the impact with regard to the aforementioned hash collision resolution.
		What happens if record A is inserted, displacing the later-inserted record B that shares its home position, then record A is deleted?
		Unless the whole array gets reorganised upon deletion, we can&apos;t correct for this, as there could be a chain of records sitting in their home positions along the probing sequence before we reach the displaced record B.
		The possibility of deletion seems to throw out all assumptions made about being able to stop the probing sequence once we encounter an empty slot.
		Thankfully, the next section covered this issue.
		A &quot;tombstone&quot; value is put in place of the deleted record, alerting the search code that it needs to probe a little further.
		It also said that for insertion, the entire probing sequence must be traversed up to the point of an empty slot to verify that a duplicate key doesn&apos;t get inserted.
		I hadn&apos;t even considered that possibility.
		Tombstones of course lengthen the probing sequence needed for finding records, but periodic reorganisation of the array can fix that.
	</p>
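	<p>
		The record A / record B scenario above can be played out directly in Python. This sketch marks deleted slots with a sentinel rather than emptying them (a full implementation would also keep probing on insertion to reject duplicate keys, as the book notes):
	</p>

```python
# Deletion with tombstones in a linear-probing table: a deleted slot is
# marked rather than emptied, so lookups keep probing past it instead of
# stopping early and missing a displaced record.

TOMBSTONE = object()  # sentinel distinct from both None and any record
SIZE = 8
table = [None] * SIZE


def insert(key, value):
    slot = key % SIZE
    for i in range(SIZE):
        probe = (slot + i) % SIZE
        if table[probe] is None or table[probe] is TOMBSTONE:
            table[probe] = (key, value)
            return


def delete(key):
    slot = key % SIZE
    for i in range(SIZE):
        probe = (slot + i) % SIZE
        if table[probe] is None:
            return  # key was never in the table
        if table[probe] is not TOMBSTONE and table[probe][0] == key:
            table[probe] = TOMBSTONE  # mark, don't empty
            return


def lookup(key):
    slot = key % SIZE
    for i in range(SIZE):
        probe = (slot + i) % SIZE
        if table[probe] is None:
            return None  # only a true empty slot ends the search
        if table[probe] is not TOMBSTONE and table[probe][0] == key:
            return table[probe][1]
    return None


insert(0, "A")    # record A lands at its home slot 0
insert(8, "B")    # record B shares home slot 0, displaced to slot 1
delete(0)         # slot 0 becomes a tombstone, not an empty slot
print(lookup(8))  # still found: the search probes past the tombstone
```

	<p>
		Had <code>delete</code> set the slot back to <code>None</code>, the lookup for record B would have stopped at slot 0 and wrongly reported it missing.
	</p>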
	<p>
		I&apos;ve known about primary keys from database software such as MySQL, but for this week&apos;s reading, I learned what that actually means.
		Primary keys are unique, just like they have to be in MySQL databases, but keys in general don&apos;t have to be unique.
		I thought that if they weren&apos;t unique, they couldn&apos;t be keys at all, as data can always be looked up by key.
		Instead, data can only be looked up by primary key, but can be searched for using any key.
		Non-unique keys are known as secondary keys.
	</p>
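	<p>
		A toy illustration of the distinction as I understand it (the records and field names are made up): a primary-key lookup returns at most one record, while a search on a non-unique secondary key can match several.
	</p>

```python
# Primary vs. secondary keys: the primary key is unique, so lookup by it
# returns one record directly; a secondary key like "city" is not unique,
# so searching by it may return several records.

records = {
    1: {"name": "Ada",   "city": "London"},  # primary key: 1
    2: {"name": "Grace", "city": "New York"},
    3: {"name": "Alan",  "city": "London"},
}


def lookup_by_primary(pk):
    return records.get(pk)  # unique key: at most one record


def search_by_city(city):
    # secondary key: possibly many matching records
    return [pk for pk, rec in records.items() if rec["city"] == city]


print(lookup_by_primary(2)["name"])  # Grace
print(search_by_city("London"))      # [1, 3]: two records share this key
```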
	<p>
		ISAM seems like a rather hacky solution.
		It requires periodic updates to the database&apos;s indexes in a process easily comparable to defragmenting a hard drive.
		Like with defragmenting a hard drive, if your system is properly functional, you shouldn&apos;t need to do it.
		In the case of defragmenting, it&apos;s mostly a Windows-only problem caused by poorly-constructed filesystems ($a[FAT] and $a[NTFS], both of which were predictably developed by Microsoft).
		Despite countless issues such as easy corruption of data and constant fragmentation, even modern Windows uses $a[NTFS].
		On a proper filesystem, such as $a[ext4], fragmentation is almost non-existent.
		I see how ISAM was a good start though.
		The field of database management was fairly new, so development had to begin somewhere.
		At least ISAM kept data easy to find, even if it did come at the cost of a constant need for maintenance.
	</p>
	<p>
		Binary search trees on disk seem like a pain.
		On the one hand, you can optimise them for updates, or on the other, you can optimise them for access.
		One set of operations or the other is going to be slow though.
		If you optimise for access, you end up rebalancing the tree after any update, which can involve moving a lot of nodes to other blocks on disk.
		If you optimise for insertions, you don&apos;t rebalance or rearrange the data, so looking up values can require accessing many blocks on disk.
	</p>
	<p>
		The operations of 2-3 trees are a bit difficult to picture in my mind.
		I couldn&apos;t figure out how their strange operations could keep the tree perfectly balanced until the end, when it was explained that they gain and lose levels at the root, not at the leaves.
		The amount of data in each node is variable though, which is a bit strange.
		It seems almost disorderly and wastes space in nodes that store less than the maximum amount of data, but it avoids the need to make as many changes to the tree structure as you otherwise would need to during an update.
		B-trees are in pretty much the same boat, being a generalisation of 2-3 trees, and have the same advantages and drawbacks.
	</p>
	<p>
		I&apos;m not sure what the advantage of a B+-tree is.
		The textbook says it&apos;s just like a B-tree, but without usable data in the internal nodes.
		Searches in B+-trees must therefore continue until they reach the leaves, every time, while searches in regular B-trees don&apos;t have this requirement.
	</p>
	<p>
		The final exams mostly went well.
		Each of the exams had a single question whose intent I wasn&apos;t sure of, though.
		I forget the exact questions, as my exams this term are proctored, so I had to take them at the testing centre instead of at home, but in each case there was a technically correct answer and an answer I thought the exam was likely intending.
		Some of the answer keys at this school seem to favour technically-incorrect answers based on generalisation of the topic.
		I went with the technically-correct answer in both cases, but even if they get marked wrong, I should be fine.
		There wasn&apos;t a lot of challenge to the tests.
	</p>
</section>
END
);
