<?php
/**
 * <https://y.st./>
 * Copyright © 2018-2019 Alex Yst <mailto:copyright@y.st>
 * 
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 * 
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 * 
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org./licenses/>.
**/

$xhtml = array(
	'<{title}>' => 'Learning Journal',
	'<{subtitle}>' => 'CS 4402: Comparative Programming Languages',
	'<{copyright year}>' => '2018-2019',
	'takedown' => '2017-11-01',
	'<{body}>' => <<<END
<section id="Unit1">
	<h2>Unit 1</h2>
	<p>
		The reading material for the week is broken down into three sections, each with multiple things that need to be read.
		I&apos;ve never seen that in any of my courses here at the university before.
		There are also three ungraded quizzes instead of one big one, which again, isn&apos;t something I&apos;ve seen before at this school.
		I wonder why this was done and if this is going to be a common occurrence in this course.
	</p>
	<p>
		The book starts off telling us that the wrong question to ask when encountering a new programming language is what the language can do.
		I&apos;d never even thought to ask that though.
		The book says all programming languages perform the same computations.
		This isn&apos;t quite true.
		All <strong>*Turing-complete*</strong> languages perform the same computations.
		There are also domain-specific languages, which have a more-limited computation set.
		When encountering a general-purpose programming language, my first question is whether it can be written in, compiled, and run using only free (as in freely-licensed and unencumbered, not as in gratis) software.
		If the language doesn&apos;t function in my sterile Debian environment, it is of zero use to me.
		My second question is of the permanence of functions and (if available) classes.
		While I&apos;ll write in any language that passes the first test, I tend to prefer for my own projects languages that treat functions and classes as permanent structures over languages that treat them as replaceable objects/variables.
	</p>
	<p>
		Many parts that came next were review for me.
		For example, the use of assemblers.
		In a previous course, I was required to write in assembly and use an assembler in a very basic assembly language called Hack.
		I never could get the assembler functioning.
		I ended up building my own assembler from scratch in $a[PHP] and using it to complete the assignments.
		It was a highly educational experience for me.
		The book makes a good point that each layer of abstraction causes loss of detail and loss of control though.
		It&apos;s a usability trade-off.
		Do you want to be able to specify every detail at the cost of <strong>*having*</strong> to specify every detail, or do you want to be free to specify things at a higher level at the cost of accepting the implementation chosen by the computer?
		Which option is better often depends on the project, but as a general rule of thumb, human labour is more costly than computer labour, so working at a higher level is more cost-effective for programs in which such fine details aren&apos;t important.
	</p>
	<p>
		I had no idea that Pascal programs had to be written in a single file.
		That&apos;s rather inconvenient.
		Ada seems like it was developed the right way; which is to say, without hardware-specific hacks and especially without attempts to standardise bugs into features.
		Too languages and systems work to preserve backwards compatibility at the cost of keeping poorly-designed features around that should really be fixed.
	</p>
	<p>
		The book also emphasises the need to follow standards when programming.
		I highly agree.
		The book also says that if standards must be broken, they should be broken only in a few, well-documented modules.
		In most cases, I&apos;d argue that if you think you&apos;ve got to break standards, you&apos;re doing something very wrong.
		Think long and hard before breaking standards, and maybe get some advice from people that tend to strictly follow standards before proceeding in this direction.
	</p>
	<p>
		Next, the book discussed hardware, which again was review for me.
		I&apos;ve already taken a course that was entirely about hardware and machine instructions, so this was stuff I already knew.
		This, by the way, was the same course in which I built the assembler mentioned above.
		Memory speeds, caches, registers ...
		We covered all that.
	</p>
	<p>
		The language evolution chart was interesting, but was in a format that made it effectively useless unless printed out and assembled.
		I have no printer, and due to the license terms of the chart (commercial use is forbidden), wouldn&apos;t print it anyway.
		In the provided $a[PDF] form, I couldn&apos;t follow lines between segments, so I couldn&apos;t draw any information from it.
	</p>
	<p>
		The next reading assignment discussed how programmers choose their languages not by rational means, but for cultural reasons.
		To a great extent, I fall victim to that mentality.
		First and foremost, I need a language that functions in freedom.
		This is for highly-practical reasons.
		However, there are so many languages with free interpreters/compilers that build code that functions on my sterile Debian machine.
		That first and most important limitation doesn&apos;t rule out much; mainly only C# and whatever language Adobe Flash interprets.
		From there, my choice in what language to use is very much based on my impression of the language.
		For example, I don&apos;t program in Python because Python has a condescending attitude.
		Try running a script that begins with <code>from __future__ import braces</code> and you&apos;ll see what I mean.
		It&apos;s perfectly acceptable for a language to choose to use indention instead of braces to denote blocks.
		However, it&apos;s <strong>*not*</strong> acceptable for the interpreter and the language itself to deliberately mock users that don&apos;t like the design choices of the language.
		It&apos;d be acceptable in the manual or something, but not in the interpreter.
		This just goes to show how snotty Python is.
		Furthermore, Python doesn&apos;t even take itself seriously.
		For example, try running a script headed by <code>from __future__ import antigravity</code>.
		You get an Easter egg: a $a[URI] of a Web comic.
		Easter eggs don&apos;t belong in compilers or interpreters.
		I don&apos;t use Python if I can help it, because I don&apos;t like the attitude behind it.
		Instead, I&apos;ll use just about anything else, provided it works.
	</p>
	<p>
		Like the reading material says, Java and $a[PHP] are a bit poorly-designed.
		At least they don&apos;t mock you though.
		I tend to use $a[PHP] for anything non-graphical, as it has a nice hybrid $a[OOP] and non-$a[OOP] thing going for it, that allows objects to be used, but doesn&apos;t have classes or functions be objects themselves.
		I love that feature, even if it&apos;s not theoretically the most useful feature a language could have, and I&apos;ve never found another language that shares it.
		That is, aside from Hack.
		Hack is a language based on $a[PHP] and modified by Facebook.
		This isn&apos;t the same Hack I mentioned above, which is an assembly language.
		I mainly don&apos;t use Hack though because it&apos;s not available in Debian&apos;s repositories.
		I don&apos;t trust Facebook at all, so I&apos;d definitely need the code gone over by someone I do actually trust, such as the Debian developers, before I&apos;d ever adopt it for my own use.
		Because $a[PHP] doesn&apos;t have the functions needed for graphical applications, I&apos;ll likely turn to Java when I start building graphical applications.
		It works, it&apos;s cross-platform compatible, and it doesn&apos;t mock people like Python does.
		It&apos;s also a language I already have some experience with, due to courses here at this university.
		It does have major problems though, such as the capacity to have multiple methods that share the exact same method name in the exact same namespace, provided they take different arguments.
		That terrible feature leads to a lot of issues when debugging, as the error messages claim you&apos;re trying to call non-existent method, when you&apos;re really just passing the wrong arguments into a method call.
		I don&apos;t like Java, but with the right $a[IDE], it becomes tolerable.
	</p>
	<p>
		After reading that article, I&apos;m a bit more interested to try Perl.
		It might meet my needs, and if it does, it&apos;d mean being free of $a[PHP]&apos;s inadequacies.
	</p>
	<p>
		The textbook claims to use Extended Backus-Naur Form to specify rules, but it&apos;s using rich text qualities (bold and italics) to get some of the information across.
		The $a[RFC]s use Extended Backus-Naur Form.
		They&apos;re plain text documents with no rich text formatting.
		For that reason, I&apos;m pretty sure Extended Backus-Naur Form does not use rich text qualities to denote anything, and the book is using a non-standard variation on Extended Backus-Naur Form, after having told us that following standards is important.
		The Wikipedia article on <a href="https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_form">Extended Backus-Naur Form</a> confirms too that it&apos;s the $a[RFC]s that have it right, and not the textbook.
		Boldface and italics aren&apos;t valid in Extended Backus-Naur Form.
	</p>
	<p>
		The book also claims that syntax rules are easier to learn if given in the form of sequence diagrams, but I&apos;d have to disagree.
		Real Extended Backus-Naur Form is easier to understand than both sequence diagrams and the book&apos;s bastardised Extended Backus-Naur Form, and sequence diagrams are harder to understand than both real Extended Backus-Naur Form and the book&apos;s bastardised Extended Backus-Naur Form.
		I think some people think diagrams make everything easier, but in many cases, they do not.
		Diagrams have a time and place, but that time and place isn&apos;t everywhen and everywhere.
		(The book&apos;s author, thankfully, does not seem to think stuffing the book with diagrams for everything is a great idea; but in general, some people do try to attach a diagram to everything they can find a way to.)
	</p>
	<p>
		The book also discusses needing to be careful when dealing with languages that are case-sensitive.
		I&apos;d argue the opposite.
		Case sensitivity is intuitive to any real programmer.
		<code>M</code> and <code>m</code> are different characters.
		They even have entirely different bit sequences.
		There is absolutely no reason to mix the two up.
		Rather, what you need to be careful of is case <strong>*insensitivity*</strong>.
		When a language drops case insensitivity on you, it causes several potential issues.
		In such cases, using a standardised case may be required to make sure nothing unexpected happens.
		For example, let&apos;s say you&apos;re writing a class autoloader for $a[PHP], which annoyingly makes class names (and function names) case insensitive.
		You don&apos;t know for sure what case a user will specify a class name by.
		You either need to standardise what case is used for referencing class names, so the autoloader always receives the expected case and knows right where in the filesystem to look, or you need to program the autoloader to in some way standardise the class names it receives before it searches the filesystem for them.
		After all, on a reasonable filesystem (read: most filesystems not developed by Microsoft) <code>/dir/MyClass.php</code> and <code>/dir/myclass.php</code> are entirely different files, as they very much should be.
		Any sane language is case sensitive.
		(And yes, that means that $a[PHP] isn&apos;t exactly sane, but anyone with any experience with $a[PHP] can tell you that.)
	</p>
	<p>
		The book mentions that a variable must be assigned a type, and gives a seemingly valid reason for it.
		The compiler needs to know how much memory to assign the variable.
		However, this only applies to strongly-typed languages.
		Strong typing is useful for more than just the compiler&apos;s work.
		It also helps catch certain types of mistakes, when done well.
		(Java does not do this well.)
		However, not all languages are strongly-typed.
		You can write a program in some languages in which a variable switches its value to one of another type or even one in which the type of the initial assignment isn&apos;t even known at compile time due to a conditional statement.
		Clearly, blanket claims that the compiler needs to know how much memory to assign a variable are dead wrong.
		It&apos;s only when you speak of certain specific languages that such a statement could be correct.
	</p>
	<p>
		The book also claims that the only statements in normal programming languages that actually do something are the assignment statements.
		I find that very difficult to believe.
		For example, are languages that allow file writes not considered normal?
		File writes are typically carried out by a special function or method.
		Never have I seen a file write carried out via assignment statement in any language.
		Unless ...
		Is the book trying to claim that writes to memory are assignment statements and that segments of disk space get assigned new values?
		That&apos;d be a really obtuse way of looking at it.
		And what about language functions that draw on the monitor or output sound?
		Is it calling these assignments?
		That&apos;s quite a large stretch there.
		Data gets written, but these aren&apos;t &quot;assignment statements&quot;.
	</p>
	<p>
		The book presents three methods of type checking when assigning values to variables.
		First, you can stuff the value into a typed variable regardless of a type mismatch.
		Data isn&apos;t converted or anything, so you can get odd results when you accidentally use a value that doesn&apos;t belong in that type of variable.
		Second, you can convert the value in some way.
		And third, you can throw errors in case of type mismatch.
		However, there&apos;s a fourth option: weak typing!
		While by no means a great option, some languages attach types to values, but not to variables.
		Any type of value can go in any variable in such languages.
		Despite the debugging issues this causes, it&apos;s still very much a common option some languages (Python and $a[PHP], for example) implement.
	</p>
	<p>
		I&apos;m not sure what we were supposed to get from the history section on John von Neumann.
		It didn&apos;t really talk much about programming languages, which is a supertopic of this course&apos;s actual topic, comparative programming languages.
		Comparative programming languages weren&apos;t discussed at all.
		I got two things out of that article though.
		First, Von Neumann was opposed to assemblers.
		He thought assembly should be done by hand.
		In his time, this may have been a valid thought, but by modern standards, computing has become cheap, while human labour remains expensive.
		Not only do we have computers perform assembly for us, but we even program things in such a way that they&apos;re less efficient with computing resources just to make them easier for humans to maintain.
		Assembling by hand today, for any reason other than to learn how assembly is completed by a computer for educational purposes, would be insane.
		Second, I learned that Von Neumann&apos;s computer design was abstract.
		He came up with high-level ideas for how it should be structured, but no implementation to realise this design.
		This in no way invalidates what he came up with, but it&apos;s interesting to know.
	</p>
	<p>
		I don&apos;t seem to understand how a Turing machine can simulate any computation.
		The problem is that it can only move forward or backward by one cell each time it processes a cell.
		How do you accomplish loops?
		Or conditionals?
		It doesn&apos;t make any sense.
		Maybe we&apos;ll study that further in another unit.
	</p>
	<p>
		One part of the reading assignment is on a server that blocks my $a[IP] address, so I had to skip it:
	</p>
	<img src="/img/CC_BY-SA_4.0/y.st./coursework/CS4402/403_Forbidden.png" alt="403 Forbidden" class="framed-centred-image" width="689" height="267"/>
	<p>
		There was a monstrous amount of reading material this week.
		There was no written assignment this time aside from the learning journal assignment, but I&apos;m not sure how I&apos;m going to keep up once the written assignments start popping up.
		There&apos;s only so much time in the day, and I have other things to get done in addition to coursework.
	</p>
</section>
<section id="Unit2">
	<h2>Unit 2</h2>
	<p>
		Two of the three reading lists last week included Chapter Two from the text book.
		Two of the three reading lists for this week <strong>*also*</strong> include Chapter Two of the same textbook.
		I&apos;m just going to skip that this week, as I read it last week, and even then, it was mostly review.
	</p>
	<p>
		I&apos;ve never heard of a librarian.
		It manages libraries?
		I suppose that makes sense.
		I hadn&apos;t thought about libraries needing to be managed before.
	</p>
	<p>
		The book talks about using an $a[IDE] being easier than using dedicated tools for each task, but for the most part, I disagree.
		For example, I like my editor to be just an editor, and I compile and run things separately.
		That means I don&apos;t need to run things from the $a[IDE] during development, yet run them in some other way during normal use of the program.
		There&apos;s no reason for me to need to run the program in two separate ways (with the exception of having a separate debugging method of running).
		If your language is reasonable, a text editor with syntax highlighting should be all you need to write your code.
		(This doesn&apos;t mean you don&apos;t need other tools for other steps, such as testing.)
		Sure, you could have each tool with the press of a single button, but you&apos;re also having to keep track of multiple ways to perform the same task.
		It&apos;s easier just to use your regular, separate tools if you ever use those tools when not developing.
	</p>
	<p>
		In particular though, Java is poorly designed, and as a result, is a pain in the neck without an $a[IDE].
		This is a fault of the Java language though.
		Java is so painful to work with that a full $a[IDE], such as Eclipse, is almost needed to make Java development feasible.
		The main feature I use of the $a[IDE] in Java though is the part that tells me I&apos;m passing the wrong symbols around.
		This isn&apos;t needed in other languages, as the compiler will tell you off, but when the Java compiler tells you off about that, it mistakenly tells you you&apos;re calling non-existent methods, leading you to focus on finding method name typos which don&apos;t exist.
		This is done because Java stupidly allows multiple methods to exist in the same namespace with the same name.
		Which method is used is determined by what arguments are used.
		If you pass the wrong symbol by mistake, The Java compiler won&apos;t find the method you tried to call, as it&apos;s looking not only for a method with the right name, but a method with the right name that takes the arguments you passed in.
	</p>
	<p>
		Compiler optimisations seem useful.
		Because the programmer doesn&apos;t have to perform the optimisations by hand, the source code might be written in a way humans understand better.
		However, no performance is lost because the compiler fixes it to use the machine&apos;s resources better in the binary.
		It&apos;s disappointing though that interpreters don&apos;t make the same optimisations.
	</p>
	<p>
		The section on standard runtime systems was informative.
		I&apos;ve always wondered why I needed, for example, C libraries installed on my machine even when not compiling from source.
		I thought the produced binaries were complete machine code representations of the files they were compiled from.
		Instead though, it seems that common things needed by many programs are instead omitted from the program binaries and instead stored in the runtime system libraries.
		No doubt this greatly reduces the amount of space required to store large numbers of programs.
	</p>
	<p>
		Debugger seem very useful.
		I haven&apos;t worked with one before.
		I wouldn&apos;t mind finding one for Lua and one for $a[PHP], the two languages I work with most often.
		Then again, a Lua debugger might not help me, as it wouldn&apos;t have access to the application the Lua runtime I work with is embedded in.
		Without that application, my scripts would always fail anyway.
	</p>
	<p>
		The page on static and dynamic typing was confusing at first.
		It defines static and dynamic typing in terms of when the variables are instantiated.
		Later, it goes back and explains that they can also be defined by when variable types are checked.
		It claims that&apos;s a more confusing way to view it, but for me, it&apos;s a lot easier to understand that way.
		Statically-typed languages assign a static, unchanging type to the variables, which is checked at compile time.
		Dynamically-typed languages can&apos;t check types at compile time because the types aren&apos;t static.
		They won&apos;t have a type until a value is assigned to them.
		So they can only be checked at run time, when they actually hold a value.
	</p>
	<p>
		The page goes on to explain that strong and weak typing is a separate concept than static and dynamic typing.
		My bad.
		I thought they were the same.
		In last week&apos;s learning journal, I even made this mistake.
		It&apos;s a bit difficult to wrap my head around, but it seems weakly-typed languages aren&apos;t necessarily dynamically-typed, but rather, their variables have no type.
		You can perform operations on values of unexpected types.
		The example used by the article is written in $a[PHP], and makes use of $a[PHP]&apos;s typecasting system.
		A value of the wrong type can be used because the value gets converted to some other value.
		A variable of the wrong type can be used because ... well, technically it can&apos;t, because variables don&apos;t have right or wrong types.
	</p>
	<p>
		I definitely prefer static typing over dynamic typing.
		When done correctly (Java does not do this correctly), this makes debugging much easier (Java actually does this in a way that makes debugging much harder).
		I&apos;m not sure whether I prefer strong or weak typing though.
		There are advantages to being able to make use of typecasting.
		At the same time though, this is an avenue for potential mistakes.
	</p>
</section>
<section id="Unit3">
	<h2>Unit 3</h2>
	<p>
		Wow.
		This time, instead of a three-part reading assignment, we get a five-part one.
	</p>
	<p>
		Unfortunately, one of the pages that would be helpful for the discussion assignment seems to be missing:
	</p>
	<img src="/img/CC_BY-SA_4.0/y.st./coursework/CS4402/404_-_File_or_directory_not_found..png" alt="404 - File or directory not found." class="framed-centred-image" width="489" height="275"/>
	<p>
		Most of the stuff on primitive data types was review for me, but I did pick up a couple new things.
		First, computations on unsigned integers should be avoided, as the processor performs computations on signed values.
		The book says performing operations on unsigned values usually results in the compiler adding extra instructions to the compiled code.
		I&apos;m not sure this is exactly right, but my interpretation of that is that the unsigned value must be converted to a signed value, the computation be performed, then the value converted back.
		Assuming the compiler doesn&apos;t skip over stuff to try to save resources, this means that the highest-order bit must be preserved and worked into the computation in some way, which involves further computations.
		(Remember that in a signed value, the highest-order bit is basically a sign bit, so an unsigned integer can get twice as high as a signed integer (well, twice as high plus one, technically), at the cost of never being able to be negative.
		Converting to a signed integer therefore causes that bit to be lost or very high numbers become negative, neither or which is correct handling of unsigned integer computations.
		Thus, the extra steps to work that high bit back in correctly.)
		I&apos;ve been mildly annoyed in a project I&apos;m working on, as the environment I&apos;m working in not only doesn&apos;t provide an integer data type, but doesn&apos;t provide unsigned values at all.
		Maybe the lack of unsigned values is for the best, as I&apos;d be using them for computation all throughout the code, and that seems to be suboptimal.
		The lack of integers is still a drag though.
		I&apos;ve only got 64-bit doubles, and I can&apos;t even make use of all the bits, as many of them are used for the exponent component.
		It&apos;s a waste of potential.
	</p>
	<p>
		It&apos;s also interesting to note that the <code>int</code> data type has a different number of bits on different machines.
		I thought for sure that this was a language-specific detail (which is why C offers varying sizes of integer as a part of the language).
		I usually use $a[PHP] (though not for the integer-less project I mentioned above), and I&apos;m aware $a[PHP] doesn&apos;t have a standard integer size.
		It varies by processor.
		I&apos;ve found that to be odd, and I wrote it off as one of $a[PHP]&apos;s many bizarre idiocies.
		Maybe that&apos;s just the standard way of doing things though.
		It certainly sounds that way, by what the book is saying.
		It seems too that compilers map the different integer types based on the processor&apos;s capabilities.
		For example, if you write in your source code that you need an 8-bit integer and the processor doesn&apos;t operate on partial words, you&apos;ll get a full-sized integer instead.
		The obvious result of this, besides extra memory usage, is that the value doesn&apos;t overflow and underflow where you think it does.
		It&apos;s interesting too that division should be avoided for efficiency.
		I wonder if there&apos;s a way to avoid division in a project I&apos;m working on.
		It&apos;s going to require comparing value A with the value obtained by dividing value B by a large number, and it&apos;s got to do this very often.
		Maybe it would be more effective to multiply value A by the number, then perform a comparison against value B without the division.
		That should get me the same result at a fraction (pun intended?) of the cost.
	</p>
	<p>
		The book describes two-s compliant, and I won&apos;t get into details about that, as I&apos;ve already discussed it in my notes for another course.
		Everyone in this course should have taken the lower-level course first, so two&apos;s compliment should be review.
		I would like to note though that I used to think two&apos;s compliment was the only way to represent a signed integer, though I didn&apos;t know at the time that it had a name.
		There&apos;s another option though, called one&apos;s compliment.
		One&apos;s compliment is terrible.
		For computation, negative values have to be handled as a special case.
		Furthermore, one of the bit combinations is wasted on a representation for &quot;negative zero&quot;, a value identical to regular zero (I say regular zero and not positive zero, as zero can&apos;t actually be positive or negative).
		I guess one wasted representation isn&apos;t too bad in the grand scheme of things, but I&apos;ve always been a stickler for details.
		Why waste a bit combination when you could have it represent something actually useful?
		Discussions of one&apos;s compliment never seem to touch on this wasted bit combination, but it&apos;s something that&apos;s always been more of a concern to me than the added computational complexity one&apos;s compliment also requires.
		I was impressed that the book actually discussed the positive and negative zero issue.
	</p>
	<p>
		I won&apos;t discuss it much, as I already discussed it quite a bit in my initial discussion post for the week, but the book claims that unsigned integers are cyclic, while signed integers have overflow.
		The simple fact is that <strong>*both*</strong> signed an unsigned integers have overflow, and that it is <strong>*because*</strong> of this capacity to overflow that <strong>*both*</strong> signed and unsigned integers are cyclic.
		Some languages specifically detect this sort of thing though, preventing overflow.
		For example, if an integer is out of range in $a[PHP], id doesn&apos;t cycle around like it should.
		Instead, it gets converted to a float, which has a wider range and doesn&apos;t overflow even past its range.
		($a[PHP] floats are actually what other languages call doubles, when compiled for a 64-bit platform, though when compiled for a 32-bit platform, $a[PHP] floats are 32 bits like floats in other languages.)
		Related to overflow is of course underflow, which both signed and unsigned integers experience as well.
		Except, again, in languages that detect underflow and treat it as a special case, such as $a[PHP].
		On a processor level though, overflow is still in effect. It just gets detected at a higher level.
	</p>
	<p>
		I feel like I&apos;m explaining this for the thousandth time, but division results in one number, not two.
		There is no &quot;remainder&quot;.
		Failure to recognise this fact is a pet peeve of mine.
		<code>7</code> divided by <code>2</code> is <code>3.5</code>; it is <strong>*not*</strong> <code>3</code> remainder <code>1</code>.
		If you&apos;re working with integer division, it&apos;s just <code>3</code>.
		If you&apos;re working with modulo division, it&apos;s just <code>1</code>.
		Modulo division has nothing to do with remainders.
		It has to do with cycles.
		The result of modulo division is what you have when you&apos;ve gone a certain distance in a number system with a cycle, such as the integer number system used in a computer.
		It&apos;s about what you have left after you&apos;ve rolled around.
		Sure, if you think of the result of modulo division as the &quot;remainder&quot;, you&apos;ll get the right answer every time.
		That&apos;s just how cycles work.
		However, it paints the wrong picture of what division is.
		Under real-world circumstances, you can usually divide your units up, and get fractional answers.
		In computing, and when trying to split up things that can&apos;t be divided, you&apos;ve got integer division.
		When using integer division, you&apos;re not concerned with &quot;leftovers&quot;.
		Like the book says, this extra part is just truncated.
		There is no remainder.
	</p>
	<p>
		Subtypes are interesting.
		They&apos;re like subclasses, but for primitive types.
		Also, they can&apos;t add functionality, they can only remove functionality of the supertype.
		Still, they can be very useful for making sure your data stays within meaningful ranges.
		It looks like derived types are in the same boat.
		They restrict functionality, but do not add to it.
	</p>
	<p>
		It hadn&apos;t occurred to me that parentheses, used for grouping, cost nothing in code.
		Strictly speaking, this only applies to compiled code, I&apos;m sure.
		It costs the compiler something, but in the compiled binary, everything is in machine language and machine language doesn&apos;t have complex expressions and cannot include groupings like that.
		What that means is that there&apos;s probably some (though very little) cost for parentheses in scripted languages.
		However, this cost is probably minimal and not worth worrying about.
		Constant folding is also pretty awesome.
		In the past, I&apos;ve calculated values and put them in the code directly to save processing power, but it looks like that need not be done.
		Well, in compiled code, anyway.
		It probably still helps to do that by hand in scripted code, which is most of what I work with.
	</p>
	<p>
		Records and record types seem a lot like objects and classes.
		I&apos;m not sure there&apos;s really a substantial difference between the two.
		I&apos;ve only ever used the type of multi-dimensional array formed by nesting arrays within arrays.
		The concept of an array simply requiring multiple keys was a new concept to me.
		Fixed-point numbers are something I hadn&apos;t heard of either.
		I&apos;d used floating-point numbers for years, and recently learned how they work, but fixed-point numbers seem like the better option for almost all of my use cases.
		I almost always prefer precision over range.
		Or as the book puts it, I prefer absolute precision over relative precision.
		The optimisation of leaving off the leading <code>1</code>, as the lead bit is <strong>*always*</strong> <code>1</code>, also explains why I was having such difficulty learning how floating-point values work in the past.
		I needed to know the highest and lowest integers of the range that can be precisely stored for a project of mine, as I didn&apos;t have a real integer type to use in that environment, so I calculated it out and ran some code to test my calculation.
		I found it exceedingly difficult to figure out how the value was packed into the bits though, which made it difficult to find those cut-off points.
		(It&apos;s also worth noting that floating-point numbers suffer from the same issue of there being two representations of <code>0</code> that one&apos;s compliment integers suffer from.)
	</p>
	<p>
		The section on scope, visibility, and lifetime was confusing.
		At first, I thought it was discussion concepts I&apos;m already very familiar with, but then it got a bit obtuse in its explanations, and I thought it was discussing new concepts that happened to have the same names as things I&apos;m familiar with.
		But once I figured out what it was talking about, I saw my initial hunch was right.
		Mostly.
		Anyone that&apos;s ever written even mildly-complex code should understand these concepts already.
		The one thing I didn&apos;t get right was what it was talking about as far as visibility.
		A local variable can basically &quot;cover up&quot; a variable with a less-local scope, such as a global variable.
		It&apos;s a concept I make use of all the time in Lua, but isn&apos;t what I thought was being referred to when the section began.
		I thought it was going to talk about <code>public</code>, <code>protected</code>, and <code>private</code> variables.
		In Lua though, I often use variable names that make sense to me within functions, but those variables sometimes have the same names as Lua functions I never use.
		If I were to try to call those functions though, I wouldn&apos;t be able to because Lua would try to call my variable as a function instead of calling the global function.
		(Lua is object-oriented, so functions aren&apos;t reachable by static names, but instead called via variables holding pointers to the functions.)
		For example, one of my projects deals with a lot of pairs of things, so it uses a local variable called <code>pairs</code>.
		If I recall, that variable holds an table containing every pair the code deals with, in the form of two-element subtables.
		However, there&apos;s a Lua function called <code>pairs()</code> as well.
		It never causes problems though, as I always use <code>next()</code> instead of <code>pairs()</code>, as <code>next()</code> is easier to work with.
	</p>
	<p>
		In the discussion forum, I learned something new.
		Apparently, integer overflow is a source of crashing on Windows-based systems.
		I had no idea.
	</p>
</section>
<section id="Unit4">
	<h2>Unit 4</h2>
	<p>
		The section on control structures was review for me.
		In fact, I was surprised to see such a basic thing covered in a 4000-level course.
		I&apos;m not going to look back into the course catalogue to make sure or anything, but I&apos;m pretty sure some courses requiring basic programming are prerequisites of this course.
		In my particular case though, I&apos;ve been using control structures for years before coming to this school.
		The part on jump tables was new though.
		That part was about implementation assembly language, and I hadn&apos;t dealt with assembly language until I got to this school.
		I&apos;ve used it in only one course, and only when translating assembly language to machine code.
		How higher-level languages translate into machine code wasn&apos;t covered.
	</p>
	<p>
		I disagree with the book as to the use of curly braces.
		It says to use them to force <code>else</code> statements to bind to the correct <code>if</code> statements when the default binding is wrong.
		However, braces should <strong>*always*</strong> be used, regardless ow where the bindings would normally be.
		They just make the program that much easier to read.
		They&apos;re punctuation.
		Leaving them out just because you can is like leaving punctuation out of an English sentence just because no one&apos;s getting on your case about it and they understand your intent.
		They may understand your intent, but you&apos;re still making it harder for them to do so.
		In this case, the compiler doesn&apos;t care.
		However, other programmers are potentially going to read your code at some point.
		Like indentation, braces should not be omitted.
		The one exception I&apos;d make to that rule is the <code>else if</code> option.
		Some languages provide an actual <code>elseif</code> key word (<code>elsif</code> or <code>elif</code> in some languages), but others rely on simply using an <code>if</code> statement as the statement run in an <code>else</code> block.
		In that case, I would recommend the common practice of omitting the braces and using <code>else if {}</code> instead of <code>else { if {}}</code>.
		The former is just so much easier to read.
		The book mentions too that the braces can be used to execute multiple statements as a block, but again, I use the braces even for single-statement <code>if</code> blocks for readability.
	</p>
	<p>
		The information on compiler optimisation of <code>if</code> statements was informative.
		Typically, I avoid using expressions I think have more steps, rearranging my code to do so.
		For example, if I have an <code>if</code>/<code>else</code> statement, I don&apos;t include <code>if(!\$val):</code>; instead, I use <code>if(\$val):</code>, then put the positive option in the <code>if</code> block and the negative condition in the <code>else</code> block.
		It looks like with compiled code, I don&apos;t have to do that.
		That said, for now at least, I work mostly with scripted code instead, and the book tells us that interpreters don&apos;t tend to make the same optimisations as compilers.
	</p>
	<p>
		I knew about short circuit evaluation of statements, but I didn&apos;t know there was a way in some languages to use full evaluation instead.
		I frequently make use of short-circuit evaluation as the book said to make sure an operation can be performed.
		When I work in Lua, I frequently don&apos;t know what table keys exist, due to the dynamic nature of the environment I use Lua in.
		As such, if I need to check the value of a subtable in one of the main tables, I use <code>if table[key] and table[key].other_key == &quot;value&quot;:</code> instead of simply using <code>if table[key].other_key == &quot;value&quot;:</code> because I don&apos;t know what keys my function will be passed and I don&apos;t know if those keys will currently be valid entries in the table.
		However, if those entries are in fact in the table, I know they&apos;re supposed to be tables themselves and contain the other key I&apos;m checking against the value of.
	</p>
	<p>
		The book discussed why not to use global variables as looping indexes.
		I&apos;d say it&apos;s generally best to avoid using global variables whenever feasible.
		Much of the time, when you assign values to variables, you&apos;re not looking to change things outside the area you&apos;re using the variable.
		In such cases, you want your declared variable to not overwrite some global value, but also, you don&apos;t want your value saved after your block is complete.
		By using local variables, you ensure your variable is properly disposed of at the end of the block and its memory freed.
		Well, except in some languages, in which memory must be explicitly freed.
		Again though, you can still make the variables local to avoid accidental contamination, as the book said.
	</p>
	<p>
		The idea of sentinels in searching arrays is interesting, though I don&apos;t understand how they work in practice.
		The idea is that you add an extra spot at index zero when searching an array, and that spot is set to the search value.
		However, there are a couple problems with this.
		First of all, index zero is the beginning of the array.
		That means there&apos;s <strong>*already*</strong> an index zero in your array, and setting it to another value will overwrite the previous value.
		To see the second problem, let&apos;s try using a different index instead.
		Say, negative one.
		That way, the value comes before the first value as intended, and if not found in the array, <code>-1</code> is returned.
		That&apos;s an easy enough condition to check for, just like checking the return value for <code>0</code>.
		However, we&apos;re now outside the bounds of the array, and can&apos;t properly set a value there without smearing memory.
		We can even try putting this at the end instead of the beginning, but it doesn&apos;t matter, as we&apos;d smear memory there too.
		That is, unless the sentinel is supposed to be a part of the array even when not searching.
		I guess that could work.
		However, I wouldn&apos;t use position zero for that.
		I&apos;d use position n-1, the final entry in the array.
		Programmers expect arrays to start at zero, and index zero should contain real data.
		If we&apos;ve put an extra array index into the array for this purpose, we know how many indices the array has.
		At the end of the algorithm, I&apos;d check to see if the found index is n-1, and if so, convert it to <code>-1</code> for ease of programmers checking to see if a value has been found.
		Again though, I would never use zero as an error condition, such as a value not being found.
		It&apos;s a perfectly good array index.
	</p>
	<p>
		The book claims <code>goto</code> statement have their place, albeit a very small one.
		It says they should usually be avoided though.
		I&apos;ve yet to see a situation in which <code>goto</code> statements are preferable, but maybe that&apos;s just the sort of code I&apos;ve worked with.
		Maybe in some obscure situation, <code>goto</code> statements (excluding the ones in assembly languages, as assembly languages are too simple to include basic loops and conditional execution) really do have a valid use.
	</p>
	<p>
		I typically refer to subprograms as functions, due to the languages I tend to use, but other than that, the section on subprograms was also review.
		The book makes a differentiation between functions and procedures, saying that the former returns a value while the latter does not, but many languages don&apos;t make that differentiation.
		For example, in $a[PHP], both types of subprogram are simply called functions.
		Very modern versions of $a[PHP] do have void functions though, which are functions specifically stating themselves not to return a value.
		From what the book says, it sounds like C also doesn&apos;t have this distinction, instead using void functions in place of a separate procedure option.
		Still, it makes sense to differentiate these two types of subroutines when speaking in a general context and not about specific languages.
	</p>
	<p>
		The book also seems to use the same terminology as Java when talking about values passed into a subprogram.
		It talks about &quot;formal parameters&quot; and &quot;actual parameters&quot;.
		When using those terms, the unqualified term &quot;parameter&quot; becomes ambiguous.
		For that reason, I <strong>*never*</strong> call the value passed in a parameter of any sort.
		Rather, I call it an argument, as many discussions on the topic and even many programming languages do.
		Even mathematics calls the values passed to a function &quot;parameters&quot;.
		I typically call the variables declared in the subroutine&apos;s signature simply &quot;parameters&quot;, though is a Java-centric course, I sometimes call them &quot;formal parameters&quot; to be clear.
		Still, that means I say &quot;formal parameters&quot; and &quot;arguments&quot;, not &quot;formal parameters&quot; and &quot;actual parameters&quot;.
		It&apos;s just much more clear that way.
	</p>
	<p>
		I don&apos;t understand the logic in the book&apos;s complaint about named arguments.
		It claims that if you switch to a competing library, you may have to modify calls to the library&apos;s subroutines if it uses different named parameters.
		If your arguments don&apos;t match the new library&apos;s parameters, you&apos;ll have to edit your function calls.
		The book says you should instead use positional parameters instead.
		However, we&apos;re talking about separate libraries here.
		The fact that the function calls need to be edited is a symptom of the fact that the $a[API]s of the two libraries don&apos;t match.
		And if the $a[API]s don&apos;t match, there&apos;s a good chance calls made using positional arguments will break too, as the new library will take arguments in a different order.
		If your $a[API]s don&apos;t match, it doesn&apos;t matter what syntax you use to pass your values.
		You&apos;re going to need to update your function calls either way.
	</p>
	<p>
		The book claims that recursion is underused, but I&apos;m not so sure of that.
		Recursion is powerful.
		However, it also has a lot of overhead.
		When recursion is the right tool for the job, it can do wonderful things.
		However, it&apos;s usually the wrong tool for the job, and something else usually provides a better solution.
	</p>
	<p>
		I like how the first page on recursion makes sure to mention that recursion is never actually necessary.
		I strongly believe it to be useful tool for certain types of use case, however, there&apos;s always a way to specify the method of doing something without using recursion.
		That page was all review though.
		Not only do I use recursion decently enough, but I&apos;ve also spent quite a lot of time simply thinking about recursion.
		When I&apos;ve nothing better to do, I tend to let my mind wander to oddball topics such as recursion.
		I agree with the page&apos;s general guidelines for when to use recursion, too.
		The book made a big point about avoiding the use of global variables within recursive functions, but as it said, it&apos;s best to avoid using global variables in general.
		Recursion compounds the problem, but it doesn&apos;t create it.
	</p>
	<p>
		The second page on recursion didn&apos;t have a whole lot to add, but it did make a few important points.
		First, recursion is slower and uses more memory than things written instead of it.
		Second, it makes some code easier to develop and maintain, so what it optimises is developer resources, not computer resources.
	</p>
	<p>
		Oddly, no one besides me posted to the discussion board during the first three days.
		I wonder of other students were struggling this week.
		I guess recursion can be a heavy topic for new users.
		If I recall, I had difficulty with it myself, when I first started using it.
		I&apos;m not guessing that many people had difficulty with control structures though.
		Those are pretty basic.
	</p>
</section>
<section id="Unit5">
	<h2>Unit 5</h2>
	<p>
		Several of this week&apos;s assigned readings weren&apos;t actually available to be accessed:
	</p>
	<img src="/img/CC_BY-SA_4.0/y.st./coursework/CS4402/404_-_File_or_directory_not_found.~_again.png" alt="404 - File or directory not found." class="framed-centred-image" width="489" height="275"/>
	<img src="/img/CC_BY-SA_4.0/y.st./coursework/CS4402/Can_not_find_data_record_in_database_table_context..png" alt="Can not find data record in database table context." class="framed-centred-image" width="778" height="658"/>
	<img src="/img/CC_BY-SA_4.0/y.st./coursework/CS4402/Page_not_found.png" alt="Page not found" class="framed-centred-image" width="304" height="729"/>
	<img src="/img/CC_BY-SA_4.0/y.st./coursework/CS4402/Unable_to_connect.png" alt="Unable to connect" class="framed-centred-image" width="332" height="557"/>
	<p>
		The reading material for the week mentions that system languages allow applications to be developed more quickly than bare assembly language.
		They also allow easier <strong>*maintenance*</strong> too though.
		It&apos;s not just initial development time that matters.
		If the software is to survive, adapt, and evolve, the initial development will only be a part of the picture.
	</p>
	<p>
		That same article talks about how strong typing encourages incompatible interfaces, and how conversion code is needed to translate between data types when trying to combine things.
		That all makes sense.
		The thing there that caught my attention though it that it said this usually isn&apos;t an option, as most programs are distributed as sourceless binaries.
		It made me reflect a bit.
		I really do appreciate the availability of source code.
		All of the software I have on my computer, down to the very last system component has its full source code made available to the public.
		Not everyone lives that way though.
		In a world where proprietary software is considered the norm, most people don&apos;t have access to the source code of hardly anything on their system.
		MY computer&apos;s hardware no doubt has some embedded, sourceless binaries, such as ones to to make the hard drive function, but everything running on top of that can be edited and recompiled at any time.
		For people that want the freedom to modify what they like, my system, Debian, is truly a great option.
	</p>
	<p>
		Still, even with access to the source code of everything on your machine, modifying and recompiling things to fit them together is inconvenient.
		Scripting languages provide a better glue by not requiring modification to existing components.
	</p>
	<p>
		The article does make the point that one type of language doesn&apos;t replace another.
		It says scripting doesn&apos;t replace system languages, and the reverse is true as well.
		As for scripting replacing system languages, that wouldn&apos;t make much sense.
		Scripting languages must somehow be implemented, and they&apos;re usually implemented in a system language.
		But still, the point is that you&apos;ve got to choose the right tool for the job.
		A hand saw isn&apos;t a replacement for a hammer; both have their own purposes.
		Mainly though, between these two, it&apos;s a trade-off between efficiency of running the software and efficiency in developing and maintaining the software.
	</p>
	<p>
		I already know how write Linux shell scripts, I just don&apos;t like to do it, so I didn&apos;t get much out of that tutorial.
		For anything even remotely complex, shell scripting is very painful to work with.
		I also know how to use Python pretty well, though again, I prefer other languages.
		Python just has a bad attitude; I prefer languages that don&apos;t actively mock users.
		The $a[PHP] tutorial wasn&apos;t of any help either; $a[PHP] is an inconsistent language, but it&apos;s my native tongue, so I use it for nearly everything.
		That doesn&apos;t mean I think it&apos;s well-designed; I also use English for all written/spoken work, yet English is one, if not <strong>*the*</strong>, most broken natural languages.
		Still, I use $a[PHP] every day, so a beginners&apos; tutorial on the language isn&apos;t going to cover anything I don&apos;t already know.
	</p>
	<p>
		The Perl tutorial was much more useful to me.
		I&apos;ve been meaning to try learning Perl anyway.
		From the tutorial, I found Perl&apos;s syntax is very much like that of my native language, $a[PHP].
		Arrays are declared a bit differently though.
		When initialising the array, an <code>@</code> sigil is used instead of the usual <code>\$</code> sigil used for variables.
		However, when retrieving or changing the value, you do use the usual <code>\$</code> sigil.
		Two sigils are used to get the length of the array (<code>\$#</code>), but the number returned is actually one less than the array length, which is very unintuitive.
		This results in the returned value being equal to the final key of the array, unless the array is empty, in which you get back <code>-1</code>.
		You can also use these sigils to set the number of values in the array.
		Instantiating a hash uses the <code>%</code> sigil, and accessing/setting values uses curly braces instead of square brackets.
		This is all well and good, but one thing presented confuses me.
		The tutorial says the different sigils offer different namespaces.
		If that&apos;s the case though, why is the <code>@</code> or <code>%</code> sigil used to create an array or a map, but the <code>\$</code> sigil used to access the values in the created constructs?
		That doesn&apos;t sound like separate namespaces to me.
	</p>
</section>
<section id="Unit6">
	<h2>Unit 6</h2>
	<p>
		Most of this week was review for me.
		I program as a hobby, and then to use object-oriented programming whenever I find the slightest reason why it would make sense to.
		Typically, if there are at least two functions I have defined that operate on the same data, I create a new class and make those functions into methods.
		If there is only one function though, I typically just define it as a stand-alone function, not classes or objects involved.
		Then again, my native language, $a[PHP], is bizarre in that it supports both object-oriented programming and non-object-oriented programming very well.
		Most languages tend to heavily favour one style over the other.
	</p>
	<p>
		The book makes the claim that object-oriented programming should be done even in languages that don&apos;t support objects.
		I&apos;d never thought of the possibility of doing that.
		I guess I&apos;ve done it before, back before I learned about standard object-oriented features though.
		I&apos;m not sure how to pull it off in a strongly-typed language though.
		Only when an array can hold multiple types of data do I really see faking object orientation as an option.
	</p>
	<p>
		The <code>virtual</code> key word is new to me.
		In C++, it looks like it causes late static binding.
		Late binding is the default in the scripting languages I work with, so we don&apos;t need that key word.
		The exception would be $a[PHP], which uses late binding in most contexts, but early binding when a method is called without an object from another method.
		In that case, early binding is the default, though this can be overridden at the place of the method call with the <code>static</code> key word.
		It looks like C++ takes binding a step further though.
		Being that the variables themselves have types, the method calls are bound at compile time, so a different version of the method might be called on the same object depending on what type of variable is currently storing the object.
		The <code>virtual</code> key word is used specifically to override that.
	</p>
	<p>
		From the looks of it, Ada allows the definition of multiple functions that have the same name, provided they have different arguments.
		This is the one thing I really hate about Java.
		With true class support, it makes sense to have different classes define methods of the same name.
		Each class definition is a namespace, separate from other name spaces.
		However, having multiple non-class functions (or multiple methods in the same class) have the same name only leads to difficulty in debugging.
		The unconstrained type variables that get constrained by adding a value to them are a bit odd, but I can see how they&apos;d be useful when combined with unconstrained function parameters.
		That said, in most cases, you want your function parameters constrained in some way.
		It seems like a niche usage, but likely has some applications on rare occasions.
	</p>
	<p>
		The section on Ada&apos;s polymorphism support was difficult to follow, due to the odd use of terminology.
		I think I get it, but it was certainly a slow read, as there were at least four strange terms that I needed to patch in my mind with more-appropriate terms to even grasp the concepts the book was trying to present.
		I get why the book presented the lesson that way.
		Like it said, it&apos;s best to discuss a language with that language&apos;s own terminology.
		However, it&apos;s asinine for language developers to make up their own terms to fit concepts present across programming languages or to take terms that already exist and use them to name different language features than they typically refer to.
	</p>
	<p>
		Multiple inheritance is something I haven&apos;t worked with before, but something I&apos;ve long wished to have in $a[PHP].
		In $a[PHP], we have traits, but they don&apos;t allow for type enforcement in function/method arguments and return values.
		We have interfaces, but they don&apos;t allow the definition of method bodies, only method signatures.
		And we have inheritance, but there can only be one parent class.
		There have been many times in which I have absolutely <strong>*needed*</strong> all objects of a given class to have a specific implementation for certain methods.
		The base class therefore defines the method, and declares it <code>final</code>.
		However, one or more of the subclasses, which provide different implementations for other methods, would benefit from inheriting from inheriting from one of the language&apos;s built-in classes.
		Such inheritance is impossible though, without relaxing the requirements on the strict method, which isn&apos;t an option, so I&apos;m not able to make use of the built-in class&apos; features.
		Another example is my exception definitions.
		In order to define the exceptions to be catchable in general <code>catch</code> blocks, my exceptions need to descend from the exceptions of the former leaf nodes in the main exception class family tree.
		However, to make them recognisable as having the extra methods I added, they need to either descend from my own exception base class or implement my own exception interface.
		I have to go with the interface, due to having only single-inheritance capabilities.
		As interfaces can&apos;t define method bodies, I need a separate trait to provide the method bodies.
		As a result, I&apos;ve got an interface and a trait that are only meant to be used together, but there&apos;s no actual way to enforce this, and the fact that there are two items, an interface and a trait, feels like a duplication.
	</p>
	<p>
		One thing the book didn&apos;t mention at first, but I&apos;ve seen in $a[PHP] is <code>protected</code> class members.
		<code>public</code> class members are visible to everything.
		<code>private</code> class members are only visible within the class itself.
		$a[PHP]&apos;s <code>protected</code> members, however, are visible within the class itself and within any child classes, but are hidden everywhere else.
		It looks like C++ has these too, they just didn&apos;t get mentioned until the next section, which was odd.
		C++&apos;s <code>friend</code> methods and classes are interesting.
		They allow certain outside code to access <code>private</code> methods and classes without opening up visibility of the class methods to all parts of the code.
		I&apos;m not sure that&apos;s a great idea, but it&apos;s certainly something that gets one thinking.
		Ada&apos;s child package concept seems a bit wonky.
		Child packages, which are not defined by the parent, are able to access the parent&apos;s <code>private</code> members, despite the parent declaring said members <code>private</code>.
		That seems to me like it defeats the purpose of declaring said members <code>private</code>.
	</p>
	<p>
		Eiffel&apos;s treatment of constants as being methods without arguments is strange, and at first, I thought it inefficient.
		I pictured defining Eiffel basically defining a method for each constant.
		An item to the calling stack would be pushed, the value returned, then that calling stack item popped.
		That&apos;s probably not how it works at all though.
		Eiffel&apos;s constants are probably just like methods in the syntax of how to use them.
		When compiled, such unnecessary behaviour is very unlikely to be used.
	</p>
	<p>
		&quot;Overloading&quot; is another one of those words that seems to get used differently by different languages.
		The version presented by the book is the same version Java uses: multiple methods within the same namespace can be given the same name, as long as they take different arguments.
		I find overloading to be incredibly aggregating.
		If you have different methods, just give them different names.
		The only language I&apos;ve worked with that supports overloading is Java, and the feature messes with Java&apos;s error messages and causes confusion when debugging.
		If you accidentally pass the wrong variable into a method, and what you accidentally passes in is of the wrong type, the Java compiler will complain that you&apos;re trying to call a non-existent function.
		So what do you do?
		You scour your code trying to figure out why the method that&apos;s sitting right in front of you is somehow not defined by the time you try to call it later in the code.
		Did you misspell the method name, either in the method&apos;s declaration or where you called it?
		No.
		Is there some sort of scope issue?
		Maybe.
		You can&apos;t figure out why there would be one though.
		And after a couple hours of debugging, it dawns on you that you passes the wrong variable in, not that there&apos;s anything wrong with the method at all.
		If the Java compiler had complained about a bad argument type, you&apos;d&apos;ve fixed the problem in a couple minutes, tops.
		But the Java compiler doesn&apos;t do that, because it&apos;s not looking at the method name and checking what it&apos;s parameter types are.
		Instead, it&apos;s ruling out that method as your intention entirely because it&apos;s parameter types don&apos;t match the argument types in the method call.
		The book claims overloading to be a convenience.
		Convenience!?
		Ha!
		Overloading is a terrible feature to put in a language.
		If two methods do similar things but take different arguments, just give them similar yet different names.
		For example, use <code>add_int(int number)</code> and <code>add_float(float number)</code> instead of <code>add(int number)</code> and <code>add(float number)</code>.
		You don&apos;t need and shouldn&apos;t want to give multiple methods identical names.
		The exception, of course, is constructor methods.
		You may want multiple ways to instantiate objects for a good reason.
	</p>
	<p>
		The discussion topic this week was unexpected.
		I&apos;ve seen generic programming before, but we also discussed something called &quot;open recursion&quot;, which doesn&apos;t seem to be a form of recursion at all.
		It&apos;s something I&apos;ve dealt with before though.
		The two programming languages I work with most are $a[PHP] and Lua.
		In $a[PHP], we&apos;ve got open recursion.
		It doesn&apos;t matter what order methods are declared in, they all have access to one another.
		And as long as variables are in the right scope, they can be accessed.
		In Lua though, you don&apos;t have access to variables that haven&apos;t been declared yet.
		(In Lua, everything&apos;s a variable.)
		As a result, there are times I have to declare variables without assigning values to them, then immediately assign the values as tables of functions.
		That way, the functions are able to reference other functions within the table.
		It sort of provides open recursion, you just have to jump through some basic hoops to get it.
		Still, not having it by default lets you see what it&apos;s like not to have it, even if only for a minute while you fix the code.
	</p>
</section>
<section id="Unit7">
	<h2>Unit 7</h2>
	<p>
		We have had no main assignments in this course.
		We&apos;ve had discussion assignments and learning journal assignments, but nothing else.
		Oh.
		Yeah.
		Also the mountainous piles of reading assignments.
		It&apos;s really weird that we&apos;ve had no main unit assignments though; this week was the last chance for any sort of assignment like that.
		If assigned next week, we wouldn&apos;t have a week after that in which to grade them, so we can&apos;t have assignments next week.
	</p>
	<p>
		Functional programming is interesting, but only from an educational standpoint.
		It doesn&apos;t seem like the sort of thing that&apos;s viable for actually writing programs.
		For example, the lack of looping structures is pretty big.
		Recursion is inefficient, so having to use it when a basic loop would do isn&apos;t good.
		Despite its inefficiency, recursion should be used when recursion is the intuitive way to express something.
		And by that same logic, loops should be used instead when loops are the intuitive way to express something, which is most of the time.
		Not having loops must hinder the readability of the programs written in the language.
	</p>
	<p>
		Lazy evaluation is just weird.
		At first, I thought it only a little strange, but completely necessary in the context of functional programming languages.
		However, once the book got to the part on tree comparisons, it was clear that there was quite a bit more oddity at work.
		In the example provided, two trees are converted into lists, then compared.
		Using lazy evaluation, the book says the trees don&apos;t even need to be fully converted if the trees are unequal, as the inequality will likely be discovered before that.
		Clearly, the process the example&apos;s code says to use is to convert the trees, then compare.
		Comparison shouldn&apos;t even begin until the full conversion of both trees is performed.
		However, the partly-formed lists are compared instead so the rest of the lists doesn&apos;t need to be computed.
		You can easily see how this would be efficient, but it&apos;s nothing short of bizarre.
	</p>
	<p>
		The list notation in Haskell is odd.
		It&apos;s pretty straightforward if you ignore the two numbers, two full stops, then a third number notation.
		Intuitively, I&apos;d have thought the first number was just a number, then the number double full stop number was a range.
		Instead though, all three are a range.
		The first two numbers are both literal numbers in the range and a declaration of the interval at which the numbers in the range are spaced, while the final number is the point at which the range stops.
		Except that that number isn&apos;t necessarily in the list, as the interval might not allow for it.
	</p>
	<p>
		The final article for the week started out by saying that programmers are procrastinators.
		The funny thing is, censorship at this school has turned me into a procrastinator.
		This censorship leaves me lethargic any time I even think about working on coursework, so I tend to put it off longer than I should.
		And when I do get going on it, I need frequent breaks or I zone out.
		And this applies to any programming assignments we have as well.
		This spreads out to procrastinating in other areas; I can&apos;t do X until I complete my coursework, so because I put my coursework off, X gets put off too.
		Under normal circumstances though, I don&apos;t procrastinate when it comes to programming.
		I tend to get started as soon as I can, because I&apos;m excited about the finished product.
		There&apos;s a lot I used to get done outside of programming too, before the censorship started.
		And between terms, I regain some of my vital energy and actually get quite a bit done.
	</p>
	<p>
		This article also asks the rhetorical question of why the universe is described using mathematics.
		Does the universe have some connection to maths?
		This question, to me, makes it seem like the author doesn&apos;t understand the root of what maths are.
		Mathematics is a system for working with data in a way that is coherent and logical.
		If something can&apos;t be described using mathematics, either you don&apos;t understand the true nature of the thing or the thing isn&apos;t coherent and consistent.
		There are far too many mathematical rules for me to cover each one, and besides, I don&apos;t claim to know them all.
		I once studied quantum mechanics for a bit, but the maths confused me and I never did get a firm grasp on them.
		But even the most simple mathematical rules are based on abstracting away fundamental principles of reality.
		Take addition, for example.
		If there&apos;s an apple on the table and you set two more apples on the table, the number of apples now on the table can be described using addition.
		Subtraction works the same way.
		As humans, we have a tendency to abstract things many times a day and not even notice.
		Currency is a physical abstraction for value, and we abstract away currency in accounts.
		Negative numbers find use there, when we represent owing someone something as having a negative balance.
		Multiplication builds on addition, and allows you to add multiple times at once, or even add multiple negative times.
		The list goes on.
		Reality doesn&apos;t magically know about mathematical rules.
		Instead, we make up mathematical rules to use as tools for understanding our world and processing the data in it.
		If reality were different than it is now, maths would be different as well; mathematics are only our description and interpretation of the world.
	</p>
	<p>
		The article also talks about how there are representations for games we&apos;ve made up, and those games don&apos;t resemble the actual world.
		It seems to be saying that the universe is bound by maths, but maths aren&apos;t bound by the universe.
		Once we have our rules that can describe the system we live in though, we can use those same rules to describe things we made up, such as games.
		Humans are the most bizarre animals on the planet, and we don&apos;t always direct our focus at reality or even things that could conceivably fit into our reality.
		We make stuff up, but it&apos;s not useful to have a separate language and separate formalisms for describing real versus imaginary things.
	</p>
	<p>
		Once the article got into actually explaining functional programming, it had much more reasonable things to say.
		For example, unit testing in functional programs becomes much easier, because you don&apos;t have to worry about side effects and calling order.
		All you need to worry about is inputs and outputs.
		Deadlocks and race conditions are also non-existant in functional programming, though personally, I think this is another result of the lack of side effects in functions.
		How can you have a race condition when no data is getting modified?
		You just can&apos;t.
	</p>
	<p>
		The book touched upon the Windows and UNIX restart models for updating.
		On Windows, you restart the whole system.
		Or, you did in the past.
		I&apos;m not a Windows user, so I&apos;m not quite sure about this, but I think Modern Windows allows at least some updates without full system restarts now.
		That said, the system is still rather unstable, and needs to be restarted every month or so just to get everything back into alignment.
		We in the Linux world don&apos;t put up with that kind of nonsense.
		Linux follows the UNIX model, and restarts only the components being updated, as well as components that depend on the ones being updated.
		Components are shut down, replaced, then started back up.
		Linux systems tend to run for years at a time without degradation of performance.
		We pretty much don&apos;t need to restart the whole system except in case of kernel update, as the kernel is so important that the rest of the system relies on it and can&apos;t continue while it&apos;s down.
		But some Linux users even have hot-patching available for the kernel.
		I forget the details, but they don&apos;t have to restart their machines even for kernel updates, which is good for servers that need to be online at all times.
		Personally, I haven&apos;t made the effort to get that sort of thing set up because my distribution has such infrequent kernel updates.
		As far as I can remember, I don&apos;t think I&apos;ve encountered a kernel update except when upgrading to a new release of the system, where pretty much the entire set of software sees a major update, and I prefer to shut the system down to perform a clean installation at that time anyway.
	</p>
	<p>
		The article&apos;s take on currying seemed way different than that described by the textbook.
		The textbook described it as returning functions as return values and calling those functions with new input values.
		The article instead described it as being basically the adaptor pattern, where a wrapper function is created to translate an argument set to work with a different function&apos;s parameters.
		I&apos;m guessing it&apos;s the textbook that&apos;s correct, and that the article is just explaining the concept very poorly.
	</p>
	<p>
		I get the feeling infinitely-long the data structures mentioned by the article aren&apos;t much different than generators in imperative languages.
		The example used was Fibonacci numbers, and you can&apos;t just jump to an arbitrary Fibonacci number and compute it.
		To compute a Fibonacci number, you must compute all the numbers that came before it, just like with a generator.
		A generator too can go on as long as need be, with no end.
		When the user stops asking for the next output, no more output is given.
	</p>
	<p>
		The concept of &quot;continuations&quot; seems to just be a subset of higher-order functions.
		Basically, a callback is passed to the function, and that callback called with the would-be return value of the main part of the function.
		The function then returns the return value of the callback.
		It looks like they further have implications with execution order, though I&apos;m not entirely sure such implications couldn&apos;t be enforced even with just a basic higher-order function.
	</p>
</section>
<section id="Unit8">
	<h2>Unit 8</h2>
	<p>
		Logic programming looks highly useful for certain types of problems, but pretty much unusable for other problems.
		Instead of issuing instructions for the computer to execute, you provide a list of relationships, and the computer goes about deducing other relationships you&apos;ve asked for based on the known relationships.
		If you wanted to solve one of those problems in which you needed to figure out what person lived in what house and had what colour of curtains, a logic programming language would allow you to write out all the known relationships and not worry about figuring out how to figure out the other relationships.
		From the sounds of it though, that simplicity for the programmer comes at a computational cost for the computer.
		The computer seems to try all possibilities; in other words, it uses brute force, which takes a lot of resources.
	</p>
	<p>
		The book makes the claim that it&apos;s difficult to imagine a language more simple than a logic programming language, but there&apos;s a reason for that.
		First, logic programming is such a high-level abstraction that most of the complexity is abstracted away.
		Second, logic programming is so high level that most of the <strong>*usefulness*</strong> is abstracted away!
		And third, it&apos;s so high level that most of the efficiency is abstracted away.
		Logic programming has a time and place, but it doesn&apos;t work for most situations.
		It&apos;s sort of like a domain-specific language in this regard; it&apos;s simpler than a language that can do anything, but it&apos;s also not able to do much (if anything) outside its domain.
		Sure, like the book says, the Horn clause is a powerful statement, but the fact that it&apos;s the only available statement (besides facts and goals) limits what you can do drastically.
	</p>
	<p>
		Using Prolog as a database is an interesting concept.
		Is it more efficient than using actual database software?
		Probably not.
		The <code>assert()</code> and <code>retract()</code> functions make it possible to update the data without editing the program itself though, so it seems feasible.
		Is there a way to save the database to disk though?
		That seems like a rather important feature, and the book didn&apos;t cover it.
	</p>
	<p>
		The description of what happens when you pass arguments of the wrong type in Prolog greatly reminded me of Java.
		It&apos;s not quite the same.
		In Prolog, it results in &quot;falure&quot;, which is logic programming&apos;s version of a boolean <code>false</code>.
		There&apos;s no good way to trace it though, and the failure has no obvious cause.
		In Java, when you pass arguments of the wrong value, the compiler complains that it can&apos;t find the method, even though the method has clearly been defined.
		Again, it&apos;s an error that doesn&apos;t appear to make any sense.
		It fails to notice that it&apos;s the argument that&apos;s the wrong type, not the method that doesn&apos;t exist, due to Java&apos;s ugly overloading feature.
	</p>
	<p>
		The finals exam mostly went well.
		Each of the exams had a single question I wasn&apos;t sure what the intent was though.
		I forget the exact questions, as my exams this term are proctored, so I had to take them at the testing centre instead of at home, but there was a technically correct answer and an answer I thought the exam was likely intending.
		Some of the answer keys at this school seem to favour technically-incorrect answers based on generalisation of the topic.
		I went with the technically-correct answer in both cases, but even if they get marked wrong, I should be fine.
		There wasn&apos;t a lot of challenge to the tests.
	</p>
</section>
END
);
