<?php
/**
 * <https://y.st./>
 * Copyright © 2017-2018 Alex Yst <mailto:copyright@y.st>
 * 
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 * 
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 * 
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org./licenses/>.
**/

$xhtml = array(
	'<{title}>' => 'Learning Journal',
	'<{subtitle}>' => 'CS 1104: Computer Systems',
	'<{copyright year}>' => '2017-2018',
	'takedown' => '2017-11-01',
	'<{body}>' => <<<END
<section id="Unit1">
	<h2>Unit 1</h2>
	<p>
		The $a[MIT] Press seems to have removed the textbook used by this course from their website, or at the very least, moved it from the $a[URI] this course links to it by.
		Thankfully, the course provides alternative downloads hosted on our school&apos;s own servers, so the reading assignment was still available to us.
		One of the applications required for this course is Logisim, which is provided in Debian&apos;s package manager, so all I had to do to install it was run <code>sudo aptitude install <a href="apt:logisim">logisim</a></code>.
		The website with the TECS software suite is maliciously blocking me though, which is an incredibly shady thing for the website to do.
		Because of this block, I can&apos;t download the source code the website provides under the $a[GNU] $a[GPL], but at least our own university mirrors the binary version of the software, just like it does the textbook, so again, I&apos;ll be able to complete the assigned work.
		Additionally, the virtual computing lab is available in this course for those that can&apos;t use the software directly, though I won&apos;t need that in this course.
		I really wish the virtual computing lab had been available in <span title="Operating Systems 1">CS 2301</span> last term too, as there was software we were required to run in that course that I had no way of running.
	</p>
	<p>
		Logisim starts and runs just fine.
		I&apos;m confused as to what to do with the TECS software suite though.
		It&apos;s a directory of batch files and shell files, with some subdirectories containing other items.
		I tried running each of the shell files, but I only get error messages about a missing main class (a different missing main class for each shell script I try to run).
		Hopefully when the time comes that we need to use the TECS software suite, the necessary instructions for using the software will be provided as well.
		As I can&apos;t access the TECS software suite website, as that website is maliciously blocking me as mentioned above, I can&apos;t check to see if any sort of instructions or manual is provided there.
	</p>
	<p>
		I like that the textbook is showing a modular view of the hardware.
		Modules are small and easy to understand, and more importantly, modules can be reused.
		By building up modules instead of directly building an entire system, we can reuse the same modules we&apos;ve built repeatedly, potentially even repeatedly within the same complete system.
		When the book said it could explain how to build all logic gates out of nand gates, I was taken off guard.
		Everything can be composed as a (perhaps complex) series of nand gates?
		How do you build or gates and and gates out of nand gates?
		That piqued my curiosity.
		Unfortunately, the book didn&apos;t discuss how this was actually done.
		I could do some trial and error with the provided software, but as discussed above, I can&apos;t figure out how to get it running and have no access to any sort of manual for it due to the website&apos;s restrictions on what $a[IP] addresses can be used to access the website.
		The two most basic gates, not and and, seem doable if it&apos;s possible to put a constant &quot;on signal&quot; or &quot;off signal&quot; wire into the design, but I&apos;m not even sure that&apos;s a valid design component without checking it in the software.
	</p>
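	<p>
		Just to convince myself the idea is even coherent, the standard constructions can be sketched in a few lines of Python, treating each gate as a plain function. This is my own sketch, not anything from the book or the TECS software:
	</p>

```python
def nand(a, b):
    """The primitive gate: outputs 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

def not_(a):
    # Feeding the same signal to both nand inputs inverts it.
    return nand(a, a)

def and_(a, b):
    # An and gate is simply an inverted nand.
    return not_(nand(a, b))

def or_(a, b):
    # By De Morgan's law, or(a, b) == nand(not a, not b).
    return nand(not_(a), not_(b))

# Exhaustively verify each composite gate against its truth table.
for a in (0, 1):
    for b in (0, 1):
        assert not_(a) == (1 - a)
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
```

	<p>
		Notably, no constant &quot;on&quot; or &quot;off&quot; wire is needed for these three; wiring an input to both pins of a nand gate is enough.
	</p>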
	<p>
		The book introduces the concept of a &quot;don&apos;t care&quot; symbol for use in truth tables, saying it reduces the number of rows needed for the table.
		I think additionally, when used correctly, this symbol also helps convey the semantics of the gate it represents.
		For example, in a fully-fleshed-out truth table for an or gate, we see a bunch of ones and zeros.
		We can go through that and look for a pattern, but it takes some effort.
		However, if we use this &quot;don&apos;t care&quot; symbol, we can make it completely obvious in our table that if a single <code>1</code> is present, the output is <code>1</code> and the output is <code>0</code> otherwise.
		That said, if &quot;don&apos;t care&quot; symbols are used <strong>*improperly*</strong>, they&apos;ll introduce ambiguity into our tables.
		If using this symbol, we need to double-check our work, especially for complex gates.
	</p>
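	<p>
		As a sketch of the idea (my own illustration, not the book&apos;s), the condensed or-gate table can be modelled in Python, with <code>None</code> standing in for the &quot;don&apos;t care&quot; symbol, and checked against the full truth table:
	</p>

```python
DC = None  # "don't care" marker: matches either input value

# Condensed truth table for a two-input or gate.
# Three rows instead of four, and the semantics are obvious:
# any 1 on either input forces the output to 1.
condensed = [
    ((1, DC), 1),
    ((DC, 1), 1),
    ((0, 0), 0),
]

def lookup(table, a, b):
    """Return the output of the first row whose pattern matches (a, b)."""
    for (pa, pb), out in table:
        if pa in (DC, a) and pb in (DC, b):
            return out

# Verify the condensed table against the fully-expanded truth table.
for a in (0, 1):
    for b in (0, 1):
        assert lookup(condensed, a, b) == (a | b)
```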
	<p>
		The timing diagram method of specifying the outputs of a gate given the inputs is horrid and is slow to read, not to mention that it can&apos;t even be used in a plain text file; graphical images are absolutely mandatory for this format.
		I&apos;m not sure why anyone would even consider using this format in place of a simple, easy-to-read, easy-to-create, easy-to-store truth table.
	</p>
	<p>
		Boolean algebra&apos;s an even better tool than a truth table though.
		The book doesn&apos;t mention it, but boolean algebra can be used in programming too, not just circuit design.
		It&apos;s a nice, general-purpose tool, especially in the field of computing.
		It&apos;s hard to represent the not-bar in plain text format though, especially given that it can span multiple variables.
		In that regard, truth tables still have an advantage.
	</p>
	<p>
		I&apos;m failing to see the advantage of moving an inverter from the output of a gate to the inputs.
		Simply put, this causes <strong>*more*</strong> inverters to be required.
		Instead of requiring one inverter for the output, an inverter for <strong>*each input*</strong> is required.
		That means more material cost and probably more power cost.
		However, this concept can be flipped.
		If you have an inverter for all of your inputs ... you can replace them with a single inverter for the output and switch gate types.
		This allows for better cost effectiveness and simpler design.
	</p>
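	<p>
		The equivalence being traded on here is De Morgan&apos;s law, which is easy to check exhaustively in Python (my own check, not the book&apos;s): an and gate with an inverted output behaves identically to an or gate with inverted inputs, and vice versa.
	</p>

```python
# De Morgan's laws, checked over every input combination.
# 1 - x is used as logical not on the bits 0 and 1.
for a in (0, 1):
    for b in (0, 1):
        # not (a and b) == (not a) or (not b)
        assert (1 - (a & b)) == ((1 - a) | (1 - b))
        # not (a or b) == (not a) and (not b)
        assert (1 - (a | b)) == ((1 - a) & (1 - b))
```

	<p>
		So swapping one output inverter for two input inverters (or the other way around) never changes the logic, only the part count.
	</p>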
	<h3>Discussion post drafts</h3>
	<blockquote>
		<h4>Why do Boolean functions play a central role in hardware architectures?</h4>
		<p>
			Our book doesn&apos;t really explain this.
			It says that because computer hardware is required to operate on boolean values, boolean functions are key to making this happen (Nisan &amp; Schocken, n.d.).
			However, why booleans?
			Why is it that computers operate on booleans and not some other type of value, such as integers, colours, or compass directions?
			Why are booleans the base for representing all other data types within a computer?
		</p>
		<p>
			My theory is that it has to do with simplicity.
			Representing a value within a spectrum is very difficult.
			Any slight variation can cause the value to become ... incorrect.
			To better represent values in a computer, a data type with a set of finite possibilities needed to be chosen.
			When you have such a data type, other data types can be represented with it.
			Depending on the size of the possibility set you&apos;re trying to represent, you may need to string multiple values of your chosen primitive together to represent the new data type.
			(For example, an eight-bit integer can be represented with eight boolean values.)
			The more possible states your primitive can have though, the more complex it is to represent in the hardware.
			However, if your data type can only be in one state, it&apos;s useless, as it always stores the exact same null information.
			I think booleans were chosen for use in computers because being able to represent one of two values makes them simple enough to be easily implemented, but complex enough to actually store information.
		</p>
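		<p>
			The eight-bit example can be made concrete in Python (my own sketch, purely illustrative): an unsigned integer and a list of eight booleans carry exactly the same information.
		</p>

```python
def int_to_bits(n, width=8):
    """Represent an unsigned integer as a list of booleans, MSB first."""
    return [bool((n >> i) & 1) for i in reversed(range(width))]

def bits_to_int(bits):
    """Reassemble the boolean list into the integer it represents."""
    value = 0
    for bit in bits:
        value = (value << 1) | int(bit)
    return value

bits = int_to_bits(203)
assert len(bits) == 8             # eight booleans per eight-bit integer
assert bits_to_int(bits) == 203   # the round trip loses nothing
```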
		<p>
			With booleans having been chosen for their simplicity and our ability to string them together to represent more-complex data types, we can get back to what the book had to say: because boolean values are in use, boolean functions are required to operate on them.
		</p>
		<h4>Describe Composite Gates.</h4>
		<p>
			A composite is something that is built from a mix of other things.
			For example, a composite metal, also known as an alloy, is a metal made from a mix of other metals.
			It follows then that a composite logic gate is a gate made up of other logic gates.
			There is though a subtle difference between how we use the word &quot;composite&quot; to refer to complex gates and how we use it to refer to things outside the realm of computer science.
			Normally, a composite cannot be built from multiple of the same thing.
			That is to say, if you mix iron and iron, you still have regular iron, and not any sort of composite.
			However, when speaking of composite logic gates, you could create a composite gate using exclusively one type of elementary gate.
		</p>
		<p>
			Composite logic gates allow the physical implementation of complex boolean functions using very simple primitives that don&apos;t seem very powerful on their own.
			According to our textbook, all boolean logic can be implemented using only carefully-arranged nand gates (Nisan &amp; Schocken, n.d.), and the secret to that is the building of composite gates from them.
			Creating composite gates from lesser gates involves chaining them, using the output of one as the input to the next.
		</p>
		<h4>Describe Multiplexors and Demultiplexors? What is the importance of the use of the selection bit and the data bits?</h4>
		<p>
			A multiplexor is a logic gate with three inputs and one output, in which one of the inputs determines which of the other inputs to output (Nisan &amp; Schocken, n.d.).
			The determining input is referred to as the selection bit, while the other two inputs are referred to as data bits.
			Demultiplexors are said by the book to be the reverse, using two inputs and two outputs.
			The selection bit remains as an input, but the other two inputs, the data bits, become outputs and the output becomes the new input.
			The selection bit is then used to determine which new output the new input feeds.
		</p>
		<p>
			It&apos;s worth noting though that this isn&apos;t exactly a reverse; data is lost here.
			Multiplexors throw away one of their input channels, while one of the output channels of a demultiplexor is always set to zero.
			(Which channel is the one set to zero will depend on the selection bit&apos;s input.)
		</p>
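		<p>
			Both behaviours, including the data loss, can be sketched in a few lines of Python (my own model, not the book&apos;s notation):
		</p>

```python
def mux(sel, a, b):
    """Multiplexor: output input a when sel is 0, input b when sel is 1."""
    return b if sel else a

def demux(sel, x):
    """Demultiplexor: route x to channel 0 or channel 1.
    The unselected channel is always held at zero."""
    return (0, x) if sel else (x, 0)

assert mux(0, 5, 9) == 5       # input b (9) is simply thrown away
assert mux(1, 5, 9) == 9       # now input a (5) is thrown away
assert demux(0, 7) == (7, 0)   # channel 1 is forced to zero
assert demux(1, 7) == (0, 7)   # channel 0 is forced to zero
```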
		<h4>Use your Schottky (sic) Book to identify three Muxes and three Demuxes with brief descriptions of each.</h4>
		<p>
			A regular multiplexor and demultiplexor act as described in the section above.
		</p>
		<p>
			A multi-bit multiplexor and demultiplexor are a bit different from ordinary multi-bit logic gates.
			Instead of taking the same number of bits for all inputs and outputs, an n-bit multiplexor and demultiplexor take n bits for all inputs and outputs <strong>*except*</strong> the selection bit.
			The selection bit is only one bit wide, and determines which entire n-bit input (in the case of multiplexors) or output (in the case of demultiplexors) will be used (Nisan &amp; Schocken, n.d.).
		</p>
		<p>
			Multi-way multiplexors and demultiplexors, again, are a bit different than their ordinary multi-way logic gate counterparts.
			Again, the selection bit&apos;s size isn&apos;t determined by the size of the other inputs and outputs.
			Instead, the selection bit&apos;s size is determined by the <strong>*number*</strong> of inputs (in the case of multiplexors) or outputs (in the case of demultiplexors) (Nisan &amp; Schocken, n.d.).
			If you understand binary integer sizes, the size of the selection input is the number of bits needed to uniquely identify each input or output channel.
			If you don&apos;t, this size can be found instead using logarithms, where log<sub>2</sub>{number of input/output channels} (rounded up) will be the number of selection bits you need.
			The book isn&apos;t clear on what happens when the number of input/output channels isn&apos;t a power of two.
			For example, if we have a multiplexor operating on three input channels, you obviously need two selection bits: one for channel <code>00</code>, one for channel <code>01</code>, and one for channel <code>10</code>.
			But what happens if the selection bits are set to <code>11</code>?
			Is this behaviour undefined?
			Or is a multi-way multiplexor (or demultiplexor) required to have a power-of-two number of channels to operate on so all selection inputs will be valid?
		</p>
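		<p>
			The logarithm formula above is quick to check in Python (a sketch of my own; <code>math.ceil</code> and <code>math.log2</code> are standard library functions):
		</p>

```python
import math

def selection_bits(channels):
    """Number of selection bits needed to uniquely address each channel."""
    return math.ceil(math.log2(channels))

assert selection_bits(2) == 1
assert selection_bits(3) == 2   # selection value 0b11 then selects nothing
assert selection_bits(4) == 2
assert selection_bits(8) == 3
```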
		<div class="APA_references">
			<h4>References:</h4>
			<p>
				Nisan, &amp; Schocken. (n.d.). The Elements of Computer Systems. Retrieved from <a href="https://my.uopeople.edu/mod/resource/view.php?id=132787"><code>https://my.uopeople.edu/mod/resource/view.php?id=132787</code></a>
			</p>
		</div>
	</blockquote>
	<blockquote>
		<p>
			It&apos;s true that boolean functions play an important role in computing because computerised data is only a series of booleans.
			However ...
			Data made up of boolean values plays an important role because computers operate using a series of boolean functions.
			It&apos;s a chicken-and-egg situation, with the values and the functions each being important because of the presence of the other.
			The book doesn&apos;t cover this, but I think the question we need to ask ourselves is why boolean values and functions were chosen to begin with.
		</p>
	</blockquote>
	<blockquote>
		<p>
			It looks like you had a different take on item three than I did.
			I thought it was asking about types of multiplexors and demultiplexors, but it could have just as easily been referring to examples of where to find them in the real world.
			I like your examples; they show what role they fill for us and why we need them.
		</p>
	</blockquote>
	<blockquote>
		<p>
			I think multi-way and multi-bit were supposed to be two different types of multiplexors and demultiplexors.
			That said, as I read the material, I figured a fourth type could exist that is both multi-way and multi-bit.
		</p>
		<p>
			By the way, what determines the value of k?
			How many control bits are needed for an m-way multiplexor or demultiplexor?
		</p>
	</blockquote>
</section>
<section id="Unit2">
	<h2>Unit 2</h2>
	<p>
		The book goes into detail about binary maths with integers, which is great for those not familiar with them, but I&apos;ve known how to work in bases besides base ten for years.
		The same applies to the section on maths in base sixteen.
		There wasn&apos;t much there for me left to learn, so I have few meaningful comments.
		However, it&apos;s worth noting that the &quot;overflow&quot; in the limited number of bits doesn&apos;t have to be viewed as overflow except in the most extreme circumstance; adding a positive number and a negative number will never &quot;overflow the sign&quot;.
		There&apos;s only overflow if we&apos;re looking at the bits as an <strong>*unsigned*</strong> number or if the signed number is at the very minimum of the range we can represent.
		The book, on the other hand, repeatedly treats the complement number, the one with a <code>1</code> in its highest-order bit, as negative.
		Due to the way in which signed numbers are implemented in typical binary, signed and unsigned addition and subtraction are identical, but a sign can&apos;t overflow; the number is either positive or negative (or zero, as zero is technically neither).
		The book also refers to this most-significant bit as a sign bit, which I guess is technically correct, but it&apos;s a huge oversimplification.
		I mean, if you flip a &quot;sign bit&quot;, the number should be the exact negative version of itself, right?
		But that&apos;s not how it works; if negative numbers were implemented that way in binary, separate logic would be necessary for working with positive and negative numbers based on that bit.
		Instead, the process for finding the negative version of the number is to find this so-called &quot;two&apos;s complement&quot;.
		I&apos;d go into detail as to why exactly that is, but it&apos;s really beyond the scope of the lesson.
		It should suffice to say that implementing negative numbers as cyclical to positive numbers, instead of mirrored to them, allows binary mathematics to ignore the sign, while implementing negative numbers as being higher than positive numbers allows signed and unsigned versions of positive numbers (and zero) to match.
	</p>
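	<p>
		The cyclical behaviour described above is easy to demonstrate in Python (my own sketch; an eight-bit word is assumed): negation is &quot;invert the bits, add one&quot;, and the very same adder then works for signed and unsigned values alike.
	</p>

```python
MASK = 0xFF  # eight-bit word

def negate(n):
    """Two's complement negation: invert the bits, then add one."""
    return (~n + 1) & MASK

def to_signed(n):
    """Interpret an eight-bit pattern as a signed value."""
    return n - 256 if n & 0x80 else n

assert negate(1) == 0b11111111       # -1 is the all-ones pattern
assert to_signed(0b11111111) == -1
# Signed and unsigned addition use the identical circuit:
assert (negate(3) + 5) & MASK == 2   # -3 + 5 == 2, no special sign logic
```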
	<p>
		The signed magnitude representation of numbers was new to me.
		It seems a lot easier to read, but a lot harder to work with mathematically.
		As stated above, such a representation requires different logic for working with positive and negative numbers.
		It also causes an overlap in the data we can represent, wasting valuable storage space: <code>0b00000000</code> and <code>0b10000000</code> would be zero and negative zero, respectively, which are the same exact number.
	</p>
	<p>
		The section on floating-point numbers was enlightening.
		I had no idea how such numbers were represented within a computer.
	</p>
	<p>
		The bit-shift method of multiplication and division by two should be intuitive to anyone familiar with maths at all.
		Take a look at the more-familiar base ten.
		Multiplying by ten (base ten <code>10</code>) moves everything to the left, while dividing by ten moves everything to the right.
		Likewise, multiplying and dividing by two (base two <code>0b10</code>) has the same effect in binary.
		If you know basic maths and you know how bases work, this shouldn&apos;t be any news to you.
		However, I did find this section enlightening because of other implications mentioned.
		It was explained why zeros need to be shifted in from the right, while the sign needs to be shifted in from the left.
		I&apos;ve run into this behaviour <strong>*outside the context of maths*</strong> and have always wondered why bit-shifting behaves this way.
		It seems ... highly inconsistent, to say the least.
		However, this behaviour was likely designed with multiplication and division by powers of two in mind.
		While this fundamental behaviour makes little sense when not doing maths, it does appear to have a purpose after all.
		The shortcut of splitting a factor into base-two factors, multiplying by the other factor separately, and adding the results was also something I wouldn&apos;t&apos;ve thought of doing.
	</p>
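	<p>
		Both behaviours, shifting zeros in from the right but the sign in from the left, plus the factor-splitting shortcut, can be sketched in Python (my own illustration, assuming an eight-bit word):
	</p>

```python
def arithmetic_shift_right(n, width=8):
    """Shift right by one, copying the sign bit back in from the left."""
    sign = n & (1 << (width - 1))
    return (n >> 1) | sign

assert 3 << 1 == 6    # left shift: multiply by two, a zero shifted in
assert 12 >> 2 == 3   # right shift: divide by four
# For the pattern 0b11111010 (-6 in eight-bit two's complement),
# shifting the sign bit in keeps the result negative: -6 / 2 == -3.
assert arithmetic_shift_right(0b11111010) == 0b11111101
# The factor-splitting shortcut: 5 * 6 == 5*4 + 5*2.
assert (5 << 2) + (5 << 1) == 5 * 6
```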
	<p>
		I&apos;d love to know how the first book thinks we can check for necessary carry-over when attempting to detect overflow in the adding of two unsigned integers.
		Take the case of adding <code>0b11111111</code> to <code>0b00000001</code>.
		You can&apos;t just check the most-significant bit, or even the most-significant several bits.
		Without checking <strong>*every*</strong> bit, you can&apos;t know that overflow will occur.
		You could, though, check the result like you can with signed integers: instead of checking the sign, if the result is smaller than one or both of the numbers being added together, you&apos;ve had an overflow.
		The second book makes things a bit clearer though; it says in case of a carry-over at the end, we can <strong>*report*</strong> carry-over.
		This is hugely different: it means that we&apos;re building the hardware to set some flag in case of overflow, not attempting to find the carry-over before or after it has already happened.
	</p>
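	<p>
		Both views, reporting the carry out and comparing the wrapped result against the operands, can be sketched in Python (my own model, assuming an eight-bit word):
	</p>

```python
MASK = 0xFF  # eight-bit word

def add_with_overflow(a, b):
    """Add two unsigned bytes; report whether a carry out occurred."""
    total = a + b
    return total & MASK, total > MASK

result, overflow = add_with_overflow(0b11111111, 0b00000001)
assert result == 0 and overflow
# Equivalently, overflow occurred exactly when the wrapped result is
# smaller than either operand:
assert result < 0b11111111
```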
	<p>
		The discussion assignment was very frustrating.
		The book hasn&apos;t given us anything to work with as far as building composite gates besides the restriction that we use only nand gates as our base.
		We need more information.
		For example, are constant inputs valid?
		We&apos;re also not offered any help as to how to think about building gates logically, we&apos;re just told to go do it.
		I feel like I&apos;m just feeling my way through the dark right now.
		The book goes in-depth with the things I already know, such as binary addition, then says hardly anything in the areas that I actually need information about.
		I think I misunderstood the discussion assignment, though.
		It asked us to implement a xor gate.
		The book told us to build everything out of nand gates, so I assumed we were supposed to do that for the discussion assignment.
		It didn&apos;t actually say to do that though; it wasn&apos;t very specific, and I was the <strong>*only*</strong> student to interpret it that way.
		Furthermore, the main assignment for the week specifically allowed the use of several other types of gates.
		That said, a couple students tried to claim that they built a xor gate using one xor gate.
		That was clearly the wrong answer.
		Just because other students did something doesn&apos;t make it the right answer.
	</p>
	<p>
		I thought the assignment for this week would be a pain in the butt.
		It was a three-parter, and the first two parts were no problem.
		However, the third part was a monster.
		We were to build a three-input xnor gate.
		Xnor is simply an inverted xor, but xor is ill-defined and ambiguous whenever it has a number of inputs other than two.
		The textbook for the course uses one definition, while the software we&apos;re using, Logisim, uses the other.
		So which do we use?
		Which implementation will be considered &quot;correct&quot; when it comes time to grade?
		I ended up doing what I always do when faced with ambiguity in coursework: I put in double the effort of everyone else and implemented both versions, just to be safe.
		Joy.
		Thankfully, the assignment was very specific in what primitive gates we were allowed to use to build our xnor gates; otherwise, I&apos;d assume we were stuck using exclusively nand gates, as we were told by the book that every other gate can be built out of those alone.
		While the project seemed daunting, especially because I was going to have to double my efforts, it turned out to be a rather trivial task because of our access to several basic gates to use as components.
	</p>
	<p>
		Circuit design is actually pretty fun.
		I do wish I had more to go on as far as implementing the basic gates, but once the basic gates are available, it&apos;s not too difficult.
		It&apos;s just a logic puzzle, and you have to figure out what pieces to use and where to use them.
	</p>
	<h3>Discussion post drafts</h3>
	<blockquote>
		<h4>How many gates are used in the implementation of the Xor gate in the Nissan and Schocken textbook?</h4>
		<p>
			Much to my frustration, the Nisan and Schocken textbook didn&apos;t show us how to build a single composite gate last week.
			This week, the Nisan and Schocken textbook mentions using xor gates to build a half-adder gate, but doesn&apos;t show how to build a xor gate.
			While we might be able to figure out how to build a xor gate on our own, we can&apos;t discuss the number of gates used in the book&apos;s implementation, as the book hasn&apos;t even given an implementation yet.
		</p>
		<h4>Present your implementation to the Xor chip. How many gates are there in your implementation?</h4>
		<p>
			The book has given us so little to work with as far as gate-building that I&apos;m not really sure what&apos;s valid and not.
			To make xor gate functionality work, I felt I needed a not gate.
			I have an idea on how to implement this, but without any guidance from the book whatsoever, I don&apos;t even know if this is an acceptable solution:
		</p>
		<img alt="logical not gate" src="/img/CC_BY-SA_4.0/y.st./coursework/CS1104/not_gate.png" class="framed-centred-image" width="203" height="92"/>
		<p>
			Using that solution though, one nand gate can be converted into one not gate.
			I quickly found I needed either an and gate or a gate that output <code>1</code> (or <code>0</code>) only when both inputs are <code>0</code> (or <code>1</code>).
			I forget what the book called those four gates.
			As an experiment, I put three nand gates together in the most basic configuration I could think of; one that I&apos;d been thinking about for a while, but was unsure what would result from.
			As luck would have it, I ended up with an and gate, which was exactly what I needed:
		</p>
		<img alt="logical and gate" src="/img/CC_BY-SA_4.0/y.st./coursework/CS1104/and_gate.png" class="framed-centred-image" width="320" height="130"/>
		<p>
			With the not gate, and gate, and original nand gate, I came up with the following solution:
		</p>
		<img alt="logical xor gate" src="/img/CC_BY-SA_4.0/y.st./coursework/CS1104/xor_gate.png" class="framed-centred-image" width="310" height="140"/>
		<p>
			Each not gate is built from one nand gate and the and gate is built from three nand gates.
			Therefore, my solution is effectively built from seven nand gates.
		</p>
		<h4>Can you think of a more efficient implementation?</h4>
		<p>
			When I figure out how to build an or gate, I might be able to remove the not gates, but I don&apos;t have the skill to improve that design by very much.
			It&apos;s not great, but it&apos;s the best I can do, especially given that the book doesn&apos;t want to help us learn how to effectively design gates; it seems to want us to just guess at it.
			However, much better solutions do exist.
			In particular, I like the four-nand-gate solution provided by Wikipedia (Wikipedia, n.d.); check out the <a href="https://en.wikipedia.org/wiki/XOR_gate">XOR gate</a> article if you&apos;re interested.
		</p>
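		<p>
			That four-nand-gate construction can be verified exhaustively in Python (my own check of the circuit described in the Wikipedia article, not code from any course material):
		</p>

```python
def nand(a, b):
    """The primitive gate: outputs 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

def xor(a, b):
    """The four-nand-gate xor construction: one shared middle nand
    feeds two second-stage nands, which feed the final nand."""
    m = nand(a, b)
    return nand(nand(a, m), nand(b, m))

# Check every row of the xor truth table.
for a in (0, 1):
    for b in (0, 1):
        assert xor(a, b) == (a ^ b)
```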
		<h4>Efficiency is not necessarily the number of gates used. It&apos;s all about &quot;following the money.&quot; In other words, what does it cost to implement a new gate?</h4>
		<p>
			There are a couple different costs to think about when constructing a gate.
			Last week, we learned that different gates are more or less expensive to implement than others in terms of material cost.
			If you use a small number of expensive gates to build your composite gate, you might have a higher material cost than if you&apos;d used a larger number of less-expensive gates.
			The number of gates used isn&apos;t as important as the total cost of the gates used, not to mention that you have gate-testing that needs to be done, other materials that go into the gate such as wiring, and costs of keeping your factory open.
		</p>
		<p>
			The other main costs are associated with <strong>*using*</strong> the gate.
			Using slower component gates to build your composite gate may result in higher time costs in using your gate than if you&apos;d constructed it with a higher number of faster gates.
			Energy costs are also an issue, as some gates consume more power than others.
			The number of gates used is important, but it&apos;s not by any means the only deciding factor in terms of cost.
		</p>
		<div class="APA_references">
			<h4>References:</h4>
			<p>
				Wikipedia. (n.d.). XOR gate. Retrieved from <a href="https://en.wikipedia.org/wiki/XOR_gate"><code>https://en.wikipedia.org/wiki/XOR_gate</code></a>
			</p>
		</div>
	</blockquote>
	<blockquote>
		<p>
			Where did you see that implementation in the book?
			I couldn&apos;t find it, myself.
			I even did a search for the word &quot;xor&quot;, and it only turned up one hit; it wasn&apos;t that diagram.
			I&apos;ve been having trouble wrapping my head around this stuff, and a big part of it&apos;s that I&apos;m not seeing any implementations such as that in the textbook.
		</p>
	</blockquote>
	<blockquote>
		<p>
			The goal was to build a new gate using other gates.
			You can&apos;t use a single xor gate to build a xor gate.
			You need to combine other sorts of gates, such as nand gates, to implement the logic of a xor gate yourself.
		</p>
	</blockquote>
	<blockquote>
		<p>
			I like your solution!
			It&apos;s very simple; simplicity&apos;s a great thing to have in gate design.
			It makes it easier to follow, but also, it means there are fewer parts, which means fewer things can go wrong (such as worn-out gates or flaws in design).
			Plus, y&apos;know, cost effectiveness.
		</p>
	</blockquote>
</section>
<section id="Unit3">
	<h2>Unit 3</h2>
	<p>
		I apologise for my formatting on the last two learning journal assignments.
		I&apos;ve had this same, exact learning journal assignment, with the same, exact wording, in several courses now.
		Each professor seems to interpret the assignment differently though.
		My last professor, for example, told me to be more brief and merely summarise the week&apos;s activities.
		I could&apos;ve submitted last week&apos;s learning journal entry in a format you&apos;d approve of, but you hadn&apos;t made your interpretation of the assignment clear yet, as you hadn&apos;t graded the entry for Unit 1 until the one for Unit 2 was already due and complete.
		Maybe I should make the differing interpretation issue clear on the first week&apos;s entry next time.
	</p>
	<h3>Describe what you did. This does not mean that you copy and paste from what you have posted or the assignments you have prepared. You need to describe what you did and how you did it.</h3>
	<p>
		We were limited in what gates we could use for our assignment.
		I needed xor gates, but didn&apos;t have access to them, so I set up something simple by combining and, nand, and or gates as a substitute.
		From there, I built up a single half-adder, two full-adders, and a final full-adder without a carry output to build the 4-bit adder we needed.
		From there, the challenge was in compacting the design (read: move all the wires around to take up less space) so the 4-bit adder could eventually be added to the arithmetic logic unit.
	</p>
	<p>
		For the subtraction unit, it was suggested that we find the two&apos;s complement of the subtrahend, then add it to the minuend.
		To do this, you need to invert the bits of the subtrahend, add one, then add to the minuend.
		In other words, you need <strong>*two*</strong> 4-bit adder circuits: one to find the two&apos;s complement, and one to add the two 4-bit numbers.
		If that sounds needlessly complex, it&apos;s because it is.
		I quickly realised a much simpler solution: add the one, the inverted subtrahend, and the minuend in one go!
		The one can be treated as a carry-in bit, the same as any other carry-in bit, aside from the fact that (for now) it comes from nowhere (though it&apos;d come from nowhere if we did this in two steps instead of one as well) and that it carries into the least-significant bit, which isn&apos;t normally done.
		Additionally, this carry-in-from-nowhere property would later come in handy in simplifying the full arithmetic logic unit.
		I ended up having to de-compact all my wires near the beginning of the addition logic circuit, turn the half-adder there into a full-adder, and recompact, but the result was definitely worth it: a circuit that (as we&apos;ll soon see) can perform both addition and subtraction once one minor adjustment has been made.
		(It&apos;s worth noting though that after re-reading the instructions once more, I found this is basically what the instructions said to do anyway.
		I&apos;d gone through the effort to figure out an improvement, but all I&apos;d done so far was what I&apos;d been told to do.)
	</p>
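	<p>
		The single-addition trick can be sketched numerically in Python (four-bit, as in the assignment; the code is my own illustration, not the assignment&apos;s): invert the subtrahend, then feed the &quot;plus one&quot; in as the carry-in of the one and only adder.
	</p>

```python
MASK = 0b1111  # four-bit word, as in the assignment

def subtract(minuend, subtrahend):
    """Subtract by adding the inverted subtrahend to the minuend with a
    carry-in of one: all three values go through a single 4-bit addition."""
    inverted = ~subtrahend & MASK
    carry_in = 1
    return (minuend + inverted + carry_in) & MASK

assert subtract(9, 3) == 6
assert subtract(3, 9) == (3 - 9) & MASK  # wraps, as four-bit hardware would
```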
	<p>
		So now we have a subtracter with a carry-in from nowhere.
		But what if &quot;nowhere&quot; wasn&apos;t really nowhere?
		What if we carry in one of the control bits, potentially inverted?
		When the most-significant control bit is zero, we perform maths, while if it&apos;s one, we perform a bit-wise operation.
		The least-significant control bit is used to control which mathematical (or bit-wise) operation we perform.
		With this carry-in, we can use that control bit to turn our subtracter back into an adder when needed by passing a one in for subtraction and a zero in for addition.
		As luck would have it, one already represents subtraction in the control bit setup, so we won&apos;t even need to invert that bit before carrying it in.
		In theory, we needed three copies of the same addition circuit to complete the arithmetic logic unit: two for subtraction and one for addition.
		We&apos;ve reduced that to one copy.
		While I can&apos;t be sure the setup of my addition circuit is the most efficient design for one, removing two full copies of it has improved the efficiency considerably.
		You have to keep in mind that using the multiplexor to choose which output to use like we&apos;re doing causes us to run all four operations (addition, subtraction, bitwise and, and bitwise or) even though we&apos;re only using one of the four outputs.
		Performing addition three times when you might not even need it once is a bit excessive.
		I didn&apos;t think I&apos;d be able to reduce the design this far, but it was sort of my own personal goal to get addition and subtraction running on the same circuit from the beginning.
		(This was my main actual improvement upon the design suggested by the assignment.)
		I still needed to build a conditional inverter to go along with the conditional carry-in though, which I could easily do with an exclusive or gate.
		Too bad exclusive or gates aren&apos;t on the approved gate list for this assignment.
	</p>
	<p>
		To pull this off, I needed to combine nand, or, and and gates again.
		I debated back and forth about performing the operation before the splitter (on the one four-bit value) or after the splitter (on the four one-bit values).
		Because we were instructed to build our bit-wise functions using the one-bit values, I decided to perform this operation on the one-bit values as well.
		That resulted in four times as much circuitry for the bit inversion though.
	</p>
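One way to get the xor-like conditional inversion out of the allowed gates is the identity xor(a, b) = and(or(a, b), nand(a, b)). A Python sketch of that per-bit approach (my own illustration; the function names are invented):

```python
# Conditional inverter built from or, and, and nand only.
def nand(a, b):
    return 1 - (a & b)

def conditional_invert(value, control):
    # A control of 1 inverts the 4-bit value; a control of 0 passes it through.
    out = 0
    for i in range(4):
        bit = (value >> i) & 1
        inverted = (bit | control) & nand(bit, control)  # xor from or/and/nand
        out |= inverted << i
    return out

print(conditional_invert(0b1010, 1))  # → 0b0101 (5)
print(conditional_invert(0b1010, 0))  # → 0b1010 (10)
```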
	<h3>Describe your reactions to what you did</h3>
	<p>
		Reading over the main assignment for the week, it said it would be &quot;acceptable&quot; to use the pre-built circuits supplied for part of the assignment.
		&quot;Acceptable&quot;; not &quot;required&quot;.
		For archival and copyright reasons though, this would be extremely suboptimal, not to mention incredibly inconvenient.
		(I archive all my past coursework, though in compliance with the university&apos;s rules, the archive will remain private and inaccessible until two calendar years after my final term here have passed.)
		I figured I&apos;d use my own implementation, as it wasn&apos;t prohibited.
		As I read on though, I found that using one&apos;s own implementation is <strong>*preferred*</strong> anyway.
		Sweet.
		It&apos;s preferred not only by me, but by whoever wrote up the assignment directions as well.
	</p>
	<p>
		When I went back to complete the assignment, I found I&apos;d misinterpreted the directions, so there wouldn&apos;t be any issues using the predefined adder and subtracter after all; I&apos;d thought these were components in files downloaded from the university website, but they were only components in the program.
		By the time I realised this though, I&apos;d already spent quite a bit of time compacting my adder&apos;s design to make usage in the full arithmetic logic unit more feasible.
		The subtracter was supposed to be mostly a clone of the adder with a minor adjustment, so with the adder compacted, I&apos;d already done the majority of the work of integrating it with the logic unit; there was no reason not to continue with that plan.
	</p>
	<h3>Describe any feedback you received or any specific interactions you had. Discuss how they were helpful</h3>
	<p>
		We have not received any feedback on any assignments yet aside from the Unit 1 learning journal.
		The feedback there has helped me better understand what the professor is looking for, as evidenced by the new formatting of this learning journal submission.
		We&apos;ve had interactions in the forums, but nothing notable.
	</p>
	<h3>Describe your feelings and attitudes</h3>
	<p>
		I suck at circuit design.
		I still can&apos;t understand the four-nand-gate solution to building a xor gate.
		That said, I had a lot of fun this week, and I&apos;m reasonably happy with my project submission for the week.
		If I had more time, I&apos;d compact the wires a bit further and potentially experiment with flipping the orientation of the gates for an even smaller diagram.
		I might additionally work on moving things around to make the diagram taller and narrower (my design is too wide to fit on small monitors, which will make it harder to grade; I&apos;ll do better next time).
		I&apos;m particularly happy with the fact that I got the one addition subcircuit to perform both addition and subtraction; no second addition subcircuit was needed in the full circuit.
		I&apos;m also happy that I was able to squeeze the input and output pins into the corners, not needing to stretch my already-too-wide diagram to fit them in.
		This was factored into my initial design as far as the output pin was concerned, but I had to shuffle some things around on the final day to get the input pins moved into place.
		This might show a bit because of some needlessly crossed wires that resulted; again, given more time, I&apos;d correct that.
	</p>
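As an aside, the four-nand-gate xor construction can be checked exhaustively in a few lines of Python (a sketch of the standard construction, not something from the course materials):

```python
# xor from four nand gates: one shared nand(a, b), one nand per input
# against that intermediate, and one final nand to merge the two.
def nand(a, b):
    return 1 - (a & b)

def xor_from_nands(a, b):
    m = nand(a, b)
    return nand(nand(a, m), nand(b, m))

# Verify against Python's built-in xor for every input combination.
for a in (0, 1):
    for b in (0, 1):
        assert xor_from_nands(a, b) == a ^ b
```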
	<p>
		Last week, I mentioned needing to do double the work due to the ambiguity of part of the assignment.
		This week, that paid off for me.
		There are two definitions of a xor gate; the definitions overlap when only taking two inputs, but when taking more inputs, the two versions of the logic output very different results.
		The grading instructions were very clear as to what version of the xor gate was &quot;correct&quot;.
		All three students I graded the work for got it wrong.
		I, on the other hand, presented a solution for both versions of the logic, and therefore got it both right <strong>*and*</strong> wrong.
		The definition of &quot;xor&quot; that I most agree with was the one deemed correct, but it&apos;s a bit unfair.
		The textbook we&apos;re reading from only teaches the exact opposite version.
		Hopefully that&apos;ll be fixed in future iterations of this course.
	</p>
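To make the ambiguity concrete, here are the two competing multi-input xor readings sketched in Python (my own illustration): they agree on two inputs but diverge on three or more.

```python
# Reading 1: odd parity, i.e. chained two-input xors.
def xor_parity(*bits):
    result = 0
    for b in bits:
        result ^= b
    return result

# Reading 2: output 1 only when exactly one input is high.
def xor_exactly_one(*bits):
    return 1 if sum(bits) == 1 else 0

print(xor_parity(1, 1, 1))       # → 1 (odd number of ones)
print(xor_exactly_one(1, 1, 1))  # → 0 (more than one input high)
```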
	<h3>Describe what you learned</h3>
	<p>
		The main things I learned this week were about circuit-building and pipelined processors.
	</p>
	<p>
		I understand we didn&apos;t build a full arithmetic logic unit this week, but we built enough of one to get some major concepts across.
		First of all, the implementation of an adder seems important, as does reversing an adder to use it as a subtracter.
		I also learned how to build a multiplexer, as I needed to build one of those for the arithmetic logic unit as well.
		It was also interesting to learn that no matter what computation is desired, <strong>*the entire arithmetic logic unit is active*</strong>.
		The multiplexer selects which function&apos;s data to output, but all the functions are run in parallel before that happens.
		That means that simplifying any one function of the arithmetic logic unit should make all functions of the arithmetic logic unit run more efficiently.
	</p>
	<p>
		Pipelined processing was a bit more of a hands-off topic.
		We didn&apos;t actually touch anything in that regard.
		However, it&apos;s a very interesting concept.
		It speeds the running of the processor by allowing the entire processor to be used at once instead of just a third of it.
		That said, when conditional branching of a program occurs, the processor has to guess which branch will be used.
		If it guesses incorrectly, it&apos;s got to throw out the data it&apos;d been working with in two thirds of its functionality, and go back to where it made the incorrect guess.
		It&apos;s interesting that the processor would make such a guess at all instead of waiting in such situations until it knows for sure what it&apos;s supposed to do (it could still use the pipeline method in cases that aren&apos;t ambiguous), but I guess I see the point of making a guess and going with it.
		Assuming an even spread, it&apos;ll get the answer right half the time, which still saves a lot of $a[CPU] cycles compared with not guessing.
	</p>
	<h3>What surprised me or caused me to wonder?</h3>
	<p>
		It seems that typically, components in a computer are enabled by <code>0</code> (off) signals and disabled by <code>1</code> (on) signals.
		This is incredibly counter-intuitive, and in my opinion, is an outright-bizarre design choice.
		It doesn&apos;t seem particularly bad though, unlike some design choices I&apos;ve seen, just very strange.
	</p>
	<p>
		Pins on integrated circuits are numbered from one, skipping zero.
		Why?
		We&apos;re computer people, we count from zero.
	</p>
	<p>
		The positions on the registers are labelled in a little-endian way.
		That is to say the least-significant bit is bit zero and the most-significant bit is bit n-1.
		This is counter-intuitive to me, but I guess it makes sense.
		It means that bit zero will always have the same importance, no matter the register size, as will bits one, two, et cetera.
		Due to time constraints, I had to complete my circuit before reading about this though.
		Not knowing made it a bit difficult to work with the splitters, as they kept splitting the bits the opposite way from what I expected.
		I got the hang of it eventually, and after reading that section of the assigned reading, I understood why the bits were labelled that way in Logisim.
	</p>
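A small Python illustration of why that labelling is convenient (my own sketch): bit zero is the least significant, so bit i always carries weight 2**i no matter how wide the register is.

```python
# Extract bit i of a value under the LSB-is-bit-zero convention.
def bit(value, i):
    return (value >> i) & 1

value = 0b1011
assert bit(value, 0) == 1  # least-significant bit is bit 0
assert bit(value, 3) == 1  # most-significant bit of a 4-bit value is bit 3
# Each bit's weight is 2**i regardless of register width.
assert sum(bit(value, i) * 2**i for i in range(4)) == value
```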
	<p>
		The grading instructions presented us with the answer to last week&apos;s challenge problem.
		Except ... there was no challenge problem last week.
		How do we solve problems we were never given?
		Given the problem, the solution is intuitive.
		However, we had no access to the problem until we already had the answer.
		This doesn&apos;t bode well for the predicted level of organisation in the course.
		Hopefully nothing too important will be messed up this way in the assignments.
		So far, no harm has been done.
		Actually, scratch that.
		Harm has been done due to the ambiguous xor gate question.
	</p>
	<h3>What happened that felt particularly challenging? Why was it challenging to me?</h3>
	<p>
		Figuring out multiplexors was a challenge.
		The book described how a demultiplexer functions, and after reading it several times and examining the diagram, I understood.
		However, multiplexors still felt like a black box to me.
		They&apos;re such a vital component, and I still didn&apos;t know what they actually do.
		I understand their purpose, but without some sort of description of their inner workings, I can&apos;t figure out how they perform their task.
		I spent a good chunk of time trying to figure it out before I gave up.
		As I was trying to go to sleep that night though, a possible answer came to me.
		It&apos;s likely not the most efficient solution, but I could use and gates to zero out any data from the channels not wanted (using the selection bits just like in a demultiplexor), then use an or gate to merge the lines and keep the signal from the one channel that hadn&apos;t been zeroed out.
		It would&apos;ve been nice if the book had told us about both of these complementary components though and not left us to guess at how one of them works.
	</p>
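The gate-level idea I came up with can be sketched in Python for the two-channel case (an illustration of the concept, not a gate diagram): the and gates zero out the unwanted channel, and the or gate merges whichever line survived.

```python
# Two-channel multiplexer from and, or, and not.
def mux2(channel0, channel1, select):
    not_select = 1 - select
    # Exactly one and gate passes its channel; the other outputs 0.
    return (channel0 & not_select) | (channel1 & select)

print(mux2(0, 1, 0))  # → 0 (select=0 passes channel 0)
print(mux2(0, 1, 1))  # → 1 (select=1 passes channel 1)
```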
	<p>
		Frequent use of opaque abbreviations in the book has also made understanding what&apos;s being talked about a challenge.
		If you talk about an integrated circuit, I understand the concept.
		If you talk about an &quot;IC&quot;, I&apos;ve got to go back through the text and figure out what an &quot;IC&quot; is, then go back to where I was reading and continue.
		As soon as we switch topics, I forget what an &quot;IC&quot; is, and if we start talking about them again, I&apos;ve got to go figure out what that even means yet again.
		Even within the section, we&apos;d start talking about switches, then mention &quot;IC&quot;s again and I&apos;d have to think for a bit to remember what they were.
		The first thing that came to mind was frequently &quot;input controller&quot;, and I knew that was wrong.
		It&apos;d be nice if people could quit being lazy and could actually spell out their words.
		When you&apos;re speaking, it&apos;s different.
		You say something once and it&apos;s interpreted once.
		It often pays to save effort.
		But when you type up a document such as a textbook, it lasts.
		The added effort that you put in once helps countless times; saving typing a few characters is incredibly short-sighted.
	</p>
	<p>
		The book also keeps referencing past chapters, which itself isn&apos;t bad.
		However, the book isn&apos;t available on the external Web any more, and the university has not provided us access to those chapters because we haven&apos;t been assigned them as reading material; we&apos;re skipping through the book.
		As such, we&apos;re frequently left with what amount to cryptic hints about things.
	</p>
	<p>
		Lastly, when I fixed up the spacing on my project for the week, I accidentally crossed some wires incorrectly.
		I ended up adding a couple errors and spent way too much time trying to debug.
		The directions were also a bit vague, so I wasn&apos;t sure if I was supposed to use the built-in multiplexor from Logisim or if I was supposed to build my own, for example.
		I ended up building my own, but if the grading directions ask us to check whether students used the built-in one, I&apos;ll end up docked points for that.
	</p>
	<p>
		The grading instructions this week had the truth tables set up with their rows in a non-standard order.
		Because I didn&apos;t notice at first, I nearly marked some truth tables from students incorrect when they got their truth tables right!
		I caught the problem soon enough to fix it, but it&apos;d be nice if the rows in the truth tables in the grading sheets were in the standard order; an order both intuitive and taught to us in the textbook.
		I also fear my own work might be graded incorrectly, as other students might not catch the problem like I did.
	</p>
	<h3>What skills and knowledge do I recognize that I am gaining?</h3>
	<p>
		I think signed magnitude format is a stupid and inefficient number format for use in computers.
		I&apos;ve always thought that.
		I used to think that was the only way computers stored numbers.
		I later learned of two&apos;s complement format, though not by that name, and thought that I&apos;d been wrong all along that signed magnitude format was even used at all.
		I was relieved.
		Two&apos;s complement format is just so much more intuitive, and allows positive and negative numbers to be treated identically in most contexts.
		This course taught me though that both systems are very real and in use in different processors.
		The first thing the reading material showed me this week was yet one more reason to prefer two&apos;s complement format over signed magnitude format.
		To implement subtraction in two&apos;s complement format, you can just invert the subtrahend (that is, switch the sign) and add the two numbers together.
		You can do this with maths on paper too, so it&apos;s no big breakthrough or anything - but good luck pulling that off using signed magnitude format.
		It won&apos;t work.
		You have to implement special logic for addition and subtraction separately when using signed magnitude format, but with two&apos;s complement format, you only need to implement addition and a small add-on to convert it into subtraction.
	</p>
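A quick Python sketch of that point for 8-bit values (my own illustration; the helper names are invented): negating in two&apos;s complement is just &quot;invert all the bits, then add one&quot;, so subtraction reuses the same adder.

```python
BITS = 8
MASK = (1 << BITS) - 1  # 0xFF for 8 bits

def twos_complement_negate(x):
    # Invert all bits, then add one.
    return ((x ^ MASK) + 1) & MASK

def subtract(minuend, subtrahend):
    # Subtraction is just addition of the negated subtrahend.
    return (minuend + twos_complement_negate(subtrahend)) & MASK

print(subtract(100, 58))  # → 42
print(subtract(10, 12))   # → 254 (which is -2 in 8-bit two's complement)
```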
	<h3>What am I realizing about myself as a learner?</h3>
	<p>
		I&apos;ve learned nothing new about myself as a learner.
		I don&apos;t do well with strange (read &quot;foreign&quot;, not &quot;weird&quot;) abbreviations, I can&apos;t guess at what people are thinking, and until something is explained to me, I view it as a confusing black box.
		I already knew all that about myself though, and have for quite some time.
		You may have noticed the dotted underlines under all abbreviations I use in my work.
		Those are because I mark up my abbreviations specifically as a kindness to whoever will read it.
		If you don&apos;t know one of them, you can hover your mouse over it and most Web browsers will then tell you exactly what it stands for.
	</p>
	<p>
		(I do this even for obvious abbreviations though, so don&apos;t take it as an insult if I mark up an abbreviation that you think anyone in the field should know.
		As I mentioned before, I archive everything, so abbreviations here should be clear even to the layman.)
	</p>
	<h3>In what ways am I able to apply the ideas and concepts gained to my own experience?</h3>
	<p>
		Boolean algebra is very similar to some truth expressions used in programming.
		The explanation on using bit shifting to multiply and divide by powers of two also helped me understand the odd behaviour exhibited in non-arithmetic bit-shifting.
		Other than that, most of this is new to me.
		It doesn&apos;t apply to my current experience in any way.
	</p>
	<h3>Discussion post drafts</h3>
	<blockquote>
		<p>
			Back before odometers were digital, they had actual, physical wheels with numbers on them.
			Each wheel was a digit in the number of kilometres your vehicle had travelled.
			Eventually, if you took good care of your vehicle, the number would max out.
			If it had eight digits, it&apos;d be at &quot;99999999&quot;.
			Then, if you travelled one more kilometre, it&apos;d be back at zero.
			This is overflow.
			It&apos;s unsigned, and it&apos;s in base ten, but it&apos;s overflow.
		</p>
		<p>
			The same applies to adding in binary.
			If you&apos;re working with eight bits signed, your maximum number is 127 (<code>0b01111111</code>).
			The only real difference between this and the odometer example is that the minimum number isn&apos;t zero.
			You still cycle back to the minimum, but it&apos;s not zero.
			Instead, it&apos;s -128 (<code>0b10000000</code>).
			In the odometer example, if you were four kilometres away from maxing out and you add twelve, you&apos;re now seven above the minimum: you&apos;re at seven.
			Taking negative numbers into account, if (in eight bits) you add twelve to 123 (four below the maximum), you get -121 (seven above the minimum).
			Overflow, when not handled specifically, simply cycles you back around to the beginning.
			This only applies to adding numbers of the same sign, as stated in the discussion assignment, but is a problem both with adding positive numbers and adding negative numbers.
			With negative numbers, you instead get underflow, and you cycle from the bottom to the top instead of from the top to the bottom.
		</p>
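The 123 + 12 example above can be modelled directly in Python with 8-bit signed wraparound (my own sketch):

```python
# Reinterpret an integer as an 8-bit two's complement value.
def to_signed_8bit(x):
    x &= 0xFF  # keep only the low eight bits, like hardware would
    return x - 256 if x >= 128 else x

print(to_signed_8bit(123 + 12))  # → -121 (four below max, plus twelve)
print(to_signed_8bit(127 + 1))   # → -128 (one past the maximum)
```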
		<p>
			Subtraction reverses the same-sign overflow/underflow issue though.
			Instead, same-sign subtraction is harmless, while subtracting sufficiently-large, differing-sign numbers is what causes over/underflow.
			As stated by the textbook (Tarnoff, n.d.), subtraction is just addition in reverse, so this should come as no surprise.
		</p>
		<p>
			Detecting overflow can either be done on a hardware level or software level.
			On a hardware level, we can check for a carry-over on the most-significant bit.
			For unsigned numbers, if the most-significant bit <strong>*causes*</strong> a carry-over to a non-existent bit, we&apos;ve hit overflow.
			Likewise, if the most-significant bit <strong>*tries to borrow from*</strong> a non-existent bit, we&apos;ve hit underflow.
			For signed numbers, instead of checking to see if we tried to interact with a non-existent bit, we can instead check to see if we carried into the sign bit or borrowed from the sign bit.
			On a software level, we can&apos;t check for any of these things.
			What we <strong>*can*</strong> do though is check the sanity of the answer.
			We can check to see if the result is too high or low.
			If adding positive numbers or subtracting a negative number from a positive number results in a negative number, we overflowed.
			For unsigned numbers, the result won&apos;t be negative, but it&apos;ll still be lower than the initial number, which is clearly wrong and signals that we&apos;ve overflowed.
			If we add two negative numbers or subtract a positive number from a negative number and the result is positive, we&apos;ve underflowed.
			For unsigned numbers, we&apos;ll underflow if the subtrahend is larger than the minuend.
			We can either see that the result is larger than the minuend to know we underflowed, or we can detect the underflow before it happens by comparing the minuend and the subtrahend before performing the calculation.
		</p>
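The software-level sanity check can be sketched in Python for 8-bit signed addition (my illustration; helper names invented): two same-sign operands producing an opposite-sign result signal overflow.

```python
# 8-bit signed addition with wraparound, as hardware would perform it.
def add_8bit(a, b):
    r = (a + b) & 0xFF
    return r - 256 if r >= 128 else r

def overflowed(a, b, result):
    # Same-sign operands yielding an opposite-sign result means overflow.
    return (a >= 0) == (b >= 0) and (result >= 0) != (a >= 0)

r = add_8bit(123, 12)
print(r, overflowed(123, 12, r))  # → -121 True
```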
		<div class="APA_references">
			<h4>References:</h4>
			<p>
				Tarnoff. (n.d.). Combinational Logic Applications. Retrieved from <a href="https://my.uopeople.edu/pluginfile.php/232140/mod_resource/content/2/TarnoffCh8_v02.pdf"><code>https://my.uopeople.edu/pluginfile.php/232140/mod_resource/content/2/TarnoffCh8_v02.pdf</code></a>
			</p>
		</div>
	</blockquote>
	<blockquote>
		<p>
			I agree: using that extra output, which would basically be a final carry bit, would be the proper and most effective way to detect overflow.
			Either most hardware doesn&apos;t offer that feature though, or modern programming languages don&apos;t tend to offer access to it.
			It&apos;s a feature I&apos;d love to see in future computers and programming languages.
		</p>
	</blockquote>
	<blockquote>
		<p>
			At first, I completely misunderstood your statement about overflow occurring if the solution to a subtraction problem takes the same sign as the subtrahend when the signs differ.
			That&apos;s a good way to check for overflow.
			I guess even though I thought about cases in which overflow (or underflow) could occur during subtraction, I didn&apos;t really think about how to detect it without help from the hardware.
		</p>
	</blockquote>
	<blockquote>
		<p>
			Like you said, subtracting is like adding the inverse.
			In fact, this is how it was suggested that we implement it in our assignment this week.
			It&apos;s odd that the instructions told us not to bother implementing support for negative numbers, as negative numbers are vital to how our circuits deal with subtraction.
			In fact, without any added effort at all, our circuits can deal with negative numbers and positive numbers alike.
			To the circuit, there is no difference between them; no special logic is needed.
		</p>
	</blockquote>
</section>
<section id="Unit4">
	<h2>Unit 4</h2>
	<h3>Follow-up from last week</h3>
	<p>
		You said to email you, but I don&apos;t have your email address.
		It doesn&apos;t seem to be on your profile.
		I tried using the university&apos;s messaging system to reach you, but it looks like the university&apos;s done some remodelling since I&apos;ve used the messaging system.
		The messaging system is now completely borked and no longer functions.
		I can type a message to you, but when I hit the &quot;send&quot; button, nothing happens.
		I&apos;m not sure when the change was made, as the messaging system worked perfectly fine the last time I used it, but it&apos;s been a while since I&apos;ve tried to send any messages with it.
		I&apos;m always happy to respond to emails though; I can be reached at <a href="mailto:alex@y.st"><code>mailto:alex@y.st</code></a>.
	</p>
	<p>
		I tried writing up my journal entries as a fluent conversation as you suggested, but you marked me down for that (Unit 1), saying I needed to answer it as questions.
		As for the feedback on other assignments, that&apos;s one of the questions asked by the learning journal instructions.
		With the discussion post drafts, again, the learning journal instructions say to draft our discussion posts in the learning journal, so that&apos;s what I did.
		I&apos;ll avoid doing that for the learning journal assignments in the remainder of this course.
	</p>
	<h3>This week</h3>
	<p>
		The main thing I learned this week is how to build time-sensitive gates out of time-insensitive nand gates.
		Was that one of the lessons I was <strong>*supposed*</strong> to learn?
		No, not really.
		Still, it was important for the rest of the material that I figure that out.
		I don&apos;t do well with black boxes; I can&apos;t wrap my head around them well.
		I need to know what&apos;s inside.
		Nand gates themselves are black boxes, but we have to start with some sort of building block, right?
		I don&apos;t think we&apos;ll need <strong>*multiple*</strong> types of black boxes though, if the book is being honest that all other gates can be constructed from nand gates alone.
		And after this week, I&apos;m taking that claim even more seriously.
		I imagine the clock itself isn&apos;t composed of any gates.
		It&apos;s probably a quick motor or something, spinning, connecting and disconnecting in quick succession using electromagnetism.
		Everything else besides input/output hardware and non-volatile storage ... is probably nand gates.
	</p>
	<p>
		I found it very surprising that a time-insensitive gate could be used to construct a time-dependent gate.
		It&apos;s quite counter-intuitive to me.
		Not much else specific to this lesson has made me wonder though, aside from the questions mentioned that I already pursued and no longer wonder about.
	</p>
	<p>
		Not specific to this lesson though has been a growing curiosity of mine.
		This is all hardware.
		How is software such as an operating system altering the flow of logic?
		Software (including the operating system) can&apos;t be run unless it&apos;s in $a[RAM].
		It&apos;s not enough for it to be on disk.
		That might be part of the key to all this.
		I&apos;m not yet sure how data gets copied from disk into $a[RAM], but the $a[RAM], as we&apos;ve seen this week (or at least, as <strong>*I&apos;ve*</strong> seen in my turn-nand-gates-into-a-data-flip-flop side adventure), is just another part of the logic circuit.
		I have no doubt that the numeric machine instructions correspond to control input into multiplexers or something similar.
		Each set of bits must tell the processor which sub-circuit to look at the computed output from.
		Once the software is in $a[RAM], and is therefore part of the circuit, its bits should be able to feed into the rest of the circuit without too much difficulty.
	</p>
	<p>
		The D-latch isn&apos;t a complete $a[RAM] card though.
		There needs to be a way to combine it with multiplexors and potentially something else to produce data storage that changes not every clock cycle, but only during clock cycles in which we actively seek to change it.
		I say we need multiplexers because $a[RAM] addresses, no doubt, act as the control bits to some big multiplexer.
	</p>
	<p>
		The grading instructions presented this week for last week&apos;s coursework don&apos;t match what we were told to do last week.
		First of all, the instructions last week specifically presented us with instructions for building a 4-bit and operation and 4-bit or operation circuit on the one-bit level.
		So that&apos;s what I did.
		However, the grading instructions show us that we were supposed to do it on a 4-bit level.
		That is to say, we weren&apos;t supposed to build the circuit components for that ourselves, but use the built-in ones.
		I did as I was instructed, and so I failed.
		Had I not followed the directions, I would&apos;ve gotten it right.
		Next, it looks like we were supposed to use the built-in multiplexer unit, not build our own.
		I was under the impression that we were required to use only certain gates for our design.
		That said, the instructions last week <strong>*technically*</strong> only said it limited us for the adder and subtracter, not the full arithmetic logic unit.
		I&apos;d say the instructions were misleading, but unlike with the and- and or-gate issue, the instructions weren&apos;t an outright mismatch for the grading sheet.
		That said, I put in the effort to build my own multiplexer, and I&apos;ll likely be marked down for that because the students grading it won&apos;t recognise that it&apos;s a multiplexer at all.
		There&apos;s just no way for me to win here.
	</p>
	<p>
		The book presented us with the concept of data flip-flops this week, and made the claim that despite their sequential nature, they can be implemented using non-sequential nand gates alone.
		Of course, the book offered zero explanation as to how that could even be done.
		How are we supposed to learn how things work if the lesson treats important components as confusing black boxes!?
		It makes sense to treat nand gates that way, if they&apos;re as elementary as we&apos;ve been led to believe.
		To figure out how they work, we might need to be physicists or something.
		However, anything that we can build from what we know, we should be learning.
		I pretty much spent the entirety of one of my days off from work trying to decipher data flip-flop functionality.
		It didn&apos;t help that Logisim kept locking up on me because it didn&apos;t like dealing with such statefulness in non-stateful gates.
		I came up with what I think to be a good solution, though it&apos;s not what I presented in the discussion assignment.
		In the discussion assignment, I showed the building of a data flip-flop from D-latches, so the nand gates in my diagram had to be arranged as D-latches.
		Throwing out that requirement though, and building a data flip-flop directly from nand gates, we optimise a bit.
		We&apos;re still borrowing the concept of a D-latch, but we&apos;re omitting some of the overhead of keeping the two D-latch-like structures isolated:
	</p>
	<img src="/img/CC_BY-SA_4.0/y.st./coursework/CS1104/better_data_flip-flop.png" alt="data flip-flop" class="framed-centred-image" width="409" height="146"/>
	<p>
		I ran into problems building my D-latches and data flip-flops.
		Logisim would recalculate the values carried by the wires as I moved things around, and in some cases, the gate outputs are somewhat ambiguous.
		You can calculate the new output any time you want, but to do that, you need the current input, which depends on the existing state of the wires.
		Logisim seems to be designed to model in a time-insensitive manner, which isn&apos;t always possible when building time-sensitive gates.
		Logisim would detect &quot;oscillation&quot; and lock up on me.
		It wasn&apos;t oscillation in the wires though, but in Logisim&apos;s calculations.
		To fix it, I always had to save my project, close it, and open it again.
		A fellow student found a way to get Logisim to play nicely though: there&apos;s a reset option within the program that fixes the simulation.
	</p>
	<p>
		If my understanding of the assignment this week is correct, the registers we&apos;re developing are supposed to allow continued modification of data as long as the control signal is on.
		They don&apos;t wait for a single instant to make the update; they&apos;re D-latches, not data flip-flops.
		It took forever to grasp how D-latches (and data flip-flops) function, but now that I have that down, the assignment was a breeze.
		I tried to copy my D-latch circuit from my discussion main post for the week, but it seems Logisim doesn&apos;t allow copying from one project and pasting into another; I had to recreate the circuit.
		From there, it was only a matter of running four of the circuit in parallel, with a small modification to link the clock input of the four copies.
		Come to think of it ... with four-bit gates, only one copy of this circuit would even be needed.
		That said, to make that design work, the clock input would have to be branched, then merged using a splitter.
		It&apos;d look strange, but it&apos;d function.
		Getting the registers onto the logic unit, again, took manually recreating the register circuit, as Logisim doesn&apos;t allow copying structures between projects.
	</p>
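	<p>
		A minimal sketch of the idea above, in Python rather than Logisim (the class and method names are my own invention): a gated D-latch is &quot;transparent&quot; while its control line is on, and the four-bit register is just four of them sharing a single control input.
	</p>

```python
class DLatch:
    """Level-triggered latch: output follows data while control is 1."""
    def __init__(self):
        self.q = 0

    def tick(self, data, control):
        if control:          # latch is transparent while control is on
            self.q = data
        return self.q        # otherwise it holds its last value

class Register4:
    """Four D-latches sharing a single control (load) line."""
    def __init__(self):
        self.latches = [DLatch() for _ in range(4)]

    def tick(self, bits, control):
        return [latch.tick(b, control) for latch, b in zip(self.latches, bits)]

reg = Register4()
print(reg.tick([1, 0, 1, 1], control=1))  # control on: [1, 0, 1, 1] is stored
print(reg.tick([0, 0, 0, 0], control=0))  # control off: still [1, 0, 1, 1]
```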
	<p>
		Two parts of the assignment were unclear to me though.
		First, the assignment seemed to imply that each bit of the register should have its own, independent control bit.
		That&apos;s easy enough to implement, but logically, it makes no sense.
		When you want to write a value to the register, you open up the register as a whole for writing.
		This is especially true because we&apos;re attaching the register to our arithmetic logic units; you don&apos;t modify <strong>*some*</strong> bits of an integer, leaving other bits at their previously-assigned values.
		You&apos;d end up with some bizarre number with no indication of where it came from.
		As such, I went with the intuitive approach, and used a single control bit to control the entire register.
		If I get marked down for that, so be it.
		Second, when attached to the arithmetic logic unit, should the three registers use the same control bit(s) or should they be controlled separately?
		My first thought was to control them all with the same bit.
		When you update the input, you update <strong>*both*</strong> inputs.
		However ... the output needs to be updated separately, so that plan wouldn&apos;t work out.
		As such, I gave all three registers separate controls.
	</p>
	<p>
		Last week, I said I&apos;d work at making my designs thinner, so as to make them easier to view on narrow screens.
		I didn&apos;t realise we&apos;d be building off last week&apos;s design though.
		I put in the effort needed to rearrange things enough to avoid making the design any <strong>*wider*</strong>, but like last week, I didn&apos;t have time this week to redesign that circuit for narrow monitors.
		Some width was lost due to the changed placement of the input pins to allow for buffer placement, but that was purely unintentional.
		If we keep building off this design, I thought it likely I&apos;d leave it at its then-current width.
		I&apos;d also somewhat given up my efforts to optimise my use of space.
		I was still making <strong>*decent*</strong> use of space, just not making <strong>*good*</strong> use of space.
		There simply isn&apos;t time.
		This has been one of my busiest terms so far, and to add to that, I&apos;ve got a lot going on outside of school right now too.
		Furthermore, each addition to this chip would require completely re-optimising old work each time to account for new additions.
		If I knew ahead of time when the final addition would be added, I might optimise that version and leave the versions leading up to it unoptimised and ready for the final compaction.
	</p>
	<p>
		That is, that was the plan until I figured out how to give the gates fewer input ports, which made the smaller-sized gates I&apos;d tried to use in the past look as good as they should&apos;ve looked from the start.
		I&apos;d avoided using them before because they looked stupid with more input ports than would reasonably fit.
		Using the smaller gates, I was able to compact my design to a manageable size, but that required building the whole chip, including what we built last week, from scratch.
		It was the only way to avoid crossing the wires inappropriately and messing things up.
		It was a major pain having to start from scratch, but it made this week&apos;s design easier to fit on a small monitor and laid a better foundation for future assignments that build off this one, if any.
	</p>
	<p>
		First, I recreated my half-adder.
		The fun thing about my half-adder is that it completes its work without using any xor gates.
		The grading instructions this week show an example using xor gates, but the assignment instructions from last week don&apos;t list xor gates as one of the basic gates we&apos;re allowed to use.
		I stuck with my original design&apos;s xor-less logic, even though it takes up significantly more space.
		Some things are worth the space taken up, and this isn&apos;t a real chip taking up space in a machine; it only takes up screen real estate and a little disk space.
		If this were a real chip, or if the instructions last week had allowed for xor gates, xor gates are what I&apos;d use.
		From there, I skipped building last week&apos;s adder and went right for the subtracter.
		My original chip from last week uses the subtracter sub-circuit for both addition and subtraction, not including a regular adder in its design at all.
		That allows it to reclaim some of the space lost due to not using xor gates.
		I probably should&apos;ve built the new adder using xor gates this time, as I was having to build from scratch anyway, but I rather enjoyed my initial logic; just not the amount of space originally needed to display it.
		I made much better wire-placement decisions on the subtracter this time, but the logic itself was identical to before.
		Next, I rebuilt my conditional bit inverter.
		This, along with a conditional carry bit, is what allows my subtracter to perform both addition and subtraction.
		I then set up my bitwise-and and bitwise-or logic, then my multiplexer.
		My multiplexer&apos;s a bit strange in that it&apos;s three-way, not some-power-of-two-way.
		As I said, the subtraction logic handles both addition and subtraction using the same exact gates.
		As a result, it handles two of the four use cases.
		If the most-significant bit of the arithmetic logic unit&apos;s control is set to <code>0</code>, the subtraction sub-circuit&apos;s answer is the one we want, so my multiplexer uses this bit alone to choose one of the three multiplexer options.
		If the bit is instead set to <code>1</code>, it decides between the other two options, bitwise-and and bitwise-or, based on the least-significant bit.
		Fitting the multiplexer into its original spot was a challenge, and I wasn&apos;t sure I&apos;d be able to do it, but it worked out fine.
		Then finally, I recreated this week&apos;s work.
		With the smaller gate size, I don&apos;t have to be quite as frugal with space; I&apos;m still going to try a bit, but I&apos;m not going to go nuts with it like I have been doing.
	</p>
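	<p>
		The selection logic described above can be sketched in Python (a hedged model of my own, not the Logisim circuit itself; the control-bit meanings are as I described them, with words written as little-endian bit lists to stay close to the gate-level view).
	</p>

```python
def add_words(a, b, carry):
    # ripple-carry addition of two little-endian bit lists with a carry-in
    out = []
    for x, y in zip(a, b):
        total = x + y + carry
        out.append(total % 2)
        carry = total // 2
    return out

def add_sub(a, b, subtract):
    # conditional bit inverter plus conditional carry bit: inverting b and
    # adding one makes the same adder perform twos-complement subtraction
    if subtract:
        b = [1 - bit for bit in b]
    return add_words(a, b, carry=1 if subtract else 0)

def alu(a, b, c1, c0):
    if c1 == 0:                      # most-significant control bit is 0:
        return add_sub(a, b, subtract=c0)        # subtracter sub-circuit
    if c0 == 0:                      # otherwise the least-significant bit
        return [x and y for x, y in zip(a, b)]   # chooses bitwise and ...
    return [x or y for x, y in zip(a, b)]        # ... or bitwise or

six, three = [0, 1, 1, 0], [1, 1, 0, 0]  # little-endian: 6 and 3
print(alu(six, three, 0, 0))  # [1, 0, 0, 1] = 9, addition
print(alu(six, three, 0, 1))  # [1, 1, 0, 0] = 3, subtraction
```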
	<p>
		While redoing all my work from this week and last week was a pain, it&apos;s not wasted time.
		I&apos;m learning to use Logisim better.
		Over the course of the term, I&apos;ve already learned about several of Logisim&apos;s features that we haven&apos;t even covered.
		For example, I used Logisim&apos;s constant input feature last week.
		To subtract, we invert the bits of the subtrahend, add one, and add it to the minuend.
		To add one, we need an input that&apos;s always <code>1</code>.
		In the arithmetic logic unit, I was able to pull in that <code>1</code> from the control bits, but for the stand-alone subtracter, I needed a wire that&apos;d provide an on signal without such input.
		The plan was that if I couldn&apos;t find a constant input object (which thankfully I found), I&apos;d build something convoluted like a nand gate that always received one of the inputs as well as that same input inverted.
		A and not A will never both be <code>1</code>, so that would provide the constant <code>1</code> I needed.
		An or gate instead of a nand gate would also work for this.
	</p>
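	<p>
		That fallback trick is easy to verify exhaustively; a tiny Python check of my own:
	</p>

```python
def nand(a, b):
    return int(not (a and b))

# A and (not A) are never both 1, so NAND of the pair is always 1;
# A or (not A) is likewise always 1.
for a in (0, 1):
    print(nand(a, 1 - a), a or (1 - a))  # prints: 1 1 on both lines
```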
	<p>
		The first chapter of the reading material, as I said, left us with data flip-flops as black boxes.
		I had to stop the reading at that point, do my own research, and complete the week&apos;s activities.
		It was only after completing my work that I found time to go back and read more.
		Returning to the reading, I found there were later hints as to how to make data flip-flops work (due to the first chapter belonging to a different book than the second).
		It didn&apos;t actually complete the lesson, and only showed how to build set-reset latches, which themselves are not time-dependent.
		It treated all clock-enhanced gates as black boxes, like the other book.
		I&apos;m highly disappointed in both books.
		I&apos;m very thankful for the discussion assignment this week though.
		My own research pointed me in a bit of a wrong direction, and the discussion assignment allowed me to properly integrate what I&apos;d learned from the books with what I&apos;d learned on my own, producing a working model of a time-sensitive gate.
		The divide-by-two circuit, including the implementation of a counter with it, was interesting to think about.
		Again though, without having first done my side research, I wouldn&apos;t have been able to understand this time-sensitive circuit.
	</p>
	<p>
		The third chapter discussed how memory addressing works.
		It&apos;s done through decoders?
		That&apos;s ... odd.
		I could&apos;ve sworn it would be done with multiplexers.
		Then again, I&apos;ve still failed to grasp the advantage of decoders for almost any task.
		Multiplexers seem to be the more-useful tool when working with logic gates.
		Decoders seem more useful for turning hardware on and off instead.
		It also discussed buses being two-way.
		I don&apos;t understand how that could work.
		As far as I can tell, logic gates don&apos;t flip the direction in which they compute.
		That means that the in-side of anything built from logic gates, such as a processor or $a[RAM] card, will perpetually take input from the same place and provide output to the same place.
		If they&apos;re providing output to the same lines they accept input from ... well, I just don&apos;t get how they could function.
		Again, it&apos;d help if the book went into the structure of how things work here.
		Without a lower-level look at both $a[RAM] and the processor, I can&apos;t understand buses.
		Unfortunately, I don&apos;t even know what to put into a search engine to get any help on this.
		The use of nand gates with a few potentially-inverted inputs to create the active-low signal for a given device was predictable, but without explanation as to how the inactive-high signals actually disable devices (in terms of gate logic), I&apos;m still not understanding how this all works.
	</p>
	<p>
		I&apos;m curious as to how dynamic $a[RAM] is constructed, but my guess is that that&apos;s outside the scope of this course.
		It reaches too far outside the logic gate view we&apos;ve been using.
		I hope we cover it in a future course though.
		I was happy to see how splitting an address and providing it in two parts is done.
		I was wondering how the circuitry could know when to store the first part of the address and when to use the stored first part with the incoming second part.
		The control bit idea presented was so much simpler than I was expecting.
	</p>
</section>
<section id="Unit5">
	<h2>Unit 5</h2>
	<p>
		This week, we were to write a paper, one that&apos;s at least &quot;two type-written pages long&quot;.
		You can&apos;t realistically measure a paper&apos;s length in pages though when the document is digital.
		The page count depends too heavily on the font and font size.
		The font will depend on the computer of the person reading it!
		For example, I&apos;m on Debian, so I have a whole bunch of free fonts installed.
		I have no proprietary fonts installed whatsoever.
		Most graders of my work will be on Windows or OS X.
		That means they will have a bunch of proprietary fonts installed, with little to no free fonts.
		The page count (measured as a float, not an integer) will necessarily be different between my machine and theirs.
		If I were to aim for two pages, it might show up as 1.8 pages to them; or 2.2, if I&apos;m luckier.
		At least this is a minimum requirement though.
		That means I could make sure to meet it by going way above.
		This is normally the type of opportunity I jump on, but I just didn&apos;t have it in me to do that this week.
		(I tend to aim for about double the required length, even when better, more concrete units are used for measure.
		That said, surgery last week took a lot out of me, leaving me to catch up this week, and surgery this week used up even more of my time and energy, so I wasn&apos;t able to find it in me to write as much as I normally would.)
		The instructions said to submit the paper in a certain Microsoft Office format, identified by the year Microsoft introduced it, or, optionally, in $a[RTF] format.
		I wasn&apos;t sure how to tell which version of Microsoft&apos;s format LibreOffice uses, so I figured $a[RTF] would be the only viable option for me, but it looks like LibreOffice can save in both Microsoft formats (and has the two labelled by year).
	</p>
	<p>
		The grading this week made me sad.
		One student submitted a four-bit adder and <strong>*only*</strong> a four-bit adder.
		We were supposed to be building registers.
		Another student not only failed to submit their register design, but also tried to wire their registers onto the arithmetic logic unit backwards.
		I say &quot;tried&quot; because the wires don&apos;t quite connect, so they don&apos;t actually interact with the arithmetic logic unit in any way.
		One student got it right though.
		I recognise their arithmetic logic unit from a past week, too; theirs was different in that they&apos;d used the built-in subtracter, but their own adder.
	</p>
	<p>
		As for the grading from the week before ...
		As I expected, students didn&apos;t see that I&apos;d built a multiplexer out of logic gates, so I got marked down for not using a multiplexer.
		I honestly thought we were supposed to build one.
		Even worse though, one student claimed I didn&apos;t provide a circuit capable of adding, subtracting, and performing bitwise and/or operations.
		Now that is a blatant lie.
		Clearly, they didn&apos;t even <strong>*try*</strong> my circuit.
		It may not use the built-in multiplexer unit, but it is very much capable of performing all the requested operations.
		The operations are selected using the control bits, just as if I&apos;d used the built-in multiplexer.
		I even labelled all the inputs, so it was exceedingly clear what inputs were what.
		Whatever though.
		It&apos;s not like I can do anything about it.
	</p>
	<p>
		The second of two assigned reading chapters seemed incredibly familiar; I was even recognising some phrases word for word.
		It looks like it&apos;s one of the assigned chapters from <a href="#Unit3">Unit 3</a>, so there wasn&apos;t anything new to comment on there, especially as you&apos;ve asked me to be more brief.
		I&apos;m not sure if it was intended as review or if there&apos;s a mistake in the course content.
	</p>
	<p>
		I feel like I did most of my learning this week outside the assigned reading material.
		In particular, I was able to become more familiar with what control codes are, how they work, and how they relate to machine code.
		Control codes are almost like a simpler, lower-level machine code specific to a particular component of the $a[CPU].
	</p>
	<p>
		My main challenge for the week was catching back up after two surgeries, but I managed to pull it off.
		Hopefully, I&apos;ll be feeling much better tomorrow, so I can get a jump on next week&apos;s work.
		If I can&apos;t get the jump on the coursework from the get-go, next week is going to be even worse than these past two weeks.
	</p>
</section>
<section id="Unit6">
	<h2>Unit 6</h2>
	<p>
		The grading instructions this week for last week&apos;s work are unfair.
		First, we have to grade students on whether or not they included a title page.
		The assignment instructions never <strong>*told*</strong> us to include a title page!
		Next, we&apos;re asked to grade based on whether or not the student used a double-spaced, twelve-point font.
		We were never <strong>*told*</strong> to use a twelve-point font, nor were we told to double-space our work!
		I just checked; the font size I used outside of headings is twelve-point.
		I didn&apos;t do that intentionally, as I didn&apos;t know we needed to, but random events seem to have landed me with the right font size.
		However, I didn&apos;t double-space it, as there was <strong>*no way*</strong> to know we were supposed to.
		That assignment was rigged for us to fail.
		On previous assignments, I felt the instructions were unclear, leaving me high and dry, but the lack of clarity was debatable.
		It could be said that maybe I just wasn&apos;t understanding things I should have.
		This time though, the problem is undeniably an issue in the assignment instructions and grading instructions, not something I could possibly be doing wrong.
	</p>
	<p>
		One student submitted a PYSC 1504 instead of a <span title="CS 1104">Computer Systems</span> paper.
		I can only hope that they submitted the same assignment to both their courses instead of submitting their <span title="CS 1104">Computer Systems</span> paper in PYSC 1504.
		That way they&apos;ll get credit in at least one of their courses instead of neither.
	</p>
	<p>
		The assignment this week involved translating assembly into machine code by hand.
		It wasn&apos;t too difficult, but I found myself making a simple mistake as I was working, so I tried translating back from machine code into assembly the next day to catch any other mistakes I might&apos;ve made.
		There were a couple; in one case, I told the machine to run <code>M=M</code>, and in another case, I appended a <code>;JGT</code> to the end of a computational statement.
		The assignment says to try running our hand-compiled code on the Hack simulator, but as I discussed in <a href="#Unit1">Unit 1</a>, I was never able to get the TECS software suite even running.
		We&apos;ve still not been provided any sort of instruction for that, either.
		This week, we were presented with a slide show presentation about how to use the simulator after it is up and running, but that won&apos;t help when I can&apos;t even get the thing to start up.
		Thankfully, running the code wasn&apos;t needed for anything in the work to be submitted ... this time.
		In the next two weeks though, I expect we&apos;ll need to have this software suite operational.
	</p>
	<p>
		I feel like this assignment would&apos;ve been much more educational had we actually built the Hack computer in class as the textbook had wanted us to.
		We&apos;ve had no such assignment(s) though.
		This assignment could have shown us why Hack is likely built the way it is, but instead ... it just didn&apos;t, due to us having zero knowledge of how Hack is implemented.
		That said, I think if I had a week without other assignments, I&apos;d be able to set up a circuit that could process Hack binaries on my own.
		A couple of the main difficulties would be figuring out how to get a program counter to tell the rest of the circuit which memory address to pull instructions from, and figuring out how to build memory (bi-directional buses would be a huge bottleneck on that second one).
		I do feel like I better understand the logic of software binaries, though I still have almost no clue as to how they manipulate the hardware.
		Without more information on buses at a minimum, I doubt I&apos;ll be able to understand.
		Information on $a[RAM] and program counters would likely be very helpful as well.
	</p>
	<p>
		The textbook makes a very good point as to why we should learn machine language.
		Specifically, it helps us understand why the hardware is set up the way it is.
		Last week, when I was learning about the controller and control codes, I ran into some interesting information.
		The controller can simply route certain bits of the machine language instructions to the other hardware components in some cases, using those bits as the control codes.
		Other times, it has to generate the signals it sends itself based on the machine language instruction.
		What this tells us is that a machine language and the hardware can be set up optimally, where no real translation is needed.
		Bits are simply routed where they need to go.
		Hardware and a machine language can also be set up suboptimally, where extra effort is needed to generate the control codes.
		By learning both the machine language and the control codes, we can get a pretty good idea about how optimally we could build the hardware that implements it.
		If it&apos;s clear that an optimal implementation can exist, we can see exactly why the design is as it is.
		Otherwise, we see that the design is flawed and probably poorly-designed.
	</p>
	<p>
		The book claims that all computers have explicit load and store commands for moving data in and out of the registers.
		As we looked at last week in the discussion assignment, that&apos;s actually a lie.
		(Was it last week?
		I think it was last week.)
		$a[RISC] computers have explicit load and store commands.
		$a[CISC] computers do not.
		In fact, this is one of the main weak points in $a[CISC] computing.
	</p>
	<p>
		I was able to learn some new information on interrupts in the discussion forum this week.
		In another course, I learned about timer-based interrupts, but several students talked about input-induced interrupts instead.
		It makes sense that we&apos;d reach different forms of interrupts in our research; the textbook presented us with no concept of the sort, so when each of us found something that looked like it might be what the discussion assignment was about, that&apos;s what we went with.
		That said, some students skipped over that part altogether due to the bizarre nature in which the discussion assignment was worded.
		It told us to research interrupts, so most of us decided to discuss them, but it didn&apos;t actually <strong>*tell*</strong> us to discuss them or ask us any questions about them.
	</p>
</section>
<section id="Unit7">
	<h2>Unit 7</h2>
	<p>
		It&apos;s another week in which we&apos;re supposed to use the TECS software suite, but as before, I still can&apos;t even get it running.
		I&apos;m not sure what I&apos;m doing wrong and we&apos;ve still been given zero guidance whatsoever for making this work.
		Likewise, I&apos;m still unable to access the manual, as it&apos;s hosted on a server that maliciously blocks my $a[IP] address.
		As such, I have no hope of figuring this out unless directions are provided, but at this point, I don&apos;t think they will be.
		Thankfully, I was able to complete the work yet again without relying on the TECS software suite.
		We were to write code, assemble it, and test it.
		I wasn&apos;t able to test it or use the provided assembler, so I wrote my own assembler.
		Assuming always-valid input, writing an assembler at this level isn&apos;t difficult.
		I just made it so the thing would halt if the input didn&apos;t make sense.
		I also needed to build this assembler because of the grading of last week&apos;s work.
		Instead of the grading instructions giving us an answer key, they said to assemble the original code and compare it to the hand-assembled code presented in the work we graded.
		Again, it&apos;d be nice if we had some sort of instructions for getting the TECS software suite up and running so I wouldn&apos;t have to hand build my own assembler, a task I doubt many other students in this course took on.
	</p>
	<p>
		The funny part is that I decided I&apos;d need to build the assembler to complete the assignment before I even started with the reading material for the week.
		Then, on the first page, this week&apos;s $a[PDF] starts talking about how easy it is to build such an assembler.
		It sounded like the book might cover this task, potentially pointing out things I might otherwise overlook.
		The main thing I overlooked was the use of symbols, both variables and labels.
		It seemed odd to start variable assignment with memory address <code>16</code> instead of memory address <code>0</code>, seeing as the executable code on the Hack platform exists in a separate memory space and doesn&apos;t need the allocated space at the beginning of the data memory, but I just went with it; it wasn&apos;t like it was a difficult rule to implement.
		(Later, the book actually explained the reason for this strange address-skipping: the predefined symbols.)
		I thought I could resolve symbols in the same pass as the regular translation, but then I found the reason this is so difficult.
		Jump statements can jump not only back, but <strong>*ahead*</strong>.
		To translate the A-command before the jump statement, we need to know the symbol used in the A-command, which we may or may not have encountered yet.
	</p>
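	<p>
		This forward-jump problem is the usual reason assemblers make two passes over the source. A hedged Python sketch of just the pass structure (my assembler was written in $a[PHP], and real Hack code generation is omitted here):
	</p>

```python
def first_pass(lines):
    # record each (LABEL) with the address of the following instruction
    symbols, address = {}, 0
    for line in lines:
        if line.startswith("(") and line.endswith(")"):
            symbols[line[1:-1]] = address  # labels emit no instruction
        else:
            address += 1
    return symbols

def second_pass(lines, symbols):
    # resolve @symbol references, allocating variables from address 16
    next_variable = 16
    out = []
    for line in lines:
        if line.startswith("("):
            continue
        if line.startswith("@") and not line[1:].isdigit():
            name = line[1:]
            if name not in symbols:        # first sighting: a new variable
                symbols[name] = next_variable
                next_variable += 1
            line = "@" + str(symbols[name])
        out.append(line)
    return out

code = ["@i", "(LOOP)", "@LOOP", "0;JMP"]
print(second_pass(code, first_pass(code)))  # ['@16', '@1', '0;JMP']
```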
	<p>
		In past units, the reading material discussed terminating Hack programs by putting them in infinite loops.
		I didn&apos;t understand this.
		I thought it was saying that by instructing the computer to jump back to the <strong>*beginning*</strong>, we&apos;d halt the execution because the Hack computer is set up to interpret this as the instruction to stop.
		This seemed like an incredibly arbitrary and hacky thing to build into the platform.
		The examples in this week&apos;s reading material cleared up my confusion.
		Basically, you write a jump instruction that jumps back to itself.
		I doubt the Hack computer ever actually stops running the program until you shut the whole computer down, but this prevents the Hack computer from reaching the next instruction, which is undefined.
		It just stays in its little jump loop instead.
		I&apos;m not sure what would happen if the machine were able to reach the undefined instructions; it&apos;d depend on the implementation of the instruction memory.
		If the unused portion were zeroed out, the machine would simply run <code>@0</code> repeatedly and harmlessly.
		(You&apos;d lose the value in the A register, but you lose that anyway if you set the value of it for the infinitely-repeating jump.)
		But if it&apos;s not zeroed out, any sort of bizarre instruction might be carried out.
		And what happens when it gets to the <strong>*end*</strong> of the instruction memory?
		I can&apos;t say for sure, but I suspect there&apos;d be an overflow in the program counter and the Hack computer would start back at instruction <code>0</code> again.
	</p>
	<p>
		I didn&apos;t like the $a[API] that the textbook proposed, so I defined my own.
		For example, the $a[API] for the symbol table required both a symbol and an address to map it to.
		That means that something external to the symbol table class must track which memory address should be allocated to the next variable.
		Um.
		No.
		The symbol table should track that itself.
		Additionally, the proposed $a[API] seems to imply that the assembler should check to see if a variable name is already associated with a memory address, every time a variable is used in the code, then allocate a memory address if need be.
		Again, this seems like a poor design choice to me.
		The symbol table should be able to handle all that transparently.
		The assembler should be able to just ask the symbol table what address is associated with a given symbol.
		If that symbol has not yet been encountered, according to the Hack assembly syntax, it&apos;s a variable and should be allocated an address at that time.
		With that in mind, I set up my symbol table class to not only keep track of what address the next variable should be allocated, but also transparently allocate said address if an unrecognised variable&apos;s address is requested.
		When encountering labels, the assembler can explicitly set the associated address, and when encountering variables, the transparent allocation is used.
		Due to the way my native language, $a[PHP], handles classes that want their instances to be treated like arrays, I did set up a method for checking to see if a symbol exists in the symbol table.
		However, this method is not used internally by the class, nor is it used by my assembler.
		In fact, I could&apos;ve provided a dummy method instead of a working one, but I didn&apos;t, as I prefer that my methods function whenever it&apos;s reasonable for them to do so.
		The book recommended building the symbol table last, but as I had the clearest picture in my mind of how it should function, it was the component I completed first, though I did write part of the main assembler before finishing it.
		Later, having this part done first would be useful, as I was able to make calls to its $a[API] within the code-parsing script.
	</p>
	<p>
		With the symbol table built, I started back up on the main assembler script.
		I had to go back and modify the symbol table class to make it throw an exception if it was passed an invalid symbol name (one that uses invalid characters or begins with a digit), but otherwise encountered no problems.
		This was a vital change because of how I wrote up the assembler&apos;s logic.
		It assumes that any line beginning with an open parenthesis and ending in a close parenthesis (after comments and whitespace have been stripped) is a label.
		I wanted to keep the logic simple there, and didn&apos;t put in any sort of validation.
		Originally, the logic also required that the line be at least three characters long, but I removed that limitation so that empty labels would be passed to the symbol table too, where the symbol table would promptly throw an exception and abort the assembler.
		With that fix in place, I completed the assembler using instances of a not-yet-defined class for processing individual lines using the only $a[API] that seemed obvious to me.
		In retrospect, the symbol table&apos;s $a[API] should&apos;ve been just as obvious to me as the command parser, but it wasn&apos;t.
		I knew how I wanted it to behave, just not the method names.
		Implementing the <code>\\ArrayAccess</code> interface in my symbol table instead of using regular method calls was a choice I made only after getting started; it&apos;s not a perfect fit, but it works well.
		I debated back and forth as to where to put the logic for determining whether something was a symbol or a constant; I could put it in the code-translator class that I was now working on, but I could also put it in the symbol table class.
		Neither class really needed to know whether a given value is a variable or a constant, but between the two, <strong>*one*</strong> of them had to be able to tell the difference.
		I decided it&apos;d be easier to have the symbol table handle this, and let the command processor ignore the specifics of where values come from.
		If my symbol table weren&apos;t so Hack-specific, I would&apos;ve gone the other direction, but as is, the symbol table class can&apos;t really be used outside of building a Hack assembler anyway.
	</p>
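	<p>
		A hedged Python sketch of the symbol-table behaviour described above (the original was a $a[PHP] class implementing <code>\\ArrayAccess</code>; the method names here are my own invention): lookups transparently allocate variables, labels are set explicitly, and invalid names raise an exception.
	</p>

```python
import re

# Hack symbol syntax: letters, digits, underscore, dot, dollar, and colon,
# not beginning with a digit
SYMBOL = re.compile(r"^[A-Za-z_.$:][A-Za-z0-9_.$:]*$")

class SymbolTable:
    def __init__(self, first_variable=16):
        # a few of the predefined symbols occupy the low addresses
        self.table = {"SP": 0, "LCL": 1, "ARG": 2, "THIS": 3, "THAT": 4}
        self.next_variable = first_variable  # the table tracks this itself

    def define_label(self, name, address):
        self._validate(name)
        self.table[name] = address

    def address_of(self, name):
        self._validate(name)
        if name not in self.table:  # unseen symbol: treat it as a variable
            self.table[name] = self.next_variable
            self.next_variable += 1
        return self.table[name]

    @staticmethod
    def _validate(name):
        if not SYMBOL.match(name):
            raise ValueError("invalid symbol name: " + repr(name))

table = SymbolTable()
table.define_label("LOOP", 4)
print(table.address_of("i"), table.address_of("LOOP"), table.address_of("i"))  # 16 4 16
```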
	<p>
		I&apos;m confused as to why the Hack platform uses seven bits in its comp field.
		It only has 28 commands, which can easily fit into five bits, with four command codes left over.
		I certainly hope we&apos;re not seeing the full command list (at least yet) in this course and that at <strong>*least*</strong> sixty-five commands are available (which would barely require a seventh bit).
		That said, the bits might represent different components of the equations being used.
		I know the book said this isn&apos;t the case and that the comp part of the commands in the assembly are indivisible, but I&apos;m seeing a definite pattern here.
		For example, the first bit seems to determine whether A is referenced or M is.
		The final bit seems to have something to do with inversion (both bit-negation and sign inversion).
		Other similarities between like equations seem to cause like binary output as well, though I didn&apos;t make a full list or anything.
	</p>
	<p>
		It took me most of the first day to write the code, then most of the second to document the classes.
		With that done, it was time to debug.
		The obvious way to test it was to feed it the code I actually needed assembled.
		I started with my own work from last week.
		The first bug caused crashes due to my misunderstanding of the <code>\\preg_match()</code> function, which I used when parsing dest components.
		I&apos;m still confused as to why, but when a group doesn&apos;t match, the function sometimes includes that group&apos;s array key with an empty value and other times leaves the key out entirely.
		There is a pattern to which it does (unmatched groups at the end of the pattern are dropped from the matches array, while unmatched groups followed by a group that does match come through as empty strings), but it&apos;s a bizarre behaviour.
		Accounting for that in my code, the next bug involved incorrectly padding values for A-commands.
		The padding function defaults to padding on the right, but binary values obviously need to be padded on the left.
		It hadn&apos;t occurred to me that it would pad anywhere but the left, so my assembler got every A-command wrong.
		With those bugs fixed, I was able to generate my answer key, and I verified that both my assembler and my submission from last week were correct.
		Grading the work of my peers, I had to compare each bit by hand, but I found all three fellow students submitted work that matched the output of my assembler.
		This further suggests that my assembler is correct, rather than that I&apos;d made the same mistake in both my submitted work and my new assembler.
	</p>
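The left-padding fix amounts to something like this (a Python sketch rather than my actual $a[PHP]; the function name is mine):

```python
def encode_a_command(value):
    """Encode a Hack A-command: a 0 bit followed by the value in 15 bits."""
    if not 0 <= value <= 0x7FFF:
        raise ValueError('A-command value out of range')
    return '0' + format(value, 'b').rjust(15, '0')  # pad on the LEFT

assert encode_a_command(2) == '0000000000000010'
# Right-padding (my original bug) would have produced '0100000000000000',
# a completely different instruction.
```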
	<p>
		A couple days later, I had time to revisit assembly and get started on the assignment for the week.
		Quickly, I found more flaws in the assembler.
		I knew I hadn&apos;t gotten a chance to test the symbol-handling code yet.
		It wasn&apos;t handling symbols correctly, rejecting them if they were only a single character in length.
		Multi-character symbols were dealt with fine though.
		Checking the code, I found I wasn&apos;t using anchors in my $a[regexp].
		I corrected that problem, but of course missing anchors couldn&apos;t&apos;ve caused the error I&apos;d seen (they would allow some invalid symbol names through, not reject valid ones), and in fact, the change broke things further.
		Now no label names were accepted at all.
		Eventually though, I found the issue: a missing end bracket.
		With that put in place, symbols seemed to be handled correctly.
		With the symbol-handling code now working, I decided to actually turn the third part of the assembler into a proper module.
		It now has only an $a[API], no user interface.
		I set up a small wrapper script to restore the previous command line functionality and continue my assembly work.
		I ran into no further issues, but I&apos;m not convinced my symbol-handling code is well-tested yet.
		In particular, I did not attempt to make sure invalid labels are rejected; I used only valid labels as I was only working with valid code.
		On a later date, I might do some better testing.
	</p>
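The anchor bug translates directly to any regexp engine. In Python terms (using Hack&apos;s symbol grammar: letters, digits, underscore, dot, dollar, and colon, not starting with a digit):

```python
import re

# Hack symbol grammar: first character may not be a digit.
SYMBOL = re.compile(r'[A-Za-z_.$:][A-Za-z0-9_.$:]*')

# Without full anchoring, a partial match is still accepted:
assert SYMBOL.match('2ndLoop') is None       # leading digit rejected...
assert SYMBOL.match('LOOP END') is not None  # ...but trailing junk slips through

# fullmatch() (the equivalent of ^...$ anchors) validates the whole string:
assert SYMBOL.fullmatch('LOOP END') is None
assert SYMBOL.fullmatch('i') is not None     # single-character symbols are fine
```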
	<p>
		The week&apos;s assignment was mostly straightforward, and the only real challenge at first was in getting the assembler to work properly.
		The final problem for the week did have a twist, though, that took me a while to solve.
		In particular, we were to work with an array.
		Hack assembly doesn&apos;t have a way to reserve multiple consecutive memory addresses with a single variable declaration, so I had to create a bunch of dummy variables to reserve the addresses and prevent the next variable from overlapping with the array.
		As a side note, I see now why languages such as Java require array size to be specified at array-creation time and why arrays cannot change in size.
	</p>
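The dummy-variable trick works because Hack hands out variable addresses in first-use order, starting at 16. A Python model of that allocator (the class and names are mine, purely illustrative):

```python
class Allocator:
    """Model of Hack's first-come, first-served variable allocation."""
    def __init__(self):
        self.table = {}
        self.next_addr = 16  # variables start at RAM[16]

    def address_of(self, name):
        if name not in self.table:
            self.table[name] = self.next_addr
            self.next_addr += 1
        return self.table[name]

alloc = Allocator()
assert alloc.address_of('arr') == 16      # the array's base address
for i in range(1, 5):                     # dummies reserve arr[1]..arr[4]
    alloc.address_of(f'arr_pad{i}')
assert alloc.address_of('counter') == 21  # the next real variable lands past the array
```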
	<p>
		The goal for the week was to learn how code in high-level languages translates to commands in low-level languages.
		I think I&apos;ve successfully done that, and I have a greater appreciation for the developers of cross-platform, high-level languages (at least those whose code is freely available; I have no love for proprietary languages).
		Quite frankly, there are several reasons people don&apos;t often write in low-level languages.
		The main one is no doubt that writing programs directly in assembly is a pain in the neck, but portability matters too: a cross-platform, high-level language saves the same program from having to be rewritten for several different $a[CPU]s.
		But in addition to learning the intended lesson, I also learned how to build a simple assembler, furthering my understanding of the translation process between assembly and machine language.
		It&apos;s a pain that I still can&apos;t get the assigned Hack assembler (and Hack emulator) running, but some good definitely came from that difficulty.
	</p>
	<p>
		I have one more week to go.
		Hopefully, like this week, the assigned emulator won&apos;t be a vital necessity.
		Either that, or hopefully we&apos;ll finally be given directions on how to even get the thing started.
		As I mentioned back in <a href="#Unit1">Unit 1</a>, I&apos;ve been unable to start the TECS software suite, and furthermore, have no access to the online manual because the website it&apos;s hosted on maliciously discriminates against users based on their $a[IP] addresses, and I have an $a[IP] address the site doesn&apos;t like.
		Looking back in my notes though, this course offers a virtual computing lab.
		I found the first guest account&apos;s credentials wouldn&apos;t work, but the second account&apos;s did.
		The virtual computing lab is painfully slow, but if worst comes to worst, I should be able to get what I need done there.
	</p>
	<p>
		It occurred to me on the final day that I could probably build a Hack emulator myself, aside from the input/output components.
		Had I had the time to do so, I could&apos;ve properly tested my assembled code before submitting it.
		This week was far too busy to allow me to do that though, even if I&apos;d thought of doing it sooner, especially as I already had to build the assembler.
		Maybe the emulator&apos;s a project for next week if it&apos;ll still be useful then.
	</p>
</section>
<section id="Unit8">
	<h2>Unit 8</h2>
	<p>
		The assigned reading material for the week is hosted on a Web server that maliciously discriminates against users based on their $a[IP] addresses.
		As a result, I spent the first half of the week unable to access the reading material, which was worrisome.
		What if this material is on the final exam!?
		Thankfully, I have a dynamic $a[IP] address; it changes every ten minutes.
		Mid-week, I spent an entire day (aside from my hours on the clock at work) attempting to load the page every ten minutes.
		I had no guarantee this would work, but it was my only chance at making sure I was prepared for the test.
		Eventually, I landed on an $a[IP] address that the website allowed through.
		And what did I find?
		It was the exact same chapter we&apos;ve read in two other units.
		This is the third time we&apos;re reading this chapter!!
		Both other times, though, we were given a link to a copy hosted on the university website, which doesn&apos;t block $a[IP] addresses, so it was much easier to access.
		It took me hours to get ahold of the assigned reading material, and I have absolutely nothing to show for it.
	</p>
	<p>
		Anyway, as the reading material is an exact repeat of something we&apos;ve read twice before, I have no further notes on it.
	</p>
	<p>
		After last week&apos;s assignment, the ungraded exercise this week wasn&apos;t difficult at all.
		It was pretty much a repeat of what we&apos;d already done, but with a slightly more complex algorithm.
	</p>
	<p>
		Speaking of which, the assignment &quot;solution&quot; for last week&apos;s work is seriously janky.
		One of the main focuses of last week&apos;s learning material was the use of symbols.
		However, the solutions for the assignment completely ignore the possibility of including symbols.
		Leaving out symbols makes the fourth problem a lot easier though.
		To be honest, I wasn&apos;t sure how we were supposed to do that.
		We needed an array, which meant consecutive memory addresses referenced through a single variable name, yet the next variable used would automatically be assigned an address that overlaps with the array&apos;s second memory location.
		My solution was to use dummy variables to reserve the addresses I needed for the array.
		That way, the next real variable used would be assigned the address just past the end of the array.
		The official solution provided by the course instead bypassed that issue entirely, as numeric addresses were simply hard-coded into the program, so it was trivial to simply not assign the same addresses to multiple uses.
		I swear, I&apos;m not understanding what this course is asking of me.
		For most of this term, I&apos;ve been getting it all wrong.
		The worst part is that with clearer directions, I could get it right.
		I understand how to do what needs to be done, I&apos;m just not understanding what exactly it is that does need to get done.
		This is the end of the term though, so that problem is over with.
	</p>
	<p>
		From the submissions I graded this week, it seems I&apos;m one of the few students who actually understand assembly.
		Maybe it helped that I put in the effort to build my own assembler.
		In any case, the code submitted by other students this week either makes no logical sense or uses outright invalid commands, such as <code>M[0]=5.</code>.
		There&apos;s so much wrong on just that one line.
		First, you can&apos;t index a register.
		In fact, the assembly code can&apos;t include indexing at all; that all has to be simulated using a series of lower-level commands.
		Second, you can&apos;t assign a constant value to M.
		You need to assign it to A using the <code>@5</code> notation, copy it to the D register, set the value of the A register to the correct memory address, then copy the value from D to M.
		And finally, why does that command end in a full stop?
		Where did that even come from!?
		Assembly isn&apos;t even that complicated, but people aren&apos;t getting it.
		I understand overlooking something.
		Perhaps you forget to store a value you need to or something.
		But the submissions this week were completely ridiculous.
	</p>
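For reference, the legal way to do what that line was apparently trying to do (store the constant 5 at some memory address; I&apos;ll use address 0 purely for illustration) takes four instructions:

```
@5      // load the constant 5 into the A register
D=A     // copy it into D
@0      // point A at the target address
M=D     // RAM[0] = 5
```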
	<p>
		The ungraded quiz had a couple bad questions on it.
		First, a question asked what the data structure used for storing the names and values of symbols is.
		Obviously, the correct answer is that it&apos;s the symbol table.
		However, that wasn&apos;t an option; the options were queue, stack, hash table, or list.
		We never covered what kind of data structure the symbol table should be implemented as.
		Obviously, the symbol table shouldn&apos;t be implemented as a queue or a stack.
		I shouldn&apos;t even need to justify my reasoning on that.
		My understanding though is that the book wanted the symbol table implemented using pairs.
		Not key/value pairs, just pairs.
		That seems like something you&apos;d put in a list.
		In other words, I think the book&apos;s implementation is to create a list of pairs, and check through every pair when a symbol is encountered.
		How inefficient.
		It seems like if you know what you&apos;re doing, the hash table is the best option.
		In the assembler I built, I used a $a[PHP] array, which is like the deformed child of a hash table and something other languages refer to as arrays.
		($a[PHP] arrays use key/value pairs, but also preserve element order.
		$a[PHP]&apos;s a mess, I know, but $a[PHP] and Hack (a modified version of $a[PHP] created by Facebook) are the only two languages I know that treat classes the way I want them treated: as more than just objects themselves.
		That&apos;s why I use $a[PHP] for everything.)
		I was basically using the hash table side of $a[PHP] arrays though.
		I needed a quick way to look up values and I didn&apos;t care about the order.
		So I chose hash table as my answer.
		That was marked correct, but it really is something we should&apos;ve covered if we were to be tested on it.
		So I guess the question wasn&apos;t a bad one; the course just didn&apos;t prepare us for it.
		If I hadn&apos;t already taken a different course that happened to cover what hash tables even are and how they work, I&apos;d&apos;ve had a 50% chance of getting that one right.
		The second questionable question was a statement to be marked true or false:
	</p>
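The performance gap between the two shapes is easy to demonstrate. A list of pairs has to be scanned linearly, while a hash table (a Python dict here, standing in for the hash-table half of a $a[PHP] array) looks up in constant time on average. A sketch of both, with made-up entries:

```python
# The book's shape: a list of (symbol, address) pairs, scanned linearly.
pairs = [('SCREEN', 16384), ('KBD', 24576), ('LOOP', 4)]

def lookup_list(symbol):
    for name, address in pairs:  # O(n): may check every pair
        if name == symbol:
            return address
    return None

# The hash-table shape: average O(1), no scanning.
table = dict(pairs)

assert lookup_list('LOOP') == table['LOOP'] == 4
assert lookup_list('FOO') is None and 'FOO' not in table
```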
	<blockquote>
		<p>
			The <code>addEntry</code> routine is being called only during the first pass of the Assembler.
		</p>
	</blockquote>
	<p>
		This time, the problem with the question was reversed.
		We <strong>*did*</strong> cover the answer to this in class, but the answer we covered was incorrect.
		The book says the <code>addEntry</code> routine is used in both passes; in pass zero to add the labels and in pass one to add the variables.
		However, the correct answer is this is <strong>*implementation-specific*</strong>.
		The book&apos;s implementation had the program check to see if a symbol exists before treating it as a variable.
		If the symbol didn&apos;t exist in the table yet in the second pass, the <code>addEntry</code> routine was called to add a variable.
		I wrote my symbol table to manage itself though and keep this logic out of the assembler&apos;s main module.
		(I said before that my symbol table was a $a[PHP] array, but that was an oversimplification; it was actually an array wrapped in a table-keeper object that ensured all values were valid and automatically handled the address-allocation for variables.)
		In my implementation, the <code>addEntry</code> routine is called <strong>*only*</strong> during the first pass and <strong>*only*</strong> to add labels.
		The <code>getAddress</code> routine, used in the book&apos;s implementation to get the address of a known-existing symbol, in my version automatically recognises missing symbols as new variables, allocating them an address in the manner described by the book (starting at address <code>16</code> and incrementing by one for each variable).
		(As a side note, my symbol table class also automatically recognises the predefined symbols by initialising the internal array to hold these values, so the main logic doesn&apos;t have to explicitly check for those either.)
		That means that in my implementation, the <code>contains</code> routine isn&apos;t even necessary (it&apos;s included anyway to satisfy the demands of an interface I implemented, but it&apos;s never used) and the <code>addEntry</code> routine is only used in the first pass, where labels must be explicitly added instead of simply being initialised when referenced for the first time.
		My way produces more intuitive, readable code in the module that depends on the symbol table, while the book&apos;s way makes the symbol table itself simpler to implement.
		Either way is perfectly valid; it&apos;s just a matter of where you want to put your complexity.
	</p>
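To make the implementation-specific point concrete, here is a Python sketch of a table along the lines I describe: <code>addEntry</code> is for labels only, and <code>getAddress</code> silently allocates unknown symbols as variables. (The method names follow the book&apos;s $a[API]; everything else is my own design, reconstructed from memory, not the book&apos;s code.)

```python
class SymbolTable:
    # Predefined symbols are baked in, so callers never special-case them.
    PREDEFINED = {'SP': 0, 'LCL': 1, 'ARG': 2, 'THIS': 3, 'THAT': 4,
                  'SCREEN': 16384, 'KBD': 24576,
                  **{f'R{n}': n for n in range(16)}}

    def __init__(self):
        self.table = dict(self.PREDEFINED)
        self.next_variable = 16  # variables are allocated from RAM[16] up

    def add_entry(self, symbol, address):
        """First pass only: record a label's instruction address."""
        self.table[symbol] = address

    def get_address(self, symbol):
        """Second pass: unknown symbols are new variables; allocate them."""
        if symbol not in self.table:
            self.table[symbol] = self.next_variable
            self.next_variable += 1
        return self.table[symbol]

st = SymbolTable()
st.add_entry('LOOP', 4)            # pass one: a label
assert st.get_address('LOOP') == 4
assert st.get_address('i') == 16   # pass two: first variable auto-allocated
assert st.get_address('j') == 17
assert st.get_address('R5') == 5   # predefined symbols just work
```

With this design, the main assembler loop never needs a <code>contains</code> check at all.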
	<p>
		Near the end of the week, I was thinking about that question on symbol tables and how I wouldn&apos;t have known what to do if not for a previous course that other students in this course may not have taken yet.
		It got me thinking about hash tables and how they&apos;re implemented as arrays whose slots store chains of entries.
		That got me thinking about lists, as well as a language I&apos;ve been thinking about writing for the past several days.
		$a[PHP] is tolerable, but it&apos;d be nice to have a language without its quirks.
		More importantly though, $a[PHP] doesn&apos;t really have a way to hook into graphical interface libraries.
		You cannot develop graphical applications in $a[PHP] without an extra module, and the module available is severely outdated.
		There are hacky systems for building applications in $a[PHP] that are more up to date, but they involve embedding a Web browser into your application and using the $a[PHP] to generate $a[XHTML] output.
		Um.
		No.
		Not interested.
		Anyway, that got me thinking about the base classes I need to provide in my language that all other classes would need to be built up from.
		Other classes can either inherit from these classes or use properties that contain instances of these classes.
		In my language, unlike in $a[PHP] and Java, all values would be objects and have methods.
		Addition with the <code>+</code> operator, for example, would, behind the scenes, call some method on the first operand, passing the second operand as an argument.
		I think this is what Python does.
		As I was thinking about the primitive classes, I started wondering if lists should be one of them or if lists should somehow be built up from something else.
		So I started thinking about what a list really is.
		In a list, elements can be accessed easily in any order.
		That rules out implementing lists as a chain of nodes, as reaching an element would mean walking the chain.
		Arrays in Java can&apos;t change size either, as the amount of space allocated to the array has to be known when the array is created.
		Then it dawned on me: this is why strings in Java are implemented as objects, not primitive values!
		If strings were primitive values, their sizes would need to be known beforehand every time you define a variable to hold one.
		Objects instead hold references to other data, so they can effectively grow without their own base node growing.
		For example, if you had strings implemented as primitive values, you&apos;d need to know the size of every string you were going to include in a list before instantiating the list, as this would affect the list&apos;s data size.
		Furthermore, variables and list items alike that had been assigned a specific string size would not be able to hold strings of different sizes.
		The system would be too rigid and strings could not be properly updated.
		(I say &quot;updated&quot;, but a better word would be &quot;replaced&quot;.
		Strings in Java are immutable, but you can remove the string object held in a variable and replace it with a different-sized string that is potentially a permutation of the first.)
		I think I better understand the complexities of strings, and I wonder how this is dealt with in a lower-level language such as C.
	</p>
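Python does in fact work that way: <code>a + b</code> is sugar for <code>a.__add__(b)</code> (falling back to <code>b.__radd__(a)</code>), and every value, even an integer, is an object with methods. A quick check (the wrapper class is mine, purely illustrative):

```python
class Metres:
    """Toy wrapper showing that '+' dispatches to a method."""
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        return Metres(self.value + other.value)

assert (Metres(2) + Metres(3)).value == 5                 # calls Metres.__add__
assert (2).__add__(3) == 5                                # even ints work this way
assert isinstance(2, object) and isinstance('x', object)  # everything is an object
```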
	<h3>Epilogue (Unit 9)</h3>
	<p>
		The exam presented an interesting question that sort of touched upon something I&apos;d wondered about, didn&apos;t have time to look up right away, and forgot about.
		It asked the following question:
	</p>
	<blockquote>
		<p>
			A demultiplexor with 5 outputs must have a minimum of ____ select bits?
		</p>
	</blockquote>
	<p>
		Clearly, to assign each output a unique bit combination, you need three bits.
		With five bit combinations in use, three bit combinations are then ... I don&apos;t know what.
		It might depend on the implementation.
		These bit combinations may be unused, resulting in no output when used as the control value, or there might be some overlap, so (for example) three outputs could each be assigned two bit combinations (or one could have four; or one could have two and another could have three).
		I wasn&apos;t sure if n-way (de)multiplexers were a thing when n is not a power of two.
		It seems such an implementation is allowed though.
	</p>
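The arithmetic generalises: an n-way demultiplexor needs the ceiling of log2(n) select bits, and any leftover bit combinations are up to the implementation. A quick check (the helper function is my own):

```python
import math

def min_select_bits(n_outputs):
    """Smallest number of select bits that can address n_outputs outputs."""
    return max(1, math.ceil(math.log2(n_outputs)))

assert min_select_bits(5) == 3   # the exam's answer
assert min_select_bits(8) == 3   # a power of two uses every combination
assert min_select_bits(9) == 4
# With 5 outputs and 3 bits, 2**3 - 5 = 3 combinations are left over.
```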
</section>
END
);
