<?php
/**
 * <https://y.st./>
 * Copyright © 2017-2018 Alex Yst <mailto:copyright@y.st>
 * 
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 * 
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 * 
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org./licenses/>.
**/

$xhtml = array(
	'<{title}>' => 'Learning Journal',
	'<{subtitle}>' => 'CS 2401: Software Engineering 1',
	'<{copyright year}>' => '2017-2018',
	'takedown' => '2017-11-01',
	'<{body}>' => <<<END
<section id="Unit1">
	<h2>Unit 1</h2>
	<p>
		The reading assignment for the week was sections 1.1 through 1.4 and section 2.1 from our <a href="https://www.ece.rutgers.edu/~marsic/books/SE/book-SE_marsic.pdf">textbook</a>, as well as the following:
	</p>
	<ul>
		<li>
			<a href="https://cs.ccsu.edu/~stan/classes/cs530/slides11/ch3.pdf">Ch3.pptx - ch3.pdf</a>
		</li>
		<li>
			<a href="https://csrc.nist.gov/csrc/media/publications/shared/documents/itl-bulletin/itlbul2009-04.pdf">ITL Bulletin The System Development Life Cycle (SDLC), April 2009 - itlbul2009-04.pdf</a>
		</li>
		<li>
			<a href="https://www.tutorialspoint.com/sdlc/sdlc_tutorial.pdf">SDLC - sdlc_tutorial.pdf</a>
		</li>
		<li>
			<a href="http://agile.csc.ncsu.edu/SEMaterials/AgileMethods.pdf">404 Not Found</a>
		</li>
		<li>
			<a href="http://home.hit.no/~hansha/documents/software/software_development/topics/resources/SDLC%20Overview.pdf">SDLC Overview - SDLC Overview.pdf</a>
		</li>
		<li>
			<a href="http://www.ijarcsse.com/docs/papers/May2012/Volum2_issue5/V2I500405.pdf">404 Not Found</a>
		</li>
	</ul>
	<p>
		Two of those pages are obviously <code>404</code> error pages, so hopefully the content they used to contain isn&apos;t too important to this course.
		The &quot;Ch3.pptx - ch3.pdf&quot; and &quot;SDLC - sdlc_tutorial.pdf&quot; files were composed almost entirely of white text on a white background.
		The only parts that were even readable were the tables on the former, which instead used white text on a light blue background, and the diagrams on the latter.
		Even selecting the text didn&apos;t bring it to visibility.
		My best guess is that those $a[PDF] files use some Adobe-specific hack for &quot;proper&quot; display and don&apos;t function in other $a[PDF] readers such as the one supplied as a part of Firefox.
		I don&apos;t have Adobe software at my disposal though, both because Adobe doesn&apos;t support Linux and because the lack of source code means I can&apos;t trust Adobe software on my machine even if my operating system <strong>*were*</strong> supported by Adobe.
		Again, hopefully these two files weren&apos;t very important for the course and were just a reiteration of what the textbook had to say.
		&quot;SDLC Overview - SDLC Overview.pdf&quot; was a 53-page $a[PDF] of mostly comics and diagrams.
		I&apos;m not sure what we were supposed to take from that.
		It didn&apos;t help either that many of the comics were in what I believe (based on the $a[ccTLD]) to be Norwegian.
		I don&apos;t know Norwegian.
		It appears that some non-comic, non-diagram text was provided, but it was scarce and likely didn&apos;t contain much information.
		That&apos;s just as well, as, like the two previously-mentioned $a[PDF]s, the text in this one showed up as white-on-white.
		People like to get fancy in their presentation, but you don&apos;t know how your readers will perceive it.
		You don&apos;t know what technology they have or what their disabilities are.
		When conveying information, simple formatting is almost always best.
		The &quot;ITL Bulletin The System Development Life Cycle (SDLC), April 2009 - itlbul2009-04.pdf&quot; file was a good read, explaining why security considerations should be taken into account from the beginning, before even a single line of code is written.
		I agree (and have always agreed), but I don&apos;t have much to respond with.
		The bulk of my thoughts for the week are in regard to the content of the main textbook readings.
	</p>
	<p>
		It&apos;s difficult to take the author of the textbook (or anyone) seriously when they&apos;re using phrases such as &quot;$a[ATM] machine&quot;.
		An $a[ATM] is an automatic teller machine, so an &quot;$a[ATM] machine&quot; would be an &quot;automatic teller machine machine&quot;.
		Intelligent people that actually know what they&apos;re talking about don&apos;t use the same word both in and out of their acronyms, such as &quot;$a[ATM] machine&quot; or &quot;$a[PIN] number&quot;.
		Still, the textbook does present some good points, especially about modularisation and abstraction; nothing overly insightful, but still very valid points.
		It also says we&apos;ll be working with data representation in $a[XML] later, which I look forward to.
		I&apos;m a fan of $a[XML] because of the structure and order it provides; even <a href="https://y.st./">my website</a> is written in $a[XHTML] (a form of $a[XML]) instead of $a[HTML] because of how much cleaner $a[XML] is than other $a[SGML] variants and especially $a[HTML].
		($a[HTML] used to be an $a[SGML] language, but has since deteriorated into simply a mess; it&apos;s too messy to even follow $a[SGML]&apos;s structural rules any more.)
	</p>
	<p>
		The book also mentions how telephone numbers are difficult to remember, given that humans have a tendency not to be able to store much in short-term memory.
		This is one of many reasons that the telephone number system is terrible.
		You may have noticed that email addresses are in a much easier-to-remember format.
		Our minds can store them as meaningful words instead of strings of digits.
		This is an example of the &quot;chunking&quot; the book mentions; strings of digits are much less easily chunked.
		Not only are email addresses easier to remember, they&apos;re more meaningful as well, and looking at an email address, you often have an idea of who it might belong to (either their name or what type of person they are), even if you&apos;re unfamiliar with that particular email address.
		There are also technical advantages to an email-address-like system, such as better distribution and the capacity to service multiple addresses from a single endpoint; telephone numbers don&apos;t allow this (at least not without cooperation from the telephone company).
		It&apos;s also worth noting that here in the United States, telephone numbers are no longer even machine-readable.
		They&apos;re meaningless numbers that correspond to information in a lookup table telling where to route a call or $a[SMS]/$a[MMS] message.
		In other words, telephone numbers in the United States no longer have the advantage of a direct $a[IP]-address-like system (the numbers aren&apos;t the actual information on where to route things, so a lookup must be performed) and they lack at least three of the biggest advantages of a domain-name-like system (the client can&apos;t own the name between times of service and reuse the name later, sub-names aren&apos;t possible, and they&apos;re not human-readable/-memorable).
		The telephone number system is garbage and should be retired; we should really be moving to something better for voice communication, such as $a[SIP].
	</p>
	<p>
		The content of the textbook has a few parallels to what we&apos;re doing in <span title="Computer Systems">CS 1104</span> right now.
		It says when we get to chapter five, we&apos;ll be working with boolean algebra, which we&apos;re already using in <span title="Computer Systems">CS 1104</span>.
		The main example for use throughout the textbook is also a home security system, which we used as an example in <span title="Computer Systems">CS 1104</span> this week as well.
	</p>
	<p>
		The book mentions the difficulty in having a hard-wired connection between remote home security systems and the centralised security company server, but is it really that hard?
		I might be missing something, but couldn&apos;t a connection be established over the Internet?
		No dedicated lines would be needed for this; the already-in-place Internet infrastructure would provide all the hard-wiring needed between the two distant locations.
		The only dedicated hard-wiring that would be needed would be within the home itself to connect the locks, control panels, and sensors.
		I like that the book covers that we might have unhappy customers with our system, as the system doesn&apos;t always know best.
		The example alarm system is a great example of this.
		The system is set to turn on the lights if the user unlocks the doors, but the user might want them to remain off.
		Personally, as a customer, I wouldn&apos;t want my lock interfering with my lights at all.
		If I don&apos;t turn them on myself, I don&apos;t want them on.
		Part of this is a matter of principle; I feel separate functionalities should remain isolated and not artificially tied together.
		Another part is practicality though; except at night, when it&apos;s too dark to see, I keep my lights off and use the natural light that comes in through the windows.
		I don&apos;t want my lights turned on when I come in when I&apos;m going to have to turn them back off right away almost every time!
		Certain other customers, on the other hand, would greatly appreciate not having to turn the lights on when they walk in.
		It&apos;d save them time and effort, especially if they&apos;re carrying large bags of groceries sometimes and can&apos;t easily reach for the light switch.
	</p>
	<h3>Discussion post drafts</h3>
	<blockquote>
		<p>
			Part of the problem in that example is that of expectations.
			People can wait for the development of a word processor.
			Word processors are non-vital.
			Personally, I prefer plain text files (or $a[XHTML] files) over more-complex document formats, but even if you prefer word-processing document formats, you can make do with plain text files.
			Additionally, other word-processing applications had been available at the time, so demand for Microsoft&apos;s version may not have been urgent.
			With highways and bridges though, alternatives aren&apos;t readily available.
			You could take a detour, but the costs of doing so regularly stack up.
			With an aircraft, the machines are huge and expensive; you don&apos;t keep spares around because of monetary and spatial costs, so when you need one, you need it soon.
		</p>
		<p>
			Another part of the problem is that it compares apples to oranges.
			Aircraft are somewhat uniform, using the same moulds and blueprints to make several of the same type of aircraft.
			Building an aircraft is more like compiling a word-processor from source code than developing one from scratch; you already have the moulds and blueprints.
			Developing a word-processor would be more like building the moulds and drafting the blueprints for an aircraft than actually building one.
			I would not at all be surprised if the development of a new type or model of aircraft ran a bit late or a bit over budget.
			Compiling from existing source code shouldn&apos;t take more than the expected amount of time and money, and neither should building an aircraft from existing moulds and blueprints.
		</p>
		<p>
			Another problem is the complexity of the issue at hand.
			In building a complex program or complex blueprints, you&apos;re going to find it often takes more time to solve the initial problem than you thought, and by the time you solve it, you&apos;ve run across several other problems in your design that must be solved as well.
			It&apos;s recursive, too; as you solve each problem, there&apos;s a high probability that more problems will come up.
			When building source code or complex blueprints, you&apos;re doing complex <strong>*design work*</strong>.
			A bridge requires design work as well, but the complexity of this design work is smaller.
			Structural integrity is a factor, but for the most part, a bridge is a bridge; it doesn&apos;t need to do anything besides remain intact and be shaped such that it can be crossed.
			On the other hand, when you build an aircraft, the design has already been provided, either by someone else or by your past self.
			The design work is already complete, and that&apos;s the hardest part.
			With the highway bridge example, it&apos;s worth noting that because every patch of land is different, the blueprints and designs must be redrafted every time a bridge is built.
			I don&apos;t know a whole lot about bridge-building, but I imagine certain optimisations have had to be made in the design process because of this need to always redesign for every built instance.
		</p>
	</blockquote>
	<blockquote>
		<p>
			Good catch, but the reason I didn&apos;t cite a source is because I didn&apos;t use one.
			Those are my own thoughts on the matter before having read the reading material for the week; with my busy schedule, I wouldn&apos;t have made the Sunday deadline, had I completed the sizeable reading assignment first.
		</p>
		<p>
			I could be very wrong, but I&apos;m not of the opinion that word processors are overly necessary.
			Better options exist, but people ignore them.
			You do make a valid point about competitors though.
			Assuming you offer bug patches at no charge, getting your product on the floor before your competitors get theirs out, even if yours is buggy, has advantages.
			Of course, if you decide not to offer gratis bug patches, you&apos;ll have convinced your early adopters that your products are garbage, and they won&apos;t buy your later-upgraded version.
			In that case, you&apos;re better off waiting until your product works before releasing it, so as to spare your company&apos;s reputation.
		</p>
	</blockquote>
	<blockquote>
		<p>
			You make a good point that the problem of going over time and budget isn&apos;t isolated to software projects.
			I think part of the problem is that people like to try to push unreasonably-tight constraints.
			In the case of the software company, the company knows from past experience that they&apos;ll almost always go over time and budget constraints, but instead of factoring that in by, say, multiplying the expected time and budget needed by some calculated factor, they continue basically lying about the budget internally.
			They know the new project will go over, but they don&apos;t care.
			They won&apos;t even cancel the project when it does prove that it&apos;ll cost more (in time and/or in money) than expected.
			In other words, the quoted &quot;budget&quot; isn&apos;t the actual budget.
			Lying like that internally is one thing.
			Does it really hurt anyone, seeing as the ones doing the lying are the ones doing all the decision-making for the company?
			Probably not.
		</p>
		<p>
			However, in the case of building a project for a customer, these lies are quite intentional and malicious.
			For example, take the construction projects you mentioned.
			The contractor knows very well that unexpected problems arise, but instead of factoring those into the quote, they intentionally give a best-case scenario price.
			In other words, they&apos;re almost guaranteed not to go <strong>*under*</strong> the quoted price and timeline, but there&apos;s a fairly good chance they&apos;ll go <strong>*over*</strong>.
			So why do they do this?
			Basically, customers don&apos;t push for honesty enough.
			Instead of going with the company that gives it to them in writing that the cost won&apos;t go above a certain price, they go with the company that gives them the false hope that the cost will be smaller.
			To undercut the competition without actually charging any less, companies such as construction companies (or software developers working on projects for specific clients) will intentionally avoid factoring in unplanned (but completely expected) costs, knowing they&apos;ll be more likely to get the job because of the lower quote.
			In other words, these people intentionally and maliciously lie for their own personal gain.
			They therefore go over budget strictly because the budget they said they&apos;d work within was made smaller than they knew would likely work out.
		</p>
		<p>
			You make a good point too about new frameworks popping up and obsoleting old knowledge.
			Some of these new frameworks are important to learn and use (for various reasons, depending on the framework in question), but doing so changes the equation too much to make proper estimates from past experience possible.
			Other frameworks are trendy garbage that everyone jumps on board with, even though they don&apos;t actually help with anything productive.
			Either way, if your client (or your boss) expects you to use a given framework, be it a valid one or a garbage one, you&apos;re stuck complying, and that messes with your ability to give a proper quote beforehand.
		</p>
	</blockquote>
	<blockquote>
		<p>
			I completely agree about the aircraft being based on already-known technologies.
			The blueprints exist, the moulds exist ... nothing new is being developed!
			It&apos;s not at all like building a new word processor.
			You&apos;re not inventing word processors in general, but you&apos;re inventing a new one.
			It&apos;s like inventing a new model of aircraft instead of building a copy of an existing model.
		</p>
		<p>
			You also make a good point about using parts built by others.
			In software development, you often build everything from scratch.
			There&apos;s a chance that you might build on top of a framework, but often times you won&apos;t.
			You&apos;ll likely use a mid- to high-level language too, instead of writing your code in assembly.
			But that&apos;s about it.
			You&apos;re doing the rest yourself.
			With bridges and aircraft, the parts are mass-produced and you buy those off-the-shelf parts instead of starting completely from scratch.
		</p>
	</blockquote>
	<blockquote>
		<p>
			If we can inspect bridges and aircraft for completeness as we build them, why can&apos;t we do that for code?
			You say that we do run tests on the code as we go, but that some parts may still be incomplete.
			How is that different from a bridge or aircraft?
			Why is it any more likely that some unfinished parts will slip under the $a[radar] in code and not in bridges and aircraft?
		</p>
	</blockquote>
</section>
<section id="Unit2">
	<h2>Unit 2</h2>
	<p>
		The reading assignment was sections 2.2 and 2.3 of the textbook mentioned in the last entry, as well as (optionally) the following:
	</p>
	<ul>
		<li>
			<a href="https://ece.uvic.ca/~itraore/seng422-06/notes/arch06-1.pdf">arch06-1.pdf</a>
		</li>
		<li>
			<a href="https://en.wikibooks.org/wiki/Introduction_to_Software_Engineering/Architecture">Introduction to Software Engineering/Architecture - Wikibooks, open books for an open world</a>
		</li>
		<li>
			<a href="https://en.wikibooks.org/wiki/Introduction_to_Software_Engineering/Planning/Requirements">Introduction to Software Engineering/Planning/Requirements - Wikibooks, open books for an open world</a>
		</li>
		<li>
			<a href="https://msdn.microsoft.com/en-in/library/ee658098.aspx">Chapter 1: What is Software Architecture?</a>
		</li>
		<li>
			<a href="https://processimpact.com/articles/telepathy.html">When Telepathy Won&apos;t Do: Requirements Engineering Key Practices</a>
		</li>
		<li>
			<a href="https://www.cs.cmu.edu/afs/cs/project/vit/ftp/pdf/intro_softarch.pdf">saintro.word - intro_softarch.pdf</a>
		</li>
		<li>
			<a href="https://www.site.uottawa.ca/~bochmann/SEG3101/Notes/SEG3101-ch3-1%20-%20Intro%20to%20Analysis%20and%20Specification.pdf">Microsoft PowerPoint - SEG3101-ch3-1 - Intro to Analysis and Specification.ppt - SEG3101-ch3-1 - Intro to Analysis and Specification.pdf</a>
		</li>
		<li>
			<a href="https://www.tutorialspoint.com/software_engineering/software_requirements.htm">Software Requirements</a>
		</li>
		<li>
			<a href="http://www.ece.rutgers.edu/~marsic/books/SE/instructor/slides/lec-3%20RequirementsEng.ppt">LECTURE 3: Requirements Engineering</a>
		</li>
		<li>
			<a href="http://www.ece.rutgers.edu/~marsic/books/SE/instructor/slides/lec-4%20SoftwareArch.ppt">LECTURE 4: Software Architecture</a>
		</li>
	</ul>
	<p>
		The line between functional and non-functional system requirements seems arbitrary at best.
		At first, I thought the difference was that functional requirements were carried out by the system itself, while non-functional requirements were carried out by an outside force.
		The example given by the textbook was that data backups are considered non-functional.
		Normally, a backup would be carried out by some other program, not the one using the data.
		However, that doesn&apos;t seem to actually be the distinction between functional and non-functional requirements at all; the textbook also specifies several requirements that <strong>*must*</strong> be realised by the system itself as non-functional, including reliability and security.
		My best guess is that the so-called &quot;functional&quot; requirements are classified as such based on the fact that they&apos;re the &quot;action&quot; the system takes, not a quality the system must have.
		Like I said though, that&apos;s an arbitrary distinction, and I&apos;m not sure I see any value in separating the requirements into these two categories.
	</p>
	<p>
		The security alarm example is starting to get ridiculous.
		Now, the author has added that if you don&apos;t close your door within a set period of time, the police are notified.
		There are plenty of legitimate reasons why you might want your door open, and you shouldn&apos;t be forced to close it to avoid false alarms.
	</p>
	<p>
		In the discussion for the week, I found I might be using an outdated version of the textbook.
		That said, I&apos;m using the version that the university links to in the course materials.
		That means if I&apos;m using the wrong version, it&apos;s because the university is linking to the wrong version.
		However, it seems at least some students have the right version, so the university might be linking to multiple versions of the textbook in different places.
		This wouldn&apos;t be the first course at this school in which there were problems like these in the materials.
		Last term was particularly bad in that regard, and most students in one of my two courses were completely lost as to what the assignments even were.
	</p>
	<p>
		The main assignment for the week seemed vague.
		We were supposed to write up acceptance tests, but we haven&apos;t covered at all how to do that.
		That is, unless it&apos;s true we&apos;ve been given multiple versions of the textbook.
		In that case, the unlucky few of us that have found the incorrect version haven&apos;t been shown how to do this, while the lucky few that found the correct version have been.
		Well, I did my best with what I have, and haven&apos;t been able to locate any other textbook links in the materials.
		There&apos;s not much else I really <strong>*can*</strong> do.
	</p>
	<h3>Discussion post drafts</h3>
	<blockquote>
		<p>
			The book didn&apos;t really cover this very much, but this is my understanding of the strengths and weaknesses of users and developers during the specification of requirements.
		</p>
		<h4>Weaknesses of users</h4>
		<p>
			Users know their field, so there is certain knowledge about their field that they take for granted.
			As a result, they often fail to mention critical requirements of the proposed system, not even considering the fact that the developers likely have no clue that the users even need them.
			They also use their field&apos;s jargon, not considering the fact that the developers aren&apos;t familiar with it.
			Users also often don&apos;t <strong>*know*</strong> what they need or want from a system because they lack the ability to visualise.
			If presented with a system, they can tell you how to improve it by incremental degrees, but they can&apos;t often tell you what they need before you&apos;ve put a good chunk of the system together already.
			They also often don&apos;t know what technology is capable of, further limiting what they&apos;ll even ask for (Marsic, 2009).
		</p>
		<h4>Weaknesses of developers</h4>
		<p>
			Developers are not fluent in the users&apos; field.
			As a result, they don&apos;t know the users&apos; jargon, so they don&apos;t always understand the requirements given to them (Marsic, 2009).
		</p>
		<h4>Strengths of users</h4>
		<p>
			The users are familiar with the environment in which the system will be used.
			They know what tasks need to be performed (though they don&apos;t always know how to have the system be involved in said tasks).
		</p>
		<h4>Strengths of developers</h4>
		<p>
			Developers are pretty well aware of what computers can and cannot handle.
			When presented with a requirement for the proposed system, the developers can pretty quickly decide whether that requirement is even viable.
		</p>
		<div style="APA_references">
			<h4>References:</h4>
			<p>
				Marsic, I. (2009, June 27). Software Engineering. Retrieved from <a href="https://www.ece.rutgers.edu/~marsic/books/SE/book-SE_marsic.pdf"><code>https://www.ece.rutgers.edu/~marsic/books/SE/book-SE_marsic.pdf</code></a>
			</p>
		</div>
	</blockquote>
	<blockquote>
		<p>
			The course instructions tell us to get the book at <a href="https://www.ece.rutgers.edu/~marsic/books/SE/book-SE_marsic.pdf"><code>https://www.ece.rutgers.edu/~marsic/books/SE/book-SE_marsic.pdf</code></a>, so that&apos;s why I&apos;m using that version.
			If we&apos;re using the newer book instead, then that should be updated in the course instructions.
			Where is the newer book, anyway?
		</p>
	</blockquote>
	<blockquote>
		<p>
			I&apos;m not sure being able to explain the problem in domain-specific terms is a strength.
			The user needs to communicate what they need to the developer, and the developer doesn&apos;t know the domain-specific jargon.
			Rather, inability of the user to <strong>*avoid*</strong> such domain-specific terms is a <strong>*weakness*</strong> during requirements elicitation.
			I noticed you put use of domain-specific terminology as both a strength and a weakness of users that are trying to communicate with developers, but it can only be one or the other.
		</p>
	</blockquote>
	<blockquote>
		<p>
			That&apos;s strange.
			That&apos;s the link I&apos;ve been using, and it shows up as the 2009 edition for me.
		</p>
	</blockquote>
	<blockquote>
		<p>
			You make a good point that the developer can suggest features the user might like.
			At first, the developer isn&apos;t going to understand the problem that needs to be solved, but once they have a good enough grasp on what is expected and what the user is looking for, the developer can add extra features (after asking the user&apos;s permission for each feature) to make the user&apos;s job easier.
		</p>
	</blockquote>
	<blockquote>
		<p>
			A thought just occurred to me: if users had the right mindset to specify their requirements unambiguously, they&apos;d probably develop their own software!
			That&apos;s all programming really is: telling the computer exactly what you want and how you want it.
			Programming in lower-level languages such as C isn&apos;t for everyone, but anyone that can specify exactly what they want done and the steps in which it must be accomplished can write in bash, $a[PHP], or Python.
			That&apos;s where we developers come in.
			We take a client&apos;s ambiguous babble and shape it into something concrete and specific.
		</p>
	</blockquote>
</section>
<section id="Unit3">
	<h2>Unit 3</h2>
	<p>
		The reading material for the week is section 2.4 and section 4.2.1 of the textbook, <a href="http://agilemodeling.com/artifacts/useCaseDiagram.htm">UML 2 Use Case Diagrams: An Agile Introduction</a>, and (optionally) the following:
	</p>
	<ul>
		<li>
			<a href="https://sparxsystems.com/uml-tutorial.html">UML Tutorial - UML Unified Modelling Language - Sparx Systems</a>
		</li>
		<li>
			<a href="https://www.ibm.com/developerworks/rational/library/769.html">UML basics: An introduction to the Unified Modeling Language</a>
		</li>
		<li>
			<a href="https://www.ibm.com/developerworks/rational/library/content/RationalEdge/sep04/bell/">UML basics: The class diagram</a>
		</li>
		<li>
			<a href="http://www.uml.org/resource-hub.htm">UML Resources | Unified Modeling Language</a>
		</li>
		<li>
			<a href="http://www.uml.org/what-is-uml.htm">What is UML | Unified Modeling Language</a>
		</li>
	</ul>
	<p>
		The textbook said that entities within a system, called &quot;concepts&quot;, should be grouped into the same package if they communicate with one another.
		If this were the case though, everything should be grouped into the same package, which would effectively make packages meaningless.
		Concepts within the same system will all communicate, whether directly or indirectly.
		Otherwise, they wouldn&apos;t be the same system.
		Let&apos;s say concept A communicates with concept B.
		Concept B communicates with concept C, which in turn communicates with concept D.
		Concept A and concept D do not communicate with each other in any way, shape, or form; at least not directly.
		However, as A and B communicate, they share a package.
		B is in the same package as C for the same reason, and D is put into that package as well because it communicates with C.
		Most systems will be more complex than this, but the point here is very clear: if concepts that communicate with one another are packaged together, <strong>*everything*</strong> is packaged together.
	</p>
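The argument above can be sketched as a tiny transitive-grouping computation. This is my own illustration (the concept names A through D are hypothetical, matching the example in the paragraph), not anything from the textbook:

```python
# Group "concepts" into packages by the rule "package together anything
# that communicates". Communication links: A-B, B-C, C-D (A and D never
# talk directly). Transitive grouping still lands them all in one package.

def group_into_packages(concepts, links):
    # Start with each concept in its own package, then merge the two
    # packages joined by each communication link (transitive closure).
    packages = [{c} for c in concepts]
    for a, b in links:
        pa = next(p for p in packages if a in p)
        pb = next(p for p in packages if b in p)
        if pa is not pb:
            pa |= pb
            packages.remove(pb)
    return packages

packages = group_into_packages(
    ["A", "B", "C", "D"],
    [("A", "B"), ("B", "C"), ("C", "D")],
)
print(packages)  # one package containing all four concepts
```

Run on any connected system, the result is always a single package, which is exactly why "communicates with" can't be the grouping criterion.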
	<p>
		The concept of cyclomatic complexity seems strange to me.
		It&apos;s used to measure the complexity of a program, and its method is similar to counting the number of paths through the program.
		It&apos;s not complete in that regard though.
		It&apos;s not possible for it to be.
		The book states that the conditional statement in an iterational continue test is considered to be one branching of the program.
		But is that really a good way to look at it?
		The iterational loop cycles back to its own beginning and tests again.
		Each new iteration is basically a new branch in the flow of the program.
		In many loops that process user input, there&apos;s no limit to how many times the loop may be executed.
		That means there&apos;s no limit to the branching and no limit to the number of paths through the program.
		I can&apos;t help but feel that the author has attempted to simplify a concept for clarity, but has oversimplified it into something that doesn&apos;t actually make any sense at all.
		Honestly though, I&apos;m not surprised, given what we learned about the author in the first unit of the course.
		Whatever metric of complexity measurement you use though, I agree that keeping modules simple is in your best interest.
		It makes testing and maintenance much more feasible.
	</p>
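For what it's worth, the usual formulation of the metric counts decision points in the source, not executions, which is why an unbounded loop contributes only one branch. A minimal sketch of that counting (the sample function and the approximation are my own illustration, not the textbook's definition):

```python
import ast

# Approximate McCabe cyclomatic complexity: 1 + one per decision point
# (if/while/for statements here). A loop's continue-test counts once,
# no matter how many times the loop iterates at run time -- the metric
# measures branch points in the *source*, not execution paths.

SOURCE = """
def read_commands():
    while True:              # 1 decision point
        line = input()
        if line == "quit":   # 1 decision point
            break
"""

def cyclomatic_complexity(source):
    decisions = sum(
        isinstance(node, (ast.If, ast.While, ast.For))
        for node in ast.walk(ast.parse(source))
    )
    return decisions + 1

print(cyclomatic_complexity(SOURCE))  # 3
```

Under this reading, the metric isn't trying (and failing) to count run-time paths; it deliberately ignores iteration counts, which may be the simplification the book glossed over.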
	<p>
		In the discussion this week, a fellow student helped me find the right textbook.
		It seems the scheme on the $a[URI] I was using was incorrect.
		When I link to things (including for the purposes of citation), I normalise the $a[URI] against a few rules for consistency and security.
		If the page at the other end of the normalised $a[URI] doesn&apos;t match the intended one, I go back to the original.
		The security precaution I take is to try switching all instances of the <code>http:</code> scheme with the <code>https:</code> scheme.
		(For consistency, I also remove any <code>www</code> subdomains and <code>index.*</code> file names, when possible.)
		Usually, either it works or it doesn&apos;t.
		However, when I did that with the textbook&apos;s $a[URI], it led to an older copy of the same textbook.
		That&apos;s such an incredibly rare and strange thing to have happen that I didn&apos;t even think to check for it.
		When the <code>https:</code>-scheme $a[URI] led to a textbook that looked like what we need, I assumed it was the right one.
		Once I found out I was probably using the wrong textbook, I didn&apos;t think to go back and try the original $a[URI] either, both because there was no reason to suspect the $a[URI] scheme would affect more than just the security of the transmission and because I don&apos;t remember which $a[URI]s I&apos;ve had to normalise and which have already been provided in a normal format.
		I&apos;ve been doing somewhat poorly in the exams for this course, but now that I have the correct textbook, I should hopefully do a lot better.
		For my own notes (and for future reference), the <a href="http://www.ece.rutgers.edu/~marsic/books/SE/book-SE_marsic.pdf">correct textbook</a> can be found here.
	</p>
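	<p>
		For my own reference, the normalisation rules I describe above can be sketched roughly like this in Python (the function name is made up; my real process isn&apos;t this code). As the textbook mix-up shows, the result still has to be checked by hand against the original page.
	</p>

```python
from urllib.parse import urlsplit, urlunsplit
import posixpath

def normalise(uri):
    """Apply the rules described above: upgrade http: to https:,
    drop a leading 'www.' label, and drop a trailing index.* name."""
    scheme, netloc, path, query, fragment = urlsplit(uri)
    if scheme == 'http':
        scheme = 'https'
    if netloc.startswith('www.'):
        netloc = netloc[len('www.'):]
    head, tail = posixpath.split(path)
    if tail.startswith('index.'):
        # keep the directory, with a trailing slash
        path = head if head.endswith('/') else head + '/'
    return urlunsplit((scheme, netloc, path, query, fragment))

print(normalise('http://www.example.com/docs/index.html'))
# → https://example.com/docs/
```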
	<h3>Discussion post drafts</h3>
	<blockquote>
		<p>
			According to our textbook (Marsic, 2009), use cases can be written as either scenarios or scripts.
			What this tells us is that scenarios are a medium with which to present a particular use case.
		</p>
		<p>
			With that in mind, a use case is used every time you want to convey a type of interaction a user or other external entity might have with your system.
			It&apos;s a case that must be handled; it&apos;s a type of situation you must account for when building the system.
			On the other hand, a scenario is a step-by-step, theoretical interaction between a user (or other external entity) and the system.
			It&apos;s more detailed and more specific than the use case, but doesn&apos;t capture everything that the use case does due to the use case being a bit more broad.
		</p>
		<p>
			I think use cases are what we&apos;re really after.
			Given a use case, we can program the features the system needs for interacting with the user (or other external entity).
			However, scenarios still serve a number of purposes.
			First of all, natural languages such as English are needlessly ambiguous.
			English in particular is notoriously hard to learn, given its combination of complex rules and pervasive ambiguity.
			A scenario or two can help clear up any confusion about a presented use case and what is meant by it.
			Second, non-developers often think in terms of specific interactions instead of broad use cases.
			When soliciting input from the client for which we&apos;re building the system, we can get scenarios from them and work backwards to derive the actual use case that we need.
			Third, scenarios can highlight specific problem areas; corner cases and security-related issues, for example.
			These things can be vital to account for, but aren&apos;t always captured well in a general use case.
		</p>
		<h4>Example 0</h4>
		<p>
			Let&apos;s say we have a use case and scenario about a project-management application.
			This software&apos;s purpose is to help manage code written by various members of a team.
		</p>
		<p>
			A use case for this could be that one of the project leaders needs to be able to merge changes written by different team members into the same source tree, thus unifying the different versions of the project.
		</p>
		<p>
			A scenario would instead give a detailed step-by-step situation in which the leader might interact with that feature, potentially introducing a problem and how it can be handled.
			The leader needs to be able to look at the different versions for review, audit the code to see that it looks fine, run tests on the code, et cetera.
			From there, the leader would need to click some button or run some command to tell the software it&apos;s fine to merge changes in the different versions into the main code repository.
			However, the leader didn&apos;t notice that two developers edited the code in differing ways.
			The software has no way to know which version to keep!
			Even if it chose one, it could be the case that either choice would break the code, as other code merged at that time from both developers depended on their respective changes to that one code fragment.
			From here, we say that our software halts the merge and presents the code difference to the leader.
			The leader can either cancel the merge altogether, tell the software which version to use, or write their own third version to use instead that will work with all the merged changes.
		</p>
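		<p>
			The halt-on-conflict behaviour in this scenario can be sketched as a tiny three-way merge check (names are hypothetical, and real version-control tools are far more involved):
		</p>

```python
class ConflictError(Exception):
    """Both developers changed the same fragment in different ways;
    the merge halts and the leader must pick (or write) a version."""

def merge_fragment(base, ours, theirs):
    """Three-way merge of one code fragment: take whichever side
    changed it; if both sides changed it differently, it's a conflict."""
    if ours == theirs:
        return ours
    if ours == base:
        return theirs
    if theirs == base:
        return ours
    raise ConflictError((ours, theirs))

print(merge_fragment('x = 1', 'x = 2', 'x = 1'))  # → x = 2
```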
		<h4>Example 1</h4>
		<p>
			In another situation, we might be developing an automated package-tracking system.
			This system keeps track of what workers take what packages from the package-sorting facility out for delivery, as well as what new packages come in.
			Using chips of some sort in the package labels, the workers don&apos;t have to scan anything in or out; they simply walk through the door with their $a[ID] badge (also chipped) and several packages, and sensors in the doorway take the required information.
		</p>
		<p>
			One use case for this would be that multiple employees try to exit through the same door at the same time with at least one package between them.
			The system shouldn&apos;t guess at who has the package(s); we need to be certain.
			Due to the size of some packages, an employee might need help carrying one to the delivery truck, so we can&apos;t prohibit multiple employees from exiting together with a package.
			Likewise, for security reasons, the employees shouldn&apos;t set down their $a[ID] badges to bypass the problem.
			Somehow, the employees must be alerted to the problem and be able to provide an answer of who is responsible for the package to the system.
		</p>
		<p>
			A scenario for this might be as follows.
			A pair of employees is tasked with delivering a large, fragile package to a destination.
			Due to the awkward shape of the package, using a hand truck isn&apos;t a feasible option, so the two will be carrying it to the truck, then going together to the destination to unload it.
			However, only one employee can be held responsible for the package by the system, as the employees don&apos;t typically go out for delivery in pairs; this particular delivery just represents an odd corner case.
			The two try to walk out the door with the package, an audio alert goes off, and the two set down the package.
			One is designated as the responsible employee, and they scan their badge at the terminal at the door.
			The ambiguity is resolved, and the system lets them continue out with the package together.
		</p>
		<div style="APA_references">
			<h4>References:</h4>
			<p>
				Marsic, I. (2009, June 27). Software Engineering. Retrieved from <a href="https://www.ece.rutgers.edu/~marsic/books/SE/book-SE_marsic.pdf"><code>https://www.ece.rutgers.edu/~marsic/books/SE/book-SE_marsic.pdf</code></a>
			</p>
		</div>
	</blockquote>
	<blockquote>
		<p>
			Thank you so much for clearing that up for me!
			Usually, switching from <code>http:</code> to <code>https:</code> results in either a vastly different page (or no page) or the same exact page, so I switch everything I can to <code>https:</code> automatically for security reasons.
			It&apos;s not often that something like this occurs, where a different file is presented, but the file is so similar that it can slip by unnoticed.
			I&apos;ve downloaded the newer version of the book now, and will correct the problem in my notes by tonight.
		</p>
	</blockquote>
	<blockquote>
		<p>
			That&apos;s a very good way to put it.
			A scenario is just a detailed walk-through of one possible path through a particular use case.
		</p>
		<p>
			The book does tend to use some terms interchangeably ... but then again, it also uses phrases such as &quot;$a[ATM] machine&quot; (automatic teller <strong>*machine machine*</strong>) and &quot;$a[PIN] number&quot; (personal identification <strong>*number number*</strong>), which hints that the author might not know what they&apos;re talking about all the time.
			I&apos;m taking everything they say with a grain of salt and consulting other sources when things seem strange.
		</p>
	</blockquote>
	<blockquote>
		<p>
			Due to an odd Web server configuration and my own tendency to normalise $a[URI]s that I plan to cite (to make sure my submitted work is as clean as it can be), I haven&apos;t had a copy of the correct textbook until today.
			I was a bit lost as the book I had wasn&apos;t explaining things that we&apos;re covering in class, but that quote you used assures me I was on the right track in the discussion this week.
			A scenario is a specific sequence, while a use case is a general task we want to have performed.
		</p>
	</blockquote>
	<blockquote>
		<p>
			Hmm.
			It looks like you and I had opposite takes on the differences between use cases and scenarios.
			I interpreted the term &quot;use case&quot; to mean the broad, unspecific goals the system would help with, similar to the layman&apos;s definition of the term.
			Meanwhile, I interpreted a &quot;scenario&quot; to be the one with a step-by-step walk-through of a particular instance of use.
		</p>
	</blockquote>
</section>
<section id="Unit4">
	<h2>Unit 4</h2>
	<p>
		The reading assignment for the week is sections 2.5 and 2.6 from the textbook, as well as the following:
	</p>
	<ul>
		<li>
			<a href="https://csis.pace.edu/~marchese/CS389/L8/DomainModel-UML_short.pdf">Unified Modeling Language (UML) for OO domain analysis - DomainModel-UML_short.pdf</a>
		</li>
		<li>
			<a href="https://www.cs.cmu.edu/~charlie/courses/15-214/2015-fall/slides/03b-assigning-responsibilities.pdf">Microsoft PowerPoint - 03b-assigning-responsibilities.pptx - 03b-assigning-responsibilities.pdf</a>
		</li>
		<li>
			<a href="http://aptprocess.com/whitepapers/DomainModelling.pdf">Domain Modelling - DomainModelling.pdf</a>
		</li>
	</ul>
	<p>
		I&apos;ll never understand why slide show presentations are assigned as reading material.
		They&apos;re ... incomplete.
		Without the context of a lecture, much of the information they intend to convey is just lost.
		It&apos;s possible to glean <strong>*some*</strong> information from them, but reading a slide show presentation without the lecture to accompany it is like trying to learn from someone else&apos;s notes that were never intended for you.
		They jotted down bullet points about various items to spark memories about bigger ideas, but since you don&apos;t have those memories, all you get is the fragments that you see; fragments that don&apos;t always make sense out of context.
	</p>
	<p>
		The quote at the beginning of section 2.5, <q cite="http://www.ece.rutgers.edu/~marsic/books/SE/book-SE_marsic.pdf">I am never content until I have constructed a mechanical model of the subject I am studying. If I succeed in making one, I understand; otherwise I do not.</q>, really hits home for me.
		It also pretty well sums up my attitude about my studies in my other course, <span title="Computer Systems">CS 1104</span>, this week.
		We&apos;re learning about data flip-flop gates, and the book tells us that they can be built out of basic nand gates (the building block we&apos;re using for everything else in the course), but never tells us how to do it.
		I could not at all grasp the concept of building sequential gates out of non-sequential gates, and that was throwing off my ability to grasp data flip-flop gates.
		It was only after looking up how to build data flip-flop gates, constructing my own virtual model of one, and tinkering with the parts of said model to understand how they fit together that I truly began to understand how data flip-flop gates work and why.
	</p>
	<p>
		The text of section 2.5 seemed very familiar to the point that I was convinced I&apos;d already read at least part of it word for word.
		Had we already been assigned this section last week?
		Checking back, no, no we had not.
		Next I tried comparing the old book I was working with up to this point with the new book.
		Sure enough, therein lay the problem.
		In the old book, section 2.4 is &quot;Analysis: Building the Domain Model&quot;.
		In the new book, section 2.5 is &quot;Analysis: Building the Domain Model&quot; instead.
		It&apos;s no wonder I&apos;ve been so lost these past few weeks!
		I wasn&apos;t just reading outdated information; I was reading the <strong>*entirely wrong*</strong> information!
		The new book has many more pages than the old one, so it&apos;s likely sections were renumbered because new sections were interjected between the old ones.
		With the correct textbook in hand, I should do <strong>*so*</strong> much better in class.
		I don&apos;t have time this week to go back and read the correct assignments from past weeks.
		Among other things that have eaten my time, oral surgery put me out of commission for a full day: I wasn&apos;t able to concentrate on much, I had to go shopping for food I could eat without chewing, and the surgery itself took hours.
		I need to go back when time allows though.
	</p>
	<p>
		Because I already read what would later become section 2.5 last week, I don&apos;t really have any new notes on that section.
		I&apos;ll reiterate that putting concepts that interact into the same package makes no sense though.
		The entire system will end up in the same package if every communication channel results in the communicating components being grouped.
		Instead, components that are likely to always be needed together in any system in which they are used should be packaged together.
		When the need for one implies the need for the other, in both directions, that is when a package should be formed.
		Likewise, similar concepts can be grouped together into a package, even when they&apos;re often used separately, such as a maths package that contains both the functionality of finding logarithms and the pi constant.
		Logarithms and pi aren&apos;t commonly used together, but they&apos;re both used in mathematical computation, so they often get packaged together along with a bunch of other components.
	</p>
	<p>
		The book mentions a &quot;does it need to be mentioned&quot; test, but whether something needs to be mentioned is entirely dependent on your level of familiarity with the problem your software will need to solve.
		You need to take into account what your team knows about the domain.
		What doesn&apos;t need to be mentioned in one team&apos;s model may very well need to be mentioned in another team&apos;s.
		A great example of this is the <code>numOfKeys</code> attribute the book mentions.
		The book says this attribute doesn&apos;t need to be mentioned, as it&apos;s &quot;self-evident&quot; that we need to know the number of keys.
		... but why?
		Why do we need to know the number of keys?
		As far as I&apos;m concerned, all we need to know is if a key provided by the user matches one of the keys we have on hand.
		We don&apos;t at all need to know the number of keys to validate a key against the key list.
		Furthermore, if we did want to know the number of keys, we could have the system count them.
		There&apos;s no need for the <code>numOfKeys</code> attribute.
		Or to be more accurate, if we do need this attribute for some reason, it&apos;s probably worth mentioning unless we&apos;re working in a team that generally agrees that this sort of thing is always needed; it&apos;s not a self-evident attribute.
		As a side note, I&apos;d be interested to know what the author would use this attribute <strong>*for*</strong>.
	</p>
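	<p>
		A quick sketch of the point, in Python rather than the book&apos;s notation (the class and attribute names are mine): validating a key only needs a membership test, and a count, if it&apos;s ever wanted, can simply be derived.
	</p>

```python
class KeyChecker:
    """Validates keys against a known set; no stored count needed."""

    def __init__(self, valid_keys):
        self._keys = set(valid_keys)

    def is_valid(self, key):
        return key in self._keys      # no count involved at all

    @property
    def num_of_keys(self):
        return len(self._keys)        # derivable on demand

checker = KeyChecker(['1234', '9999'])
print(checker.is_valid('1234'))  # → True
print(checker.num_of_keys)       # → 2
```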
	<p>
		An interesting point is made about documenting alternative solutions that were decided against and why.
		It makes a lot of sense to do that, as when you leave the project, others that pick it up may question the design of the project and refactor it.
		They may very well find that you were right to do it your way and that they&apos;ve wasted a lot of time and effort.
		If you document why you didn&apos;t choose other options, they may better understand the potential pitfalls, so if they decide to refactor anyway, they&apos;ll have already worked out solutions to those problems or deemed them an acceptable trade-off.
		They&apos;ll also know which options you considered and which you hadn&apos;t thought of; they might come up with something even better!
	</p>
	<p>
		One student effectively didn&apos;t submit any work this week.
		I went to grade their submission, and it said &quot;See attached sheet! Thanks!&quot;
		That was all.
		There was no attached sheet.
		It was kind of sad, as they must&apos;ve put at least some effort in, but because they failed to attach their work, they&apos;ll no doubt be getting a zero from all their graders this week.
	</p>
	<p>
		I thought I&apos;d do better in class now that I have the right textbook, but it seems I&apos;m still not cut out for this.
		I scored pretty low in the exam this week.
		I&apos;ll need to find a way to fit even more study time into my already-packed schedule.
	</p>
	<h3>Discussion post drafts</h3>
	<blockquote>
		<p>
			As the book says, systems interact with their environment (Marsic, 2012).
			They don&apos;t operate within a vacuum.
			Unless we take into consideration the environment in which they&apos;ll be used, we can&apos;t begin to understand what kind of input they&apos;ll likely receive.
			We also need to be able to consider what kind of data our system can <strong>*ask*</strong> for.
			What do we have at our disposal for accomplishing the task at hand?
		</p>
		<h4>Example 0</h4>
		<p>
			Let&apos;s say we have an inventory-management system for a department store.
			A user wants to know if an item is currently in stock.
		</p>
		<p>
			If the user is a potential customer accessing our system over the Web, certain considerations will need to be taken into account.
			First of all, potential customers looking for a product are unlikely to know the product code that uniquely identifies it.
			If our system can only perform lookups based on product code, our system will be pretty much unusable.
			If we don&apos;t have a product name search option, the potential customer won&apos;t get far with the system.
			Second, while we might want to make it known that we do or do not have a given product in stock (so the customer doesn&apos;t waste time coming to the store to find we don&apos;t have what they need), we likely don&apos;t want them to know <strong>*how many*</strong> we have; that might be sensitive information.
			If we&apos;re a chain store, we might want to make it known which local stores have the product they want as well.
		</p>
		<p>
			On the other hand, if the context of the system is that it is to be accessed by personnel, we&apos;ll have a very different use case on our hands.
			First of all, a product name search is time-consuming.
			The employee will have a lot to check over, and they likely have the product code on hand.
			Second, we want to reduce the amount of ambiguity.
			A product name search works for customers because they know what they&apos;re after and can see which product in the search results is what they want, or even if the product doesn&apos;t show up.
			An employee is not going to be intimately familiar with every product they need to look up, so if searching by name, they&apos;re not likely to notice if the product they need is missing if a similar product shows up on the list.
			Again, a product code search is vital here to eliminate that ambiguity and ensure we return the exact product needed.
			Third, we&apos;re going to want to return the number of the product in stock for inventory purposes, such as ordering more.
			And finally, the employee likely doesn&apos;t care if nearby store locations have the product; they only care about the store they work in.
		</p>
		<p>
			Knowing the context in which the system will be used is vital for knowing what information can be expected from the user and what information the system should provide in return.
			If we don&apos;t know the context, we could make unreasonable demands of our users, leak sensitive information to them, fail to provide them with enough information, or fail to provide them with a reasonable way to convey their query to the system.
		</p>
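		<p>
			The two contexts in this example could be sketched as two different query paths over the same inventory (the product codes, names, and quantities here are invented for illustration):
		</p>

```python
# Hypothetical inventory: product code → (name, quantity on hand).
INVENTORY = {
    'A100': ('garden hose', 7),
    'A101': ('garden trowel', 0),
}

def customer_search(name_fragment):
    """Customer-facing: fuzzy name search; expose only in/out of
    stock, never the sensitive exact count."""
    return {code: ('in stock' if qty > 0 else 'out of stock')
            for code, (name, qty) in INVENTORY.items()
            if name_fragment.lower() in name.lower()}

def employee_lookup(code):
    """Staff-facing: exact code lookup; expose the count for
    inventory purposes such as reordering."""
    name, qty = INVENTORY[code]
    return {'name': name, 'quantity': qty}

print(customer_search('garden'))
print(employee_lookup('A101'))  # → {'name': 'garden trowel', 'quantity': 0}
```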
		<h4>Example 1</h4>
		<p>
			At a library, we need to keep track of which books we own and which books we currently have on the shelves.
		</p>
		<p>
			If the context of our system is that it&apos;s used by library patrons, it&apos;ll need to allow access to information about what books the library owns, but also which ones are currently checked in.
			It&apos;s not enough to just tell which books are currently available.
			The patron will want to know if they&apos;ll have access to the book after it gets checked in, so we need to show which books are checked out, when they&apos;re due back, and how long the waiting list for said books is.
			However, we will <strong>*not*</strong> want to provide any information about <strong>*who*</strong> currently has the books checked out, as that&apos;s private information.
		</p>
		<p>
			However, if the context is that it&apos;ll be used by librarians, we additionally want to provide information about borrowers.
			We&apos;ll need to display the names of the borrowers and their email addresses, so the librarians can contact them if they fail to bring the books back before they&apos;re due.
			We might even want to display the postal addresses of borrowers so overdue notices can be sent via post as well.
		</p>
		<p>
			Again, the key here is that by knowing the context, we have a better idea of the level of information we should provide to the user.
			Both providing too much and providing too little information would be a problematic error.
			In one case, the system leaks sensitive information, and in the other, it makes work difficult for the librarians.
			That said, it&apos;s likely such a system will be used in multiple contexts: both by patrons and librarians.
			In such a case, we need to take all necessary contexts into account and provide a way for the system to distinguish between them.
			(For example, perhaps restricted information such as patron names, email addresses, and postal addresses can only be accessed by logging into the system with a librarian&apos;s credentials.)
		</p>
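		<p>
			That credential check might be sketched as a role-based view over a single loan record (the field names and data are hypothetical):
		</p>

```python
# One loan record, two views: patrons see availability information,
# librarians additionally see borrower contact details.
LOAN = {
    'title': 'The Mythical Man-Month',
    'due_back': '2017-12-01',
    'waiting_list': 3,
    'borrower_name': 'J. Doe',              # private
    'borrower_email': 'jdoe@example.com',   # private
}
PRIVATE_FIELDS = {'borrower_name', 'borrower_email'}

def view(record, role):
    if role == 'librarian':
        return dict(record)  # full record, contact info included
    # Patrons never see borrower identity.
    return {k: v for k, v in record.items() if k not in PRIVATE_FIELDS}

print(sorted(view(LOAN, 'patron')))
# → ['due_back', 'title', 'waiting_list']
```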
		<div style="APA_references">
			<h4>References:</h4>
			<p>
				Marsic, I. (2012, September 10). Software Engineering. Retrieved from <a href="https://www.ece.rutgers.edu/~marsic/books/SE/book-SE_marsic.pdf"><code>https://www.ece.rutgers.edu/~marsic/books/SE/book-SE_marsic.pdf</code></a>
			</p>
		</div>
	</blockquote>
	<blockquote>
		<p>
			My examples both presented the same errors, I&apos;ll give you that, but there were several errors shown.
			First, leaking of sensitive data is an issue.
			Second, failure to show data needed to perform one&apos;s job.
			Third, making unreasonable (given the context) demands from users.
			And fourth, asking for input in a format not easily usable by the user.
		</p>
	</blockquote>
	<blockquote>
		<p>
			You make a good point about a system not functioning the same in practice.
			My workplace is a franchise location; our owner therefore has to take orders from the corporate company sometimes.
			Corporate recently mandated that we install a new computerised system that was to replace our old pen-and-paper system.
			Our owner didn&apos;t want to do it, but did as they were told and had it installed anyway; a hefty out-of-pocket expense, I&apos;m sure.
			Anyway, the computerised system has a number of both quirks and outright glitches.
			Even the parts that work as they&apos;re supposed to don&apos;t take into account the reality of the store.
			As such, it&apos;s not able to meet our needs most of the time.
			We&apos;ve had to resort to using our old pen-and-paper system to get any work done.
			As corporate demands that we use the new system, we&apos;re having to use the new system and old system in parallel, which obviously slows us down considerably.
			The system could mostly be a good thing if only it had been modelled to work in an actual store and not an idealised environment where everything always runs the way corporate imagines it does.
		</p>
	</blockquote>
	<blockquote>
		<p>
			Unexpected input can be problematic.
			Normally, when I build a Web application, I sanitise user input in two places.
			First, I sanitise it before adding it to the database.
			This prevents special characters from disrupting the data stored there.
			Second, I sanitise it after pulling it out of the database, before outputting it as $a[XHTML].
			This prevents users from engaging in page markup manipulation.
		</p>
		<p>
			One time though, I built a Web application for my mother&apos;s private use.
			She&apos;s not going to try to bug up her own tool with bad input, right?
			And even if she wanted to for some reason, she&apos;s really not that tech-savvy.
			She wouldn&apos;t know how.
			So I didn&apos;t even think to perform this second layer of sanitisation.
			At one point though, she input a quotation mark.
			A simple, seemingly-harmless quotation mark.
			But quotation marks are one of the five characters with special meaning in $a[XML].
			It broke the output so it wouldn&apos;t render, and she had to have me come fix the bug.
			It&apos;s not the most extreme example, but knowing what sort of input to expect can be vital.
			Also, always sanitise your input, even from non-malicious users.
			They might put something in there that seems benign, but can actually break stuff.
		</p>
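		<p>
			The fix is the precaution I describe: escape the special characters before output. My mother&apos;s tool is $a[PHP], but the same idea sketched in Python looks like this:
		</p>

```python
import html

raw = 'She wrote "hello" in the box'
# Unescaped, the quotation marks would terminate the XML attribute
# value early and break the page, exactly the bug described above.
attribute = '<input value="%s"/>' % html.escape(raw, quote=True)
print(attribute)
# → <input value="She wrote &quot;hello&quot; in the box"/>
```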
	</blockquote>
	<blockquote>
		<p>
			It pays to get it right the first time.
			A lot of time and effort can be wasted on speculation.
			By modelling the context the software will be used in, you can verify your thoughts with the client before getting started.
			If something doesn&apos;t look right, the client can let you know before you start building the wrong thing for them.
		</p>
	</blockquote>
</section>
<section id="Unit5">
	<h2>Unit 5</h2>
	<p>
		The introduction to the unit this week mentions that a bad-intentioned programmer can introduce malicious features, either for personal gain or for revenge.
		(This later turned out to be an excerpt from the textbook.)
		I like that this is brought up and recognized.
		One such common malicious feature is $a[DRM].
		Programmers (either on their own or at the request of a boss/client) introduce user-impeding features that make using the product a pain.
		Oftentimes, $a[DRM] does nothing to fight so-called &quot;piracy&quot;, which is what $a[DRM] is supposedly intended to prevent.
		Those who download copies of software illegally usually have no problem getting past $a[DRM].
		Instead, it&apos;s the <strong>*legal*</strong> users that are usually burdened.
		For example, Apple&apos;s music store uses $a[DRM] to ensure that music purchased there can only be used on a certain number of devices.
		If you replace your device too many times (either because your devices broke or you wanted something newer), you have to buy your music again!
		Keurig uses $a[DRM] without any guise of anti-piracy.
		They lock their coffee makers down to prevent users from using third-party coffee mix.
		You have to buy Keurig-manufactured or Keurig-licensed coffee pods to use your own coffee maker if you bought it from that company!
		Of course, loads of other malicious features exist, such as the spyware built into newer versions of Microsoft Windows, but $a[DRM] is one of the big ones that most people don&apos;t even seem to notice is malicious.
	</p>
	<p>
		The reading assignment for the week was sections 2.7, 3.1, 3.3, and 3.4 of the textbook, along with the following:
	</p>
	<ul>
		<li>
			<a href="https://en.wikipedia.org/wiki/Software_testing#The_box_approach">The box approach</a>
		</li>
		<li>
			<a href="https://people.csail.mit.edu/dnj/teaching/6898/lecture-notes/session8/slides/mj-problem-frames.pdf">MIT2002 - mj-problem-frames.pdf</a>
		</li>
		<li>
			<a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.108.8841&amp;rep=rep1&amp;type=pdf">C:\\Documents and Settings\\Michael\\MAJ Data\\Writings\\WIP\\PFrame7.prn.pdf - download</a>
		</li>
		<li>
			<a href="http://www2.sas.com/proceedings/sugi30/141-30.pdf">141-30: Software Testing Fundamentals—Concepts, Roles, and Terminology - 141-30.pdf</a>
		</li>
	</ul>
	<p>
		Test-driven development, covered by our textbook this week, is something that I&apos;ve tried to do with my include.d library for $a[PHP].
		However, it&apos;s not something I keep at as much as I&apos;d like to.
		The problem is that I like to have at least a little working code before I start trying to test it.
		Additionally, I don&apos;t always know exactly what the finished product&apos;s $a[API] will be until later.
		Oftentimes, I change the $a[API] repeatedly as I take more and more into account.
		I also try to do regression testing, and I keep a suite of test code in a separate repository.
	</p>
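	<p>
		The regression suite I keep amounts to something like this sketch (the function and the old bug are invented for illustration): once a bug is fixed, a test pinning the correct behaviour stays in the suite so the bug can&apos;t quietly return.
	</p>

```python
import unittest

def slugify(title):
    """Hypothetical library function under test."""
    return '-'.join(title.lower().split())

class RegressionTests(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify('Hello World'), 'hello-world')

    def test_multiple_spaces(self):
        # Regression test: suppose double spaces once produced
        # 'hello--world'; this pins the corrected behaviour.
        self.assertEqual(slugify('Hello  World'), 'hello-world')

unittest.main(argv=['regression'], exit=False)
```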
	<p>
		Boundary testing, including corner case testing, is something I&apos;ve always considered a part of the bare minimum requirement for making sure code works.
		It&apos;s something I do as I go along, often throwing out the test code later because it doesn&apos;t fit with the $a[API] once the $a[API] is established.
		It&apos;s also something I do when I see a system that isn&apos;t my own that I think might be screwy.
		I poke.
		I prod.
		I throw data at random systems such as dynamic websites just to see if they might&apos;ve been coded correctly.
	</p>
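	<p>
		Boundary testing in that spirit looks something like this (a hypothetical tiered-discount function): the interesting inputs sit exactly on, just below, and just above each threshold, plus the zero corner case.
	</p>

```python
def discount(total):
    """Hypothetical tiered discount: 10% at 100+, 5% at 50+."""
    if total >= 100:
        return 0.10
    if total >= 50:
        return 0.05
    return 0.0

# Boundary values: on, just below, and just above each threshold.
cases = {0: 0.0, 49: 0.0, 50: 0.05, 99: 0.05, 100: 0.10}
for total, expected in cases.items():
    assert discount(total) == expected, total
print('all boundary cases pass')
```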
	<p>
		The &quot;sandwich&quot; integration testing method presented by the textbook seems ... really dumb, to say the least.
		Top-down is a reasonable way to work, as is bottom-up.
		Bottom-up seems like the most logical direction to move, but I can easily see arguments made for a top-down approach.
		Working from both directions at once though ...
		What&apos;s the point?
		You have your known-working components separated; the two sides may each work fine, but when they meet in the middle, you may find you did something wrong and have to go back and hunt down the problem.
	</p>
	<p>
		At first, when I got to Section 2.7.6, I thought the author had gone bonkers for a bit.
		They were talking about using polymorphism as a substitute for conditional statements.
		That&apos;s a completely ludicrous idea, and unlike what the textbook says, makes code more difficult to follow.
		As I read on, I found that&apos;s not what the author actually means though.
		They&apos;re not talking about using polymorphism to take advantage of the fact that different data types need to respond differently.
		If you have any clue what you&apos;re doing, these are situations in which you&apos;d never even <strong>*think*</strong> to use conditional statements.
		In short, the message is to keep data-type-specific logic in the classes where it belongs instead of moving it outside the classes.
		Yes, if the logic is moved outside the classes, you&apos;re going to need particularly ugly conditionals.
		So don&apos;t do that.
		Also, as the author mentions, keeping the class-specific logic in the classes where it belongs instead of in outside conditionals keeps that logic all in one place, making it that much easier to add new data types as new classes.
	</p>
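	<p>
		In other words (my own sketch, not the book&apos;s example): each class keeps its own logic, the calling code never type-switches, and adding a new data type means adding one class rather than editing every conditional.
	</p>

```python
import math

class Circle:
    def __init__(self, r):
        self.r = r
    def area(self):
        return math.pi * self.r ** 2

class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

# Polymorphic dispatch: no isinstance() checks anywhere outside
# the classes; each object knows its own area logic.
shapes = [Circle(1), Square(3)]
total = sum(s.area() for s in shapes)
print(round(total, 2))  # → 12.14
```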
	<p>
		I tried a bit of fiddling when I read the part with the $a[DVD] player example.
		I tried to refactor the &quot;playing&quot; state into one of the other state variables to eliminate the invalid states.
		That obviously doesn&apos;t work.
		There were five valid states, so if one state variable keeps two sub-states and the other keeps three, there&apos;s a total of six states; one has to be invalid.
	</p>
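	<p>
		The arithmetic is easy to check by enumerating the combinations (the state names here are my own guesses at a $a[DVD] player&apos;s sub-states, not the book&apos;s exact ones):
	</p>

```python
from itertools import product

# Two state variables with 2 and 3 sub-states yield 2 × 3 = 6
# combinations; with only five valid states, at least one
# combination is necessarily invalid.
power = ['on', 'off']
transport = ['stopped', 'playing', 'paused']
combinations = list(product(power, transport))
print(len(combinations))  # → 6
VALID_STATES = 5          # per the book's example
print(len(combinations) - VALID_STATES)  # → 1 invalid combination
```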
	<p>
		The book also shows that &quot;2(Counter)&quot; looks odd in its adopted notation for representing the state of object properties.
		The unstated reason it looks odd, no doubt, is that the notation looks like a function call, which makes the digit <code>2</code> the name of a function.
		The book&apos;s recommendation is to instead, as a special case, write it as &quot;Equals(Counter, 2)&quot;.
		That&apos;s a terrible idea.
		First of all, exceptions like this make your notation inconsistent.
		Consistency is key for expressing your ideas and eliminating unnecessary ambiguity.
		It also makes reading and understanding of stated ideas much quicker and easier.
		Even understanding what the &quot;Equals()&quot; notation meant, my reading speed dropped to about a fifth of what it&apos;d just been every time I hit an instance of the notation, speeding back up after I was past it.
		If your notation is causing slow reading like this, it&apos;s a pretty good sign it&apos;s terrible.
		This is especially true when only certain characteristics of the notation are causing the bottleneck in reading.
		Second, oddities like this point toward flaws in your system; in this case, a flaw in the notation system.
		The name isn&apos;t &quot;2&quot;, that&apos;s the data.
		The data isn&apos;t &quot;Counter&quot;, that&apos;s the name.
		Therefore, it should be written as &quot;Counter(2)&quot;, not &quot;2(Counter)&quot;.
		That same logic applies to the other state variables.
		It&apos;s not &quot;On(PowerButton)&quot; (even though that&apos;s plenty easy to read), it&apos;s &quot;PowerButton(On)&quot; (which, I might add, is still easy to read; in my opinion, even easier).
		This provides not only consistency and state names that look reasonable, but also a more sensible way to look at the data.
	</p>
	<p>
		The book started using a strange character in comparing states before ever explaining its use of the character.
		I tried to copy the character to find out more about it, and found it showed up differently in my clipboard.
		As it turns out, the character is in the &quot;private use&quot; section of the Unicode table.
		In other words, it&apos;s not a standardised Unicode character; it uses a code point set aside for application-specific purposes such as this.
		That means though that I can&apos;t be sure I&apos;m even seeing it correctly.
		Not only was I unable to search for it, I might even be seeing the wrong character altogether.
		Most computers have only proprietary fonts installed, so most documents are written using proprietary fonts.
		I instead have only free fonts installed, making the fonts available to me largely disjoint from the fonts most documents are written in.
		Standards, people.
		Use the standard Unicode characters, not the private use code points, if at all possible.
		A bit later though, the symbol&apos;s use was explained, but not before I&apos;d tried to find the information on my own.
		New symbols should be explained <strong>*before*</strong> using them, not after.
	</p>
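Whether a mystery character falls in a private-use range can at least be checked programmatically. A small Python sketch (U+E000 is just an arbitrary example code point):

```python
import unicodedata

def is_private_use(ch):
    """Return True if ch sits in one of Unicode's Private Use Areas."""
    cp = ord(ch)
    return (0xE000 <= cp <= 0xF8FF           # Basic Multilingual Plane PUA
            or 0xF0000 <= cp <= 0xFFFFD      # Plane 15
            or 0x100000 <= cp <= 0x10FFFD)   # Plane 16

print(is_private_use("\ue000"))        # a PUA code point
print(unicodedata.category("\ue000"))  # "Co" is the private-use category
```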
	<p>
		I laughed a little when the book said we don&apos;t get to define the given domains.
		It amused me to think of someone trying to decide the state of the world.
		While it&apos;s obvious to <strong>*me*</strong> that such isn&apos;t possible, it is something worth mentioning in a textbook.
		For someone who didn&apos;t get what given domains are, it might help clarify.
	</p>
	<p>
		My main challenge for the week was in remembering the phrase &quot;degenerate triangle&quot;.
		Degenerate triangles represent two different corner cases in the assignment this week.
		They&apos;re very much valid triangles, but they mark a strange situation that I&apos;m guessing most students won&apos;t account for.
		Only three students will be grading my work, but if I can potentially educate them about triangles, I should take the opportunity to do so.
		Furthermore, having the technical term for them allowed me to look up sources to cite, validating their classification as triangles at all, and showing that my testing of them is not only valid, but a mandatory action if we are to make sure the program runs correctly.
		Thankfully, while there&apos;s a lower limit on the number of tests we can propose, there&apos;s no upper limit.
		We can plan our tests to do this right instead of trying to capture as many bugs in as few tests as we can.
		If something seems like it might cause an error, no matter how small the chance, we can test for it.
		All three side lengths will no doubt be handled in different spots in the code.
		That also triples the number of tests we need to test certain situations.
		(A=5, B=5, C=6 might have no error, while A=5, B=6, C=5 might have an error; both need to be tested separately, as well as A=6, B=5, C=5.)
	</p>
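As a sketch of that permutation point, here is a hypothetical Python classifier (not the actual assignment code) run over every ordering of the same side lengths:

```python
from itertools import permutations

def classify(a, b, c):
    """Classify a triple of side lengths (a toy classifier)."""
    if min(a, b, c) <= 0:
        return "invalid"
    x, y, z = sorted((a, b, c))
    if x + y < z:
        return "invalid"
    if x + y == z:
        return "degenerate"   # zero area, but the sides still meet
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# The same logical case must be tested once per side ordering, since
# each side may be handled by a different spot in the code.
for sides in sorted(set(permutations((5, 5, 6)))):
    print(sides, classify(*sides))
```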
	<h3>Discussion post drafts</h3>
	<blockquote>
		<p>
			Boundary testing is part of the bare minimum that I think should be tested, even before having any sort of stable code.
			As soon as you have the lines of code that deal with the boundary, I&apos;d test them before building the rest of the module.
			Sometimes, you just don&apos;t get the boundary condition right, or you accidentally apply the boundary condition in reverse.
			(For example, you might mistakenly check to see if a number is greater than five instead of less than or equal to five.)
		</p>
		<p>
			State testing is another test method I use frequently.
			State testing is great when you&apos;re working with objects, but I do a lot of debugging with it outside that context as well.
			Oftentimes, when I introduce a bug into my code, it&apos;s because I&apos;ve forgotten exactly what a variable or property contains.
			For example, I have a set of scripts I built to compile my webpages for me.
			For technical reasons, the variable that contains the path of the file currently being processed doesn&apos;t have the path in an expected format.
			Putting it in the expected format would take extra processing, which would have to be done for each and every file path; there are hundreds of files, so all that processing adds up.
			Very few spots in the code directly rely on that variable, but when I add a new feature that uses it, I&apos;m likely to mess something up.
			I usually remember to do state testing beforehand, adding scaffolding code at particular spots to find out what the expected state should be, then add the feature and test again.
			When I forget to do that though, I end up with wacky results, and need to do some state testing to find out why the results I&apos;m getting aren&apos;t what I expected.
			State testing was also pretty much the main testing needed for my $a[URI]-parsing modules.
		</p>
		<p>
			Control flow testing is one that I should use a lot more than I do.
			It guarantees that each line of code is run at least once, so it pinpoints most of the trivial bugs introduced either by typographical error or absent-mindedness.
			Back to the example of my website-building code, my scripts for that are designed to crash in case of error.
			Specifically, I use an error handler that outputs a stack trace and halts the scripts, so I don&apos;t fail to notice that an error occurred and needs fixing.
			In user-facing code, this would be a poor design decision, but I&apos;m the only one that sees the output, so it works well in this case.
			When I fail to perform control flow testing after adding code to the library on which my website-building code depends, I often later end up causing a crash in the website-building scripts due to some obscure corner case I hadn&apos;t tested at the time.
			I&apos;d accounted for the condition when I was planning, and I wrote up code to deal with it, but since I hadn&apos;t tested that particular path through the program, I didn&apos;t notice that (for example) I tried to reference some variable that would never be defined on that path through the function.
			Had I used edge testing when initially testing the new (or modified) function, I&apos;d avoid later difficulties.
		</p>
	</blockquote>
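The reversed-boundary mistake described in the draft above is easy to demonstrate with a contrived Python sketch (the threshold of five comes from the example):

```python
def qualifies_buggy(n):
    # Boundary accidentally reversed: this should accept values <= 5.
    return n > 5

def qualifies_fixed(n):
    return n <= 5

# Boundary tests at, just below, and just above the limit
# expose the reversal immediately.
for n in (4, 5, 6):
    print(n, qualifies_buggy(n), qualifies_fixed(n))
```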
	<blockquote>
		<p>
			I think the point was to decide in general, not for a specific use case.
			For a specific use case, yes, it would depend on the situation.
			However, if you were required to choose only three types of tests to run, and apply those same three to <strong>*every*</strong> use case, which three would you choose and why?
			It&apos;s less about having a favourite and more about what you think either applies to most cases or is good practice in general.
		</p>
	</blockquote>
	<blockquote>
		<p>
			You make a good point about error guessing.
			It&apos;s very flexible.
			You get to look at the code and see what looks amiss.
			It requires that the testers have access to the code though.
			Personally, I don&apos;t think code access should ever be restricted, but some companies may not want the testers (or the public) having access to the code.
			That&apos;s their loss though.
			If the testers can read the code, they&apos;re much more likely to spot the problem areas in it.
		</p>
	</blockquote>
	<blockquote>
		<p>
			Boolean-type tests are an easy way to check for certain conditions.
			They don&apos;t work for every situation, but when they do work, the results and their implications are exceedingly clear.
			I also liked your vending machine example for state testing.
			The machine does need to know how much money a user has already input so it can tell which of the differently-priced items are currently acceptable for purchase.
		</p>
	</blockquote>
</section>
<section id="Unit6">
	<h2>Unit 6</h2>
	<p>
		The reading assignment for the week was sections 4.1, 4.2.2, and 4.3 through 4.6, as well as the following:
	</p>
	<ul>
		<li>
			<a href="https://ifs.host.cs.st-andrews.ac.uk/Books/SE7/SampleChapters/ch26.pdf">ch26.pdf</a>
		</li>
		<li>
			<a href="https://mccabe.com/pdf/mccabe-nist235r.pdf">mccabe-nist235r.pdf</a>
		</li>
		<li>
			<a href="https://wiki.c2.com/?CouplingAndCohesion"><em>(no <code>&lt;title/&gt;</code> element)</em></a>
		</li>
		<li>
			<a href="http://literateprogramming.com/mccabe.pdf">mccabe.pdf</a>
		</li>
		<li>
			<a href="http://www.tfzr.uns.ac.rs/emc/emc2011/Files/F%2003.pdf">F 03 - F 03.pdf</a>
		</li>
	</ul>
	<p>
		The book makes an interesting point about ordinal scales.
		Although some people do things such as averaging values measured on such scales, that is not actually a valid operation.
		Points on an ordinal scale are at unknown distances apart; we <strong>*cannot*</strong> assume they are equidistant.
		We can tell which things are greater or lesser than other things, but that is the <strong>*only*</strong> thing we can tell.
	</p>
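To make that concrete: on an ordinal scale, order-based statistics such as the median are meaningful, while the mean is not. A Python sketch with made-up severity labels:

```python
# Bug severities on an ordinal scale: we know the order,
# but not the distances between the points.
SEVERITY = {"low": 1, "medium": 2, "high": 3}

reports = ["low", "high", "high", "medium", "low"]
ranks = sorted(SEVERITY[r] for r in reports)

# Median: valid, since it relies only on ordering.
median = ranks[len(ranks) // 2]
print(median)  # 2, i.e. "medium"

# Mean: NOT valid, because it assumes the points are equidistant;
# a number comes out, but it has no defined meaning on this scale.
mean = sum(ranks) / len(ranks)
print(mean)
```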
	<p>
		When we studied cyclomatic complexity in a previous unit, I was confused as to why a loop should be treated the same as a simple branching.
		Clearly, a loop with a conditional statement is far more complex than a single branching, as the loop is basically an infinite number of branches rolled together.
		In the other course I&apos;m taking at the moment though, we&apos;ve now taken a look at how loops and branches are implemented on a low level.
		They&apos;re the same.
		Exactly the same.
		They&apos;re basically just <code>goto</code> statements attached to conditionals.
		In a loop, you jump back to a previous point if the condition is met.
		On a branching, you jump ahead if the condition isn&apos;t met.
		On a high level, a loop definitely adds more possible paths through a program than a branching does.
		But on a low level, they both add the same level of complexity to the program.
	</p>
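Python itself demonstrates this nicely: both a branch and a loop compile down to conditional jumps. A sketch using the standard <code>dis</code> module (exact opcode names vary between interpreter versions, so the code just checks for jumps of any kind):

```python
import dis

def branch(x):
    if x > 0:
        x = x - 1
    return x

def loop(x):
    while x > 0:
        x = x - 1
    return x

def jump_opcodes(func):
    """Names of the jump instructions a function compiles to."""
    return [i.opname for i in dis.get_instructions(func) if "JUMP" in i.opname]

# Both constructs contain conditional jumps at the bytecode level.
print(jump_opcodes(branch))
print(jump_opcodes(loop))
```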
	<p>
		High cohesion is something I definitely strive for in my code, except in rare cases.
		For example, high cohesion in Minetest mods isn&apos;t really feasible.
		Just about everywhere else though, I do my best, and I kind of beat myself up over it if I can&apos;t quite reach a level of cohesion I can feel satisfied with.
		In particular, I try to make sure each and every class and function is not specific to the project I&apos;m working on at the time, and can be reused logically in other projects.
		My main failure in that regard is a closure I use in my website-compiling scripts.
		I don&apos;t have the time to build the function properly, so I&apos;ve hacked something together that does this one and only job.
		When I have time, I&apos;d like to replace it with either a template-using function or a full template-using class.
		In either case, I should be able to remove all project-specific logic from the code of the function/class and generalise the rest.
		I&apos;m a long way off from finding that time though, and it bothers me a bit every time I need to open that file to update something and I see that code there again.
		I&apos;d say cohesion is probably the main metric with which I measure my code when I&apos;m programming in my native language, $a[PHP].
		Outside of $a[PHP], I haven&apos;t found a language in which modularity can quite be handled in a way that I can agree with.
		This is the one reason I use $a[PHP] so much, for both Web- and non-Web-related projects, despite $a[PHP]&apos;s numerous flaws.
	</p>
	<p>
		The idea of limiting modules to a certain number of lines seems counterproductive though.
		Sure, you keep things modular, I guess, but things that logically belong in the same module often can&apos;t be kept there.
		One of the classes I&apos;m particularly proud of is rather large in file size compared to my other modules.
		It parses $a[URI]s, normalises them, and verifies their syntactic validity.
		From there, it allows the manipulation of $a[URI]s in several ways.
		It only implements the code used for the generic $a[URI] syntax, but other classes extend it to provide scheme-specific features, such as validation of <code>gopher:</code>-scheme $a[URI]s.
		I had to pack a lot of code into that one class, but every line of it is necessary.
		It allows $a[URI]s to be treated as their own data type, without invalid data being accepted.
		Anything less would be like the built-in <code>int</code> data type accepting &quot;45TThn99&quot; as a valid integer.
		Breaking that class into multiple modules would be possible, but it&apos;d require relaxing the strict validity enforcement by requiring attributes be accessible from the outside.
	</p>
	<p>
		Using shared attributes as a way to measure cohesion makes sense in theory, but when the book describes how that can be done, the huge flaw in doing so becomes readily apparent.
		Perfect cohesion, by this definition, comes from all methods accessing all of the same attributes.
		Um.
		What?
		That means you can&apos;t have methods that operate on only part of the data.
		It&apos;s all or nothing.
		If it doesn&apos;t use every bit of data the object has available to it, it doesn&apos;t belong in the class.
		Clearly, this assessment can&apos;t be right.
	</p>
	<p>
		Semantic cohesion is more what I strive for.
		To me, classes represent user-defined data types.
		Any and all methods and attributes must be necessary and useful for that particular data type, or they must be removed.
		I find that when I deviate from this ideal, I always have to go back and fix things later; I&apos;ve always done something wrong.
		I used to try to group related functions into a single class as well, one with no attributes, but that never worked out well.
		Whenever I feel compelled to group functions together like this, there&apos;s some underlying reason for it.
		They operate on similar data, and upon closer inspection, data needs to be passed between these methods by the program that uses them.
		They&apos;re operating on the exact same data instances, and that data should be stored as attributes in an object; it&apos;s a data type in its own right, but it wasn&apos;t being properly represented.
	</p>
	<p>
		The assignment this week was mean.
		We were asked to draw a flowchart representing a doubly-recursive algorithm.
		We did not at all cover how to handle recursion on flowcharts.
		Then we were asked to calculate the cyclomatic complexity of the same algorithm.
		Again, we didn&apos;t cover how to handle recursion when calculating cyclomatic complexity.
		In theory, we can add the complexity of any functions to the overall complexity when the function is called.
		Two of the function calls made within the <code>QUICKSORT()</code> function are to itself though, meaning that to find the complexity of the <code>QUICKSORT()</code> algorithm, we need to know the complexity of the <code>QUICKSORT()</code> algorithm, double it, and add in any other branching.
		So ... Q = 2Q + P, where Q is the complexity of <code>QUICKSORT()</code> and P is the combined complexity of <code>QUICKSORT()</code>&apos;s conditional and the <code>PARTITION()</code> function.
		If P is nonzero, and it is, the only solution is that the <code>QUICKSORT()</code> algorithm is infinitely (or negatively infinitely) complex.
		That can&apos;t be right.
	</p>
	<p>
		I tried to research recursion in flowcharts, but didn&apos;t understand what I was seeing in the examples.
		When that wasn&apos;t working, I tried working backwards.
		I looked up the effects of recursion on cyclomatic complexity.
		If I could understand that well enough, I figured I could build a reasonable representation in the flowchart.
		What I found was helpful, but not in the way I&apos;d hoped.
		Basically, recursion can be rewritten as a loop.
		(This isn&apos;t always actually true; recursion does have some almost-magical use cases in which it&apos;s really the only tool capable of performing a task.
		The basic logic of this still applies though.)
		For a loop, you have the conditional statement, which adds to the cyclomatic complexity.
		In recursion, you instead have your conditional inside a function, causing cases in which the function <strong>*won&apos;t*</strong> call itself.
		This is your base case, and is part of what prevents infinite recursion.
		(The other part being that you always make sure each recursive call&apos;s arguments are closer to the base case than the current call&apos;s are.)
		With this in mind, the logic follows that the recursion itself can be removed from the equation and ignored, as long as the conditional that prevents infinite recursion is <strong>*not*</strong> ignored.
		From there, calculating the cyclomatic complexity is easy.
		So what&apos;s the catch?
		The straightforward flowchart that&apos;d match the complexity calculation doesn&apos;t match the flow of the software.
		Furthermore, I could chart the flow as one connected component or as two (one for each of the two functions).
		Splitting it into two would result in a higher complexity and show the flow less accurately.
		That said, splitting it into two would be more consistent, given the way I&apos;d likely need to treat the recursion.
		It felt wrong to split it, but that&apos;s what I ended up doing.
	</p>
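Treating each recursive call as a plain statement, only the conditionals contribute to the count. A Python sketch of the doubly-recursive quicksort with its decision points marked (my own rendition, not the textbook pseudocode):

```python
def partition(a, lo, hi):
    """Lomuto partition: the loop and its comparison are the decisions."""
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):      # decision point 1
        if a[j] <= pivot:        # decision point 2
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def quicksort(a, lo, hi):
    if lo < hi:                  # the base-case guard: the only decision point
        p = partition(a, lo, hi)
        quicksort(a, lo, p - 1)  # recursive calls counted as plain statements
        quicksort(a, p + 1, hi)

data = [5, 2, 9, 1, 5, 6]
quicksort(data, 0, len(data) - 1)
print(data)  # [1, 2, 5, 5, 6, 9]

# Counting decisions + 1 per function: quicksort has V(G) = 2,
# partition has V(G) = 3.
```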
	<h3>Discussion post drafts</h3>
	<blockquote>
		<p>
			As the textbook states, a program with zero branching and zero loops has zero cyclomatic complexity (Marsic, 2012).
			Does that mean that a program with no branches or loops is infinitely easy to maintain?
			I&apos;m thinking not.
		</p>
		<p>
			There are other things besides the number of branches to consider when determining the level of difficulty in maintaining a project.
			For example, let&apos;s say we have two programs with no branching.
			The first performs task A.
			The second performs the same task A, immediately followed by some task B.
			Neither program branches, so they have the same low cyclomatic complexity, but are the two honestly equally easy to maintain?
			The second program performs more tasks.
			You can&apos;t even claim them to be simpler tasks, as it performs everything the first program does and more.
			Clearly, the second program will be more difficult to maintain, but we have no way to measure that.
		</p>
		<p>
			Humans aren&apos;t systematic or logical, either.
			We&apos;re not going to be able to find a single, measurable metric that corresponds with how easy it is to maintain something.
			The best we can do is strive for several certain ideals that appear to correlate with maintainability, and deviate from those ideals whenever it appears to aid maintainability in specific cases.
			Most hard and fast rules will fail in some cases.
			That said, some rules, such as avoiding the use of <code>goto</code> statements when working in high-level languages, will always work.
			(Avoiding <code>goto</code> statements in machine languages is only possible if we don&apos;t allow our program to branch or loop, so these statements are tolerable in such low-level languages.)
		</p>
		<div style="APA_references">
			<h4>References:</h4>
			<p>
				Marsic, I. (2012, September 10). Software Engineering. Retrieved from <a href="https://www.ece.rutgers.edu/~marsic/books/SE/book-SE_marsic.pdf"><code>https://www.ece.rutgers.edu/~marsic/books/SE/book-SE_marsic.pdf</code></a>
			</p>
		</div>
	</blockquote>
	<blockquote>
		<p>
			You make an excellent point that maintainability cannot directly be measured.
			Cyclomatic complexity can be measured, but if maintainability can&apos;t <strong>*also*</strong> be measured, how are we supposed to be able to draw a definite correlation between the two?
			It doesn&apos;t even make sense to try.
			We can still attempt to look at general trends though.
			A module with low cyclomatic complexity may <strong>*often but not always*</strong> be easy to maintain, for example.
		</p>
		<p>
			Subjectivity is another issue.
			At the heart of the problem, I think the subjectivity of the concept of easy-to-maintain is the reason we can&apos;t measure maintenance ease in the first place.
		</p>
	</blockquote>
	<blockquote>
		<p>
			It&apos;s very true that the complexity of the branching logic isn&apos;t the same as the complexity of the maintenance.
			Attempting to measure one through the other isn&apos;t really going to work out well.
			The branching logic has an effect on the difficulty in maintaining the module (potentially), but it&apos;s not the only factor at play.
			There are so many other things to take into account, such as comment quality, code clarity, and cohesion.
		</p>
	</blockquote>
	<blockquote>
		<p>
			I hadn&apos;t considered that; you&apos;re right.
			Specific people will have an easier time maintaining the same code based on how well they know the code.
			There&apos;s no good metric to measure that, just as there&apos;s no good replacement for those people should they leave the project.
			We should do our best to prevent our projects from being only maintainable by a few specific developers, but that isn&apos;t always a possibility.
			Sometimes complicated code has to be written, and when it does, it&apos;ll be difficult for new recruits to pick up and continue it.
		</p>
	</blockquote>
</section>
<section id="Unit7">
	<h2>Unit 7</h2>
	<p>
		I strongly agree with Kerckhoffs&apos; Principle.
		The security through obscurity model is complete garbage; not only is it a sign of a weak system, it also helps prevent the flaws in the system from being found and corrected.
		There are two reasons I don&apos;t trust software that doesn&apos;t have its source code publicly available.
		The first is that the developer may have intentionally included malicious features, but the second is that code that cannot be publicly seen cannot be publicly audited and checked for vulnerabilities.
		Keys should certainly be kept secret, but the rest of the system should not be.
	</p>
	<p>
		The reading assignment for the week was Sections 5.1, 5.2, and 5.5, as well as the following:
	</p>
	<ul>
		<li>
			<a href="https://en.wikibooks.org/wiki/Introduction_to_Software_Engineering/Architecture/Design_Patterns">Introduction to Software Engineering/Architecture/Design Patterns - Wikibooks, open books for an open world</a>
		</li>
		<li>
			<a href="https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-170-laboratory-in-software-engineering-fall-2005/lecture-notes/lec18.pdf">lec-design-patterns.dvi - lec18.pdf</a>
		</li>
		<li>
			<a href="https://www.ida.liu.se/~chrke55/courses/SWE/bunus/DP02_1slide.pdf">Microsoft PowerPoint - Lecture 02.ppt - DP02_1slide.pdf</a>
		</li>
	</ul>
	<p>
		The book&apos;s tables of contents have been ... bizarre.
		They keep listing section numbers with no titles, some of which actually exist as sections (complete with titles at the sections themselves) and some of which don&apos;t exist at all.
		Given what we learned about the author in <a href="#Unit1">Unit 1</a>, it hasn&apos;t surprised me, though it&apos;s meant I need to go through each section before reading, checking to see if the subsections listed in the reading assignment are all the ones that actually exist.
		(That way, I can list the section without listing all the subsections when recording the assignment.)
		This time though ...
		When I reached Section 5.5.3 in the textbook, it had a title as usual, but it had zero content.
		Likewise, problems 2.2, 2.4 through 2.7, 2.17, 2.18, 2.23 through 2.28, 2.33, and 2.36 have zero content; there are no actual problems to solve there.
		I swear, while the author may have some things to teach on the subject matter, they don&apos;t know what they&apos;re doing when it comes to communication and book-writing.
		They need an editor to help them out a bit.
	</p>
	<p>
		The publisher/subscriber model, if I understand it correctly, is very much like the callback model of a game I used to play back when there was time in my life for games.
		The awesome thing about this game was (and is) that not only is all the source code available so you can see how it works (and even help improve it if you wish), but it also provides a great $a[API] against which to program your own game-altering modules.
		You can customise the game, usually without touching the main source code of the game, and the only limit is yourself.
		Part of this was (and is) accomplished via a function call-back system.
		You create your own function, then pass that function as an argument to a registration function.
		Basically, you&apos;re subscribing your function object to the game&apos;s built-in publisher.
		Not every part of the game&apos;s $a[API] involved such publications and subscriptions, but a great many parts did (and do).
		Because of this subscription-like registration system, a module would be alerted to when certain events occurred in the form of the registered function being called.
		The function could just perform some simple action and exit, but it could also integrate with some larger system and do something more complex.
		In either case, the Minetest engine didn&apos;t (and doesn&apos;t) need to know beforehand the identity of any of the modules.
		Modules can therefore be added and removed at the user&apos;s discretion without any major effort on the user&apos;s part.
		That said, Minetest modules are still very Minetest-specific, so they aren&apos;t really reuseable for other purposes.
		The engine though is frequently reused without modification to provide several different styles and modes of play.
		The very rules of the game sometimes differ between instances based on which modules are in play.
	</p>
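The registration scheme described above can be sketched generically in Python (the function names are hypothetical; this is not the actual Minetest $a[API]):

```python
# A minimal publisher: the engine keeps a list of subscribed callbacks
# and calls each one when the event occurs, without needing to know
# anything about the modules in advance.
_on_dig_callbacks = []

def register_on_dig(callback):
    """A module subscribes its function to the 'dig' event."""
    _on_dig_callbacks.append(callback)

def publish_dig(position, digger):
    """The engine publishes the event to every subscriber."""
    for callback in _on_dig_callbacks:
        callback(position, digger)

# A module's subscription: just a function handed to the registrar.
events = []
register_on_dig(lambda pos, who: events.append((pos, who)))

publish_dig((1, 2, 3), "alex")
print(events)  # [((1, 2, 3), 'alex')]
```

Modules can be added or removed simply by registering or not registering their callbacks; the publisher never changes.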
	<p>
		The command pattern is ... interesting.
		I can see how it would be useful for certain situations.
		It seems a bit complex and convoluted for most situations though.
		Actions aren&apos;t represented by method calls, but instead by objects.
		What I don&apos;t get though is how this makes any sort of rollback easier.
		If the object is holding a copy of the former state in order to roll back if need be, you&apos;re going to end up with a lot of stored copies of what is mostly the same information.
		It can be implemented in a reasonable way by only storing the information that changed, but then rolling back requires first rolling back every change that happened after the change you want to roll back to.
		You need the command objects to hold references to the previous and, especially, the next command.
		I guess this setup would allow you to undo a particular change you&apos;ve made without undoing everything that came after it ... but you could end up with a broken or invalid state in some cases.
		That said, it doesn&apos;t sound like that&apos;s what the book means.
		The book talks about backing down the chain of actions as each is redone, and I&apos;m not sure how command objects would help with this any more than simply using the singular command history object to perform that task.
	</p>
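A minimal Python sketch of command objects that store only the changed information and undo in reverse order (my own illustration, not the book&apos;s example):

```python
class SetValue:
    """A command object: knows how to do and undo one change,
    remembering only the single value it overwrote."""
    def __init__(self, target, key, new):
        self.target, self.key, self.new = target, key, new

    def execute(self):
        self.old = self.target.get(self.key)
        self.target[self.key] = self.new

    def undo(self):
        if self.old is None:
            del self.target[self.key]
        else:
            self.target[self.key] = self.old

document = {}
history = []

for command in (SetValue(document, "title", "draft"),
                SetValue(document, "title", "final")):
    command.execute()
    history.append(command)

print(document)  # {'title': 'final'}

# Rolling back a change requires first undoing every later change.
while history:
    history.pop().undo()
print(document)  # {}
```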
	<p>
		The decorator design principle seems useful in some contexts as well, though those contexts are limited by the language itself.
		Back to my Minetest example, part of the $a[API] was exposed in the form of function objects that could be outright replaced.
		If you replaced this internal functionality though, you usually wanted to include the original functionality as part of your own new functionality.
		Therefore, you&apos;d wrap the original function in the new function&apos;s logic.
		In particular, the <code>minetest.is_protected()</code> method was specifically designed to be wrapped.
		I think a callback system would&apos;ve been more consistent with how the rest of the $a[API] functioned, but this particular method was meant to be replaced by modules that provide protection.
		If something is protected, the new method should return <code>true</code>.
		Otherwise, it should call the version of the method it replaced and return that method&apos;s return value.
		In this way, an entire chain of versions of this method could be created, and each would report the protection status of anything they recognised as being protected.
		The core implementation always returned <code>false</code>, as protection isn&apos;t a built-in feature, just a feature the other built-in components recognise.
		In some of my other code, I use a class that&apos;s similar to a decorator class, but instead of just piggybacking onto a function&apos;s logic, it acts as a translator between two functions that don&apos;t quite understand each other.
		It acts as a wrapper, allowing functions that couldn&apos;t normally be passed as callbacks to a particular other function due to input format issues to be used as callbacks to that function anyway.
	</p>
	<p>
		The state design pattern is widely used, but I don&apos;t have much to say about it.
		A wide range of graphical software makes good use of it.
		I haven&apos;t built much that&apos;s complex enough to need this design pattern myself, but I have built a couple of finite state machines which required such a design pattern.
		The last one I built was for another course here at the university, but due to the rules imposed by the school, my course notes are in disarray with no hope of getting them organised and searchable until after I graduate.
		I&apos;m not able to easily look up what course that was in or why I needed a finite state machine to begin with.
	</p>
	<p>
		The translating wrapper class I mentioned a couple paragraphs back is also similar to a proxy object in the proxy design pattern.
		It acts as a stand-in for the function it wraps because the function cannot be directly used as a callback to the specific other function.
		The protection design pattern is a bit strange, and warrants further consideration later.
		I&apos;m not honestly sure when it would even be good to use, from a security standpoint.
		In retrospect though, perhaps the <code>minetest.is_protected()</code> method fits better with the protection design pattern than the decorator design pattern, not because it handles virtual protection, but because it only even calls the method it wraps if protection appears to be disabled.
		It doesn&apos;t add functionality, it adds cases in which the core method will never be called.
	</p>
	<p>
		The book explains that communication security is more difficult to achieve than computer security.
		Though the book doesn&apos;t go in-depth as to why, the reason should be apparent.
		To secure your individual machine, you need to be sure your hardware and software can be trusted.
		This means, at a minimum, not using hardware that is known to have back doors, not using software for which the source code isn&apos;t widely available, and password protecting everything of value.
		To some, the need to avoid sourceless software might not be apparent, but it&apos;s actually a pretty important concept.
		Sourceless software cannot be checked for security holes.
		Perhaps more importantly, it can&apos;t be checked for malicious features, such as back doors and spyware, added by the programmer.
		If you are using software for which the source code is not available, you <strong>*must*</strong> consider your machine to be insecure.
		For communication security though ... you have to make the same demands not only of your own system, but of all the middleman systems.
		Instead of securing one system you control, you have to secure a bunch of systems that you don&apos;t.
		Clearly that&apos;s not an easy task, and yet it&apos;s only the beginning.
		You also need to secure all the communication channels between these machines (such as the physical wires) and make sure none of the middlemen are intentionally leaking information to your adversaries.
		Unless ... you can keep the middlemen from knowing what they&apos;re passing along.
	</p>
	<p>
		It&apos;s a nearly hopeless endeavour unless you use encryption.
		Encryption allows you to pass messages without any of the middlemen knowing what those messages are.
		Are the middlemen systems secure?
		Doesn&apos;t matter.
		They either leak what looks like jumbled garbage or they don&apos;t.
		Your actual message remains pretty safe, as long as you use a proper encryption scheme.
		That said, encryption can be used to aid in confidentiality and to aid in verifying the source of the information, but it can&apos;t really prevent middlemen from dropping messages altogether.
	</p>
	<p>
		I can&apos;t say I knew how asymmetric encryption is implemented before this unit, but it&apos;s something I use on a daily basis.
		I use $a[PGP] for email, but I also sign all the pages on <a href="https://y.st./">my website</a> to show they did actually come from me.
		The link to the signature file for a given page can be found in that page&apos;s navigation menu.
	</p>
	<p>
		It&apos;s true that any system that can run arbitrary software can get viruses.
		And a system that can&apos;t run arbitrary software is extremely limited in its usefulness.
		However, there are things that can be done to make the catching of viruses less likely.
		For example, the textbook mentions infected files.
		Infected data files are generally harmless on Linux systems, though they can be devastating on Windows systems.
		Why is that?
		Simply put, Windows often uses embedded instructions in data files.
		Some of those instructions could be a virus.
		However, Linux programs tend to treat data files as data and not as instructions to be run.
		One type of Windows virus is known as a macro virus, and hides in Microsoft Word documents.
		In the past, Microsoft Word simply executed these embedded instructions in such documents without any sort of defence.
		I&apos;m not sure, but I think Microsoft removed the automatic execution of in-document macros in Microsoft Word, so that problem isn&apos;t as bad now.
		Linux users, on the other hand, tend to use software such as LibreOffice, which doesn&apos;t automatically execute instructions embedded in word-processing documents.
		I mean, a word-processing document should be a document and nothing more; it shouldn&apos;t have instructions for <strong>*doing*</strong> anything.
		Software on Linux also needs to be marked as executable before it can be executed.
		If a user downloads something, they need to approve it for running as software before it&apos;ll be treated as anything other than pure data.
		Secondly, Windows grants administrative privileges to the main user, at least last time I checked.
		They may have corrected this issue later.
		Anyway, a virus runs on behalf of the user, so anything the user can do, the virus can do.
		When the user has administrative permissions, viruses can wreak havoc.
		On the other hand, Linux users on a well-configured system don&apos;t have administrative permissions until they request them, and even then, they only have them for the duration of the action they request them for.
		And such a request made by the user causes the system to prompt the user for their password.
		The virus doesn&apos;t know the user&apos;s password, so system-breaking damage is prevented.
		(A virus on Linux could still delete all the user&apos;s photographs for example, but it can&apos;t install/uninstall software or harm system files.)
		I use Debian these days, but back when I used Ubuntu, the system kept prompting me for a password randomly after a certain point.
		My guess is that I&apos;d somehow caught a virus and the virus was trying to do something to the system.
		It couldn&apos;t do it though, as I wasn&apos;t trying to do anything that should require a password, so I refused to provide it.
		After the system had been repeatedly prompting for my password for a while (several days, if I recall), I backed up my data, wiped the machine, and restored my data.
		Overkill?
		Maybe.
		But it sure got rid of the virus and my data remained secure.
		It happened again later, and I repeated the process.
		I don&apos;t recall it happening a third time though.
		To be clear, I&apos;m not one of those people that claims Linux doesn&apos;t get viruses; I&apos;m pretty sure I&apos;ve had a couple viruses (or the same one twice, who knows?) on Linux.
		Linux has a much better security setup than Windows on several levels though.
		In fact, I don&apos;t know a less secure system than Windows.
		I&apos;ve read about certain viruses that Windows can catch by simply <strong>*loading a webpage*</strong> containing the virus in Internet Explorer.
		I&apos;ve never heard of that happening on <strong>*any*</strong> other operating system.
	</p>
	<p>
		It&apos;s worth making a couple more notes on viruses.
		The reason it&apos;s so hard to stop viruses is two-fold.
		First, there&apos;s nothing that distinguishes a virus from any other piece of code besides the intent.
		Viruses are just software, so a computer that runs software will run viruses.
		This is why viruses run on behalf of the user and use the user&apos;s account; it&apos;s just like how your image editor or Web browser runs on your behalf and uses your account.
		Second, people are attacking the problem from the wrong direction.
		At least, they are in the world of proprietary software.
		Antivirus software is a sign something is wrong.
		It looks for specific patterns and tries to eliminate threats that way.
		In other words, it&apos;s treating the symptom, not the problem.
		Viruses usually exploit some vulnerability.
		Like the book says, a thousand different viruses may exploit the same vulnerability, yet the antivirus software will only detect a few of them because they all have differing code signatures.
		But patch the vulnerability and the viruses stop working.
		This isn&apos;t so easy in the proprietary software world where people hide their code, but it&apos;s a big part of what keeps the free software world so much safer.
		The free software world patches the vulnerabilities, treating the problem instead of just the symptom.
	</p>
	<p>
		As a side note, I write my learning journal entries as I read; I don&apos;t complete the reading material entirely first.
		I&apos;m very passionate about security, and each time I read about something new in the textbook, I immediately began writing up a segment on that particular threat from an angle I thought wasn&apos;t going to be addressed.
		And then that angle was addressed.
		I&apos;m somewhat impressed by the author at this point for looking at things from the obvious angles that almost everyone else seems to ignore or be too blind to see.
		I&apos;m particularly impressed with the author&apos;s description of antivirus software as a &quot;dirty hack&quot;; that&apos;s exactly what it is.
		They hit the nail on the head with that one.
	</p>
	<p>
		When grading this week, I found a couple students were trying to feed the number of &quot;enclosed regions&quot; into the cyclomatic complexity equation.
		It took me a bit to understand where they&apos;d get such a bizarre idea.
		In the end, I think they were substituting it for the number of connected components!
		The book doesn&apos;t explain in any way what connected components even are.
		I remember having to look that up on my own just to understand the third equation.
		The term comes from graph theory, which isn&apos;t what we&apos;re studying, so it really should have been explained in the textbook.
	</p>
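	<p>
		For reference, that third equation is V(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components of the control-flow graph. A quick worked example, with the graph counts chosen by me:
	</p>

```python
# The third cyclomatic-complexity equation: V(G) = E - N + 2P.
# P counts connected components (normally 1 for a single routine),
# not "enclosed regions".

def cyclomatic_complexity(edges, nodes, components):
    return edges - nodes + 2 * components

# A single if/else in one routine: 4 nodes (the decision, the two
# branches, and the join), 4 edges, 1 connected component.
print(cyclomatic_complexity(4, 4, 1))  # → 2 independent paths
```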
	<h3>Discussion post drafts</h3>
	<blockquote>
		<p>
			It&apos;s not really specified by the book what part of the code handles deciding what class subscribes to what class.
			I&apos;m assuming this is handled not in any of the classes themselves, but in the main logic of the program.
			For example, perhaps a subscriber object is passed into a method on a publisher object, and from that point on, the subscriber receives the published events.
			If this is the case, we can add a lot of subscriber/publisher pairs to the problem at hand, mostly to break up hard-coded dependencies among classes.
			The controller is the main class that&apos;ll probably need to retain its hard-coded dependencies, as it&apos;s pretty specific to the system; the controller for one system isn&apos;t going to serve the needs of another system.
		</p>
		<p>
			Admittedly, I&apos;m having trouble properly wrapping my head around the concept.
			My first attempt wasn&apos;t so successful.
		</p>
		<h4>Attempt 0</h4>
		<p>
			I thought I could mostly look at the arrows in the diagram to see where information is flowing.
			Keep in mind that the arrows represent what objects call other objects, so information is flowing in the reverse direction of the arrows.
			Information needs to be passed from publishers to subscribers, so in the reverse direction, subscribers need to subscribe to publishers.
			At first, I thought there were a couple different places where this logic doesn&apos;t work out though: the places in which an object is suddenly instantiated, yet needs to be the <strong>*publisher*</strong>.
			Other objects have no time to react and subscribe before they need to know of the event.
			These would be the <code>Payment</code>-class objects.
			However, the publisher/subscriber model doesn&apos;t have to be about being able to dynamically react like that.
			It can be about not hard-coding dependencies where they&apos;re not needed.
			The same logic that instantiates the <code>Payment</code>-class objects (the code that calls the constructor, not the constructor itself) can easily set up the subscriptions at that time as well, just before invoking the methods needed to pass that data along.
			We also need to keep in mind that a class can implement multiple interfaces; a subscriber can also be a publisher.
		</p>
		<p>
			If we use the publisher/subscriber model as a sort of callback registration system, everything besides the controller becomes a publisher and/or subscriber.
			The classes implementing the <code>Publisher</code> interface would therefore be <code>ItemInfo</code>, <code>SellerInfo</code>, <code>BidsList</code>, <code>Bid</code>, and <code>BuyerInfo</code>.
			The classes implementing the <code>Subscriber</code> interface would be <code>ItemsCatalog</code>, <code>ItemInfo</code>, <code>Payment</code>, <code>BidsList</code>, and <code>Bid</code>.
			That didn&apos;t seem quite right though.
		</p>
		<h4>Attempt 1</h4>
		<p>
			One of the points of the publisher/subscriber model is that it&apos;s asynchronous and it reacts to events.
			It&apos;s not a simple callback system for decoupling nearly all our modules.
			I thought the first step to correctly solving the problem is to realise that some of the classes presented represent events; these are what&apos;s being reacted to.
			If we identify the events, we can better identify the publishers, and with them, the subscribers.
			Undeniably, the <code>Payment</code> class represents a type of event.
			I&apos;d argue that <code>Bid</code> could also be an event, but based on how it&apos;s used, it sort of isn&apos;t one; it&apos;s more of an item to be listed.
			Using <code>Payment</code> as the only event though, who is the publisher?
			<code>Payment</code> objects request information from three other classes, but they don&apos;t answer to <strong>*any*</strong> class.
			This attempt was short, but likewise ended in failure.
		</p>
		<h4>Attempt 2</h4>
		<p>
			Next, I tried something new.
			First, let&apos;s create a list of events.
			Anything that changes the data will be considered an event, while anything that only performs lookups will not.
			When I say that the data is changed, I&apos;m only referring to the creation of new objects that existing objects need to know about.
			If a seller updates their address, this will not be considered an event.
			Likewise, if a new buyer registers for the service, this will not be considered an event.
			Of particular note, no object needs to be notified when a seller accepts a bid and closes an auction, as the item remains in the item list until the payment is processed.
			The only thing this will affect is the <code>ItemInfo</code> object, which will be reflected on pages that are generated from it.
			The main events seem to be as follows:
		</p>
		<ul>
			<li>
				A new item is listed
			</li>
			<li>
				A bid is placed
			</li>
			<li>
				A payment is processed
			</li>
		</ul>
		<p>
			These events don&apos;t need to be classes themselves, but it&apos;s important to recognise that none of these events are represented by classes in the initial setup.
			Even the <code>Payment</code> class represents the payment itself, not the making of the payment.
			The classes that would be immediately notified of these events would be the publishers, and they would need to push it to their subscribers.
			With that in mind, the classes implementing the <code>Publisher</code> interface would have to be <code>Payment</code>, <code>ItemInfo</code>, and <code>Bid</code>.
			<code>ItemInfo</code> and <code>BidsList</code> could implement the <code>Subscriber</code> interface to be notified when new items and bids are created, though this wouldn&apos;t be strictly necessary.
			If they didn&apos;t, and these things were added to the lists by the code that generated them, <code>ItemInfo</code> and <code>Bid</code> wouldn&apos;t need to implement the <code>Publisher</code> interface after all.
			As for the <code>Payment</code> class, the subscriber that needs to subscribe to it is somewhat unintuitive and isn&apos;t represented in the diagram at all.
			Items are removed from the <code>ItemsCatalog</code> when payments are processed, so the <code>ItemsCatalog</code> class needs to implement the <code>Subscriber</code> interface and subscribe to the <code>Payment</code> class.
		</p>
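		<p>
			The one subscription from this attempt can be sketched as follows; the interfaces are stripped to a bare minimum, and the method names and event format are my own guesses rather than anything from the book:
		</p>

```python
# A bare-bones publisher/subscriber sketch of attempt 2: ItemsCatalog
# subscribes to Payment so that paid-for items leave the catalogue.
# Method names and the event format are hypothetical.

class Publisher:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, subscriber):
        self.subscribers.append(subscriber)

    def publish(self, event):
        for subscriber in self.subscribers:
            subscriber.notify(event)

class Payment(Publisher):
    def process(self, item):
        # ... real payment processing would happen here ...
        self.publish({"type": "payment-processed", "item": item})

class ItemsCatalog:
    def __init__(self, items):
        self.items = list(items)

    def notify(self, event):
        # React to the event with no hard-coded link back to Payment.
        if event["type"] == "payment-processed":
            self.items.remove(event["item"])

catalog = ItemsCatalog(["lamp", "chair"])
payment = Payment()
payment.subscribe(catalog)
payment.process("lamp")
print(catalog.items)  # → ['chair']
```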
		<h4>Confusion</h4>
		<p>
			I think I got close with that last try, but I&apos;m still a bit unsure how to work with this publisher/subscriber model.
			Hopefully other students are doing better at this than I am and I can learn from some of you.
		</p>
	</blockquote>
	<blockquote>
		<p>
			You came at this from a very different angle than the ones I tried.
			It makes sense that the buyers and sellers would want to be notified.
			The classes you chose as subscribers contain the contact information for their real-world counterparts, so they could easily be set up to either email these people when they received such notifications or interact with some other object that could send out emails on their behalf.
			These notifications might also only need to be emailed depending on the users&apos; settings, which again, would be known to those objects.
		</p>
	</blockquote>
	<blockquote>
		<p>
			You make a very interesting point about publications and subscriptions.
			I agree, the publisher should publish whatever information is available.
			The whole point of the publisher/subscriber model is, I think, to allow better decoupling and reuse.
			Therefore, we have no way to know which information will be useful and not, so it should all be published just in case.
			As for subscriptions, the subscribing object knows exactly what it needs.
			There&apos;s no reason to subscribe to information it isn&apos;t even going to use.
		</p>
	</blockquote>
	<blockquote>
		<p>
			I like how your focus seemed to be about getting problems solved.
			In your model, subscribers subscribed to the publishers they needed to in order to get their own work done.
			If a given object didn&apos;t need help, it didn&apos;t need to subscribe to anything else.
			Objects that were able to provide the needed help acted as the publishers.
		</p>
	</blockquote>
</section>
<section id="Unit8">
	<h2>Unit 8</h2>
	<p>
		The reading assignment for the week was as follows:
	</p>
	<ul>
		<li>
			Sections 28.1 through 28.5 from <a href="https://ifs.host.cs.st-andrews.ac.uk/Resources/Notes/Evolution/SWReeng.pdf">SWReeng.pdf</a>
		</li>
		<li>
			Sections 30.1 through 30.4 from <a href="http://iansommerville.com/software-engineering-book/files/2014/07/Documentation.pdf">Documentation</a>
		</li>
	</ul>
	<p>
		Both $a[PDF]s refuse to render correctly in Firefox; the text shows up white on a white background.
		They render fine in Evince (a desktop application for reading $a[PDF]s and similar files) though.
		It&apos;s a bit less convenient (especially as Firefox makes downloading $a[PDF]s painful), but it allowed me to get the reading done for the week.
	</p>
	<p>
		I wasn&apos;t previously aware that translating a program from source code in one version of a language to another was a form of re-engineering.
		It&apos;s something I do about every year or so though.
		When a new stable version of Debian is released, I go through and update all my code to use that new Debian release&apos;s version of $a[PHP].
		Facebook also created a modified and in most ways improved version of $a[PHP] called Hack.
		While I would never use the Facebook social media website due to their noxious policies, if their Hack interpreter, $a[HHVM], ever makes it into a stable Debian release, I&apos;ll likely re-engineer all my code to be written in Hack instead of $a[PHP].
		$a[HHVM] is already available in Debian Unstable, which is a good sign, but it&apos;s not in Debian Testing, which is a bad sign.
		It might not make it into a release.
		If not, I&apos;ll continue using vanilla $a[PHP] for the wider compatibility.
		I could install $a[HHVM] from an external package (and I plan to between terms when there&apos;s more time, so I can experiment with the language), but I can&apos;t ask other people using my code to do the same.
		The bad news for me is that I don&apos;t have automated tools for translation.
		The good news is that the new interpreter accepts old code unmodified.
		I have to add modifications by hand for the new features, but that&apos;s all I need to do.
		For example, $a[PHP] didn&apos;t always have namespaces.
		When they added them, I needed to go through and change all my class, function, and constant names to be properly namespaced.
		Most recently, $a[PHP] added better type hinting for functions and methods, which needed to be added to my code to aid in debugging future code that depends on it.
	</p>
	<p>
		The book presented reverse engineering in a light I hadn&apos;t considered before.
		I come from the free software community.
		We deal with the fact that hardware manufacturers often selfishly hide their design specification, including the parts of the specification necessary for designing the software that runs on their hardware, such as device firmware.
		We can&apos;t use the manufacturer&apos;s provided firmware binaries, as there&apos;s no source code available for them.
		We can&apos;t trust binaries without source, as we can&apos;t know for sure what they&apos;re programmed to do; it simply isn&apos;t safe.
		We also can&apos;t improve those binaries, as without source code, we can&apos;t alter the source and recompile.
		Instead, skilled people in the community have to reverse engineer the hardware to understand how it works, then re-engineer the firmware needed to get the hardware to function.
		It&apos;s all because people are trying to hold back the development of their competitors, at the cost of holding us back developmentally as a species.
		One of humanity&apos;s greatest strengths is our ability to build off the designs of past generations, but corporations want to take that away from us as much as possible.
		The textbook presented another purpose for reverse engineering instead.
		It can be used not only to restore maliciously-hidden specifications, but also for recovering carelessly lost or under-documented specifications; you might reverse engineer your own products.
	</p>
	<p>
		Currently, while programming is a hobby of mine, I make sure to clean up my code logic when making modifications.
		Sometimes, I don&apos;t have time, so I put my code down for months at a time until I can find the time to do it right.
		However, paid programmers don&apos;t always have that luxury, and it&apos;s sad.
		That said, I get the feeling there are also a lot of lazy programmers that wouldn&apos;t bother even if and when they have the time.
		There are also certain corporate environments that are counterproductive to clean code, so even a good programmer is less likely to do what should be done.
		I read once about one company in particular in which employees are discouraged from making improvements in code maintained by other departments.
		Even though a better solution often exists, these solutions don&apos;t make it into the code because the people that can see the solutions aren&apos;t the ones in the team maintaining a particular project.
		I can easily see such a backwards environment preferring deployment speed over maintainability and discouraging proper refactoring after new features are added.
	</p>
	<p>
		Abstracting legacy data structures seems like a major pain.
		A long time ago, I built my own Web forum software.
		(This software was later lost due to a hard drive failure, but I&apos;m much more careful about performing backups these days.)
		The goal was to build forum software that didn&apos;t rely on any sort of external database software.
		As such, it used flat files to store everything.
		This was one of my earliest projects, and I was using it to help me learn to program.
		I had to restructure the data several times as I came to understand the system I was building and its needs better.
		Restructuring a data format and interface that I knew intimately was an ordeal at times, depending on how much I changed it.
		I can only imagine how bad it&apos;d be to restructure and abstract a legacy format I&apos;d never come in contact with before.
		On my own project, I eventually settled on an object-oriented interface that needed less code restructuring for future modifications.
		If I recall, older versions of the data were detected via the fact that there were properties missing in those objects, and those properties were set to some default or calculated value.
		That prevented the need to continue making changes to the code that accessed the data every time I needed to add a new data-modifying feature.
	</p>
	<p>
		Inconsistent data validation rules are something I run into fairly often with regard to email address data.
		A well-programmed system will accept my email address as valid, because it is; my email address follows the basic syntactic rules for email addresses, and doesn&apos;t even rely on any of the complex exceptions or legacy grandfathering rules.
		My email address is simply <a href="mailto:alex@y.st"><code>alex@y.st</code></a>.
		However, there are two <strong>*invalid*</strong> rules that some systems apply when &quot;validating&quot; email addresses that prevent my email address from being recognised as valid.
		One thing poorly-programmed email address &quot;validators&quot; do is check the $a[TLD] against a short list of popular $a[TLD]s.
		For example, it might demand that all email addresses end in <code>.com</code>, <code>.org</code>, or <code>.net</code>.
		Any email address that doesn&apos;t end in one of those strings is rejected.
		There are <strong>*hundreds*</strong> of valid $a[TLD]s though, not just those three, and they&apos;re all capable of being used for email!
		The other popular invalid rule that my email address gets checked against is that the $a[SLD] of the domain has to have at least two characters.
		Again, this is completely arbitrary!
		My domain name is perfectly valid even though it&apos;s short, as are the email addresses attached to it.
	</p>
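	<p>
		For illustration, here&apos;s a rough sketch of a more sensible syntactic check; it&apos;s only a structural test of my own devising, not a full validator for the email address grammar:
	</p>

```python
# A permissive structural check: no TLD whitelist and no minimum
# length on the second-level label. This deliberately ignores the
# grammar's rarer forms (quoted local parts, address literals).
import re

def looks_like_email(address):
    # Local part, "@", then two or more dot-separated labels made of
    # letters, digits, and hyphens.
    pattern = r"[^@\s]+@[A-Za-z0-9-]+(?:\.[A-Za-z0-9-]+)+"
    return re.fullmatch(pattern, address) is not None

print(looks_like_email("alex@y.st"))  # → True
print(looks_like_email("user@com"))   # → False (no dot in the domain)
```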
	<p>
		I found it particularly bizarre that a development team would need process documentation to defend their process in court.
		I mean, who would take someone to court over the way they develop software?
		Also, how would the process documentation prove anything?
		The people defending themselves are the ones that wrote it, so they could&apos;ve written anything.
		The process shown in the documentation presented in court could be <strong>*nothing*</strong> like what actually went down.
	</p>
	<p>
		The author also suggests that having a title page on your documents is vital.
		I whole-heartedly disagree.
		In printed works, sure, a cover (which is sort of like a title page) seems important for tying everything together and making it look complete.
		Most books also include a cover page within the cover, though arguably, this is a waste of paper.
		In any and all forms of virtual document, a cover page is absolutely worthless.
		It serves no valid purpose, and if required by your employer or client, is just another form of time-wasting red tape.
		By all means, follow the instructions given to you by your boss and/or client, and if they say to include a cover page on your documentation, you&apos;d better do it.
		Just know that the cover page isn&apos;t doing them any actual good.
		It does <strong>*you*</strong> good, as you make those with power over you happy with you, but they get nothing valid from it.
		That said, it&apos;s worth noting that I&apos;m not a fan of virtual formats that emulate printed pages.
		For example, why have documentation in a $a[PDF] file if there&apos;s no intention of printing it?
		A series of $a[XHTML] files would be a much easier to use format, and would even allow linking to other pages or even sections within a page (using a fragment identifier).
		In this type of setup, each chapter (or section) has its own page, and page lengths aren&apos;t determined by what will fit on a sheet of paper.
		After all, in a virtual document, the size of a physical sheet of paper is an incredibly arbitrary size to use.
		Also in this type of setup, there&apos;s no real place to <strong>*put*</strong> a cover page.
		Information that some people would put on a cover page is better off in the &quot;about&quot; section, &quot;licensing&quot; section, and/or &quot;copyright&quot; section, with the table of contents (complete with links to the other sections) being the first thing most readers will see.
		There are advantages to having a printed document, but those advantages are granted via the printing of the document, not the size of the pages.
		If you plan to provide printed documentation, a $a[PDF] can be helpful, but otherwise, it&apos;s disadvantageous to emulate paper pages.
		Like the book says, different designs should be used for virtual and physical documentation, and paper page sizes are most definitely an artefact that belongs only in printed media.
		The book mentions that trying to simply convert a printed document into a webpage rarely results in a good Web-based document, and along that same train of thought, simply uploading to the Web the $a[PDF] you use for printing does not make for a good online document.
	</p>
	<p>
		The rest of the information on documentation was either review for me, was intuitive, or made pretty good sense.
	</p>
	<h3>Epilogue (Unit 9)</h3>
	<p>
		I feel like I knew the answers to most of the questions on the final exam, but at the same time, I haven&apos;t been doing very well on the quizzes in this course.
		There&apos;s a very good chance I didn&apos;t do as well as I feel like I did.
		Two particular questions stood out as not making any sense.
		The first involved filling in three blanks:
	</p>
	<blockquote>
		<p>
			_____________ domain is one whose properties include predictable __________ relationships among its ____________ phenomena.
		</p>
	</blockquote>
	<p>
		At first, it seems like a reasonable question, but then you take a look at the options.
		All four choices consisted of a single word each; either I missed something big and the same word is supposed to be used in all three blanks, or someone input the options incorrectly.
		The second odd question was a true or false statement:
	</p>
	<blockquote>
		<p>
			The main advantage of software re-engineering is that there are practical limits to the extent that a system can be improved by re-engineering.
		</p>
	</blockquote>
	<p>
		How could that possibly be an <strong>*advantage*</strong>?
		Even if I hadn&apos;t taken the course, I&apos;d be able to recognise that as a disadvantage.
	</p>
</section>
END
);
