<?php
/**
 * <https://y.st./>
 * Copyright © 2019 Alex Yst <mailto:copyright@y.st>
 * 
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 * 
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 * 
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org./licenses/>.
**/

$xhtml = array(
	'<{title}>' => 'Learning Journal',
	'<{subtitle}>' => 'CS 3305: Web Programming 2',
	'<{copyright year}>' => '2019',
	'takedown' => '2017-11-01',
	'<{body}>' => <<<END
<section id="Unit1">
	<h2>Unit 1</h2>
	<p>
		The reading assignment for the week included the following:
	</p>
	<ul>
		<li>
			<a href="https://my.uopeople.edu/pluginfile.php/388450/mod_book/chapter/178824/Web1V2Diffs.pdf">Web1V2Diffs.pdf</a>
		</li>
		<li>
			<a href="https://static.zend.com/topics/php_leads_web2_0.pdf">Microsoft Word - WP_WP_Web2.0_092606.doc - php_leads_web2_0.pdf</a>
		</li>
		<li>
			<a href="https://tomassetti.me/difference-between-compiler-interpreter/">The difference between a compiler and an interpreter</a>
		</li>
		<li>
			<a href="https://w3techs.com/technologies/overview/programming_language/all">Usage Statistics and Market Share of Server-side Programming Languages for Websites, February 2019</a>
		</li>
	</ul>
	<p>
		We&apos;re discussing &quot;Web 2.0&quot; this week.
		&quot;Web 2.0&quot; is just a buzzword, and isn&apos;t actually a term that&apos;s worth much.
		Most of the hype around it goes toward inaccessible sites designed to cater only to users using the Web via very specific types of Web browsers.
		For example, not screen readers, and not Web browsers with no JavaScript enabled.
		It&apos;s a term I doubt I&apos;ll ever put any stock into.
		But for class, I&apos;ll do what I can to try to meet the assignment objectives revolving around it.
	</p>
	<p>
		The first student besides me to post didn&apos;t seem to know what they were talking about, claiming <code>POST</code>, <code>PUT</code>, <code>GET</code>, and <code>DELETE</code> requests to be a new feature added in Web 2.0.
		These request types have been around since the beginning of the Web though, or at least near the beginning.
		And <code>PUT</code> and <code>DELETE</code> requests pretty much fell out of use when or before Web 2.0 was a thing.
		To further show their lack of understanding of the Web, they included $a[VoIP] as something built on top of Web 2.0, even though $a[VoIP] doesn&apos;t run on the Web at all.
		I know my own initial post was rushed and didn&apos;t include everything it should have.
		I have a schedule to keep, and I tend to only have two or three hours to compose my first post of the term (it&apos;s only the first of the term, not the first of each unit).
		That includes reading the material needed to make my posts coherent.
		There&apos;s also the fact that I deny the very existence of the thing we&apos;re discussing this week.
		But even my post was more coherent than that.
		I feel like this is going to be a long term.
		The next student likewise seemed to be confused about the difference between the Internet and the Web.
		We&apos;re supposed to be third- and fourth-year information technology students here.
		We should know what the Internet and Web are, and we should know that there&apos;s more to the Internet than just the Web.
		It makes me feel sad.
	</p>
	<p>
		The reading assignment got us off on the right track on the topic for the week.
		The very first sentence tells us that &quot;Web 2.0&quot; is a buzzword, as I mentioned above.
		It then goes on to tell us that it&apos;s mostly a marketing term, which again, tells us it&apos;s mostly hype and has very little substance and/or meaning.
		However, it then goes on to say they don&apos;t know of any technical comparison between this &quot;Web 2.0&quot; and &quot;Web 1.0&quot;.
		If you&apos;re aware of what buzzwords and marketing terms are, you know very well that a technical comparison is very likely impossible.
		You can&apos;t compare vague ideas on a technical level.
		It just doesn&apos;t work.
		You need something more concrete for a technical comparison.
		With vague ideas, the best you can hope for is a superficial comparison.
		It then goes on to tell us that precise definitions of &quot;Web 2.0&quot; and &quot;Web 1.0&quot; can&apos;t be found, which again, is one of the key characteristics of buzzwords and marketing terms.
		Words are much less effective at building undue hype around the mundane when you give them precise definitions.
	</p>
	<p>
		One good point made by the first document of the reading assignment is that in &quot;Web 1.0&quot;, there are few content creators.
		The reason for this is that you have to work with the code and build your pages from the ground up.
		That&apos;s actually one of the reasons <a href="https://y.st./en/">my own website</a> fits so nicely into the category of &quot;Web 1.0&quot;.
		I <strong>*like*</strong> building my pages from the ground up.
		I <strong>*enjoy*</strong> getting to structure my pages as I see fit.
		I don&apos;t like being forced to build within the strict confines of whatever platform I&apos;m working with.
		For example, take WordPress.
		It allows non-technical users to build websites without the need for technical knowledge.
		It&apos;s awesome!
		It&apos;s opening the doors of the Web to a large group of people that otherwise wouldn&apos;t get to have their own websites.
		However, in exchange for that, those users aren&apos;t <strong>*able*</strong> to use the large variety of options open to those of us that do write our own pages.
		In &quot;Web 2.0&quot;, the doors are open to a greater audience, so there&apos;s a much larger number of creators.
	</p>
	<p>
		It also touches upon the fact that social networks do not allow messages to be transmitted between one another usually, so when a user is deciding which social network to join, they&apos;re likely to choose the one more of their friends are on in order to facilitate communication.
		This is exactly the sort of monopoly social networks are generally trying to create.
		They don&apos;t <strong>*want*</strong> to allow users to receive messages from off-network users, because they don&apos;t have all the data they want on off-network users.
		Most modern social networks sell your data to advertisers, so they want as much data on you as they can get.
		You&apos;re not the social networks&apos; customer.
		That&apos;s why you don&apos;t tent to need to pay to join and use a social network.
		Instead, you are the social networks&apos; <strong>*product*</strong>, which they sell to their real customers, the advertisers.
		At least three federated social networks exist, though they haven&apos;t gained enough popularity to come even close to overthrowing the social giants that abuse their users.
		These networks are diaspora*, Pump.io, and $a[GNU] Social.
		On any of these platforms, you can communicate with other users on servers completely unaffiliated with the administrator of the server your information is on.
		In fact, you can even run your own server and keep your data in your own hands!
		However, there&apos;s still no communication between servers running different software, though.
		For example, a diaspora* server won&apos;t let you communicate with someone on a Pump.io server.
	</p>
	<p>
		Another important aspect of &quot;Web 2.0&quot; is that it allows relationships between user objects.
		For example, users can be in a group or can be friends.
		Some &quot;Web 2.0&quot; websites also have a public $a[API].
		This is used to allow other sites to tie into the main social site, but typically isn&apos;t used to tie main social network sites to each other for the monopolistic reasons I mentioned above.
	</p>
	<p>
		&quot;user-centric&quot; views presented by &quot;Web 2.0&quot; websites also presents a hurdle to usability.
		The article mentions this in terms of Web-crawling.
		A spider can&apos;t very well index pages that show up differently to each user, and much &quot;Web 2.0&quot; content is locked away behind login walls.
		It&apos;s worth adding though that this problem is much bigger than just spiders.
		It&apos;s true that if spiders can&apos;t properly crawl a site, that site&apos;s content won&apos;t show up in search engines.
		However, links provided by users won&apos;t even work correctly, as when presented to other people, those pages may not show the same content or may not display content at all!
		&quot;Web 2.0&quot; is, at times, exceedingly bad at allowing sharing of content, which is ironic, seeing as sharing content in in fact one of its main goals.
	</p>
	<p>
		I was glad to see that the paper attributed the desire for site stickiness to maximising ad revenue.
		This is a key difference between &quot;Web 2.0&quot; and &quot;Web 1.0&quot;, too.
		&quot;Web 1.0&quot; sites tend to try to present information on a single thing and do it well.
		They link to external sites as is appropriate when mentioning other subjects.
		However, &quot;Web 2.0&quot; sites instead try to keep everything on the same site, both to display ads on these pages that would otherwise be links to other sites that make those other sites ad money, and to noxiously track you to find what you&apos;re interested and how they can serve you to their customers.
		That said, while &quot;Web 2.0&quot; websites have a wider range of content than a single &quot;Web 1.0&quot; website would, I&apos;d argue that as jacks of all trades, &quot;Web 2.0&quot; websites tend to be masters of none.
		They pour all their resources into trying to keep everyone on the same site that they don&apos;t have enough resources to procure the best content for any of their subjects.
		&quot;Web 2.0&quot; websites also tend to get most of their content from laymen, who know little to nothing about the content they write about.
		There are a few experts, sure, but they get drowned out by the uneducated masses.
		You can&apos;t find good information easily in &quot;Web 2.0&quot;.
	</p>
	<p>
		The article also mentioned that most &quot;Web 2.0&quot; websites each have their own inboxes, and send you a real email when you get a message through their mail system.
		Personally, I find this activating.
		Most of these websites don&apos;t include the body of the message in the email, so you have no idea of knowing what it says or even just what it&apos;s about unless you log into the site.
		It makes messaging cumbersome.
		I don&apos;t want to check my email just to find I need to check some other messaging system.
		I understand not allowing you to respond without logging in, but there&apos;s no valid reason not to include the message in the email body if they&apos;re going to send an email alert anyway.
		When sites do that, I tend to not check it right away, then forget the message is waiting for me.
		It can be months before I read the message if it&apos;s not sent properly through email like it should be.
	</p>
	<p>
		Surprisingly, the article did fine one aspect to some &quot;Web 2.0&quot; websites.
		Sort of.
		It mentions that they know the upper bound for traffic because they know how many registered users they have.
		A &quot;Web 1.0&quot; website doesn&apos;t have that luxury, and can experience unexpected surges of traffic.
		However, this very much comes at a cost.
		Unregistered users aren&apos;t able to use the site at all.
		When you look at why this benefit exists, you realise it&apos;s not actually doing more good than harm.
	</p>
	<p>
		The second article made a good point about many &quot;Web 2.0&quot; websites trying to behave as desktop applications.
		That&apos;s one of the reasons I think it&apos;s so idiotic for users to install portal applications for a company&apos;s services.
		Why install an application for ordering pizza from your favourite pizza chain when you could instead just bookmark their website, which is designed to act like an installed application anyway?
		The website also provides better cross-platform support, at least as long as the company running it isn&apos;t moronically sniffing user-agents and serving different content to different Web browsers as some companies like to do.
		This article also mentions that there&apos;s no need for users to update the application when it&apos;s not an application, but a website.
		This is also something to consider.
	</p>
	<p>
		Next, I read the page on the differences between interpreters and compilers.
		I was already well-aware of the differences, but the page provided a nice review.
		The main thing to note is that an interpreter allows you to run uncompiled source code, while a compiler compiles the source code to run later on a specific processor.
		This means that the interpreter allows you to distribute the same program (the source code) and run it anywhere that has the right interpreter, but the interpreter does need to be installed as a separate program or programs on each and every machine that needs to run your source code.
		On the other hand, the compiled program doesn&apos;t need the interpreter to be installed, but in exchange, the code has to be compiled for a specific processor type and operating system.
		You can&apos;t just distribute a file that&apos;ll run on all platforms.
		Also, obviously, it&apos;s easier to debug an interpreted script than a compiled executable because you don&apos;t have to recompile each time you make a change, but it&apos;s slower for your users to run an interpreted script than a compiled executable.
	</p>
	<p>
		The final page we were asked to read simply listed the percentages of server-side language use on sampled sites.
		Unsurprisingly, $a[PHP] took the lead by a wide margin.
		Disappointingly, ASP.NET still made it into second place.
		Admittedly, I&apos;m biased against Microsoft, so I would have liked to see their language, ASP.NET, much further down on the list.
	</p>
</section>
<section id="Unit2">
	<h2>Unit 2</h2>
	<p>
		The reading assignments for the week are as follows:
	</p>
	<ul>
		<li>
			<a href="https://secure.php.net/manual/en/install.php">PHP: Installation and Configuration - Manual</a>
		</li>
		<li>
			<a href="https://secure.php.net/manual/en/install.unix.apache2.php">PHP: Apache 2.x on Unix systems - Manual</a>
		</li>
		<li>
			<a href="https://www.w3schools.com/php/default.asp">PHP 5 Tutorial</a>
		</li>
	</ul>
	<p>
		It seems this week is mostly about installing Web server software from source.
		I find it a bit amusing that we&apos;ve waited until a 3000-level course to cover that.
		It&apos;s a good skill to have.
		I actually got my start on the Web on an OS X machine.
		It came with Apache and $a[PHP] bundled and integrated with the operating system.
		However, Apple hardware is expensive, so I had a really old matching that Apple didn&apos;t support with newer operating system versions.
		Apache and $a[PHP] were really old.
		Once I figured out how to even get my website accessible from outside my home, using my machine as the Web host, I started trying to build a cool and interactive website out of $a[PHP].
	</p>
	<p>
		I invented pseudo-namespaces, fearing that if someone used my code, my function names would clash with theirs.
		I basically just prefixed all my function names with the domain name of my website followed with two underscores, with the dot converted to an underscore as well.
		Even without having learned about namespaces, I understood their value and importance.
		I later learned my version of $a[PHP] was painfully old, and had to learn how to install a newer version.
		It was tied so heavily into Apache though that I needed to upgrade Apache as well.
		Both $a[PHP] and Apache had to be installed from source code.
		And with the newer version of Apache, I lost the $a[OS] integration, as Apple had specifically tied that version of Apache to the $a[OS].
		If I recall, it took days or weeks to figure out how to get everything installed correctly.
		I had no idea what I was doing, and had to learn how to use Apple&apos;s proprietary development tools to compile, as that was necessary at the time.
		It might even be necessary now, but I wouldn&apos;t know.
		With the upgrade though, I had real namespaces, and was able to remove my ugly prefixes in favour of doing things the right way (<code>net\\example\\function_name()</code> instead of <code>example_net__function_name()</code>).
	</p>
	<p>
		Later, my computer died on me.
		My pibling gave me a new laptop (well, new to me and much newer than my old machine, but still very old).
		It ran Windows though, and Windows and I have <strong>*never*</strong> gotten along.
		There&apos;s a reason I was on an ancient OS X machine when I could have afforded a newish Windows machine for about the same price.
		So I looked into how to install OS X on non-Apple hardware.
		It turns out it&apos;s very possible, but also illegal due to language in the OS X license agreement.
		Not wanting to break the law, I looked into alternative systems.
		Anything had to be better than Windows, right?
		<strong>*Anything*.</strong>
		Even if it wasn&apos;t as good as OS X, I&apos;d have a better chance of getting by with whatever I ended up with than I would sticking with the copy of Windows already on the machine.
		I&apos;d heard about something called &quot;Linux&quot;, but I wasn&apos;t sure how usable it was.
		So I looked into that first.
	</p>
	<p>
		I found Linux wasn&apos;t a single operating system, but a multitude of operating systems that shared a similar codebase.
		But the developers of these systems <strong>*wanted*</strong> me to install their system.
		Apple was telling me to go away if I wasn&apos;t going to buy not only their operating system but their hardware as well.
		The Linux communities didn&apos;t care what hardware I used though.
		I agonised over the decision of what system to install.
		I don&apos;t recall how long it took me to decide.
		With so many options, how do I know which is right for me?
		I ended up choosing Ubuntu 12.04, which had come out that month, if I recall.
		It seemed popular, compared to other Linux versions.
		Maybe people knew what they were doing and chose it because it was good.
	</p>
	<p>
		The interface was a bit strange, but it was usable.I found Apache and $a[PHP] were in the software repositories.
		Installing them took almost no effort.
		I greatly appreciated that these tools were cross-platform.
		I was learning to use a new system, but I didn&apos;t need to learn new server software right away.
		That was awesome.
		I spent about a week on Ubuntu, if I recall, but then I had an epiphany: Canonical never charged me for Ubuntu.
		Most of the other distribution vendors weren&apos;t charging either.
		All it cost me was the price of a $a[DVD] to burn the system on.
		I could afford to try out other systems!
		And try them out I did.
		I tried Debian, I tried Mint.
		Mint was kind of cool at the time.
		I stuck with it a while, and later went to Xubuntu, a variant of Ubuntu produced by the same Canonical, but with a better desktop interface.
		For a class, I tried out Fedora as well, though it was never my cup of tea, and now I&apos;m back on Debian for licensing reasons.
		Because I&apos;ve learned to care about licensing.
		No matter where I drift though, $a[PHP] and Apache follow me.
		They&apos;re free software, so the source code is readily available.
		And when the source code is available, there will be people that want to port it to their system.
		Porting to Linux once takes care of running it on all Linux distributions, but if I were ever to switch to a drastically different system, such as $a[BSD], $a[PHP] and Apache have already beaten me there, too.
		While I might not always choose them, they&apos;ll always here with me for me to choose if I want to.
	</p>
	<p>
		I admit I&apos;ve gone soft, and no longer compile my own Web server software.
		I still know how to though, so the pages on doing that were just review.
		The $a[PHP]7 tutorial was of no help to me though.
		I use $a[PHP]7 on a daily basis.
		I&apos;m very familiar with the language and its syntax.
		I&apos;ve got to look up a function name every once in a while, when I&apos;m doing something I don&apos;t usually do, but a tutorial&apos;s not going to help me remember the names of functions I hardly ever use, or even ones I&apos;ve never used before.
	</p>
	<p>
		For the main assignment for the week, we were to take screenshots and paste them into a word-processing document.
		That&apos;s a format to transfer screenshots in, but I followed those instructions to the letter just the same.
		Alongside the uploaded word-processing document though, I submitted my work in a better format: simply uploading the images and using <code>&lt;img/&gt;</code> tags to display them as a part of the page.
		That way, other students don&apos;t have to download them as a separate attachment, then open them up in a word-processor.
		I mean, seriously, what a pain in the neck for no good reason, right?
		I did have to doctor one of the screenshots a bit though.
		The assignment said to screenshot the output of <code>phpinfo()</code> in a screenshot, up to the last <code>/etc</code> &quot;command (sic)&quot;.
		Between the top bar of the website and the bottom bar, too much space was eaten up, and that much couldn&apos;t be displayed at once without a bigger monitor than my laptop has.
		Firefox allows you to modify the code of pages you look at though.
		So I deleted the top and bottom bar, freeing up enough space to get all the requested output captured.
	</p>
</section>
<section id="Unit3">
	<h2>Unit 3</h2>
	<p>
		The reading assignment for the week was as follows:
	</p>
	<ul>
		<li>
			<a href="https://secure.php.net/manual/en/introduction.php">PHP: Introduction - Manual</a>
		</li>
		<li>
			<a href="https://www.w3schools.com/php/php_intro.asp">PHP 5 Introduction</a>
		</li>
		<li>
			<a href="https://www.wired.com/2010/02/PHP_Tutorial_for_Beginners/#The_Basics">The Basics</a>
		</li>
	</ul>
	<p>
		The assignment for the week was on a third-party website.
		This was a bit problematic, as signing up for an account on that website required the completion of one of those obnoxious Google reCAPTCHA $a[CAPTCHA]s.
		Google is always a pain, and this time, Google was refusing to sent the $a[CAPTCHA] on the grounds that Google thought I might be a robot.
		Seriously?
		Isn&apos;t the $a[CAPTCHA] supposed to be my way to prove that I&apos;m not?
		What&apos;s the point of a $a[CAPTCHA] if you only send it to people you <strong>*don&apos;t*</strong> think are robots?
		Google&apos;s got problems.
		This isn&apos;t the first time Google has hassled me though, or even the first time they&apos;ve hassled me in this particular way, so I knew they&apos;d send me the $a[CAPTCHA] to fill out if I left and waited long enough, then came back later.
		Sure enough, I tried again later, and they sent me thirteen $a[CAPTCHA]s in a row to fill out.
		It&apos;s not uncommon for them to send me about twenty before they&apos;ll let me through, and it&apos;s not unheard of for them to send me upwards of thirty.
		It&apos;s become a bit of a game for me: I count how many $a[CAPTCHA]s Google makes me fill out this time, and see just how badly Google wants annoy me today.
	</p>
	<p>
		<del>The website with the lessons claims that <code>echo</code> is a function.
		This actually isn&apos;t true, as can be verified in the $a[PHP] manual.
		Instead, <code>echo</code> is one of the basic language constructs, though it can be called like a function syntactically.
		However, the lesson website then proceeds to show an example using <code>echo</code> that <strong>*doesn&apos;t*</strong> call it as a function.
		Take the following pseudocode:</del>
	</p>
	<blockquote>
		<p>
			<code>function_name &apos;argument&apos;;</code>
		</p>
	</blockquote>
	<p>
		<del>This is <strong>*not*</strong> valid in $a[PHP].
		Instead, you need to use:</del>
	</p>
	<blockquote>
		<p>
			<code>function_name(&apos;argument&apos;);</code>
		</p>
	</blockquote>
	<p>
		<del>However, the website uses <code>echo &apos;argument&apos;;</code>.
		This is of course valid code, but is <strong>*not*</strong> a valid function call, meaning that <code>echo</code> <strong>*cannot*</strong> be and <strong>*is not*</strong> a function.</del>
	</p>
	<p>
		<ins>Actually, scratch all that.
		In a later lesson, the site admits <code>echo</code> isn&apos;t a function, and even brings up the same point as I did about the lack of parentheses.
		Why give us invalid information, just to later correct it?
		Why knowingly give us invalid information in the first place?
		Sheesh.</ins>
	</p>
	<p>
		The website also teaches us that variable names can only contain alphanumeric characters and underscores.
		This is actually untrue as well.
		Within the $a[ASCII] range, these are the only characters allowed.
		However, <strong>*all characters outside the $a[ASCII] range*</strong> are also available for use in variable names.
		Again, this is something the $a[PHP] manual teaches you right away.
		In practice, most people stick to the $a[ASCII] range when naming variables, but this should never be considered a hard requirement.
		Personally, prefix my variable names with a cent sign if the variable contains a closure.
		The only time I use closures is when I need to reuse code but I can&apos;t generalise the code enough to warrant a first class function.
		This is code no one else needs to call, so no one else will need to type this symbol later.
		Using the strange symbol allows me to put it into a separate pseudonamespace.
		I used a funny character in an actual <code>namespace</code> once too, for a class that should never be touched by code outside my own library.
		The class basically was a workaround for the fact that a feature I needed wasn&apos;t present in old versions of $a[PHP], but had just been added to the cutting-edge release.
		In other words, these characters are available for use in all symbol names, not just variable names.
		I wouldn&apos;t recommend using them in most cases, but they are very useful to have sometimes.
	</p>
	<p>
		Next, we&apos;re introduced to the <code>define()</code> function.
		Like the lesson says, <code>define()</code> defines constants.
		However, the $a[PHP] manual will tell you that it&apos;s not the <strong>*preferred*</strong> method of defining constants.
		The <code>cont</code> keyword is better for a few reasons.
		I forget all the reasons, but one reason is because the <code>const</code> keyword runs faster than the <code>define()</code> function.
		This may be because <code>const</code> does its thing at compile time, while <code>define()</code> waits until run time.
		If you need to define a constant conditionally and/or in terms of some variable, <code>define()</code> is your only option.
		For all other cases, use <code>const</code> instead.
	</p>
	<p>
		It drives me batty when modulus division is described as a finding of the &quot;remainder&quot;.
		When dividing, there <strong>*is no remainder*</strong>.
		Five divided by two is <strong>*not*</strong> two remainder one.
		It&apos;s two and a half.
		Modulo division is more like overflow, in how it works.
		Take a two-bit, unsigned integer, for example.
		There are four possible values, so if you store a value in such a space, your result is the value you attempted to store, but modulo 4.
		This shows us too that as <code>4</code> gets stored as <code>0x00</code>, a number modulo itself is zero.
		Modulus division doesn&apos;t have to involve powers of two, obviously.
		Powers of two just make the point easy to explain to computer science majors.
		If you have fifteen modulo ten, it just means you&apos;re trying to find the result you&apos;d get from trying to store fifteen in a space that has only ten values.
		Nine, zero (because of roll over), one, two, three, four, five.
		So you end up with a result of five.
		Dealing with negative numbers in modulus division is undefined, so you get different results depending on how modulus division is implemented in the environment you&apos;re working in.
		However, when working with only positive numbers, you get the same results when you look at modulus as roll over as when you look at it as &quot;remainders&quot;.
		So I guess, do it whatever way makes it easy for you.
		However, when you <strong>*talk*</strong> about modulus being a finding of &quot;remainders&quot;, it makes you look like the type of idiot that doesn&apos;t know how to find fractions or decimals when dividing.
		It makes you look too stupid to figure out that there doesn&apos;t need to be &quot;leftovers&quot; just because something doesn&apos;t divide <strong>*evenly*</strong>.
	</p>
	<p>
		The quiz for the week startled me a bit.
		The first question asked what command you use to compile code.
		The next was asking about how to change the working directory on a Linux system.
		What?
		I didn&apos;t even remember studying those things in this course, though I&apos;ve used the <code>make</code> and <code>cd</code> commands countless times on my Debian system, so I knew the answers.
		And isn&apos;t this a course about Web development?
		Compiling code and changing the working directory are things you do locally, not in regards to websites.
		What else did I miss, that maybe I <strong>*didn&apos;t*</strong> just happen to know because of my personal situation?
		After a bit, it struck me: we were supposed to read about how to compile Apache and $a[PHP] from source last week.
		I sort of skipped that part of the reading material, due to already having compiled both $a[PHP] and Apache from source a few times.
		The material would have definitely used the <code>make</code> command, and would have used the <code>cd</code> command a few times.
		So I&apos;m still up to speed on things.
	</p>
	<p>
		I don&apos;t think most people in this course ever think about accessibility, let along care about it.
		I mean, most Web developers certainly don&apos;t.
		There is one student I&apos;ve been conversing with, even outside of class a bit, who seems intelligent and always has an insightful take on the discussion assignments.
		And this week, I learned that they&apos;ve been formatting their discussion posts with accessibility in mind.
		I&apos;ve picked up a new trick to better format my own posts.
		I suppose it won&apos;t matter for in-discussion, as again, most everyone doesn&apos;t care.
		But my discussion posts are also archived in my public journal.
		(Due to censorship from the school, all such journal entries aren&apos;t visible to the public until 2023.)
		The better my posts are formatted for the discussion assignments, the better they&apos;ll be formatted in my journal.
	</p>
</section>
<section id="Unit4">
	<h2>Unit 4</h2>
	<p>
		This week&apos;s reading assignment was more sections in the $a[PHP] manual.
		Oddly, the text of the link said <code>index.php</code>, though the actual target of the link was <code>install.php</code>.
		I&apos;ve already read the manual front to back though, aside from the sections regarding each and every specific function; class; and constant the language has to offer.
		I tend to look up the functions I need for a given project, and ignore the rest.
		I just don&apos;t have enough room in my brain to store all the functions.
		I can&apos;t even remember the functions I&apos;ve used, and have to look those ones up again too to get their exact usage before calling them in my code.
	</p>
	<p>
		The $a[PHP] tutorial lessons we&apos;re completing this week as our main assignment said that $a[PHP] arrays are always indexed from zero.
		I find it very annoying when arrays are indexed from one; it makes it difficult for me to work in the language, as I&apos;m always assuming an initial index of zero, and it leads to bugs when that assumption proves to be false.
		$a[PHP] arrays aren&apos;t actually <strong>*always*</strong> indexed from zero though.
		When using an array, you can set any arbitrary key you like.
		In this way, they&apos;re more like tables from other languages.
		However, $a[PHP] arrays also have an order, which is independent of the order of numeric keys.
		Instead, the order is determined by the order the keys get added to the array, until such a time that an array-sorting function is run to clean up that mess.
		If you don&apos;t choose a key to assign a value to, $a[PHP] starts assigning numeric keys automatically.
		If no non-negative integer keys have been used in the array yet, the key <code>0</code> gets assigned.
		If any non-negative integer keys <strong>*have*</strong> been assigned, $a[PHP] assigns a key equal to the highest integer key that has ever been used in the array, regardless of whether that key is still in use, incremented by one.
		That means that if you never assign keys and you never remove key/value pairs from the array, your array will be indexed from zero.
		However, if you remove key/value pairs (for example: <code>unset(<var>\$var</var>[5]);</code>) or sometimes use specific integer keys, your array will have holes in it.
		And if you assign an initial integer key greater than negative two, your array will be indexed from that.
		(Assigning a key of negative one still results in a continuous chain of integer keys because $a[PHP] assigns zero as the next key regardless of what negative number you used, and negative one just happens to be the integer just before zero.
		So by coincidence, it actually works.)
		So by these mechanics, you can index arrays from one, or from five, or from seven hundred two.
		$a[PHP] will allow all of these options.
		That said, I don&apos;t recommend doing that.
		Indexing from zero just makes everything easier.
		Just don&apos;t make the invalid claim that $a[PHP] arrays are <strong>*always*</strong> indexed from zero.
	</p>
	<p>
		It&apos;s interesting that the lesson made a point of specifically telling us that multidimensional arrays can have as many dimensions as you like.
		Most languages don&apos;t have true multidimensional array support.
		(I&apos;ve only seen one language with true multidimensional array support, and I think that was R.)
		Instead, what most languages offer is nested arrays.
		$a[PHP] is one of these languages.
		Semantically, as long as you nest your arrays the same depth on every branch, it makes no difference.
		You&apos;ve got yourself a multidimensional array that you constructed out of single-dimensional arrays.
		Mechanically though, this does make a difference.
		If a multidimensional array is just an array whose values happen to be arrays themselves, each of those arrays can likewise be arrays.
		There&apos;s no limit to depth, because the arrays at each depth are treated no differently than the arrays at any other depth.
		There&apos;s no &quot;depth-three&quot; arrays, as far as the language and the interpreter are concerned.
		As for accessing the elements, the syntax used just selects the an element from the array at depth zero, then selects an element from that array, and so on.
		There&apos;s no limit to the number of selections made, so you can properly access values across as many dimensions as you need to.
		To put it simply, the lesson is completely correct in the assessment that you can build and use arrays of any (positive) number of dimensions, but it&apos;s a bizarre thing for the lesson to point out.
		Likewise, the lesson points out that you can chain together as many <code>elseif</code> statements as you want.
		Like arrays too, you can nest <code>if</code> statements arbitrarily deep.
	</p>
	<p>
		I appreciated the exercise in which we filled in the blanks for the gender code.
		In my other course this term, we recently had an example in storing binary values, and the example used gender.
		As an individual of non-binary gender, I found that annoying.
		I actually use the name Alexand(er|ra), in the style of a regular expression, as my name in informal writing.
		Legally, I&apos;m just Alex.
		I have no need for a gendered name such as Alexander, Alexandra, Alexandria, Alexa, or Alexia.
		However, the lesson in this course had us complete code that offered &quot;other&quot; as an option.
		Not everyone fits into a binary box, and I appreciate when people don&apos;t try to shove a round peg like me into either a square or triangular hole.
		I just don&apos;t fit.
	</p>
	<p>
		In the discussion assignment, one of the other students asked me about how my pre-compiled webpages are implemented.
		It was fun to explain it, then go back into my code so I could link to the relevant parts.
		Like $a[DNA], the code&apos;s a bit of a mess due to the way it has evolved over time, but it shows that $a[PHP]&apos;s use in regards to webpages isn&apos;t limited to writing scripts that get run to build the pages every time a user requests them.
		Pages can be pre-built, sort of making the result like a cache, so they don&apos;t have to be rebuilt every time.
	</p>
</section>
<section id="Unit5">
	<h2>Unit 5</h2>
	<p>
		Our reading assignment for the week is <a href="https://opentextbc.ca/electroniccommerce/chapter/electronic-commerce-technology/">Electronic commerce technology - Electronic Commerce: The Strategic Perspective</a>.
		Now we&apos;re getting somewhere I&apos;m not actually familiar with yet.
		It looks like we&apos;re putting aside the $a[PHP] lessons.
		I&apos;m surprised, as we haven&apos;t finished the third-party course on $a[PHP] yet, but I&apos;m happy not to continue rehashing what I already have known about the language for probably over a decade.
	</p>
	<p>
		The article discusses $a[IP] addresses, calling them 32-bit numbers.
		I guess it was written before $a[IPv6]; now they&apos;re 32-bit or 128-bit numbers, depending on which version of $a[IP] you use.
		It calls these 32-bit numbers difficult for humans to recall, making $a[DNS] vital.
		You can compact these bits into seven digits, just like an old United States telephone number, back before you had to include the area code when placing calls.
		Yet somehow, people thought it was okay to use these numbers to identify other people and businesses on the telephone network, but not businesses on the Web and the rest of the Internet.
		It&apos;s sickening.
		And yet people, to this day, put up with the telephone number system as if it&apos;s okay.
		It&apos;s not.
		We need $a[DNS] for telephony, or we need to deprecate the telephone network.
	</p>
	<p>
		It goes on to discuss the layered setup of electronic commerce technology.
		It reminds me a lot of the layered structure of the Internet in general.
		If we were able to invent the perfect technology in one go, we&apos;d probably have a more-monolithic setup.
		It&apos;d be more efficient.
		However, building layers like this allows us to focus on one problem at a time, breaking the problem into manageable pieces.
		This accomplishes two other important tasks too.
		First, as better solutions come along, we can replace only the parts that need to be replaced.
		We don&apos;t necessarily need to reinvent the whole stack.
		Second, we can use the components for other purposes.
		Again, we don&apos;t need to reinvent things that are already a part of the stack if they happen to be useful in the context of some other stack as well.
		Layers and modularity offer not only ease of problem-solving, but also flexibility.
	</p>
	<p>
		We&apos;re introduced to the concept of Uniform Resource Locators ($a[URI]s), but that&apos;s an outdated and deprecated term.
		Sadly, it&apos;s still widely in use by people that don&apos;t know what they&apos;re talking about.
		Because this article is old, I&apos;m willing to give it the benefit of the doubt and say it was likely written back when &quot;$a[URL]&quot; was a valid term.
		However, modern sources have no such excuse.
		The correct term is &quot;Uniform Resource Identifiers&quot; (&quot;$a[URI]s&quot;), as these strings don&apos;t always locate anything.
		Sometimes, they specify a location, but other times, they specify a registered name of some sort with no information on how to track down the indicated resource.
		A $a[URI] identifies a resource, but that identity may or may not actually contain information about the resource&apos;s location.
	</p>
	<p>
		Next, we&apos;re introduced to using both $a[PDF]s and $a[HTML] as formats for pages.
		Especially back in the day, $a[PDF] was an absolutely <strong>*terrible*</strong> format for Web documents.
		Like the article says, you needed Adobe software to render them.
		If Adobe refused to offer a version for your system, you were out of luck.
		If Adobe did offer a version for your system but you don&apos;t trust Adobe (Why <strong>*would*</strong> you trust Adobe?), you were out of luck.
		These days, $a[PDF]s are usable in third-party software, though the filling of forms meant to be sent over the network (as in you fill out the form, then hit a submission button within the $a[PDF]) still don&apos;t work in third-party software, at least last time I checked.
		The article also mentions that Adobe-supplied $a[PDF]-creation software is more expensive than webpage-building software.
		I&apos;ve got to agree with that, though $a[PDF] creation these days doesn&apos;t need to depend on Adobe software at all, due to other software providers having entered the market.
		Even back then though, a basic text editor, which probably came standard with every operating system even in those times, is all that&apos;s needed for writing $a[HTML].
		Personally, I use something a little more complex these days with syntax highlighting, but this isn&apos;t necessary for basic webpage development.
		The page claims $a[PDF]s to be better for both forms and information, while not asserting that $a[HTML] is better for anything in particular.
		I completely disagree.
		For printable forms, yes, $a[PDF]s are the way to go.
		For all other forms and for information, $a[HTML] is a much better option.
		It&apos;s easier for your users to work with and scales well the user&apos;s screen size and resolution.
		For example, someone with poor eyesight might configure their browser to display text in a larger font.
		This works well in $a[HTML].
		It doesn&apos;t work at all in $a[PDF]s.
		A bit later though, I found the issue with the article: it&apos;s biased.
		Adobe is used as the primary example of a company with a website as well, showing that the Adobe company, and thus it&apos;s products, are probably favoured by the author or the author&apos;s employer for one reason or another.
	</p>
	<p>
		I was well aware of both the Internet and intranets, but I hadn&apos;t heard of extranets before.
		I didn&apos;t do any side research on them to verify, but the impression I got was that they connect two usually-adjacent intranets.
		It also seems like an intranet can be a part of multiple extranets.
		It&apos;s an interesting concept, and allows compartmentalisation while also restricting access to only the departments of an organisation that need it.
	</p>
	<p>
		The article takes a very naïve look at firewalls.
		It says they&apos;re used to keep network traffic from the Internet from reaching the intranet.
		This is one common use for them, but not the only use available.
		For example, the firewall on my own computer instead prevents rogue applications from accessing the Internet!
		The firewall is used in reverse of what the article describes, allowing only the single application I allow to access the Internet to be able to transmit any information.
		This application is my proxy software.
		Any other application wanting to access the network has to play nicely with the proxy.
		If it tries to leak data outside the encrypted channel, the firewall prevents it from doing so.
		Firewalls simply allow or block traffic from one end or both, based on some criteria defined by their configuration.
	</p>
	<p>
		The article talks about the problem with single-key encryption.
		How do you safely transmit the key?
		The article makes the statement that telephone and fax machines can&apos;t be trusted due to not being completely secure.
		Not completely secure?
		That&apos;s a laugh.
		Try highly insecure.
		The telephone network has been shown to be completely unreliable when it comes to security.
		The United States government has even ordered back doors be put into the United States telephone network in the past.
		Even if you trust the government (and if your government is enforcing back doors, you shouldn&apos;t), back doors actually make it easier for crackers to get in.
		The government has deliberately weakened the encryption of the telephone network, over which bot telephone calls and fax messages travel, making it unsafe for use when doing anything remotely important.
		Like the article says though, public key encryption fixes the problem by making it so the public key doesn&apos;t need to be kept secure at all; only the private key does.
		The system is genius, if you ask me.
		Also like the article says, public key encryption can also be used to sign messages to verify their authenticity.
		For example, I sign my <a href="https://y.st./a/canary.txt">warrant canary</a> every week.
	</p>
	<p>
		The article explains that with electronic funds transfers, all transactions need to be sent through the banks.
		That seems rather obvious.
		What&apos;s not obvious though is that the banks are legally required to record all these transactions.
		I wasn&apos;t aware of that.
		That&apos;s creepy and unnecessary.
		I think I&apos;m going to try avoid using my cards from now on.
		I still need them for online purchases, but most of my purchases are in person, where cash is accepted.
	</p>
	<p>
		The article lost all credibility in my book when I got to the part where it mentioned a &quot;$a[PIN] number&quot;.
		Anyone who knows what a $a[PIN] is knows &quot;$a[PIN]&quot; stands for &quot;personal identification number&quot;.
		That means a &quot;$a[PIN] number&quot; is a personal identification number number.
		A number number?
		What even is that?
		Only idiots talk about &quot;$a[PIN] numbers&quot; or &quot;$a[ATM] machines&quot;.
	</p>
	<p>
		When starting my project for the week, I needed to figure out which of the three offered platforms to develop my e-commerce shop on.
		I knew nothing of any of the three, so I started out by seeing how the main sites of the company performed without JavaScript.
		Last time I had to use a specific platform for Web development in this course, I was required to use Wix, with no alternative.
		Wix is pretty terrible, and websites built with it don&apos;t function for users who don&apos;t use JavaScript.
		When you&apos;re building a site, why would you want to push away visitors that don&apos;t use JavaScript?
		And with the stores we have to build this week, that equates to pushing away would-be <strong>*customers*</strong>.
		Two of the three provided sample sites, so I knew the built sites would be fine.
		The third platform provided no samples, but the company websites were fine without JavaScript, which was a good sign.
		Next, I looked to see if any provided the option to directly modify the $a[XHTML].
		None listed this as a feature.
		I expected they wouldn&apos;t, but if one did, that&apos;d be the option I&apos;d use.
		Writing what you need in $a[XHTML] is much easier than trying to work with company-specific interfaces.
		The company-specific interfaces also often use drag-and-drop, with pixel-specific placements, which makes the pages created look terrible when viewed in a browser window that isn&apos;t the exact size expected by the company.
		Almost no one seem to use those exact sizes either, due to differing monitor sizes and window size preferences.
		Pixel-precise placements are almost always the wrong way to do things.
		Next, I looked at domain support.
		Could you use your own domain?
		It looks like for the paid tiers, all three offer that support.
		Two of the platforms are silent about what your address will look like if you don&apos;t pay though.
		The third tells you that you get a subdomain of their name.
		Without $a[XHTML]-editing support, I don&apos;t think any of these options are <strong>*worth*</strong> paying for, but even if they were, I&apos;m poor and I&apos;ve got nothing to sell.
		There&apos;s no sense in paying for a demo site I&apos;ll only have up for a couple weeks, so I went for the option that provides a subdomain.
		Subdomains are cleaner than subdirectories for identifying your business pages.
		As I went to check which of the three offered a subdomain so I could sign up though, I noticed that one of the other sites admitted to offering one of those terrible drag-and-drop, while the other offered &quot;full customisability&quot;.
		I was highly sceptical, thinking it likely didn&apos;t offer anything even <strong>*close*</strong> to full customisability, but I had to know what it offered.
		So I chose that platform over the one offering a clean subdomain.
	</p>
	<p>
		It turned out that that platform too offered a subdomain, but the registration page claimed I didn&apos;t fill out the $a[CAPTCHA] correctly.
		There was no $a[CAPTCHA].
		So I tried to register instead from my other browser instance, which has JavaScript enabled.
		Sure enough, there was one of those annoying Google reCAPTCHA $a[CAPTCHA]s that usually makes me fill put about twenty to thirty $a[CAPTCHA]s before it&apos;ll let me through.
		This time, it only made me fill out one.
		It&apos;s the first time in <strong>*years*</strong> that Google&apos;s reCAPTCHA hasn&apos;t done it&apos;s best to be an absolute thorn in my side.
		I found the $a[XHTML]-/$a[CSS]-editing option, so that was available like I&apos;d hoped, but it said I couldn&apos;t use that option without paying.
		Too bad.
		Looks like the list of features on the gratis version was a lie.
	</p>
	<p>
		Direct $a[XHTML]-editing would definitely be best though, if I was to be preserving this site in my archive, so I tried the next company.
		This company&apos;s registration page was <strong>*also*</strong> broken without JavaScript, as they disabled the &quot;submit&quot; button until the JavaScript detects that all fields are filled.
		It&apos;s an idiotic sort of design, really.
		If you&apos;re going to set up a page like that, the proper way is to not disable the button in the original $a[XHTML] of the page, but instead disable it using JavaScript.
		That way, it works as you want it to when JavaScript is enabled, but it&apos;s not completely broken when it&apos;s not.
		Even with JavaScript checking these things, you can never trust data from users to be valid or complete, so you&apos;ve got to check all that on the server side anyway.
		Allowing people without JavaScript to use your site shouldn&apos;t even be a question; you just do it, every time.
		I copied the address and was about to paste it in the other browser instance when I remembered I could instead just edit the page to enable the button.
		See?
		You can&apos;t trust your users, so JavaScript isn&apos;t a proper validation mechanism.
		You&apos;ve got to double check it server-side.
		I&apos;d signed up for the wrong site though, and this was the one with the drag-and-drop editor.
		Oops.
	</p>
	<p>
		The final site completely disabled the registration page if JavaScript was disabled, so I had to take that to my other instance.
		Google&apos;s reCAPTCHA was back, this time refusing to send me any $a[CAPTCHA]s at all, on the grounds that it thought I might be an automated script.
		Isn&apos;t that what $a[CAPTCHA]s are for though?
		To allow people to prove they&apos;re <strong>*not*</strong> automated scripts?
		reCAPTCHA&apos;s pretty stupidly programmed.
		So I waited, and gave Google some time to stop being an idiot, and they send me five $a[CAPTCHA]s to fill out.
		Again, a pretty small number compared to what they usually throw at me.
		It turned out this platform offered no real customisation either.
		All you could do is change the colours, though that&apos;s more than the first site would let you do on an unpaid subscription.
	</p>
	<p>
		I ended up going back to the first company, but while they said you could put text on the store front page, I never could figure out how to do it.
		That text was needed so we could include the accepted payment types information, which the assignment required.
		Otherwise, a student would have to register an account and attempt to make a purchase before they could see what payment options are available, and for that method to even work, each of us students would need to actually have a payment system in place.
		I doubt many of us have the accounts set up to accept payments online, and I know I sure don&apos;t.
		I ended up writing up the necessary information, then taking a screenshot of the text and uploading it as the banner image.
		There&apos;s no way to disable the banner image rotation though, and each of the three banner images has a default image.
		Even if you upload one banner image, the other two defaults will be used unless you upload three banner images.
		So to keep the text in place, I uploaded three copies of this screenshot.
		This is why I don&apos;t like working with pre-built website systems.
		Your options are severely limited, and you&apos;ve got to do hacky things to get your needs met, if you can even get them met at all.
		I much prefer to build from scratch, or at the very least, be able to modify the code of an existing page, so as to actually have control over the $a[XHTML].
	</p>
	<p>
		The assignment instructions say to save the $a[URI] of our e-commerce website in a word-processing document and upload it.
		That&apos;s rather asinine.
		Students then hove to download that document, then copy the $a[URI] from it and paste it into their Web browsers.
		Why not just submit the $a[URI] in the submission form?
		I did as the instructions said to do, but also provided a direct hyperlink so users don&apos;t have to download that stupid file.
	</p>
</section>
<section id="Unit6">
	<h2>Unit 6</h2>
	<p>
		The reading assignment for the week was as follows:
	</p>
	<ul>
		<li>
			<a href="https://magazine.joomla.org/issues/issue-aug-2018/item/3351-10-reasons-to-choose-joomla">10 Reasons to choose Joomla</a>
		</li>
		<li>
			<a href="https://www.joomla.org/about-joomla.html">Joomla! - Content Management System to build websites &amp; apps</a>
		</li>
		<li>
			<a href="https://www.computerworld.com/article/2494863/open-source-tools-choosing-an-open-source-cms-part-2-why-we-use-joomla.html">Choosing an open-source CMS, part 2: Why we use Joomla | Computerworld</a>
		</li>
	</ul>
	<p>
		Hilariously, one of the links is broken in the study instructions, because it uses the invalid <code>ttps:</code> scheme instead of the valid <code>https:</code> scheme in its $a[URI].
		Hopefully that&apos;ll be fixed in a later term.
	</p>
	<p>
		I&apos;ve found a way to squeeze just a little more time for coursework out of my week.
		I really hate to skip my weekly $a[LUG] meetings, though I did have to skip one this term to fit in a heavy study session.
		They&apos;re the only socialising I&apos;ve managed to fit into my week.
		I&apos;m just so busy.
		I can&apos;t get much studying done at the meetings either, as I don&apos;t study well with people around.
		However, I can get the grading for the week done during those meetings.
		That way, I don&apos;t have to grade when I&apos;m at home, leaving me more time to study.
		I think I&apos;m going to do that during future terms, too.
		There&apos;s no real reason not to, and allows me to be productive without having to give up my meetings.
	</p>
	<p>
		Oddly, the final of the three aspects we&apos;re supposed to grade on had no instructions at all.
		There was no indication as to what we were supposed to be grading on.
		So I just gave everyone full credit on that.
		The first student failed to make their site visible to the world, so that was that was the only aspect I was able to give them any credit on.
		I was quite surprised, as I&apos;d been rather paranoid, myself.
		I checked my website repeatedly from an anonymous browser window, which was part of how I debugged the fact that my header image wasn&apos;t displaying correctly.
		The other two students did well though.
	</p>
	<p>
		Joomla!, needed for this week&apos;s unit assignment, seems to employ those obnoxious Google reCAPTCHA $a[CAPTCHA]s like the sites last week.
		Again, Google refused to send any $a[CAPTCHA]s at first on the grounds that I might be a robot, which again, is idiotic because the whole point of providing the $a[CAPTCHA]s is to provide a chance for me to prove I&apos;m not.
		For over an hour, Google refused to send $a[CAPTCHA]s.
		It made me question the discussion assignment.
		We were supposed to find jobs we could apply for in relation to Joomla!, but if it takes over an hour to even get <strong>*started*</strong> on a Joomla! project, such a job really isn&apos;t worth my time.
		I did some research though, and found Joomla! is not only this website, but also a free software product the same company offers.
		A Joomla! job need not have anything to do with this website.
	</p>
	<p>
		I couldn&apos;t find any jobs listed as being for Joomla! developers or Joomla! administrators on Indeed, even without narrowing my search to my country, as the discussion assignment asked us to do.
		Several jobs made mention of Joomla!, though they didn&apos;t seem Joomla!-centric.
		On the other hand, LinkedIn  provided plenty of results for this query, but wouldn&apos;t allow me to see any of them due to not having an account there.
		They&apos;ve got what looks like a blurred page just behind the login form, but it&apos;s actually a lie.
		I tried deleting the block from the page, only for the blurred page to vanish too.
		So I reloaded and compared the blurred page to the blurred page of another listing.
		They were identical!
		It&apos;s not actual page information, but a decoy to keep you from seeing anything.
		They insist that they be able to track what you&apos;re looking at and tie it to your real-world identity, and to do that, they demand that you register.
		Creepy.
		I&apos;ll pass.
		There is <strong>*zero*</strong> valid reason to need an account to browse listings of <strong>*anything*</strong>.
		Only once you start <strong>*applying*</strong> for the jobs does it make sense for a user to go ahead and create an account.
		People that would fall for this sort of garbage and allow themselves to be tracked that way are idiots.
	</p>
	<p>
		I briefly considered trying to set up an account with a temporary email address, allowing them to track me for that single session and losing me afterwards, but I quickly remembered that we&apos;ve got to provide links to the job listings we look at.
		I could not in good conscience link these listings to other students and require them to register as well.
		I&apos;m not a monster, after all.
	</p>
	<p>
		I didn&apos;t have time to complete my unit assignment on the first day, but I set up the account as quickly as I could, as I wanted to snag my preferred subdomain.
		I wanted the one with this course&apos;s course code as the part I got to choose, and didn&apos;t want someone else taking that one out from under me before I could get to it.
		If they did, I&apos;d have to come up with something else, and it just wouldn&apos;t be as cool (at least not if I couldn&apos;t use a subdomain of my own domain).
		So I grabbed that subdomain, then put the project down and worked on more-urgent coursework.
		The next day though, I finished up the urgent stuff, and figured I might as well get started on the Joomla! project.
		I wasn&apos;t sure how long it&apos;d take, as it had a boatload of steps, but I thought I could get something done on it in the couple hours I had before work.
		The hardest part was finding my way back to that first page the instructions said to grab a screenshot of.
		After that though, everything else was a breeze.
		There were a lot of steps, but they were all micro-steps, not large tasks we had to accomplish.
		I ended up getting it all done before work.
	</p>
	<p>
		The first page of the reading assignment was about why we should choose Joomla! as our $a[CMS].
		The first reason was that Joomla! has won twenty different awards.
		Honestly, I don&apos;t find that impressive.
		There are tonnes of things that win lots of awards while still being utter piles of garbage.
		For example, take almost any award-winning movie.
		Watching it is probably a waste of your time.
		At least for me, on the rare occasion I used to watch movies, I was disappointed by them almost every time, no matter how many awards they&apos;d won.
		So you won twenty awards in thirteen years?
		Big deal.
		I don&apos;t know if any of those awards actually mean anything.
	</p>
	<p>
		Next, it was said that Joomla! is free software.
		That&apos;s something I care about.
		If a given $a[CMS] isn&apos;t free software, I can guarantee I won&apos;t choose it.
		That doesn&apos;t mean if it&apos;s free software, I <strong>*will*</strong> choose it, but it being free software gives me a bunch less reasons to rule it out.
		I don&apos;t have to worry about licensing agreements, obnoxious product keys, $a[DRM], bugs that I&apos;m not allowed to fix even if I find them, being disallowed to read and understand what the software is doing to my machine, supporting developers that hold back society by hiding their source code, vendor lock-in, and a bunch of other problems.
	</p>
	<p>
		Next, it covers that Joomla! has seventy-five language choices available.
		Nice!
		And it appears support for these languages is complete, judging by the fact that you can even <strong>*install*</strong> Joomla! in any of these languages.
		That means that even the installer itself has support for these languages.
		It&apos;s nice to see efforts being made to make the product work for as many users as possible.
	</p>
	<p>
		Next, the article talks about security.
		First, it talks about two-factor authentication.
		From the sounds of it, the <strong>*only*</strong> two-factor authentication mechanism available in Joomla! is via $a[SMS].
		As far as I&apos;m concerned, that&apos;s a point <strong>*against*</strong> Joomla!.
		I&apos;d much rather see no two-factor authentication at all than requiring people to have $a[SMS]-enables mobile devices to enable two-factor authentication.
		I don&apos;t mind $a[SMS]-based two-factor authentication being available, <strong>*if and only if*</strong> an alternative two-factor authentication option is available for people who either don&apos;t use the telephone system or have no desire to provide their telephone number to the host of the Joomla! instance they use.
		There is <storng>*way*</storng> too much artificial dependence on telephones in the modern world, and it seems Joomla! is adding to that problem.
		It&apos;s not that hard to offer two-factor authentication via email as an option too.
		The article also talked about the B-Crypt algorithm being used for password storage.
		I wasn&apos;t sure what that was off-hand, but it sounded like passwords were being reversibly encrypted, which is something that you should <strong>*never*</strong> do.
		Any system that does that can be immediately written off as not being secure.
		I looked it up though, and B-Crypt is an algorithm based on the Blowfish hash.
		Passwords are being hashed, not reversibly encrypted, which is how it should be.
		It seems B-Crypt can be adjusted too, making it more and more computationally expensive to perform to account for increases in processing power.
		That&apos;s pretty awesome.
		That certainly makes it easier to stay secure as time goes on and adversaries have access to faster computers.
		B-Crypt also uses a salt, preventing the use of rainbow tables.
		Again, a plus.
		The article also briefly mentioned the bug-squashing security team of the project.
	</p>
	<p>
		Next, the article covered that Joomla! is easy to use and you don&apos;t need to be a developer to use it.
		This is a double-edged sword though.
		It almost certainly means that you can&apos;t get as good of results as if you were working with something that did require development skills to use.
		You don&apos;t have confusing options to figure out, making it easy to use, but you also don&apos;t have powerful options to use, meaning you&apos;re limited in what you can do with the product.
		The article also mentions the Joomla! manual; the manual is a vital resource for users of any large project.
		You really shouldn&apos;t be choosing a product that <strong>*doesn&apos;t*</strong> have a manual.
		And apparently, there&apos;s also a forum full of Joomla! users, but even if the Joomla! team didn&apos;t provide this, several such forums would spring up on their own.
		The main advantage of the developers providing this is that there&apos;s a better chance for there to be one unified forum on the topic and not fifteen forums with different experts, where you - for example - have to guess which forum to use to get help with which problem.
	</p>
	<p>
		Apparently, there are over seven thousand extensions, too.
		That figure sounds impressive if you&apos;re not paying attention, but the <strong>*quality*</strong> of such extensions isn&apos;t mentioned.
		More importantly, we&apos;re not told anything about the licenses they are offered under.
		It&apos;s made clear that most if not all of these extensions are provided by the community, so the level of skill and the attitudes of the developers will vary.
		As I said above, I wouldn&apos;t consider a proprietary product.
		As far as I&apos;m concerned, a proprietary extension to a freely-licensed product doesn&apos;t count toward that product&apos;s list of available extensions, so right off the bat, I&apos;m not sure if seven thousand extensions are honestly available.
		And after that, you need to rule out the badly-designed extensions.
		They count toward the number of available extensions, but not toward the list of ones you&apos;d actually want to use.
		And this smaller figure is what&apos;s going to matter when you&apos;re looking at the probability of one of the extensions filling whatever need you might have.
		Don&apos;t get me wrong, we in the free software community come out with a lot of very useful code.
		We&apos;ve even developed an entire operating system composed of nothing but freely-licensed works: Debian, the best operating system I&apos;ve ever had the pleasure to use.
		However, we also come out with a lot of utter garbage.
		It&apos;s a mixed bag with us.
	</p>
	<p>
		The article goes on to boast that Joomla! has a world-class user interface.
		I tested this though.
		The interface doesn&apos;t even function without JavaScript enabled.
		It&apos;s not just that it&apos;s harder to work or something.
		Vital functionality such as editing posts, which is a feature that in no way requires client-side scripting, is missing in Joomla! for users that choose not to use (or are unable to use) JavaScript.
		This is in no way a &quot;world-class&quot; user interface.
		It doesn&apos;t even meet the $a[W3C]&apos;s basic accessibility guidelines.
		Calling this a world-class user interface is a bold-faced lie.
		There are no two ways about it.
		It doesn&apos;t matter how many advanced features you have; if you don&apos;t have the basic necessities as well, you&apos;re not world-class.
	</p>
	<p>
		Next, the article boasts that the Joomla! community and the Joomla! developers never stop, so work on Joomla! will continue for the foreseeable future.
		As they put it, it&apos;s a future-proof solution.
		That seems legitimate.
		One of the problems with proprietary software is that the developers could stop supporting it at any time, and if they do, you&apos;re out of luck.
		There&apos;s nothing you can do about it.
		In fact, the Joomla! team likewise might discontinue the project.
		That&apos;s simply a fact of how development works.
		However, because the code is available and freely-licenced, the project can be picked up by anyone.
		With a large community, the project will continue even if the Joomla! team disbands, though the project may need to take on a different name and may end up splintering into several projects with the same original code base.
		Even if the Joomla! team disbands, if the community picks the project back up (and they most likely would), individual members from the former team may even contribute on their own time, or due to being funded by companies wanting continued support.
		Because the project is freely-licensed, it&apos;s much harder for the project to truly die.
	</p>
	<p>
		And finally, the article touts that Joomla! is powerful, due to its high level of extensibility.
		I&apos;m not sure how true this is.
		I mean, above, I pointed out a part of the article that was clearly lying, and combined with the fact that this article was written by the Joomla! developers and therefore biased, I&apos;d take this with a grain of salt.
		However, even if you did hit limitations in what Joomla!&apos;s extention $a[API] could do, there&apos;s still the fact that Joomla! is written in an easy-to-understand language ($a[PHP]) and the fact that you&apos;re legally allowed to change the source code as you wish.
		You can edit Joomla! itself to get it to do whatever you need it to do.
		Even if you did have to add certain features yourself, there&apos;s a good chance that using Joomla! as a starting point instead of building from scratch would save you a lot of time and effort.
		A better idea would be to add the $a[API] features you need to put your functionality in an extension, then try to get the developers to accept your changes into the main project.
		From your perspective, this would mean that you wouldn&apos;t have to run your own custom fork of Joomla!, which would make upgrading much easier.
		From the standpoint of the community, it&apos;d mean that they too would have access to the $a[API] features that were useful to you, and might be useful to them as well.
	</p>
	<p>
		Overall, I&apos;d say the article convinced me <strong>*not*</strong> to choose Joomla!.
		It&apos;s by far not the worst option out there, and is in fact rather decent from what I can tell, but it&apos;s not what I would look for in a $a[CMS] or any other Web-based system.
	</p>
	<p>
		The next page we were assigned to read was more of an official page from the Joomla! team.
		Most of the points covered were already covered by the previous article, but there were a couple new key things.
		The first thing I noticed was that Joomla! provides various permission levels to different users.
		That&apos;s a very powerful tool.
		Next, the extension repository is described as being mostly composed of freely-licensed extensions.
		Y&apos;know that seven thousand extensions figure that I said we needed to take with a grain of salt due to having no context for it?
		Yeah, most of those are in fact usably-licensed, so the actual relevant figure actually is close to seven thousand after all.
		Good to know.
		How close though?
		I&apos;m unsure.
		I like to assume that &quot;most&quot; means at least seventy-five percent though.
		And if the Joomla! community is strong enough, it could be upwards of ninety even ninety-five percent.
		That leaves a pretty healthy number of available extensions.
	</p>
	<p>
		The final page of the reading assignment was interesting because it was written by someone outside the Joomla! team.
		That made it much less biased.
		If you&apos;re looking for information on how to use a product or any of it&apos;s features, your best bet is to read work by insiders.
		But if you&apos;re trying to find out if that product is even worth using, you&apos;re get better results reading work by outsiders.
		That article spouted off some figures such as those relating to popularity of Joomla!, though those don&apos;t tell you what you need to know.
		For example, Windows is the most-popular desktop operating system.
		However, it&apos;s also the least-secure and has a bunch of spyware built in that phones home to Microsoft.
		Windows is by far <strong>*not*</strong> the best operating system, yet it&apos;s still the most popular.
		Popularity of software is partly determined by quality, but it&apos;s also determined a lot by culture.
		One thing this article did mention though is the scale of user-friendliness to feature-fullness.
		It looks like Joomla! takes a middle ground.
		As I was worried about missing features, I&apos;d be more likely to choose Drupal, as it was said to be closer to the feature-full side of the spectrum.
		I&apos;d need to actually do some research first though to find out if Drupal was good for a given project.
		And before that, I&apos;d need to <strong>*have*</strong> a given project.
		I&apos;m pretty sure no $a[CMS] is right for my main personal website, though it might be awesome for other projects I may work on.
		Another big thing it talks about is how the Joomla! team is pretty democratic.
		The project sometimes gets delayed due to the team not agreeing on things.
		The alternative is for one person to be in charge, breaking all ties.
		I like the democratic approach better.
		It slows development at times, but I think it&apos;s more likely to lead to a better end result.
	</p>
</section>
<section id="Unit7">
	<h2>Unit 7</h2>
	<p>
		The reading assignment for the week was as follows:
	</p>
	<ul>
		<li>
			<a href="https://docs.joomla.org/Creating_a_simple_module">Creating a simple module - Joomla! Documentation</a>
		</li>
		<li>
			<a href="https://docs.joomla.org/J3.x%3ACreating_a_simple_module/Developing_a_Basic_Module">J3.x:Creating a simple module/Developing a Basic Module - Joomla! Documentation</a>
		</li>
		<li>
			<a href="https://docs.joomla.org/Module">Module - Joomla! Documentation</a>
		</li>
		<li>
			<a href="https://tutorialspoint.com/joomla/joomla_create_modules.htm">Joomla Create Modules</a>
		</li>
	</ul>
	<p>
		I tried to get the Weblinks module running on my Joomla! site, but no matter what I tried, I couldn&apos;t get any visible change.
		I had two Weblinks widgets supposedly in place, but neither was actually visible for some reason.
		I finally had to give up and choose a different module to install for the unit project.
		However, every other module I could find either was of no interest, required payment to download, or made the download process too confusing to figure out.
		One of them, for example, presented a page written in German when I clicked &quot;Download&quot;.
		I don&apos;t read German.
		Yet.
		I wasn&apos;t sure what to do on that page.
		I certainly don&apos;t mind paying for software as long as it&apos;s under a free licence, and some of the interesting paid modules were, but I don&apos;t want to pay for a module I&apos;ll use once for this assignment and then throw away.
		I&apos;m not keeping this Joomla! site once the course is over.
		I&apos;ve already got a website to manage.
		I didn&apos;t want to choose some module I didn&apos;t think was actually useful, so I had to give up on giving up, and go through every option the control panel offered to try to debug Weblinks.
		It turns out there&apos;s an unintuitive third section you have to configure.
		If you want Weblinks to work, you&apos;ve got to configure it in three different places in the control panel.
	</p>
	<p>
		I added two links to the page: one to my home page and one to my learning journal for this course, which itself links to the rest of my coursework archive for this course.
		I couldn&apos;t get them to display in the correct order, which was unfortunate.
		This is one of the reasons I write my pages directly in $a[XHTML] and $a[PHP].
		I control the layout, and the page comes out exactly as I code it to.
		There&apos;s none of this garbage such as hyperlinks fighting to stay disorganised.
		It was good enough though.
		I had the module installed and running.
		No one would know the links were not in their intended order.
		That is, aside from people reading this journal entry.
	</p>
	<p>
		I thought the first page on creating modules for Joomla! would be about how to develop, in $a[PHP], modules for Joomla!.
		That would have been a fun exercise for the week.
		However, that&apos;s not what it was at all.
		Instead, it just told us that modules are blocks added to Joomla! pages and can be added on a per-page basis.
		So it didn&apos;t even mean modules as in extensions, but modules as in widgets.
		It didn&apos;t even explain how to use them, create them, or do anything with them.
		It was an entirely useless page, but it was also incredibly brief.
	</p>
	<p>
		The second page on creating modules was closer to what it said on the tin: it was a page on creating extensions for Joomla!.
		It didn&apos;t really get into the $a[API] at all, but it showed a very basic module with two $a[PHP] files.
		Due to one including the other (<code>require_once</code>), it&apos;s unclear whether both these files are actually needed or not.
		As best I can tell, only the first file is needed, and the second can be defined or not under any name, just to keep any classes you need in a separate file or files.
		It&apos;s good practice to put each class in it&apos;s own file, but for tutorials such as this, whether that&apos;s why it&apos;s in a separate file or not really should be mentioned.
		The nice thing though is that Joomla! modules are clearly written in $a[PHP].
		I mean, there&apos;s an $a[XML] file too, for metadata about the module, but the fact that the main code of the module is written in $a[PHP] means you can do just about anything with the module.
		Integrating the functionality with the main Joomla! functionality requires using the Joomla! $a[API], obviously, but you&apos;re not limited in how you can process the data you have access to.
		That makes customising Joomla! much easier, especially for odd use cases that the Joomla! developers would have no reason to think of.
		You don&apos;t have to worry that some use case can&apos;t be worked with in Joomla!.
	</p>
	<p>
		The third page first says that modules are extensions, then says that they&apos;re widgets.
		I get the feeling Joomla!&apos;s terminology isn&apos;t exactly consistent.
		An extension and a widget aren&apos;t the same thing, so to use the same word to mean both of them only leads to confusion.
		An extension is an executable that adds functionality to the code of the main program.
		In some programs, these executables are scripts, not compiled binaries.
		Joomla!&apos;s one of these programs.
		A widget is a user-facing interface item that can be added.
		Often times, they can be interacted with.
		Widgets can be added and removed from the interface, and can often times be moved to other parts of the interface.
		The main program can define widgets, but widgets can also often be defined by extensions.
		So in Joomla!, a module (read: extension) can define a module (read: widget).
		Separate terminology would definitely help.
		The page also explains that template designers choose where widgets show up, which is good to know, and uses the example that they module space labelled as &quot;left&quot; might not actually be defined to be on the left of the page.
		It probably should be on the left unless there&apos;s a specific reason not to put it there, but administrators should be aware that if they install custom templates, things may not be in the assumed places.
		If they show up in odd places, after the administrator checks to see that they chose the correct module spaces to put modules in, they should look to the template they&apos;re using to see if odd arrangements are coming from there.
		This is of course not an oversight in Joomla!, but a natural consequence of having a huge amount of customisability in the templates.
		Finally, the page lists a bunch of specific extensions and gives brief descriptions of what they do.
	</p>
	<p>
		The final page of the reading assignment again explains how to build an extension, but this time introduces a third $a[PHP] file.
		This third file appears to be called by Joomla! itself, just like the first, but no explanation of what it&apos;s actually for is made.
		The second file, the included one, is again included in this example, so we again don&apos;t see whether this specific file name actually means anything.
	</p>
</section>
<section id="Unit8">
	<h2>Unit 8</h2>
	<p>
		The reading assignment for the week was as follows:
	</p>
	<ul>
		<li>
			<a href="https://dyn.com/blog/can-businesses-manage-and-avoid-internet-performance-issues-to-boost-their-enterprise-systems/">Can Businesses Manage &amp; Avoid Internet Performance Issues to Boost their Enterprise Systems? | Dyn Blog</a>
		</li>
		<li>
			<a href="https://smartbear.com/learn/performance-monitoring/what-is-real-user-monitoring/">What is Real-User Monitoring? | SmartBear</a>
		</li>
		<li>
			<a href="https://stackify.com/web-application-problems/">Web Application Performance: 7 Common Problems and How to Solve Them</a>
		</li>
		<li>
			<a href="https://www.monitis.com/blog/what-is-synthetic-transaction-monitoring-and-who-needs-it/">What Is Synthetic Transaction Monitoring | Monitis Blog</a>
		</li>
	</ul>
	<p>
		Images draw my eye, and the first student had the courtesy to upload their screenshots and embed them into the page instead of only uploading an obnoxious word-processing document containing screenshots like the instructions wanted us to do.
		I immediately noticed the sudoku on the homepage.
		A nice, short explanation was provided too for those that couldn&apos;t spot it though.
		The second student embedded images too, but they embedded the images from the assignment prior.
		I had no choice but to give them a zero.
		The third student uploaded a word-processing document at the assignment asks for, but their screenshots were so wide that they had to be scaled to fit in the document.
		It made grading very difficult.
		I couldn&apos;t see what was in the image very well.
		I couldn&apos;t fault the student for including the screenshots in a word-processing document, as they were only following the instructions, so I didn&apos;t mark them down.
		However, they were really stupid about how they did it.
		Joomla! has one of those annoying fixed-width set-ups that doesn&apos;t adapt to different monitor widths.
		The student&apos;s monitor was quite a bit larger, and they had their browser window at full width.
		This made the screenshots much wider than they needed to be, causing the scaling to need to shrink the relevant part of the image even further.
		I later went back to check the second student&apos;s work in case they had different images in an uploaded document.
		Sure enough, they had an uploaded word-processing document, and sure enough, it had very different images in it.
		I planned on docking them about ten percent for using the wrong images in the main submission, but before I got a chance to finish my report on the grading situation, I decided just to issue a warning and give them full credit.
		They&apos;d done the work, they only messed up the upload of their screenshots of the work.
	</p>
	<p>
		The section that mentions $a[DNS] traffic steering makes the claim that it can keep traffic on routes that are fast and secure.
		Speed is something I can definitely see being improved this way.
		But security?
		Not really.
		If your traffic is encrypted, all routes are secure.
		No one can eavesdrop on your communications.
		On the other hand, if your traffic isn&apos;t encrypted, no route can be assumed to be secure.
		This is particularly provable with a thought experiment.
		Imagine that you think you have a secure route to your server.
		One of your customers though lives in an area in which all outgoing traffic is monitored by adversaries.
		The traffic between them and your safer has to pass through your connection, but it also has to pass through <strong>*theirs*</strong>.
		If their end isn&apos;t secure, the connection as a whole isn&apos;t secure because it only takes one adversary getting information they shouldn&apos;t have for the security to be ineffective.
		Your customers can be anywhere, and you can&apos;t keep a path to everywhere secure.
		And that ignores the fact that $a[DNS] steering only changes which of your servers your customers will connect to anyway.
		It doesn&apos;t actually define the full route your traffic will be requested over.
		In other words, even if there was at least one secure route to each customer, $a[DNS] traffic steering wouldn&apos;t help you send traffic over it.
	</p>
	<p>
		As I finished the article, I thought the title was a bit odd.
		It posed the question of whether something could be done about performance issues on the Internet, which seems like an open-ended question in need of several possible improvement methods.
		Instead, the article mentions $a[DNS] steering, doesn&apos;t even explain it, then abruptly ends without providing a second improvement method.
		A better title would have been something about the usefulness of $a[DNS] traffic steering or something.
	</p>
	<p>
		I don&apos;t know what to think of real-user monitoring.
		My first thought was that it was creepy, though if you&apos;re only gathering performance metrics, maybe it&apos;s not so bad.
		However, you&apos;ve got to make certain assumptions to even develop your real-user monitoring system, and those assumptions will necessarily prove false for some use cases.
		This will result in some classes of user being unrepresented, and thus likely not taken care of.
		A great example of this is that the article talking about real-user monitoring said it can be useful to do the monitoring via JavaScript.
		Anyone not using JavaScript thus won&apos;t be counted, and accessibility problems in regards to JavaScript won&apos;t be dealt with.
		There&apos;s also the case of third-party scripts used to do the monitoring.
		If the script sets tracking cookies via $a[HTTP] (as opposed to via JavaScript methods), those cookies will be associated with the third-party site.
		First of all, that allows the third-party site to track users not only as they navigate your site, but also track them <strong>*across*</strong> websites, which is a privacy nightmare.
		Some of us don&apos;t allow cross-website tracking like that in our browsers.
		The PrivacyBadger extension does wonders in that regard.
		When such trackers attempt to track us across multiple websites, those trackers get blocked altogether.
		The JavaScript they send won&apos;t even be loaded, so even featured that don&apos;t rely on that noxious cross-site tracking won&apos;t function.
		Again, those of us not allowing our privacy to be stripped away aren&apos;t represented by this sort of real-user monitoring, even if we allow tracking within individual websites.
	</p>
	<p>
		The article talked about the massive amounts of data real-user monitoring generates.
		I figured that tied in with the data mining we&apos;d learned about before.
		But then I remembered that that probably wasn&apos;t in this course.
		It was in <a href="https://y.st./en/coursework/CS4407">the other course I&apos;m taking right now</a>.
		So I guess I&apos;ll summarise my thoughts again here: tracking your users and mining them for data, in the traditional sense, is incredibly creepy.
		The use case here is that you&apos;re not trying to learn about the users themselves and their spending/usage habits, and instead focussing on what&apos;s going wrong with your website itself.
		However, the high-end real-user monitoring systems described <strong>*do*</strong> track the user on every page and record the user&apos;s session, as stated by the article.
		Regardless of intent, the level of tracking here is very creepy, and as a user, I don&apos;t appreciate it.
		I don&apos;t care if you&apos;re able to make your website faster if it means you&apos;ve got to creep on me to do it.
	</p>
	<p>
		The next article opened with the assertion that a company&apos;s website acts as the face of the company, so it should display reliability, innovation, and progress, so as to convey that the company itself has these characteristics.
		I agree, but these aren&apos;t always the most important characteristics to convey.
		Reliability is definitely a good thing to convey.
		In the attempt to convey innovation and progress though, many companies take a step backwards, especially in the accessibility department.
		When you sacrifice accessibility, you send the message to your customers that they&apos;re not important to you.
		The bulk of your customers may not experience the issues and may not notice, but those with non-standard set-ups, such as blind people with screen readers, will end up choosing (and recommending to other people, including those with the specific setup your website requires) other companies over yours.
	</p>
	<h3>Epilogue: Unit 9</h3>
	<p>
		The final exam for this course was easy.
		I was given an hour to finish it, but I used less than ten minutes of that.
		It really helped that I&apos;ve already been doing Web work for over a decade as a hobby.
		As I always say, $a[PHP] is my native language.
		We were also tested on the <code>ls</code> command, which I have to use a couple times each week, and compilation commands, which I use much less frequently, but I still use on occasion.
		I&apos;m on a Debian system, so I have to know a few Linux commands.
		I guess that gives me a leg up on other students taking this course.
		The converse is also true though: I also had a severe handicap when I took <a href="https://y.st./en/coursework/CS2301/" title="Operating Systems 1">CS 2301</a>.
		(Despite being a &quot;course on operating systems&quot; (plural), the coursework all assumes everyone is running the same operating system.
		Some of the assignments were to take screenshots of you using very specific, Windows-only software, then write up a report on your experiences using the software.
		The only reason I was able to complete the work was because a fellow student lent me remote access to their own machine.
		I don&apos;t have any Windows machines locally.
		I ended up spending the term doing double the work, showing screenshots of the specific Windows-only software we were required to use - on a system I wasn&apos;t even familiar with - as well as screenshots for alternative software that ran on Linux, them writing up double- to triple-length essays explaining how to use both and that there&apos;s no particular reason the exact software the assignment specified to use was even required.)
		Everything I was tested on during the final for this course was things that I already knew before the class started, with the exception of the questions on Web 1.0 and Web 2.0.
		And while I now know the difference between the two, I&apos;m still not convinced that &quot;Web 1.0&quot; and &quot;Web 2.0&quot; are even valid concepts, or that there&apos;s a valid separation between the two.
	</p>
	<p>
		That&apos;s not to say I didn&apos;t learn anything in this course, just that I wasn&apos;t tested on it on the final exam.
		For example, before this course, I&apos;d had zero experience with a $a[CMS].
		I didn&apos;t even know what Joomla! was; I&apos;d never heard of it!
		I think with the way we set up our instances of it, we skipped past the actual Joomla! installation process.
		However, when it came to getting a module running correctly, I struggled, and had to explore the control panel quite a bit.
		With that experience, I would hope that it&apos;ll be much easier in the future should I ever work with Joomla! again.
		Now I know where to look for the settings I need.
	</p>
</section>
END
);
