<?php
/**
 * <https://y.st./>
 * Copyright © 2017 Alex Yst <mailto:copyright@y.st>
 * 
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 * 
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 * 
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org./licenses/>.
**/

$xhtml = array(
	'<{title}>' => 'Learning Journal',
	'<{subtitle}>' => 'CS 2205: Web Programming 1',
	'<{copyright year}>' => '2017',
	'takedown' => '2017-11-01',
	'<{body}>' => <<<END
<section id="Unit1">
	<h2>Unit 1</h2>
	<p>
		School started back up this week.
		I can already tell I&apos;m going to love <span title="Web Programming 1">CS 2205</span>.
		The first discussion forum activity is to read articles about and write about privacy issues on the Internet.
		The first assignment is to find three websites, run them through a validator, and report on their errors.
		The first learning journal assignment is similar to the type of assignment I like, where we summarise the week&apos;s learning, but without including our discussion posts in the journal entry submission.
		Score on all three fronts!
		Privacy on the Internet is a huge deal for me, but it feels like most people ignore this issue.
		Not only do I get to state my case, but every other student&apos;s going to have to do research too and maybe they&apos;ll start caring at least a little more.
		If I make a convincing argument, maybe I can help tip the scales.
		With any luck, other students will also make convincing arguments I can add to my own to help me better explain my position in the future as well.
		I knew I needed to work in something about $a[Tor] and the $a[NSA], at a minimum.
		Code validation has always been a thing of mine as well.
		I believe in writing accurate and valid code.
		I believe when Web browsers started accepting and attempting to parse invalid markup, they did the entire Web a huge disservice.
		It is for that reason that I always write in $a[XHTML], not $a[HTML], as Web browsers tend to throw errors and alert me to mistakes in $a[XHTML].
		Web browsers don&apos;t catch <strong>*all*</strong> errors in $a[XHTML], but they do catch the $a[XML] well-formed-ness errors that result from most of my typos.
		Whenever I&apos;m working with more-complex pages, I also always run my pages through the $a[W3C] validator, and I&apos;m a bit appalled that most pages on the Web these days can&apos;t even pass a transitional-level validation.
		Meanwhile, I always validate my own pages to the strictest standards; it&apos;s honestly not difficult at all and only a lazy hack would fail to meet even transitional-level standards.
		As for the learning journal assignment, I find I learn a lot better when I&apos;m able to write about what I learn as I go.
		The more detailed I get, the better the information sticks with me.
		Past courses here at University of the People that&apos;ve had this type of learning journal assignment have had the same assignment repeated every week all term, so I&apos;m guessing I&apos;ll be using this effective (effective for me, at any rate; everyone learns differently) learning tool through the end of the course.
	</p>
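	<p>
		To illustrate what I mean by writing to the strictest standards (this is my own sketch, not material from the course), a minimal $a[XHTML]5 page that&apos;s both well-formed $a[XML] and able to pass the $a[W3C] validator looks something like this:
	</p>
	<blockquote>
	<pre><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt;
&lt;!DOCTYPE html&gt;
&lt;html xmlns=&quot;http://www.w3.org/1999/xhtml&quot;&gt;
	&lt;head&gt;
		&lt;title&gt;Example&lt;/title&gt;
	&lt;/head&gt;
	&lt;body&gt;
		&lt;p&gt;A well-formed page.&lt;/p&gt;
	&lt;/body&gt;
&lt;/html&gt;</code></pre>
	</blockquote>
	<p>
		(Served with the <code>application/xhtml+xml</code> media type, a browser will refuse to render the page at all if it&apos;s malformed, which is exactly the error-catching behaviour I rely on.)
	</p>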
	<p>
		The reading assignment for the week was as follows:
	</p>
	<ul>
		<li>
			<a href="https://computer.howstuffworks.com/internet/basics/internet-versus-world-wide-web.htm">What&apos;s the difference between the Internet and the World Wide Web? | HowStuffWorks</a>
		</li>
		<li>
			<a href="https://www.w3.org/community/webed/wiki/How_does_the_Internet_work">How does the Internet work - Web Education Community Group</a>
		</li>
		<li>
			<a href="https://www.w3.org/community/webed/wiki/The_history_of_the_Web">The history of the Web - Web Education Community Group</a>
		</li>
		<li>
			<a href="https://www.w3.org/community/webed/wiki/The_web_standards_model_-_HTML_CSS_and_JavaScript">The web standards model - HTML CSS and JavaScript - Web Education Community Group</a>
		</li>
		<li>
			<a href="https://www.w3schools.com/website/web_validate.asp">404 - Page not found</a>
		</li>
		<li>
			<a href="https://validator.w3.org/docs/why.html">Why Validate?</a>
		</li>
	</ul>
	<p>
		One assigned page, properly labelled above, is a <code>404</code> error.
		I&apos;ve only noted it in hopes it&apos;ll be corrected for next term&apos;s students.
	</p>
	<p>
		First on the reading list was an article about the differences between the Internet and the Web.
		Yes!
		This should be a mandatory study topic for all Web users.
		I happen to know someone that often likes to talk about things they have no actual knowledge about.
		Sometimes, I&apos;ll mention the Web or the Internet in a conversation with them.
		I might say something applies to the Web, realise it&apos;s not limited to the Web, and correct myself to say it applies to the Internet in general.
		Likewise, I sometimes say something about the Internet, then realise it only applies to the Web, and correct myself.
		In either case, they claim my correction is unnecessary as they&apos;re the same thing.
		I&apos;ve tried explaining that the Web runs over the Internet but isn&apos;t the Internet, and I&apos;ve tried explaining the differences between the two.
		It&apos;s not a difficult concept to grasp: the Internet is the network of interconnected computers, while the Web is one of many services (which also includes email, $a[IRC], $a[XMPP], $a[SIP], $a[VoIP], and many others) that run on the Internet.
		They still refuse to believe they&apos;re not the same thing though.
		Then again, they also refuse to believe that if you flip two coins, there&apos;s a 75% chance of getting at least one heads.
		They insist that it&apos;s a 100% chance, but that you&apos;re not guaranteed to get a heads.
		Probabilities don&apos;t work that way.
		If something&apos;s not guaranteed to happen, you don&apos;t have a 100% chance of it happening, and you don&apos;t get to just add the two coins&apos; probabilities together like that; you multiply the chance of each coin landing tails (1/2 × 1/2 = 1/4) and subtract that from one, which is where the 75% comes from.
		They&apos;re certainly not someone to listen to when it comes to understanding how things actually work.
		While I think this should be mandatory study for Web users, the specific article assigned by the course is full of misinformation and shouldn&apos;t be the one studied by Web users that don&apos;t already know what they&apos;re talking about.
	</p>
	<p>
		The article says that as the Web is more easily understood than the Internet, we should start with explaining the Web.
		Of course the Internet is easier to understand; the Web uses the Internet as one of its components!
		Understanding the Web without understanding the Internet is like trying to understand what a bicycle is without having any idea what a wheel is.
		You can learn about bicycles as a whole without first learning about wheels, but by the time you&apos;ve finished understanding the bicycle, you&apos;re going to have a firm grasp on the concept of wheels and won&apos;t need to discuss them separately, at least not at length.
		If you want to discuss the differences between a bicycle and a wheel, you&apos;re going to want to start with the wheel.
		The article claims though that the Web is a system used to access the Internet.
		That&apos;s not <strong>*at all*</strong> true.
		The Web uses the Internet as a transport layer for relaying Web pages, other files, and client requests for Web pages and other files, but the Web is <strong>*not*</strong> a gateway to the Internet.
		If the Web were a way of accessing the Internet, for starters, it&apos;d provide Internet access.
		However, it does not.
		Different software components are needed to provide that access, and none of those components are related to the Web.
		Internet access must first be established before Web access is even a <strong>*possibility*</strong>.
		Second of all, if the Web provided Internet access, one would be able to access all (or at least most) of the Internet through the Web.
		That&apos;s not how it works at all though.
		Only files made available over the Web can be accessed over the Web, and as many parts of the Internet have nothing to do with files, the Web has no chance of offering access to those parts except through gateway servers (which means that other services are <strong>*translated*</strong> for transfer over the Web, not that the Web can handle those services in their native form).
		If anything, the Internet is subdivided into several different services.
		The underlying $a[TCP]/$a[IP] (or $a[UDP]) protocol is flexible and can allow new services (and thus new pieces of the Internet) to be added on, but each existing service is only one small part of the Internet.
		No service acts as a gateway to the content of the Internet as a whole.
		Web-based gateways to services such as email or $a[IRC] can allow someone to interact with other parts of the Internet through the Web, but these other parts are strictly separate from the Web; there&apos;s no overlap.
		These gateway services are possible only because the gateway server runs two types of services.
		One service is used to gather information that that same server reformats and provides via the other service.
		(That said, the Web, even though it&apos;s a subdivision of the Internet, can be subdivided further.
		Some services such as CardDAV run on top of the Web.)
	</p>
	<p>
		The article on how the Internet works brings up a couple interesting points.
		First, it says that $a[URI]s that use domain names as their host component act as aliases for $a[URI]s that instead use $a[IP] addresses.
		However, that&apos;s only half the story.
		$a[DNS] allows the translation of domain names into $a[IP] addresses.
		In this way, the domain names do sort of act as aliases for those $a[IP] addresses.
		Still, this is an extreme oversimplification even as far as $a[DNS] is concerned.
		However, the $a[HTTP] protocol is set up such that an <code>https:</code>- or <code>http:</code>-scheme $a[URI] that uses a domain name is <strong>*not*</strong> an alias for a similar $a[URI] that instead uses an $a[IP] address!
		First off, one domain name can point to multiple $a[IP] addresses.
		The Web server at each $a[IP] address could be serving a different website.
		One $a[URI] cannot be an alias for the $a[URI]s of multiple, completely-different pages.
		Using the $a[IP]-address-based $a[URI]s, you can choose specifically which site you want to visit, but with the domain name, you&apos;re left with the Web browser and the $a[DNS] server making that choice for you.
		I think the Web browser usually tries the first-listed $a[IP] address first, but the $a[DNS] server can be programmed to rotate through the $a[IP] addresses, changing the order, for a round robin effect.
		In practice, this case of multiple websites with the same domain isn&apos;t seen much, but it&apos;s a good example of why these aren&apos;t true aliases.
		A second example is more realistic though.
		The $a[HTTP] protocol, at least as of version 1.1, includes the host name in the request headers.
		The same server can send different websites based on whether you used the $a[URI] with the domain name or the one with the $a[IP] address!
		Back when I ran my own Web server, I set it up so that the website at the $a[IP]-address-based $a[URI] always redirected to the domain-based $a[URI], but I could&apos;ve instead served a second website there.
		Speaking of second websites, a Web server can send a different website based on <strong>*which*</strong> domain is used!
		Let&apos;s take my website for example.
		My name is Alex Yst, and my domain looks like my surname: <code>y.st.</code>.
		The $a[URI] of my homepage is <a href="https://y.st./"><code>https://y.st./</code></a>.
		The domain name resolves to <code>51.254.73.48</code> and <code>2001:41d0:c:b19:0:0:0:10</code>.
		If you try to load <a href="https://51.254.73.48/"><code>https://51.254.73.48/</code></a>, you instead see the website of my friend Opal!
		Why is that?
		Well, Opal&apos;s website, normally reached at <a href="https://wowana.me/"><code>https://wowana.me/</code></a>, is on that same server; she hosts both websites.
		<code>https://y.st./</code> cannot be an alias of <code>https://51.254.73.48/</code>, because the two $a[URI]s correspond to completely different pages!
		(Ostensibly, if you were to load the page at <a href="https://[2001:41d0:c:b19:0:0:0:10]/"><code>https://[2001:41d0:c:b19:0:0:0:10]/</code></a>, that&apos;d also load Opal&apos;s page and not mine, but I don&apos;t have $a[IPv6] service here at home to test that $a[URI] with.)
	</p>
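	<p>
		To make that concrete, here&apos;s a sketch of the two requests involved (simplified to the essentials): both reach the very same server, and only the <code>Host</code> header differs, yet the first is answered with my website and the second with Opal&apos;s:
	</p>
	<blockquote>
	<pre><code>GET / HTTP/1.1
Host: y.st.

GET / HTTP/1.1
Host: 51.254.73.48</code></pre>
	</blockquote>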
	<p>
		Second, the article says domain names are much more human-memorable.
		This is only one benefit of using domain names instead of directly using $a[IP] addresses, but it&apos;s an important one.
		It&apos;s one of the main reasons I think the telephone number system is incredibly poorly set up.
		We should be using $a[DNS] for telephone numbers too, not just $a[IP] addresses, and the current $a[DNS] structure could handle it (in the form of TXT records, maybe with host names in the form of <code>_telephone.example.com.</code> to refer to the telephone number that <code>example.com</code> refers to), but telephone makers haven&apos;t bothered to set that up.
		Instead, the telephone numbers, which were originally like $a[IP] addresses in purpose, were completely reworked.
		Telephone numbers now refer to entries in a lookup table, like $a[DNS] names, but without the readability, without the subdomainability, and without the ability for someone outside the telephone service industry to own and/or reserve a name.
		Telephone numbers now have the disadvantages of $a[IP] addresses (they&apos;re not easily human readable/memorable) and the disadvantages of domain names (computers have to look up what the number represents instead of using it directly), with hardly any of the advantages of either system.
		It&apos;s a real mess, and no one seems to care but me.
	</p>
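	<p>
		If telephone lookups were ever added to $a[DNS] in the way I&apos;m imagining, the zone file entry might look something like this (entirely hypothetical; no standard defines such a record, and the number is a made-up example):
	</p>
	<blockquote>
	<pre><code>_telephone.example.com. IN TXT &quot;+1-555-0100&quot;</code></pre>
	</blockquote>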
	<p>
		The section on types of Web content is a bit misinformed.
		I&apos;d quote it here, but the license is incompatible with my own due to the non-commercial requirement of the original document (I archive all my <a href="https://y.st./en/coursework/">coursework</a> on my website, released under the $a[GNU] {$a['GPLv3+']}, which allows commercial reuse).
		Basically though, it says Web files fit into four groups: text files, Web markup/script files ($a[XHTML]/$a[CSS]/JavaScript), server-side scripts, and files that require a browser plugin or non-browser program to read/run.
		So ... where do images fit in?
		Images are often used on the Web, but fit into none of those four groups.
		Furthermore, server-side scripts overlap with all other groups; they&apos;re not a group themselves.
		A server-side script needs to generate a file that&apos;ll be sent to the client, and that file must fit into another group (once we fix the issue of not all Web files being grouped at all).
		Server-side scripting is powerful though, and allows different content to be served under differing circumstances, so I can see why the authors of the wiki article would group it into its own category.
		I&apos;d say there are two groups of files on the Web.
		First, there&apos;s the standards-based language files mentioned, such as $a[XHTML] files, $a[CSS] files, and JavaScript files.
		And second, there are files that are simply taken as they are.
		Plain text files aren&apos;t special; they&apos;re just part of this second group, which also includes image files and such.
		Files that require a plugin or other application are just things the Web browser isn&apos;t programmed to handle; which files the Web browser handles varies between browsers.
		For most browsers, $a[PDF] files need to be downloaded and read with a separate application.
		However, Firefox allows these to be displayed in-browser.
		(With JavaScript disabled, though, this in-browser display feature simply breaks, and no error message is even displayed.)
	</p>
	<p>
		I have a friend that uses Gopher for their own server instead of $a[HTTP].
		I&apos;ve looked into the Gopher protocol myself, but it doesn&apos;t fit my needs.
		Specifically, it lacks a way to have the client interpret a file as $a[XHTML].
		$a[HTML] can be used, but without $a[XHTML], my files don&apos;t render properly.
		I think $a[HTML] is messy, so I&apos;m not going to switch to it any time soon.
		My pages don&apos;t render correctly when interpreted as $a[HTML] either.
		They&apos;re perfectly valid $a[XHTML] and $a[XML], but they make use of some of $a[XML]&apos;s cleanliness options that basic $a[HTML] simply doesn&apos;t offer.
		I was unaware, though, of the licensing issues mentioned in the history article that caused the shift away from Gopher.
		I thought the creation of $a[HTTP] was just because a more-flexible protocol was wanted.
		$a[XHTML] didn&apos;t exist at that time, so I knew $a[XHTML] support wasn&apos;t one of the issues in question, but the Gopher protocol does have restrictive limitations in what it can accomplish.
		The existence of the Mosaic $a[HTTP]/Gopher hybrid client was also new information to me.
	</p>
	<p>
		The browser wars are touched upon, but their severe effect on $a[HTML] isn&apos;t mentioned.
		Because of this war, browsers started attempting to parse and render pages with invalid markup.
		This resulted in lazy Web authors building poorly-coded pages that were malformed.
		To avoid breaking the Web, Web browser vendors to this day allow their browsers to interpret malformed $a[HTML] files, resulting in $a[HTML] being an utter mess.
		The laziness of Web authors still hasn&apos;t ended, and it&apos;s not going to any time soon.
		The $a[WHATWG] set up a standard for rendering malformed pages &quot;correctly&quot;, so there&apos;s no incentive for Web developers to take the proper care they should.
		Instead, Web browser vendors are stuck pouring effort into writing and maintaining code for the sole benefit of lazy developers that don&apos;t deserve the help.
		Clean code is something to care about and strive for.
		It is for this reason that I always use $a[XHTML] instead of $a[HTML].
		Web browsers still render pages with tags being nested within tags they shouldn&apos;t be, and they allow invalid tags, but at least my pages are forced to meet the basic $a[XML] well-formed-ness rules before they&apos;ll render, which catches most of my mistakes before I publish live.
		Whenever I publish a page using tags outside the basic set, I also use a validator to ensure the rest of the markup is fine as well.
	</p>
	<p>
		The interference of the $a[WHATWG] was covered too.
		I&apos;m still a bit peeved with them, myself.
		$a[XHTML]2 was set to be so much cleaner and feature-rich than $a[HTML], but a group of browser vendors that didn&apos;t want to write the code needed to support $a[XHTML]2 banded together to fight against it.
		They&apos;re the reason the Web moved toward $a[HTML]5 instead of $a[XHTML]2, even though $a[HTML] is messy and should&apos;ve been deprecated.
		Both for killing $a[XHTML]2 and for reviving $a[HTML], I think these people moved us in the wrong direction, but their treachery doesn&apos;t stop there.
		They also insist that $a[HTML]5 is a &quot;living standard&quot;, which is basically a rolling release of a specification, without even version numbers to distinguish between different past and present versions of the specification.
		In other words, it&apos;s not even a standard at all!
		It&apos;s a moving target that no one can ever hope to keep up with.
		The $a[W3C] periodically publishes static snapshots of the $a[WHATWG] specification though, so as long as we use the $a[W3C] version of the standard (which the stupid $a[WHATWG] discourages doing), we have a target we can actually hit.
		Currently, the $a[HTML] 5.1 specification is the latest available, but $a[HTML] 5.2 is in the works.
		Additionally and thankfully, the $a[WHATWG] also didn&apos;t fully kill $a[XHTML].
		An alternate, $a[XML]-based syntax for $a[HTML]5 is available, known as $a[XHTML]5.
		I use $a[XHTML] 5.1 for all pages I write, though if $a[XHTML] had been discontinued in its entirety, I would continue using $a[XHTML] 1.1 just to be able to continue using a cleaner markup language than $a[HTML] has become.
		Supposedly, the $a[WHATWG] were trying to preserve backwards compatibility, but this makes zero sense when you think about it.
		<code>&lt;!DOCTYPE&gt;</code> declarations exist for a reason: communicating to the client what version of the language is being used so the document can be correctly rendered.
		Despite the huge differences between $a[XHTML] 1.1 and $a[XHTML]2, the same client can easily distinguish which language version is used and display documents written in both versions completely correctly.
		Additionally, as the $a[WHATWG] mangled the <code>&lt;!DOCTYPE&gt;</code> declaration of $a[HTML]5 to the point that it contains no useful information, they&apos;ve <strong>*broken*</strong> the possibility of compatibility in the future.
		If their &quot;living standard&quot; changes too much, it&apos;ll prevent either new or old documents from being displayed correctly, as there&apos;s no way to distinguish between the two.
		Finally, it&apos;s important to note that when clients are required to fix and render invalid markup, as is required in $a[HTML], it consumes more system resources than are required for rendering markup that can be thrown out in case of malformation.
		The continuation of $a[HTML] is bad news for devices with limited resources, such as mobile devices!
	</p>
</section>
<section id="Unit3">
	<h2>Unit 3</h2>
	<p>
		As requested by my professor, I&apos;ve tried to be more brief in this journal entry.
		I was disappointed last week to see that we had no learning journal assignment, but maybe it worked out for the best, as I had two days to complete all my coursework that week.
		My laptop&apos;s motherboard died on me at the beginning of the week, and it took most of the week to get a functional system back up and running.
		It looks like we have no journal assignment this coming week, either.
	</p>
	<p>
		I registered for Codecademy as instructed this week, but I never could find any option on the site labelled as the $a[HTML] &amp; $a[CSS] track.
		Unsure what to do, I completed the separate $a[HTML] course and $a[CSS] course.
		We were told to complete six lessons, so I completed all three $a[HTML] lessons and the first three $a[CSS] lessons.
		Were these the right lessons?
		I&apos;m not sure, but it shouldn&apos;t matter too much anyway.
		I&apos;m fluent in $a[XHTML] and I know the basic $a[CSS] concepts that were probably covered in the correct lessons.
		The only difficulties I have in $a[CSS] are in advanced work, as I don&apos;t know all the fancier properties that can be worked with.
	</p>
	<p>
		In the $a[HTML] course, I noticed that fully-spelled-out attributes weren&apos;t covered.
		While the course covered the $a[XHTML] self-closing tags, it asked us to use the attribute name <code>controls</code> without providing any value for it.
		In $a[XML], that minimised form is invalid; the attribute needs to be spelled out in full, as in <code>controls=&quot;controls&quot;</code>.
		In the $a[CSS] course, I noticed that they recommended using the <code>id</code> attribute for elements that should be uniquely styled.
		Personally, I recommend against this.
		It&apos;s fine to have a class that&apos;s used by only a single element.
		However, the <code>id</code> attribute has functionality outside of styling.
		Specifically, it&apos;s used for hyperlinking to specific elements in a page.
		It therefore shouldn&apos;t be used in cases in which that non-styling semantic is desired.
		I&apos;d argue that the <code>id</code> attribute is also a good choice when you need to uniquely find a specific element from a JavaScript script.
		The uniqueness is an aid for that.
		You don&apos;t gain anything from using the <code>id</code> attribute for styling instead of a single-element class though.
		That said, if an element already has an <code>id</code> attribute for non-style-related reasons, there&apos;s no harm in using that <code>id</code> as a $a[CSS] selector as well.
		It&apos;s also worth noting that in the lessons, we set all the sizes in terms of pixels.
		That&apos;s a bad idea for accessibility reasons in many cases.
		At a minimum, no font size should be set using a fixed measurement such as <code>px</code>; it should be set using something relative, such as <code>em</code>.
	</p>
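	<p>
		As a quick sketch of what I mean (the class name and values here are my own invention), a single-element class with a relative font size might look like this:
	</p>
	<blockquote>
	<pre><code>.intro-note {
	font-weight: bold;
	font-size: 1.2em;
}</code></pre>
	</blockquote>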
	<p>
		The Codecademy website kept having issues, but I think it was fairly functional, for an interactive Web tutorial site.
		When it asked me to set the <code>font-weight</code> property of <code>&lt;p/&gt;</code> elements to <code>bold</code> though, it refused to accept my answer.
		Eventually, I gave up and told the site to give me the answer, and it provided the exact same answer I did!
		I don&apos;t know what the problem was, but I guess it doesn&apos;t matter a great deal.
	</p>
	<p>
		The main assignment this week was much nicer than last week&apos;s assignment; we worked with real $a[HTML] and $a[CSS]!
		Last week, we used that stupid Wix system, which is horrid in so many ways.
		I won&apos;t get into the details of last week&apos;s assignment though, as I expressed my thoughts on it already as the topic of my website submission.
	</p>
</section>
<section id="Unit5">
	<h2>Unit 5</h2>
	<p>
		Lovely.
		Another Codecademy assignment.
		Last week&apos;s assignment didn&apos;t match the actual content of the Codecademy website, so it was ambiguous as to how to even complete the assignment.
		This week&apos;s assignment seems to be the same way.
		This really isn&apos;t my week.
		In this course, the assignment is ambiguous because the website it depends on has changed since the assignment instructions were constructed, and in my other course, I have an assignment that I can&apos;t complete because the assignment depends on students having a particular, expensive operating system that I don&apos;t use and can&apos;t afford to buy.
	</p>
	<p>
		The reading assignment for the week was as follows:
	</p>
	<ul>
		<li>
			<a href="https://www.w3.org/community/webed/wiki/Programming_-_the_real_basics">Programming - the real basics - Web Education Community Group</a>
		</li>
		<li>
			<a href="https://www.w3.org/community/webed/wiki/What_can_you_do_with_JavaScript">What can you do with JavaScript - Web Education Community Group</a>
		</li>
		<li>
			<a href="https://www.w3.org/community/webed/wiki/Your_first_look_at_JavaScript">Your first look at JavaScript - Web Education Community Group</a>
		</li>
	</ul>
	<p>
		Most of this was review for me, as I&apos;ve already been programming for years.
		However, I do have some notes on the content.
		The first page says that the opening <code>&lt;script type=&quot;text/javascript&quot;&gt;</code> tag in $a[HTML] tells the Web browser to switch to interpreting the code in a different language: JavaScript.
		The ending <code>&lt;/script&gt;</code> tag then tells the browser to switch back to $a[HTML].
		This is very true, and is one of the things that makes $a[HTML] so messy.
		In $a[HTML], you can just switch languages back and forth like this.
		In $a[XHTML], this is instead interpreted differently.
		In $a[XHTML], the $a[XML] source is parsed fully as $a[XML].
		There is no language-switching.
		From there, the tree of $a[XML] elements is handled and rendered as it needs to be.
		The content of <code>&lt;script type=&quot;text/javascript&quot;/&gt;</code> elements is then treated as and parsed as JavaScript.
		Instead of switching back and forth between languages, the JavaScript code is properly encapsulated within an element in the $a[XHTML] code.
		People used to $a[HTML]&apos;s messy way of handling JavaScript often get confused when dealing with $a[XHTML].
		They do messy things such as wrapping their JavaScript code in <code>&lt;![CDATA[</code> <code>]]&gt;</code> tags.
		Fun fact: properly-written $a[XML] doesn&apos;t use or need <code>&lt;![CDATA[</code> <code>]]&gt;</code> tags.
		The only thing these tags do is let you get out of escaping your special characters.
		So for example, this will work but is messy:
	</p>
	<blockquote>
	<pre><code>&lt;script type=&quot;text/javascript&quot;&gt;&lt;![CDATA[
		var maths_string = &quot;2 &lt; 3&quot;;
	]]&gt;&lt;/script&gt;</code></pre>
	</blockquote>
	<p>
		The <strong>*proper*</strong> way to write this, which will do the exact same thing, is:
	</p>
	<blockquote>
	<pre><code>&lt;script type=&quot;text/javascript&quot;&gt;
		var maths_string = &quot;2 &amp;lt; 3&quot;;
	&lt;/script&gt;</code></pre>
	</blockquote>
	<p>
		The less-than character needs to be escaped because the whole file is $a[XML].
		We&apos;re not sloppily flipping between languages, but instead, parsing the content of an element as JavaScript.
		Because we&apos;re working with the element content, <code>&amp;lt;</code> will already be converted to <code>&lt;</code> by the time the JavaScript interpreter sees it, so it&apos;ll work as intended.
		This matters little when writing a good webpage though.
		You <strong>*should*</strong> be keeping code in different languages in separate <strong>*files*</strong> anyway, and using a <code>&lt;script/&gt;</code> tag in the $a[XHTML] to link to the JavaScript file.
	</p>
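	<p>
		For example (the file name here is made up), linking an external script from $a[XHTML] takes a single self-closing tag, and the $a[XML] parser never has to see the JavaScript itself:
	</p>
	<blockquote>
	<pre><code>&lt;script type=&quot;text/javascript&quot; src=&quot;scripts/example.js&quot;/&gt;</code></pre>
	</blockquote>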
	<p>
		The next page talks about the downside to JavaScript.
		It fails to mention accessibility though, at least in the downside section.
		(It does mention it later in the article.)
		Not all of us have JavaScript turned on, and some of us even use text-based Web browsers.
		Personally, I keep JavaScript disabled because of the misbehaving scripts on the University of the People website.
		The school website has me tearing my hair out any time I try to use it with JavaScript enabled, and unfortunately, every website I visit has to suffer for that, as the JavaScript-disabling setting applies Web-wide, not just to the one website.
		JavaScript is not a substitute for many of the things Web developers try to use it in place of.
		JavaScript should be used when and only when client-side activity is needed.
		It should <strong>*never*</strong>, for example, be used to display the main content of a page, build the navigation menu, or perform any other tasks that are better suited to simple $a[XHTML] code.
		This page also mentions <code>document.write()</code>.
		Don&apos;t use <code>document.write()</code>.
		Ever.
		It&apos;s messy and horrible.
		Use $a[DOM] methods instead, and you&apos;ll get cleaner code that&apos;s compatible with more pages.
		(<code>document.write()</code> is <strong>*so*</strong> messy and horrible that it was made to be disabled in $a[XHTML], so compatibility with $a[XHTML] pages requires leaving calls to this horrid method out of your code.)
	</p>
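	<p>
		As a trivial sketch of my own (not from the assigned reading), the kind of thing people reach for <code>document.write()</code> to do can be done with $a[DOM] methods instead, and it works in both $a[HTML] and $a[XHTML] documents:
	</p>
	<blockquote>
	<pre><code>var paragraph = document.createElement(&quot;p&quot;);
paragraph.appendChild(document.createTextNode(&quot;Inserted via the DOM, not document.write().&quot;));
document.body.appendChild(paragraph);</code></pre>
	</blockquote>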
	<p>
		The third page mentioned the <code>&lt;![CDATA[</code> <code>]]&gt;</code> tags I wrote about after reading the first page, so I&apos;ll reiterate: <strong>DON&apos;T USE <code>&lt;![CDATA[</code> <code>]]&gt;</code> TAGS</strong>!
		Ever!
		You don&apos;t need them, they&apos;re messy, and they&apos;re only a shortcut for allowing yourself to write sloppy code with unescaped special characters.
		Write your code properly, escape your special characters within the <code>&lt;script/&gt;</code> tags, and don&apos;t use sloppy shortcuts.
		Or even better, put your JavaScript code in a separate file, where the $a[XML] parser doesn&apos;t even need to see it.
		Besides, this&apos;ll allow you to use the same script on multiple pages, making good use of Web browser caching and eliminating the need to update the script in every webpage file when you could instead simply update the script once in the external JavaScript file.
		This page also says it&apos;s sensible to keep JavaScript external for strict $a[XHTML] pages, but that seems to miss the point too.
		It&apos;s not just $a[XHTML] pages that should keep their JavaScript external.
		Plain $a[HTML] files, if you believe they should even exist, should likewise keep their JavaScript external for better caching, reuse, and maintenance, just like for $a[XHTML] pages.
		This third page does agree with me though that you should never use <code>document.write()</code>.
	</p>
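	<p>
		Linking the script externally takes only one line in each page that uses it (the script&apos;s file name here is hypothetical):
	</p>
	<blockquote>
	<pre><code>&lt;script type=&quot;text/javascript&quot; src=&quot;/scripts/menu.js&quot;&gt;&lt;/script&gt;</code></pre>
	</blockquote>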
	<p>
		I guess this entry got a bit long, but I&apos;m very passionate about coding standards.
		When I read instructions teaching people bad practices, I know it&apos;s perpetuating the messy code slop I already see everywhere and that I already hate dealing with.
		It feels like the inside of my body heats up a little, and I get a bit frustrated; I think this is what people mean when they say their blood is boiling.
	</p>
</section>
<section id="Unit6">
	<h2>Unit 6</h2>
	<p>
		Our assigned reading for the week was an <a href="https://www.w3schools.com/xml/">XML Tutorial</a>.
		I&apos;m already quite familiar with $a[XML], as it&apos;s a technology I care very much about, so I focussed most of my study time on the $a[AJAX] section of the tutorial.
		For everything I&apos;ve ever built, $a[AJAX] has been an overkill solution, so I&apos;ve always opted to use something else.
		As such, I have zero experience with $a[AJAX].
		Much to my surprise, I found that while the X in $a[AJAX] stands for $a[XML], using JavaScript to retrieve non-$a[XML] data files is still considered to be $a[AJAX].
	</p>
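	<p>
		A minimal sketch of what I mean (the file name and element <code>id</code> here are hypothetical); even though this retrieves plain text rather than $a[XML], it still counts as $a[AJAX]:
	</p>
	<blockquote>
	<pre><code>var request = new XMLHttpRequest();
request.onreadystatechange = function() {
	if (request.readyState === 4 &amp;&amp; request.status === 200) {
		document.getElementById('output').appendChild(
			document.createTextNode(request.responseText));
	}
};
// The third argument makes the request asynchronous.
request.open('GET', 'data.txt', true);
request.send();</code></pre>
	</blockquote>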
	<p>
		For as long as I can remember, the University of the People website has locked up on me periodically if I have JavaScript enabled.
		University staff insist that no such problem exists, but the problem has cost me a lot of time.
		Oftentimes, when I&apos;m typing, the website&apos;ll lock up and I won&apos;t notice, so I&apos;ll keep typing.
		Nothing I type during these locked periods actually makes it into the text box though, so when I go back and proofread, I find partial sentences and missing ideas.
		When I do notice, I hit the same key repeatedly so I&apos;ll know when the page starts working again, and it takes about a minute before I can do anything.
		It&apos;s extremely annoying and inconvenient at best, but when I&apos;m in a hurry, it&apos;s outright problematic.
		Some Firefox experts told me it was likely due to poorly-timed $a[AJAX] requests.
		That is to say, the page should not be sending <strong>*any*</strong> $a[AJAX] requests <strong>*except*</strong> when a user clicks something to trigger it.
		However, the University of the People website is sending them periodically, for no good reason, and with no known purpose.
		Synchronous $a[AJAX] requests lock up the browser, and no request should be sent without a trigger.
		With what I learned this week, I was able to confirm that University of the People is indeed sending these $a[AJAX] requests.
		I&apos;m unsure of the exact nature of these requests, but I think they&apos;re on a timer.
		This is a huge bug and I really wish University of the People would fix it.
		It&apos;s gotten so bad that over the past couple of terms, I&apos;ve disabled JavaScript in my Web browser entirely just so I can make the University of the People website functional on my end.
		If the school fixed their bug (that is, if the school removed their untriggered, periodic $a[AJAX] requests), I could use JavaScript once more.
	</p>
	<p>
		The rest of the tutorial didn&apos;t have much in it that was noteworthy to me, though.
		The section on the $a[DOM] listed the properties and methods of $a[DOM] objects, which wasn&apos;t something I knew offhand, but the rest of that section was review for me.
		The concept of XPath was interesting.
		Using a syntax similar to locating files in a directory tree seems like a good way to unambiguously identify specific nodes.
		I wouldn&apos;t mind having an assignment on $a[XSLT] next week so I can get some hands-on practice with that, but it probably won&apos;t happen.
		I&apos;ll need to try that on my own when I get time, which won&apos;t be for a while.
		I&apos;m not sure what kind of project to try it on, either.
		The main $a[XML] I use is $a[XHTML].
		What type of $a[XML] would I need to transform into $a[XHTML]?
		Or conversely, what $a[XML] would I need to convert $a[XHTML] into?
		It&apos;s an interesting concept I&apos;d like to try, but I have no practical use for it at the moment.
		XQuery seems cool for people that frequently use $a[SQL], but again, I have no practical project to use it for at the moment.
		$a[DTD]s, or more likely, $a[XML] schemas, will definitely be useful in the future.
		I&apos;ve had times I want to store data but don&apos;t have a good format for it.
		Defining my own $a[XML] language would be helpful for that sort of thing.
	</p>
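	<p>
		For instance, against a hypothetical document with a <code>&lt;library/&gt;</code> root element, XPath expressions read much like file-system paths:
	</p>
	<blockquote>
	<pre><code>/library/book[1]/title    selects the title of the first book
/library/book/@isbn       selects every book's isbn attribute
//title                   selects every title node, at any depth</code></pre>
	</blockquote>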
	<p>
		The quiz for the week had an error.
		It asked if this is valid $a[XML]:
	</p>
	<blockquote>
	<pre><code>&lt;note date=12/11/2007&gt;
		&lt;to&gt;Tove&lt;/to&gt;
		&lt;from&gt;Jani&lt;/from&gt;
	&lt;/note&gt;</code></pre>
	</blockquote>
	<p>
		I answered that the $a[XML] is invalid, but I was marked as having gotten that wrong.
		Look at the value of the <code>date</code> attribute though; it&apos;s not quoted!
		All attribute values <strong>*must*</strong> be quoted.
		No exceptions.
	</p>
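	<p>
		Quoting the attribute value is all it would take to make the snippet well-formed:
	</p>
	<blockquote>
	<pre><code>&lt;note date=&quot;12/11/2007&quot;&gt;
	&lt;to&gt;Tove&lt;/to&gt;
	&lt;from&gt;Jani&lt;/from&gt;
&lt;/note&gt;</code></pre>
	</blockquote>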
</section>
<section id="Unit8">
	<h2>Unit 8</h2>
	<p>
		The reading material for this week was as follows:
	</p>
	<ul>
		<li>
			<a href="https://www.smashingmagazine.com/guidelines-for-mobile-web-development/">Guidelines For Mobile Web Development - Smashing Magazine</a>
		</li>
		<li>
			<a href="https://www.w3.org/standards/webdesign/mobilweb">Mobile Web - W3C</a>
		</li>
		<li>
			<a href="https://www.w3schools.com/jquery/">jQuery Tutorial</a>
		</li>
	</ul>
	<p>
		The main topic covered was whether we should create separate websites for mobile users and desktop users.
		I have to agree with the authors of the articles we read: absolutely not!
		First of all, such a tactic is unnecessary.
		All the tools exist in $a[CSS] to make the website present itself differently based on screen size.
		And that&apos;s what you want: screen real estate being properly managed.
		Second, sending users to the &quot;appropriate&quot; version of your website requires <code>User-Agent</code> string sniffing.
		If you&apos;re performing <code>User-Agent</code> string sniffing, you&apos;re doing something majorly wrong, at least in most cases.
		Some websites, such as the Firefox download website, will attempt to guess your operating system and language from your <code>User-Agent</code> string, then will make the download option for that operating system and language large and prominent.
		It then allows you to choose a different operating system and/or language if you prefer.
		This is perfectly acceptable and, arguably, very useful.
		The key thing here though is that users are not prevented from choosing a different option if they want to.
		With <code>User-Agent</code> string based redirects, the user isn&apos;t given the option to use their preferred version of the website.
		And you can&apos;t know every <code>User-Agent</code> string out there, so you&apos;re bound to get your guesses wrong sometimes.
		I&apos;ve often had to fake my <code>User-Agent</code> string to get websites to present me with the correct version of the page, or sometimes, to even function.
		This is aggravating and should be stopped.
		In fact, I strongly believe the <code>User-Agent</code> header should be removed from the $a[HTTP] protocol.
		Unfortunately, that&apos;ll never happen, but it&apos;s nice to imagine such a world.
	</p>
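	<p>
		A single $a[CSS] media query is enough to restyle a page for narrow screens; as a sketch (the breakpoint and selector here are arbitrary choices of mine):
	</p>
	<blockquote>
	<pre><code>/* On screens narrower than 40em, let each navigation menu item
   stack vertically and span the full width of the page. */
@media screen and (max-width: 40em) {
	#navigation li {
		display: block;
		width: 100%;
	}
}</code></pre>
	</blockquote>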
	<p>
		Hilariously, I wrote up that last paragraph before reading the articles, then read the articles to see what they had to say.
		They pretty much said everything I&apos;d already said though; the authors and I are in total agreement.
		Additionally, they said many desktop-only websites are less usable on mobiles, but it&apos;s not because mobile websites are better for mobile devices.
		It&apos;s because the desktop websites are painful to use on <strong>*any*</strong> device!
		I&apos;ve run into this sort of issue many times, and it&apos;d be great if webmasters cleaned up their sites to be functional for everyone.
		Additionally, they state that desktop websites are often too cluttered, as webmasters see all the open space on a desktop monitor and seek to fill it.
		By keeping things minimal, websites would be much easier to use for mobile users and desktop users alike.
		Again, this hints that we don&apos;t need a separate mobile site; instead, we need to apply the concept of minimality to the only version of the website.
		If you&apos;re foolish enough to try to serve different content for different screen sizes, where do you even draw the line between &quot;small&quot; and &quot;large&quot; screens?
		It&apos;s completely arbitrary.
	</p>
	<p>
		The article by the $a[W3C] was predictably insightful.
		The $a[W3C] almost always has a view on accessibility that I would never come up with on my own, but that I fully agree with once I read it.
		In this case, they compared mobile users to handicapped users.
		In either case, there are limitations that must be worked around, but building an entirely separate site for this class of users is the wrong approach.
		We simply need to make sure our singular website is clean, well-built, and complete.
		The needs of mobile users don&apos;t impose on desktop users.
		If we build the site correctly, mobile users will have their needs met, but desktop users won&apos;t be left looking for a different version of a page.
	</p>
	<p>
		The jQuery tutorial claims that &quot;jQuery greatly simplifies JavaScript programming&quot;.
		I have to strongly disagree with that.
		To start with, jQuery code is ugly and difficult to read.
		When I first tried to read code written with jQuery, I had no idea what I was looking at.
		Finally, I learned one vital piece of information that cleared up the confusion: in JavaScript, the dollar sign character is just another character that can be used in variable names.
		For whatever reason, the jQuery team chose this character as the name for their one-method-does-just-about-everything method.
		Once you figure out that there&apos;s no magical, confusing JavaScript trick going on, you then have to deal with the fact that this one method is always doing one of two things: completing your task based on the arguments you pass it or returning an object that you&apos;ll need to complete the next step of your task.
		From there, you chain method calls onto method return values and get chains of what looks like gibberish.
		Obviously, chaining method calls like this is a very powerful and useful tool.
		However, when you&apos;re using only one function as the beginning of <strong>*everything*</strong>, your library is clearly doing something very wrong.
		The library is performing too many tasks from the same function.
		Separate functions need to be used for separate functionality; this is vital for readability of the code, and is also helpful for eliminating unnecessary conditionals within that would-be-singular function.
	</p>
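	<p>
		To illustrate (the element name and class name here are hypothetical), compare the one-function-for-everything style with separate, self-describing $a[DOM] calls:
	</p>
	<blockquote>
	<pre><code>// jQuery: the same \$() call handles both an id and a class.
\$('#menu').hide();
\$('.warning').hide();

// Plain DOM: each lookup names the kind of thing it looks up.
document.getElementById('menu').style.display = 'none';
var warnings = document.getElementsByClassName('warning');
for (var i = 0; i &lt; warnings.length; i++) {
	warnings[i].style.display = 'none';
}</code></pre>
	</blockquote>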
	<p>
		The jQuery tutorial did clear some things up, and the <code>\$()</code> function isn&apos;t quite as bad as I&apos;d thought it was.
		At least this function <strong>*usually*</strong> takes a selector as an argument.
		It&apos;s still taking a string, parsing the string to see what type of selector it is, then returning an object to call a method of.
		But why prefix the string with a pound sign to specify that the selector is an <code>id</code>, prefix it with a full stop to specify it&apos;s a class, and nothing to specify it&apos;s an element name, when you can instead use the correct <strong>*separate function calls*</strong> to do the same thing in a <strong>*more readable*</strong> way?
		And that&apos;s just the three most generic types of selectors that can be passed to <code>\$()</code>.
		The <code>document</code> and <code>this</code> objects can also be passed to this same function.
		Additionally, instead of a selector, a function can be passed to <code>\$()</code>, and it will be treated as though that function had been passed to <code>\$(document).ready()</code>.
		This.
		Is.
		An.
		$a[API].
		<strong>*Mess*</strong>.
		Saving a few characters of typing isn&apos;t worth making code harder to read; as I said before, every piece of functionality needs its own separate function, so the people reading or maintaining the code can easily see what it does.
		Admittedly, jQuery&apos;s ability to use complex selectors as can be done in $a[CSS] piqued my interest, though mostly on a theoretical level.
		After all, if you&apos;re doing something that complex in JavaScript, you&apos;re starting to tread into the territory of putting too much into the JavaScript of the page and not enough into the $a[XHTML] of the page.
		Remember that JavaScript should <strong>*enhance*</strong> a page, but should never be required to make a page even function.
		For that reason, complex JavaScript like this tends to be an accessibility <strong>*nightmare*</strong>.
	</p>
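	<p>
		It&apos;s also worth noting that modern browsers expose those same complex selectors natively, so jQuery isn&apos;t needed to use them (the selector here is hypothetical):
	</p>
	<blockquote>
	<pre><code>// Every list item directly inside the navigation menu:
var items = document.querySelectorAll('#navigation &gt; li');</code></pre>
	</blockquote>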
</section>
END
);
