<?php
/**
 * <https://y.st./>
 * Copyright © 2016 Alex Yst <mailto:copyright@y.st>
 * 
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 * 
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 * 
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org./licenses/>.
**/

$xhtml = array(
	'title' => 'The spider seems to be running smoothly now',
	'body' => <<<END
<p>
	My task for today was to complete my $a[FAFSA].
	It went very smoothly.
	Of note, it was possible to leave the telephone number blank, despite that message I received that claimed that government forms usually require a telephone number even from people that do not have one.
</p>
<p>
	I ran into a bug before I went to bed last night in which the spider assumes that every normalized $a[URI] has a host component, which was then triggered by the presence of a $a[URI] using the <code>javascript:</code> scheme.
	I did not remember seeing that one on the <a href="https://www.iana.org/assignments/uri-schemes/uri-schemes.xhtml">official scheme list</a>, so I took another look.
	It is not listed! There are a bunch of schemes listed there that almost no one uses, yet the fairly-well-known <code>javascript:</code> scheme did not make the cut.
	In any case, I have repaired that bug in the spider.
	When I woke up, I found that a bug that I had intentionally coded into the spider had been set off as well.
	Namely, I programed it to assume that all $a[URI]s that it finds are valid.
	My functions are designed to throw exceptions in case of malformed $a[URI]s, but the spider was not designed to catch them.
	I knew that this bug would cause an issue eventually, I just did not realize that it would be so soon.
	Some people just do not know how to form a valid $a[URI], which I think is the case here, but also you should never trust user input.
	I hate to blindly ignore these bad $a[URI]s, but I do not know what else to do.
	For now, the spider now tries to make sense of each $a[URI], but it the syntax is invalid, it catches the exception, outputs an error message that will not be seen as it will be buried in all the other output, then moves on to the next $a[URI].
	I suppose that I will need to start greping the log from now on.
</p>
<p>
	I wrote a couple of the new scheme-specific normalization functions, but then I decided to delete them and remove the callback hook from the main normalization function.
	This is starting to lead off on an unnecessary tangent.
	If someone wants to compare two $a[URI]s for equivalence, they should probably use both the generic $a[URI] normalization function and, if necessary, a scheme-specific $a[URI] normalization function.
	I also removed the parts of the function that normalize the port component based on the default port of a given scheme and checks for the presence of $a[URI] components that are required or forbidden by a given scheme as well.
	I considered leaving them, as at least the port normalization is useful to me, but they seem like they are outside the scope of the function.
	I need a function that can take the place of $a[PHP]&apos;s broken <code>\\parse_url()</code> function.
	The RFC 3986-specified normalization goes a bit beyond that, though it is quite useful when trying to keep a list of pages and sites that the spider has already visited and is generic enough to be useful in many cases.
	Anything scheme-specific is outside the scope of that function though and should be delegated to another function if that functionality is required.
</p>
<p>
	I received an email from eSmart reminding me to finish filing my taxes through them even though they have not had one of the supervisors get back to me.
	Because of their letter, I pestered them on Twitter a bit, as they included a link to their Twitter account in their email.
	On Twitter, they gave me the support email address! I doubt that I will pursue this, but if I wanted to, I have another method.
</p>
<p>
	It seems that my mother is going to Portland for some training and she will be dropping me off in Springfield along the way.
	There, I will work on cleaning up our former residence through Thursday, Friday, and Saturday.
	I may not have Internet access during that time though, so I may not update this weblog until I get back.
</p>
END
);
