<?php
/**
 * <https://y.st./>
 * Copyright © 2016 Alex Yst <mailto:copyright@y.st>
 * 
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 * 
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 * 
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org./licenses/>.
**/

$xhtml = array(
	'<{title}>' => 'Better spider debugging and list-based navigation',
	'<{body}>' => <<<END
<p>
	Today, I checked in on the spider and found that it was about a week into working on the current website, had about an estimated week left, and had crashed.
	A week&apos;s worth of crawling efforts had been lost, and if I could find and fix the bug, it would take about a week to reach that page again to test the fix.
	Because of this, I set up a new feature that allows the administrator to specify a $a[URI] to test on.
	Because of how the spider works, the specified $a[URI] will be used third, right after the <code>robots.txt</code> and main index pages.
	It still means a lot less waiting and a lot less wasted effort.
	It appears that there was no bug for me to fix though, I had already fixed it and the page that the spider choked on before was no longer an issue.
	Most likely, I&apos;d fixed this but hadn&apos;t restarted the spider to make the fix come into effect due to the fact that restarting the spider would cause the spider to lose its progress on the current site.
</p>
<p>
	My initial plan was to implement the feature in such a way that if a debugging $a[URI] is provided, <strong>*only*</strong> that page would be loaded and parsed.
	I was going to do this by having the debug switch set the queue variable to an object of a class that acted like an array, but just dropped any information given to it.
	However, it seems that even the most array-like object implementing all the relevant interfaces cannot be passed to <code>\\array_pop()</code>, so after building that class, I ended up throwing it out.
</p>
<p>
	It turns out that the reason that the <a href="apt:atheme-services">atheme-services</a> package in Debian is so outdated isn&apos;t because the Debian package maintainers for that package stopped caring or have been too busy.
	It is because the Atheme team has requested that distribution maintainers only package their long-term support version! The copy in the Debian repositories is the latest long-term support version.
	It&apos;s a shame that this version doesn&apos;t support $a[ngIRCd].
	I&apos;ll have to use a copy from outside the Debian repositories, which is something that I really don&apos;t like doing when choosing server software.
</p>
<p>
	Speaking of $a[IRC], I forgot that <a href="http://zdasgqu3geo7i7yj.onion/">The Unknown Man</a> and I had agreed to only allow encrypted connections to our network.
	Yesterday, I wrote about $a[URI]s with the <code>irc:</code> and <code>irc6:</code> schemes, but those won&apos;t really be relevant to our network.
</p>
<p>
	Someone that prefers not to be mentioned by name has requested that I provide an index for my weblog that uses lists instead of $a[ASCII] calendars.
	He thinks that they would be more accessible in text-based Web browsers than the $a[ASCII] calendars.
	I disagree, but agreed to set up such an index.
	After agreeing to build such an index, I realized that while I still disagree that it will help in text-based Web browsers, it might actually be helpful for blind people using screen readers.
	With that in mind, I also added a <code>title</code> attribute to the links on the calender-based index page.
	I added both an ascending and descending list of entries.
</p>
<p>
	While trying to set up the set up the list-based version of the index, I ran into a problem that I have run into countless times before.
	The list has an order, so unordered lists are the incorrect thing to use semantically, but ordered lists don&apos;t work either because they stupidly index from one.
	If only I could use the <code>start</code> attribute! However, that attribute is deprecated in $a[XHTML] 1.1, the latest version of the $a[XHTML] standard, or so I thought.
	I looked into it today to see it $a[XHTML]5 is any closer to finalization, and it seems that it was <a href="https://www.w3.org/TR/2014/REC-html5-20141028/">finalized</a> over a year ago.
	According to one article, <a href="http://www.pcworld.com/article/2840132/html5-finalized-finally.html">it took eight years to finalize it</a>.
	No wonder I gave up checking on it! I had kind of started wondering if $a[XHTML]5 would ever become an actual standard.
</p>
<p>
	It seems that <a href="apt:links">Links</a>, the Web browser that my friend uses, does not handle lists correctly.
	If you attempt to list from zero, Links uses bullet points instead of numbers.
	If you use a reverse list, Links will still count up instead of down.
	I never could find a solution to either issue.
	The problem with the reversed lists might be that they are a newish feature.
	However, list starting points are old enough that they have become deprecated and then undeprecated.
	There is no good reason not to support them fully.
	Thankfully, I found a workaround.
	It isn&apos;t as good as using the the <code>start</code> attribute of the <code>&lt;ol/&gt;</code> tag, but you can also use the <a href="https://www.impressivewebs.com/reverse-ordered-lists-html5/"><code>value</code> attribute of the <code>&lt;li/&gt;</code> tag</a>.
</p>
END
);
