<?php
/**
 * <https://y.st./>
 * Copyright © 2017 Alex Yst <mailto:copyright@y.st>
 * 
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 * 
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 * 
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org./licenses/>.
**/

$xhtml = array(
	'<{title}>' => 'Acceptance tests',
	'<{subtitle}>' => 'Written in <span title="Software Engineering 1">CS 2401</span>, finalised on 2017-11-29',
	'<{copyright year}>' => '2017',
	'takedown' => '2017-11-01',
	'<{body}>' => <<<END
<p>
	We haven&apos;t covered the proper way to write acceptance tests, so I&apos;m pretty much winging it.
	In fact, acceptance tests haven&apos;t even been covered by the book at all yet.
	If a requirement is testable, I&apos;ll first cover why it&apos;s testable and what information we should glean from the wording of the requirement, then cover the steps in testing for it.
	Again, as we haven&apos;t been shown a formal format for this, these tests won&apos;t be in any sort of formal format, but in plain English.
</p>
<h2>&quot;The user interface must be user-friendly and easy to use.&quot;</h2>
<p>
	This requirement is too vague to test.
	What one user finds to be simple and easy to use may be cumbersome and difficult for another user.
	What one user finds to be user-friendly may drive another user batty.
	The word &quot;easy&quot; is a relative word that deals solely in the realm of opinion.
	If that word is in your requirements, you can almost guarantee that you can&apos;t objectively test for it.
	(You could write <strong>*subjective*</strong> tests, but such test results are prone to error, not necessarily repeatable, and have little meaning compared to objective tests.)
</p>
<h2>&quot;The number of mouse clicks the user needs to perform when navigating to any window of the system&apos;s user interface must be less than ten.&quot;</h2>
<p>
	This requirement is extremely specific and deals within the realm of cold, hard facts.
	As such, it&apos;s completely testable.
	Note that the requirement specifies that the number of clicks <strong>*needed*</strong> is less than ten, not the number of clicks a user <strong>*could*</strong> use.
	That means that multiple paths through the system may or may not be possible, but we only need to guarantee that <strong>*one*</strong> of the paths to each window can be achieved in fewer than ten clicks.
	Also note the wording &quot;less than ten&quot;.
	&quot;Less than ten&quot; isn&apos;t the same as &quot;ten or fewer&quot;; ten is not less than itself.
	If ten or more clicks are required, you&apos;ve failed the test, so we need to make the navigation work in nine clicks or fewer.
	Next, the requirement doesn&apos;t mention where the starting point for these clicks must be.
	As such, we should assume that nine or fewer clicks can be used to get from <strong>*any window*</strong> to any other window.
</p>
<p>
	The first step in testing this is to make a list of all windows that the application can produce.
	With that list, we&apos;ll need to make a list of all potential current positions of the user and all potential desired destinations.
	This list will have n<sup>2</sup> entries, where n is the number of windows our application produces.
	n of these n<sup>2</sup> pairs will be trivial; the user is already where they need to be.
	(The user can get from Window A to the same Window A in zero clicks.)
	For the remaining n<sup>2</sup>-n pairs, we&apos;ll need to manually navigate to each starting location and count the clicks it takes us to reach each destination.
	If our click count exceeds nine, we&apos;ll need to find an alternate route and try again.
	If no alternate routes are available, we&apos;ve failed the test and need to program a shorter route (or at least a route requiring fewer clicks) for our users to work with.
</p>
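<p>
	To sketch what such a test could look like, here&apos;s a small Python example that models window navigation as a graph and uses breadth-first search to find the fewest clicks between every ordered pair of windows. The window names and navigation map are made up for illustration; a real test would generate them from the actual application.
</p>

```python
from collections import deque

# Hypothetical navigation map: each window lists the windows reachable
# from it in one click. These names are invented for illustration.
NAV = {
    'home':     ['search', 'settings'],
    'search':   ['home', 'results'],
    'results':  ['home', 'search', 'detail'],
    'detail':   ['results', 'home'],
    'settings': ['home'],
}

def min_clicks(nav, start, goal):
    """Breadth-first search: fewest clicks to get from start to goal."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        window, clicks = queue.popleft()
        if window == goal:
            return clicks
        for nxt in nav[window]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, clicks + 1))
    return None  # goal unreachable from start

def check_all_pairs(nav, limit=9):
    """Return every (start, goal, clicks) triple that violates the limit.

    Covers all n**2 ordered pairs; the n trivial same-window pairs cost
    zero clicks and pass automatically."""
    failures = []
    for start in nav:
        for goal in nav:
            clicks = min_clicks(nav, start, goal)
            if clicks is None or clicks > limit:
                failures.append((start, goal, clicks))
    return failures
```

<p>
	An empty list from <code>check_all_pairs()</code> means every pair of windows passes; any entries it returns are exactly the routes that need reprogramming.
</p>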
<h2>&quot;The user interface of the new system must be simple enough so that any user can use it with a minimum training.&quot;</h2>
<p>
	On one level, this seems untestable.
	You don&apos;t know your users&apos; skill level, so you can&apos;t know how much training, if any, they&apos;ll need.
	On the other hand ... this requirement does say <strong>*any*</strong> user.
	Some users are lazy morons.
	They can&apos;t figure out on their own how to use anything they don&apos;t care about, because they can&apos;t be bothered to try.
	You&apos;re likely to need lots of training for at least some of these people, regardless of how the system is implemented.
	Even more importantly, some users have legitimate physical or mental handicaps.
	It&apos;s hard to account for every physical handicap, though we can design the system to function well for people with the major common handicaps.
	As for mental handicaps ... this requirement becomes impossible to meet.
	You can&apos;t design the system so that it accounts for every possible mental handicap and is easy to use by everyone with minimal training.
	You can&apos;t test for this condition, but you can guarantee that you failed to meet it every time; it&apos;s not a realistic requirement.
</p>
<h2>&quot;The maximum latency from the moment the user clicks a hyperlink in a web page until the rendering of the new web page starts is 1 second over a broadband connection.&quot;</h2>
<p>
	Like the last requirement, this one technically gets an automatic fail.
	You don&apos;t know the state of your users&apos; computers, nor do you know what software they&apos;re using to download and render your pages.
	Take, for example, my mother&apos;s computer.
	She keeps one or two VLC instances, several $a[PDF] reader instances, several LibreOffice Writer documents, several file manager windows, and a Firefox window with about fifty tabs (I&apos;m not even exaggerating) open at all times.
	At a minimum.
	Her $a[RAM] is so bogged down that <strong>*no*</strong> webpage is capable of even beginning to render in under a minute on any type of connection, let alone beginning to render in under a second.
	It&apos;s also difficult to forecast network conditions that will slow the downloading of your pages, and some clients render pages more slowly than others.
	However, despite the inability to meet this requirement for all users, we can still do some useful things to make it work for as many users as possible.
</p>
<p>
	First, test to see if your pages even render at all with JavaScript disabled.
	Poorly-designed pages often rely on JavaScript for rendering basic content, but some users either don&apos;t have JavaScript enabled or are using a Web browser that isn&apos;t JavaScript-capable, whether for security reasons, because of disabilities, or for some other reason.
	If your page doesn&apos;t render at all with JavaScript disabled, it certainly isn&apos;t beginning to render in under a second for these users!
	Next, try loading and rendering your pages in a variety of Web browsers over a broadband connection and timing how long it takes for render to begin.
	At a minimum, I&apos;d recommend testing using common Web browsers such as Firefox and Chromium.
	Arguably, Web browsers such as Internet Explorer, Edge, and Safari should be tested too, but given the limited set of operating systems these Web browsers even run on, you may or may not be able to test these specific browsers.
	Just do what you can.
	It&apos;s difficult to do this browser-testing in any sort of automated way though, as you&apos;re looking at render start time, not download time.
</p>
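<p>
	The network half of the measurement can at least be automated. As a rough sketch, the Python helper below times how long it takes for the first byte of a page to arrive; actual render-start timing would require driving a real browser, which this does not attempt.
</p>

```python
import time
import urllib.request

def time_to_first_byte(url, timeout=10):
    """Rough proxy for 'rendering can begin': seconds until the first
    byte of the response arrives. Render start itself depends on the
    browser, so this only bounds the network portion of the latency."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as response:
        response.read(1)  # wait for the first byte only
    return time.monotonic() - start
```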
<p>
	(If your page is relying on JavaScript for rendering, render time can be complicated to figure out in an automated way, but you&apos;ve already failed because of the very basic use case of not having JavaScript enabled.
	As requiring JavaScript for rendering automatically causes you to fail all render tests, I&apos;ll assume from here on out that you&apos;ve either removed your JavaScript dependency or never had one to begin with.)
</p>
<p>
	However, download time influences how long it takes for a page to start rendering.
	We can automate an approximate download time checker.
	The larger the $a[XHTML] file (or $a[HTML] file if you&apos;re using $a[HTML] instead), the longer it will take to download and render.
	Additionally, Firefox won&apos;t start rendering a page until it&apos;s downloaded the $a[CSS] for the page or given up trying to reach the $a[CSS] files on the server.
	That said, most Web browsers will begin rendering pages before downloading images.
	We can therefore assume that the vital content needed to begin rendering the page consists of the main $a[XHTML] file of the page as well as any and all $a[CSS] files referenced by the page.
	As file size is proportional to download time and Web browsers need these files to begin rendering, we can add up the file sizes and attempt to keep them below a certain threshold.
	If all our pages use the same $a[CSS] files, we can keep a constant list of those.
	Otherwise, we&apos;ll need to parse their $a[URI]s out of our pages in our automated test.
	We can recursively iterate over each of our pages, checking the total download size.
	In the case of static pages, this is extremely straightforward.
</p>
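<p>
	For static pages, a checker along these lines could work. The size budget below is an assumption pulled from thin air for illustration, not a derived figure, and the stylesheet-matching pattern is deliberately naive; a real test would use a proper XML parser and a budget computed from the connection speed you&apos;re targeting.
</p>

```python
import os
import re

# Assumed budget: keep XHTML + CSS under 100 KiB so that most of the
# one-second target is left for latency and render setup. Illustrative
# only; pick a real figure from your target connection speed.
SIZE_BUDGET = 100 * 1024

# Naive pattern for <link rel="stylesheet" href="..."> references; a
# real test should use an XML parser instead of a regular expression.
STYLESHEET_RE = re.compile(
    r'<link[^>]*rel="stylesheet"[^>]*href="([^"]+)"', re.IGNORECASE)

def page_weight(page_path):
    """Size of a page plus every local stylesheet it references."""
    with open(page_path, encoding='utf-8') as f:
        markup = f.read()
    total = os.path.getsize(page_path)
    base = os.path.dirname(page_path)
    for href in STYLESHEET_RE.findall(markup):
        total += os.path.getsize(os.path.join(base, href))
    return total

def check_pages(paths, budget=SIZE_BUDGET):
    """Return the pages whose combined weight exceeds the budget."""
    return [p for p in paths if page_weight(p) > budget]
```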
<p>
	In the case of server-side dynamic pages, it gets a bit more complicated.
	Not only do we need to know how much data is being sent to the user, which we&apos;ll have to figure out from something other than the file size on our end, but we also need to figure out how long it takes for our server-side script to run.
	The browser can&apos;t render the page until it downloads it, but it can&apos;t download it until our server finishes building the page from the script.
	Exhaustive tests for every input that can be given to the page may not be feasible or even possible.
	We can try a few specific (or randomised) inputs, but mostly, this isn&apos;t something we can properly test the speed of.
	Furthermore, if remote systems are involved, more variables come into play and make things even harder to calculate in any automated way.
	For example, if our server relies on a remote $a[SQL] server for data storage, the response time of the $a[SQL] server must be added to our own script run time to find our own response time, which is needed to estimate the client&apos;s time needed to begin rendering.
	The more complex the setup, the less feasible it becomes to measure this statistic.
</p>
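<p>
	The randomised-input approach could be sketched like this: time a page-building function over a sample of inputs and report the worst and typical cases. The handler here is a made-up stand-in; the real system would be building pages from its database, and remote-server time wouldn&apos;t show up in this measurement at all.
</p>

```python
import random
import statistics
import time

def fake_page_handler(query):
    """Stand-in for a server-side page script. Purely illustrative; a
    real handler would query storage and build the page markup."""
    return '<p>%d results</p>' % sum(range(query % 1000))

def sample_run_times(handler, samples=50, seed=0):
    """Time the handler over randomised inputs. Exhaustive testing of
    every possible input is usually infeasible, so we sample and report
    (worst, median) run times in seconds."""
    rng = random.Random(seed)  # seeded for repeatable test runs
    times = []
    for _ in range(samples):
        query = rng.randrange(1_000_000)
        start = time.perf_counter()
        handler(query)
        times.append(time.perf_counter() - start)
    return max(times), statistics.median(times)
```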
<p>
	As stated in the beginning, this requirement cannot be fully tested for anyway, and if it could be, we&apos;d fail because of users that overload their $a[RAM].
	However, it&apos;s useful to perform these partial tests for compliance with the requirement, and such tests can show us problem areas in need of fixing.
</p>
<h2>&quot;In case of failure, the system must be easy to recover and must suffer minimum loss of important data.&quot;</h2>
<p>
	Again, the word &quot;easy&quot; is a word of relativity and opinion.
	As soon as you put that word into a requirement, the requirement becomes impossible to test for.
	This condition is not testable.
</p>
<p>
	For the sake of argument, let&apos;s throw out that part of the condition and try to meet the rest of it.
	The word &quot;important&quot; is also relative, but we can work with that.
	What do we consider to be important?
	Without knowing the company or what kind of data we&apos;re working with, we can&apos;t be sure.
	However, I think an acceptable way to make sure we don&apos;t lose &quot;important&quot; data is to avoid losing data that is useful between system runs and cannot be calculated from other data.
	For example, the system might use session tokens of some sort, but those are only useful during a single run of the system.
	As soon as the system shuts down or crashes, the tokens should become void, so session tokens do not qualify as important.
	If we are storing a list of costs of running the business as well as the total of all those costs, the total isn&apos;t important, as it can be recovered by adding up the individual costs again.
	Any data that should persist between system runs and that cannot be calculated from other saved data, we&apos;ll categorise as &quot;could-be-important&quot; and we&apos;ll make sure this data isn&apos;t needlessly lost.
	The condition thus becomes:
</p>
<blockquote>
	<p>
		In case of failure, the system <del>must be easy to recover and</del> must suffer minimum loss of <del>important</del> <ins>irrecoverable, run-independent</ins> data.
	</p>
</blockquote>
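<p>
	The classification rule itself is mechanical enough to express in code. The records below are invented examples; the point is just that only data which persists between runs <strong>*and*</strong> cannot be re-derived makes the cut.
</p>

```python
# Illustrative classification: data is "could-be-important" only if it
# must persist between runs and cannot be recomputed from other saved
# data. These records are made up for the sake of the example.
RECORDS = {
    'session_token': {'persists': False, 'derivable': False},
    'cost_entries':  {'persists': True,  'derivable': False},
    'cost_total':    {'persists': True,  'derivable': True},
}

def could_be_important(record):
    return record['persists'] and not record['derivable']

IMPORTANT = [name for name, record in RECORDS.items()
             if could_be_important(record)]
# Only 'cost_entries' qualifies: the token dies with the run, and the
# total can be re-derived by summing the entries.
```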
<p>
	To meet that requirement, we need to make sure our system adheres to a few specifications.
	First of all, we need our system to save to disk any time such irrecoverable, run-independent data is written (including both new data and updated data).
	To test this, we can iterate over all cases in which the system is supposed to write such data, then check the database on disk with a tool external to our system (in read-only mode, of course) to see if the data was properly added or updated on disk.
	Next, we don&apos;t know when or why our system will fail.
	Perhaps the power went out for our building while in the middle of a database write.
	No matter how well we program, certain types of interrupts such as these cannot be avoided.
	In case of such an interrupt, data <strong>*will*</strong> be lost; this too is unavoidable.
	We have to choose to either lose the entire contents of the write or keep a partial write and lose the validity of the database along with it.
	Obviously, we don&apos;t want the integrity of the entire database lost, so we&apos;ll need to roll back the partial database write and lose the full interrupted database write instead.
	Our system has to recover from that without losing more data than the partial write itself contained.
	For this, we&apos;ll need to ensure our database has a good journaling system.
	A partial write will compromise the validity of the data, so only a full write must be allowed.
	If the write is interrupted and part of the attempted write&apos;s information is lost, for example because of a power failure, the system must be able to use the journal to return the data to the state it was in before the write was interrupted.
	To test this feature, we&apos;ll need to run simulations on database update interrupts to generate copies of a database that have had an interrupt during various stages of writing.
	Our system should be able to recover to the pre-write state in all of these cases.
	We may want to write in some functionality for recovering databases that have been modified and broken by an outside actor (such as other software running on the same machine), but this isn&apos;t really possible to fully test.
	The database file could be modified in unexpected ways, and we can&apos;t anticipate every one of the infinite number of ways in which the database can be modified into an invalid state.
</p>
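<p>
	As a minimal sketch of the rollback test, the Python example below uses SQLite, whose journal provides exactly this guarantee, and simulates a failure by abandoning a write before it commits. Note what this does and doesn&apos;t cover: it exercises the clean-abort path only, whereas a real simulation of a power cut would kill the process mid-write or inject faults at the filesystem level.
</p>

```python
import sqlite3

def build_db(db_path):
    """Create the database and commit one baseline row."""
    conn = sqlite3.connect(db_path)
    conn.execute('CREATE TABLE costs (item TEXT, amount INTEGER)')
    conn.execute("INSERT INTO costs VALUES ('power', 300)")
    conn.commit()
    conn.close()

def simulate_interrupted_write(db_path):
    """Start a write but 'crash' before the commit. The journal must
    leave the database exactly as it was before the write began."""
    conn = sqlite3.connect(db_path)
    conn.execute("INSERT INTO costs VALUES ('rent', 1200)")
    conn.close()  # connection dies without commit: pending write rolls back

def row_count(db_path):
    """Inspect the database with a fresh, read-only-style connection."""
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute('SELECT COUNT(*) FROM costs').fetchone()[0]
    finally:
        conn.close()
```

<p>
	After the simulated interrupt, the row count should be unchanged: the full interrupted write is lost, but nothing else is, which is the trade-off argued for above.
</p>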
<p>
	As I said above, it&apos;s not possible to test that the system is &quot;easy to recover&quot;, as the word &quot;easy&quot; only functions in the world of opinion, not fact and verifiability.
	However, if we decide on some more-concrete wording, we could test it.
	Maybe, we want the system to automatically recover once the machine turns back on in case of machine failure.
	We can build and test for that.
	Maybe we want the system to recover if the administrator deliberately tries to recover it, and all we want the administrator to have to do is run one command and enter their password.
	Again, we can build and test for that.
	The key to making the specification testable is to remove ambiguous and opinion-based wording, instead saying exactly what we want.
</p>
END
);
