<?php
/**
 * <https://y.st./>
 * Copyright © 2017 Alex Yst <mailto:copyright@y.st>
 * 
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 * 
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 * 
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org./licenses/>.
**/

$xhtml = array(
	'<{title}>' => 'Continuing my coursework',
	'takedown' => '2017-11-01',
	'<{body}>' => <<<END
<section id="university">
	<h2>University life</h2>
	<p>
		Mostapha wrote today asking if tonight would be a good time to start the virtual machine, but there wasn&apos;t enough time before work to make sure I got everything done.
		By the time I get off work, it&apos;ll be late morning in his time zone.
		I asked to postpone it until then, planning to work on that assignment late tonight, possibly after midnight in my time zone.
		That didn&apos;t work out though; no response.
		It confuses me; if he&apos;d planned to help me tonight, it seems like he&apos;d have expected a response by then.
		Had I responded that I was ready at his proposed time, it still wouldn&apos;t have worked, because it seems he didn&apos;t check his mail.
		No matter.
		As before, I have a backup plan.
		I&apos;m not stressing about this, though it&apos;d be awesome if he could help me out.
	</p>
	<p>
		I finished one of my two reading assignments, and with it, my initial discussion post for that course:
	</p>
	<blockquote>
		<p>
			The main way to reduce the cost of paging is to page less.
			Short of that, we&apos;d need some new hardware advance (such as replacing $a[HDD]s with faster $a[SSD]s, improving disk speed and thus reducing the cost of paging when it occurs).
			By using a decent algorithm for determining what to page out, we can evict the data that&apos;s least likely to be needed again soon, keeping the hotter data in $a[RAM].
			In other words, we need a good page-replacement policy.
			Another way to reduce the cost is to swap several pages to disk at once, instead of one at a time as needed.
			For example, if the operating system detects that there aren&apos;t many free page frames, it can start a process that determines what to page out, then that process can write several pages to disk at once and free those page frames.
			By combining write operations, some of the time expense is avoided (Arpaci-Dusseau &amp; Arpaci-Dusseau, 2016).
			Techniques we learned for caching can also be applied to paging.
			Though perhaps not a technically-accurate model, thinking of the $a[RAM] as a sort of cache for pages shows us something very basic but very important: we need to reduce the number of cache misses and increase the number of cache hits (Arpaci-Dusseau &amp; Arpaci-Dusseau, 2016), or rather, reduce the number of page faults and increase the number of page hits.
			Our page-replacement policy needs to work to that end.
		</p>
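A quick back-of-the-envelope model (my own illustration, not from the reading; the latency numbers are assumptions) shows why hits matter so much: because disk is so many orders of magnitude slower than $a[RAM], even a tiny page-fault rate dominates the average access cost.

```python
# Effective memory-access time under paging. The latencies below are
# illustrative assumptions (roughly 100 ns RAM, ~10 ms disk), not measurements.

RAM_NS = 100            # one memory access, in nanoseconds
DISK_NS = 10_000_000    # one page fault serviced from disk (~10 ms)

def effective_access_ns(fault_rate):
    """Average access cost given the fraction of accesses that page-fault."""
    return (1 - fault_rate) * RAM_NS + fault_rate * DISK_NS

for rate in (0.0, 0.0001, 0.001, 0.01):
    print(f"fault rate {rate:.4%}: {effective_access_ns(rate):,.0f} ns")
```

Even a fault rate of one access in ten thousand makes the average access roughly ten times slower than pure $a[RAM], which is why the replacement policy's hit rate matters so much.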
		<p>
			A $a[FIFO] (queue-like) policy and a random policy are easy to implement, but with that ease comes a huge drawback: these policies don&apos;t actually try to maximise hits or minimise misses!
			In a way, they sort of act as mid-efficiency policies.
			They don&apos;t actively try to improve performance, but they&apos;re also not trying to create as many cache misses as possible; they&apos;re not actively against us.
			Still, we need something that&apos;ll work better if we&apos;re trying to reduce our paging costs.
			Policies that would work better include a least-recently-used policy and a least-frequently-used policy.
			Being unable to look into the future, these policies instead look into the past to attempt to predict the future.
			Of course, they&apos;re imperfect like everything else, but using one of these policies gives us a pretty good guess as to which pages can be paged out with the least need to page them back in soon.
			That said, it&apos;s difficult to know what the workload will be for a given process.
			A least-recently-used policy, for example, won&apos;t perform well for a process that loops through more pages worth of memory than will fit in $a[RAM] (Arpaci-Dusseau &amp; Arpaci-Dusseau, 2016).
			Even something as unintelligent as a random policy outshines a least-recently-used policy in that corner case.
			Still, a least-recently-used policy or a least-frequently-used policy will be the best option for keeping paging costs low across most processes.
		</p>
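To see that looping corner case concretely, here&apos;s a small simulation (my own sketch, not from the reading): with three page frames and a loop over four pages, least-recently-used evicts exactly the page that&apos;s needed next on every fault, so it never hits, while even random eviction gets lucky some of the time.

```python
import random

def hits(trace, frames, evict):
    """Count page hits for a reference trace under an eviction policy.
    `evict` picks a victim index from the resident list (most recent last)."""
    resident = []
    h = 0
    for page in trace:
        if page in resident:
            h += 1
            resident.remove(page)
            resident.append(page)          # refresh recency order
        else:
            if len(resident) == frames:
                resident.pop(evict(resident))
            resident.append(page)
    return h

trace = [0, 1, 2, 3] * 25                  # loop over 4 pages, 100 accesses
random.seed(0)
lru = hits(trace, 3, lambda r: 0)          # index 0 is least recently used
rnd = hits(trace, 3, lambda r: random.randrange(len(r)))
print(f"LRU hits: {lru}/100, random hits: {rnd}/100")   # LRU scores 0 here
```

With this workload, least-recently-used hits zero times out of a hundred; the pathology disappears as soon as the loop fits in the available frames.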
		<p>
			While a least-recently-used policy reduces paging overhead, it also creates overhead of its own.
			The system has to keep track of when each page was last accessed, meaning it has to update some timestamp every time memory is accessed.
			Even worse, when paging out, the system has to look through all those timestamps to find the absolute oldest one.
			We can build an approximation of a least-recently-used policy, though, for reduced overhead compared to a true least-recently-used policy.
			With some hardware support, we can add two bits to contain information about each page: a used bit and a dirty bit.
			The used bit is set to <code>1</code> by the hardware any time the page is accessed.
			The dirty bit is set to <code>1</code> by the hardware any time the page is written to.
			The operating system can then set the used bit to <code>0</code> when certain actions take place (perhaps on a schedule or perhaps as it goes through looking for an unused page) and can choose a page with a used bit that is already set to <code>0</code> for paging out.
			The dirty bit can be used in a couple of ways to speed up paging.
			First, the system can avoid writing clean pages to disk when paging them out.
			Nothing has changed on that page, so an exact copy of it resides on disk already from the last time it was paged out.
			Second, the system can prefer to page out clean pages instead of dirty ones, keeping pages that would need to be written to disk from being paged out at all if it can be helped (Arpaci-Dusseau &amp; Arpaci-Dusseau, 2016).
		</p>
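As a sketch of how the used bit can drive victim selection, here&apos;s a minimal clock-style sweep (the chapters describe the idea; this particular code is my own illustration). In a real system the hardware sets the bits; here they&apos;re just fields on a frame object, and the sweep prefers any frame whose used bit is already clear.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    page: int
    used: bool = True      # set by "hardware" on every access (simulated here)
    dirty: bool = False    # set by "hardware" on every write (simulated here)

def pick_victim(frames, hand):
    """Clock sweep: clear used bits as the hand passes, and evict the first
    frame whose used bit is already 0. (A fancier version would also prefer
    clean frames over dirty ones to avoid a write-back.)"""
    while True:
        frame = frames[hand]
        if not frame.used:
            return hand                    # not used since the last sweep
        frame.used = False                 # give it a second chance
        hand = (hand + 1) % len(frames)

frames = [Frame(10), Frame(11, used=False), Frame(12)]
victim = pick_victim(frames, hand=0)
print(f"evicting page {frames[victim].page}")   # prints "evicting page 11"
```

Note that the loop always terminates: if every frame was recently used, the first pass clears all the used bits and the second pass evicts the frame the hand started on.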
		<p>
			By selecting the right pages to page out, we reduce the need to page in as well.
			However, paging in will still likely need to occur.
			To keep paging costs low, we should try to page in only when needed.
			The best policy is to only page in when a page fault occurs.
			This is what most systems already do (Arpaci-Dusseau &amp; Arpaci-Dusseau, 2016).
			However, some systems also assume that when a page of executable code is paged in, the page directly after it (in that process&apos; virtual address space) will be needed too, so they page that one in as well, regardless of whether it actually gets used.
			To reduce the cost of disk writes, pages can also be clustered and written to disk together when paging out, keeping a bunch of smaller writes from needing to occur.
		</p>
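As an illustration of that clustering idea (my own sketch; a real kernel does this down at the block layer), grouping adjacent dirty pages into consecutive runs turns many small writes into a few sequential ones:

```python
def cluster_runs(dirty_pages):
    """Group page numbers into runs of consecutive pages, so each run can
    be flushed to swap in a single sequential write instead of one write
    per page."""
    runs = []
    for page in sorted(dirty_pages):
        if runs and page == runs[-1][-1] + 1:
            runs[-1].append(page)          # extends the current run
        else:
            runs.append([page])            # starts a new run
    return runs

dirty = {3, 4, 5, 9, 10, 42}
print(cluster_runs(dirty))   # [[3, 4, 5], [9, 10], [42]] -> 3 writes, not 6
```

Six dirty pages become three write operations here, and the savings grow with how contiguous the dirty pages happen to be.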
		<div class="APA_references">
			<h3>References:</h3>
			<p>
				Arpaci-Dusseau, R. H., &amp; Arpaci-Dusseau, A. C. (2016, July 20). Beyond Physical Memory: Mechanisms. Retrieved from <a href="http://pages.cs.wisc.edu/~remzi/OSTEP/vm-beyondphys.pdf"><code>http://pages.cs.wisc.edu/~remzi/OSTEP/vm-beyondphys.pdf</code></a>
			</p>
			<p>
				Arpaci-Dusseau, R. H., &amp; Arpaci-Dusseau, A. C. (2016, July 20). Beyond Physical Memory: Policies. Retrieved from <a href="http://pages.cs.wisc.edu/~remzi/OSTEP/vm-beyondphys-policy.pdf"><code>http://pages.cs.wisc.edu/~remzi/OSTEP/vm-beyondphys-policy.pdf</code></a>
			</p>
		</div>
	</blockquote>
</section>
END
);
