<?php
/**
 * <https://y.st./>
 * Copyright © 2017 Alex Yst <mailto:copyright@y.st>
 * 
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 * 
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 * 
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org./licenses/>.
**/

$xhtml = array(
	'<{title}>' => 'Self-examination',
	'takedown' => '2017-11-01',
	'<{body}>' => <<<END
<section id="general">
	<h2>General news</h2>
	<p>
		I think my subconscious must&apos;ve been working at adapting my sexuality because it&apos;s been bothering me.
		I don&apos;t see any other reason why it would be changing now when I couldn&apos;t change it recently.
		I&apos;ve been thinking a lot about sexuality as of late, but I&apos;ve tried not to directly attempt to reshape mine again.
		Last time led to some pretty bad results.
		I think I might be getting there this time.
		I mean, I&apos;ll likely always have a preference for males.
		But if I can be open to love from any direction if it comes, that&apos;d be optimal.
		Passing up an opportunity for happiness over something trivial doesn&apos;t seem like a good idea, after all.
		There&apos;s no need to rush this transition though.
		I won&apos;t have time to date until I finish school, so I have almost three years before there&apos;s any real chance of this mattering.
		I can&apos;t help thinking about when my sexuality was first awakening, either.
		I was finding myself noticing the secondary sex characteristics of <strong>*both*</strong> sexes more at first, at which point I started panicking, then I started having a preference for <strong>*females*</strong>.
		I tried to shove my sexuality away so hard, and it went away for a time, but when it came back, it was then that I had a preference for males.
		My ability to pair with a male, provided we had compatible ethics, morals, and personalities, is real and natural for me.
		I can feel it in my heart.
		However, an ability to likewise pair with a female instead given the same assumption that we had compatible ethics, morals, and personalities may likewise be natural for me.
		Then again, that may be wishful thinking.
		I shouldn&apos;t try to force anything.
		As I said, I have plenty of time.
		If I&apos;m gay at the end of these almost-three years, I&apos;m probably stuck that way and shouldn&apos;t try to do anything about it.
		However, either healing or the slow process of my subconscious working on fixing this should be able to fix me in that amount of time if there&apos;s anything about this that even <strong>*can*</strong> be fixed.
		Honestly though, come to think of it, how likely is it I&apos;m going to find a ciswoman that&apos;s fine with there not being a &quot;man&quot; and &quot;woman&quot; of the relationship?
		Gayness is the easier path, I&apos;m pretty sure.
		Then again, I&apos;m not trying to reject males, only accept females.
		Adding <strong>*anyone*</strong> to my dating pool likely increases my odds of finding real love.
		Aside from dating pool sizing and whatnot, I&apos;d still rather date a male for long-term reasons though.
		Homosexuality just fits my ideals (such as not accidentally having children once things get serious and not getting locked into gender roles) better than a heterosexual relationship likely could.
	</p>
	<p>
		I couldn&apos;t keep on task today.
		I got a decent chunk of coursework done, so I still think I can finish on time without working on it tomorrow.
		In other words, I&apos;m going to Eugene tomorrow to try to get my insurance issues fixed.
	</p>
	<p>
		As for my not keeping on task ... I finished the update for <code>minestats</code> to make it treat all dropped non-nodes fairly, offer both itemised and non-itemised views of stats, and provide a simple interface for determining what power level a user should be granted in regards to any specific game item.
		Treating all items fairly was necessary for the itemised and non-itemised stat views, and the back end for the itemised stat view was necessary to get the power level interface operational.
		That power level interface will be needed by the yellow beds mod.
		I couldn&apos;t even start on that mod until this update was complete.
		Well, I could, but I&apos;d&apos;ve had to write a bunch of functionality into the yellow beds mod just to tear it out and replace it with the better functionality in the <code>minestats</code> mod later.
		I&apos;m not going to release the <code>minestats</code> update just yet though.
		Before I release, I need to update the description post on the Minetest forum, and I&apos;m not up for that today.
	</p>
	<p>
		I&apos;ve thought about it further, and I&apos;m not as close to paying off my debt as I&apos;d thought.
		I thought I&apos;d be able to pay it off with my next pay cheque, but I&apos;ll need the money from that pay cheque for rent.
		It&apos;ll be at least until next month before I can be rid of this loan company.
	</p>
</section>
<section id="university">
	<h2>University life</h2>
	<p>
		It&apos;s no wonder I couldn&apos;t finish this discussion post given my partial free day yesterday.
		The topic was vague, so there was a lot to cover:
	</p>
	<blockquote>
		<p>
			Pages of virtual memory all take up the same amount of space, so we don&apos;t have to worry about knowing where to cut up the memory.
			We know right where the page bounds are without any need to check a list.
			Physical memory, $a[RAM], is likewise cut up into sections of the same size, though these units are known as page frames instead of pages.
			For virtual memory to be used, pages of it need to be stored in page frames within the $a[RAM].
			One or more page frames are used not for pages, but are instead reserved for use by the operating system.
			Intuitively then, we see that if the number of page frames reserved by the system plus the number of pages used is less than the total number of page frames, the $a[RAM] is under-utilised.
			If these combined numbers instead add up to more than the number of available frames, some paging in and out will need to occur.
		</p>
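To make the frame arithmetic above concrete, here is a rough Python sketch; every number in it is made up purely for illustration:

```python
# All values here are invented for the sake of the example.
ram_bytes = 64 * 1024                     # 64 KiB of physical memory
page_size = 4 * 1024                      # 4 KiB pages and page frames
total_frames = ram_bytes // page_size     # 16 page frames in total

reserved_frames = 2                       # frames held back for the operating system
pages_in_use = 10                         # pages the running processes currently hold

# The comparison from the paragraph above, written out directly.
if reserved_frames + pages_in_use < total_frames:
    status = 'under-utilised'             # free frames remain
elif reserved_frames + pages_in_use == total_frames:
    status = 'fully utilised'
else:
    status = 'paging required'            # pages must be swapped in and out

print(status)                             # 2 + 10 < 16, so: under-utilised
```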
		<p>
			A page table is used to keep track of which page resides in each page frame, but this page table is a per-process data structure.
			The page table is used to store the information needed for address translation, allowing the process&apos;s virtual addresses to be translated into the physical addresses needed to store and retrieve the actual data.
			Because the page table is a per-process structure, the seemingly-same virtual addresses will map to different physical memory locations when used by different processes, achieving the isolation we seek when virtualising the $a[RAM].
			Using these page tables, the virtual address space is broken up into pages, with the high-order bits of an address (representing page numbers) being translated into part of the physical address (representing page frame numbers) and the low-order bits (representing offsets) being used directly as the other part of the physical address.
		</p>
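The high-bits/low-bits split described above can be sketched in a few lines of Python; the page size and the mappings in the toy page table are assumptions for the example, not values from the text:

```python
PAGE_SIZE = 4096      # assumed 4 KiB pages
OFFSET_BITS = 12      # log2(PAGE_SIZE): how many low-order bits form the offset

# A toy per-process page table: virtual page number -> page frame number.
# Another process would have its own table, so the same virtual address
# would land in a different physical location.
page_table = {0: 3, 1: 7, 2: 5}

def translate(virtual_address):
    vpn = virtual_address >> OFFSET_BITS           # high-order bits: page number
    offset = virtual_address & (PAGE_SIZE - 1)     # low-order bits: used as-is
    frame = page_table[vpn]                        # the per-process lookup
    return (frame << OFFSET_BITS) | offset         # rebuild the physical address

# Virtual page 1 lives in frame 7, so 0x1ABC translates to 0x7ABC.
print(hex(translate(0x1ABC)))
```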
		<p>
			Various control bits are utilised for making paging more efficient as well.
			The valid bit makes it known if a given page is actually in use, and thus if it needs to be stored at all.
			If a page isn&apos;t valid, and therefore not stored, translations of addresses in that page&apos;s address range aren&apos;t valid and can&apos;t be used.
			Protection bits are used to determine whether a page can be read from, whether it can be written to, and whether it can be executed.
			The present bit represents a boolean marking whether the page is currently in a page frame or not.
			The dirty bit tells whether a page has been modified in $a[RAM] since being loaded from disk, and the reference bit tells whether or not the page has been read from since being loaded into $a[RAM].
			The user bit determines whether userland processes are able to access the page.
			Other control bits include those that determine how hardware caching is to function with the given page.
		</p>
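The control bits can be pictured as a small bitfield; the bit positions below are invented for the example (real page-table-entry layouts, such as x86&apos;s, assign different positions):

```python
# Invented bit positions; real hardware layouts differ.
VALID     = 1 << 0   # page is in use at all
PRESENT   = 1 << 1   # page currently sits in a page frame
READ      = 1 << 2   # protection bits:
WRITE     = 1 << 3   #   what kinds of access are allowed
EXECUTE   = 1 << 4
USER      = 1 << 5   # userland processes may access the page
DIRTY     = 1 << 6   # modified in RAM since being loaded from disk
REFERENCE = 1 << 7   # accessed since being loaded into RAM

entry = VALID | PRESENT | READ | WRITE | USER   # a valid, writable userland page

def write_to_page(entry):
    """Check the control bits the way the paragraph above describes."""
    if not entry & VALID:
        raise MemoryError('invalid page: its address range is not in use')
    if not entry & WRITE:
        raise PermissionError('page is not writable')
    return entry | DIRTY | REFERENCE   # the write marks the page dirty

entry = write_to_page(entry)
print(bool(entry & DIRTY))   # True: the page now needs writing back to disk
```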
		<h3>Problems with and advantages of the page table</h3>
		<p>
			The page table resides in memory, so intuitively, it causes an extra memory lookup to be performed every time memory is accessed.
			As we discussed, memory lookups occur not only when you need to retrieve a value, but also when you need to look up an <strong>*instruction*</strong>.
			That means for every instruction run, an added memory lookup is required, and it adds a lot of overhead.
			To combat that, a translation-lookaside buffer can be added to the hardware as a component of the memory-management unit.
			It acts as a cache of commonly-used address-translation lookups, saving lookups to the slower $a[RAM].
			After the page number is extracted from the virtual address, it&apos;s checked against the translation-lookaside buffer first, and only if the translation can&apos;t be found there is it checked against the page table.
			If the page isn&apos;t found in the translation-lookaside buffer, it&apos;s added to the translation-lookaside buffer after being looked up in the page table.
			(That is, assuming the page is found to be valid in the page table; if the valid bit indicates the page is invalid, there&apos;s no reason to add it to the translation-lookaside buffer and the normal $a[OS] trap for attempting to access an invalid address will instead be called.
			Likewise, if the memory is accessed in a way not allowed for that page by that process, the relevant $a[OS] trap will be called.)
			This makes reads from addresses on pages that have recently been accessed much quicker, as a full lookup isn&apos;t needed if the page was already accessed recently.
			Oddly enough, the page is added to the translation-lookaside buffer, but the memory isn&apos;t looked up right away from the page-translation information even when the translation is first looked up.
			Instead, the page-translation information is then looked up in the translation-lookaside buffer after having just been copied to it.
		</p>
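The lookup order described above (buffer first, page table only on a miss, then install and retry) can be sketched like this; the capacity and the mappings are made up, and the eviction policy is a simple FIFO stand-in:

```python
page_table = {0: 3, 1: 7, 2: 5}   # VPN -> frame; mappings invented for the demo
TLB_CAPACITY = 2                  # tiny on purpose, to force evictions
tlb = {}                          # the translation-lookaside buffer

def lookup(vpn):
    if vpn in tlb:                          # fast path: buffer hit
        return tlb[vpn], 'hit'
    if vpn not in page_table:               # invalid page: trap to the OS,
        raise MemoryError('invalid page')   # nothing is added to the buffer
    if len(tlb) >= TLB_CAPACITY:            # make room for the new translation
        tlb.pop(next(iter(tlb)))            # FIFO eviction, for simplicity
    tlb[vpn] = page_table[vpn]              # install; the hardware then retries
    return tlb[vpn], 'miss'                 # ...and the retry finds it there

print(lookup(1))   # (7, 'miss'): first access goes through the page table
print(lookup(1))   # (7, 'hit'): the recently-used translation is cached
```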
		<p>
			Spatial locality usually results in addresses appearing on the same page, greatly increasing the chances that the data needed soon will be on pages we&apos;ve used just before.
			The only time an access to an address spatially local to a previously-accessed address needs a fresh translation is when the access crosses page bounds.
			Likewise, temporal locality can play in our favour.
			The translation-lookaside buffer stores recently-accessed translations, so if we try to re-access recently-used pages, we&apos;re more likely to find the translation we need in the translation-lookaside buffer than if we haven&apos;t used the given page in quite some time.
			After all, old translations need to be removed from the buffer to make room for new ones.
			It used to be that hardware dealt with translation-lookaside buffer misses on its own, but in modern computers, the operating system is allowed to perform this task via a special trap.
		</p>
		<p>
			You may have noticed that this translation-lookaside buffer doesn&apos;t make any sense when you think about context switches.
			The new process would end up with address translations from the old process!
			However, translations in the buffer can either be invalidated upon a context switch (effectively flushing the cache) or the translations can be marked with an address space identifier so the cache can tell which process a translation applies to.
			If an address space identifier is used, the operating system simply sets a privileged register so the cache knows which translations to use for the currently-running process.
		</p>
		<h3>Storing the page table</h3>
		<p>
			Additionally, the location of each individual page doesn&apos;t need to be stored in the page table.
			Using a base-and-bounds-like structure, we can store information about where <strong>*ranges*</strong> of pages reside, keeping the minimal amount of information needed to find a page in the page table.
			This conserves a lot of space over storing information about each page location individually, and allows us to keep the page table small.
			This still causes some memory-allocation problems though, as it assumes that the three common segments (code, heap, and stack) will be used, and the base and bounds for these are stored in registers.
			Large empty spaces in the heap, for example, thus can&apos;t be marked as invalid pages.
		</p>
		<p>
			Another option is to use a multi-level page table.
			In this scheme, the page table is broken into segments and stored in a page directory.
			If there aren&apos;t any valid pages in a given segment, that segment doesn&apos;t need to be stored and can be marked as invalid in the page directory.
			These segments are typically the size of a page themselves, so the number of segments with valid pages in them plus one (one for the page directory to reside in) is the number of pages this page-tracking structure will take up, at least in a two-level page directory.
			When using more levels, a page will be needed for each level of the directory that has either valid pages or sub-directories that somewhere down the line have valid pages.
			As a bonus, the page directory keeps track of where in memory each segment resides, so we can break up the table and put it in non-adjacent page frames if we need to.
			The downside to this scheme is that when a memory translation isn&apos;t found in the translation-lookaside buffer, a memory lookup is required for <strong>*each level*</strong> of the page table, in addition to the intended memory lookup.
		</p>
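A two-level version of the earlier translation idea might look like the following in Python; the bit widths, the directory size, and the single populated segment are all assumptions made for the example:

```python
OFFSET_BITS = 12    # assumed 4 KiB pages
INDEX_BITS = 10     # assumed bits per level of the two-level table

# The page directory: one slot per segment of the page table. Segments with
# no valid pages are simply not stored (None), which is where the space
# savings come from.
directory = [None] * 16             # truncated directory, enough for the demo
directory[0] = {0: 3, 1: 7}         # the only segment holding valid pages

def translate(vaddr):
    d_index = vaddr >> (OFFSET_BITS + INDEX_BITS)             # pick the segment
    t_index = (vaddr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    segment = directory[d_index]        # extra memory lookup number one
    if segment is None or t_index not in segment:
        raise MemoryError('invalid page')
    frame = segment[t_index]            # extra memory lookup number two
    return (frame << OFFSET_BITS) | offset

print(hex(translate(0x1ABC)))   # 0x7abc: segment 0, page 1, frame 7
```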
		<p>
			Yet a third option is an inverted page table.
			In this setup, we keep track of all pages used by processes in the system, and map each one to the virtual page number and the process that uses it.
			Depending on how this inverted page table is structured, it may be required that every entry in the table is searched to find the desired one, an expensive operation.
			However, if implemented as a hash table, far fewer entries need be searched before we get a hit.
			(As we learned in <span title="Programming 2">CS 1103</span>, a hash table uses linked lists, so the number of entries checked might not be exactly one.
			Still, it should be very few.)
		</p>
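A hash-table flavour of the inverted page table can be sketched as below; the slot count and the sample mapping are invented, and Python lists stand in for the linked-list chains:

```python
NUM_SLOTS = 8   # invented; a real table has roughly one entry per page frame

# One chain (list) per slot; each entry records which process uses which
# virtual page, and the frame that page sits in.
buckets = [[] for _ in range(NUM_SLOTS)]

def insert(pid, vpn, frame):
    buckets[hash((pid, vpn)) % NUM_SLOTS].append((pid, vpn, frame))

def find_frame(pid, vpn):
    """Search a single chain rather than every entry in the whole table."""
    for entry_pid, entry_vpn, frame in buckets[hash((pid, vpn)) % NUM_SLOTS]:
        if (entry_pid, entry_vpn) == (pid, vpn):
            return frame
    raise MemoryError('page not resident')

insert(pid=42, vpn=1, frame=7)
print(find_frame(42, 1))   # 7, found after checking only one short chain
```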
		<div class="APA_references">
			<h2>References:</h2>
			<p>
				Arpaci-Dusseau, R. H., &amp; Arpaci-Dusseau, A. C. (2016, July 20). Paging: Faster Translations (TLBs). Retrieved from <a href="http://pages.cs.wisc.edu/~remzi/OSTEP/vm-tlbs.pdf"><code>http://pages.cs.wisc.edu/~remzi/OSTEP/vm-tlbs.pdf</code></a>
			</p>
			<p>
				Arpaci-Dusseau, R. H., &amp; Arpaci-Dusseau, A. C. (2016, July 20). Paging: Introduction. Retrieved from <a href="http://pages.cs.wisc.edu/~remzi/OSTEP/vm-paging.pdf"><code>http://pages.cs.wisc.edu/~remzi/OSTEP/vm-paging.pdf</code></a>
			</p>
			<p>
				Arpaci-Dusseau, R. H., &amp; Arpaci-Dusseau, A. C. (2016, July 20). Paging: Smaller Tables. Retrieved from <a href="http://pages.cs.wisc.edu/~remzi/OSTEP/vm-smalltables.pdf"><code>http://pages.cs.wisc.edu/~remzi/OSTEP/vm-smalltables.pdf</code></a>
			</p>
		</div>
	</blockquote>
	<p>
		I couldn&apos;t complete the discussion assignment though, as not enough other students have posted for me to reply to.
		However, I did get the first two of three responses handed in:
	</p>
	<blockquote>
		<p>
			That sounds like typical Windows security through obscurity, a technique well-known not to be very secure at all.
			The system could protect privileged sections of memory so they can&apos;t be overwritten with such buffer overrun exploits.
			That way, malicious instructions can&apos;t be written to the memory chunks that will be run in kernel mode.
			However, instead of trying to actually harden the system, Windows simply randomises where the privileged memory resides so it&apos;s hard to guess where a buffer overrun attack would need to be run to get the desired result.
		</p>
		<p>
			This isn&apos;t the <strong>*actual*</strong> security through obscurity technique, as security through obscurity involves hiding the details of how something is implemented from humans so they have a hard time guessing where the weak spots are.
			The weak spots still exist, it&apos;s just harder to spot them.
			Once you spot them, they&apos;re often extremely easy to exploit because they have no actual defences.
			This is incredibly similar though, as the weak spots still very much exist, it&apos;s just harder to know where to find them because they move.
			It&apos;s like having a non-armoured van full of money, with no guards, and having no map to its location.
			The van moves and you don&apos;t know where it is, but if you can find it, you can easily rob it at gunpoint.
			In other words, the van and the money have no real security in place.
		</p>
		<p>
			You make a good point about aligning pages to be adjacent on-disk that are adjacent in the virtual memory space.
			I&apos;m not sure how the new solid state drives function, but the old hard disk drives that most of us still use (or I still use, I don&apos;t know about other people) use a mobile head to read and write to the drive.
			Moving the head further requires a little more time, and that little bit of time adds up over the span of many reads and writes.
			By keeping the on-disk pages aligned well, reading from near the end of one page through near the beginning of another (or even longer reads that cross borders like that) can be sped up because the head can just move continuously through the read (or write) instead of stopping, moving, and starting again.
		</p>
	</blockquote>
	<blockquote>
		<p>
			So if I understand what you&apos;re saying correctly, the system uses the cache for hard disk data to store information about the paged-out data?
			That seems like good disk data to cache.
			On-disk pages are likely to be some of the most-frequently-needed bits of data on the disk, so it makes sense to try to keep those cached.
			The data-pool-combining you mentioned seems like a good idea as well.
			One set of functions might not need as much data, while another might need more.
			By combining the pools, greater flexibility can be achieved.
		</p>
	</blockquote>
	<p>
		Finally, before heading to bed, I got that $a[XML] assignment finished.
		Eventually, my exhaustion overtook me though, and I had to go to sleep.
	</p>
</section>
END
);
