<?php
/**
 * <https://y.st./>
 * Copyright © 2017 Alex Yst <mailto:copyright@y.st>
 * 
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 * 
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 * 
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org./licenses/>.
**/

$xhtml = array(
	'<{title}>' => 'Resource management',
	'<{subtitle}>' => 'Written in <span title="Operating Systems 1">CS 2301</span> of <a href="http://www.uopeople.edu/">University of the People</a>, finalised on 2017-10-04',
	'<{copyright year}>' => '2017',
	'takedown' => '2017-11-01',
	'<{body}>' => <<<END
<div class="APA_title_page">
	<p>
		Resource management<br/>
		Alex Yst<br/>
		University of the People
	</p>
</div>
<h2>Cache coherency</h2>
<p>
	Cache coherency is the issue addressed when trying to keep data consistent between caches and in the main memory (Arpaci-Dusseau &amp; Arpaci-Dusseau, 2016).
	The problem is twofold, but both problems are caused by the fact that each processor has its own cache.
	First, a processor doesn&apos;t always write data from its cache to the main memory right away.
	Because the value may soon be updated again, the processor saves writes to the slow main memory by delaying the write for a while.
	However, when multiple processors are involved, the process may be stopped on one processor and resumed on another.
	When this happens, the second processor will need to look up the value from the main memory, but that value is out of date!
	The process will then, for no reason apparent to the programmer, use the wrong value for the variable.
	The second issue involves the value being known to two (or more) processors and existing in their caches.
	A value could be updated by a processor in both its own cache and in the main memory, but the second processor will see the out-of-date value from its own cache!
	Again, this leads to an incorrect value being used, to the bewilderment of the programmer, who has no way to code around it.
</p>
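<p>
	To make the first problem concrete, here&apos;s a minimal toy model of it (my own sketch in Python, not from the reading material): CPU 0 updates a value in its write-back cache but delays the write to the main memory, so a process resumed on CPU 1 reads a stale value until the flush finally happens.
</p>

```python
# Toy model of the first coherence problem: a write-back cache holds a
# dirty value that hasn't yet reached main memory, so a process that
# migrates to another CPU reads a stale value from memory.

class Cache:
    def __init__(self):
        self.lines = {}  # address -> (value, dirty flag)

    def write(self, addr, value):
        self.lines[addr] = (value, True)  # write-back: memory not updated yet

    def read(self, addr, memory):
        if addr in self.lines:
            return self.lines[addr][0]
        return memory[addr]  # cache miss: fall through to main memory

    def flush(self, memory):
        for addr, (value, dirty) in self.lines.items():
            if dirty:
                memory[addr] = value  # the delayed write finally happens
        self.lines.clear()

memory = {0x10: 5}
cpu0, cpu1 = Cache(), Cache()

cpu0.write(0x10, 7)               # process updates the value on CPU 0
stale = cpu1.read(0x10, memory)   # process migrates to CPU 1 before the flush
assert stale == 5                 # CPU 1 sees the old value: incoherent!

cpu0.flush(memory)                # once CPU 0 writes back, memory catches up
assert cpu1.read(0x10, memory) == 7
```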
<p>
	To keep the caches from becoming incoherent, one of several techniques can be employed.
	The details aren&apos;t covered in the reading material for the week, but bus snooping is one such technique.
	Processor caches watch access to the main memory, and if a value is updated there, caches besides the one performing the update will either throw out or update their copy of the data.
	The main thing not covered by the reading material is how delayed writes are accounted for.
</p>
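<p>
	Bus snooping can be sketched in the same toy style (again my own illustration, with write-through caches for simplicity rather than the delayed writes discussed above): every cache watches writes crossing the shared bus, and drops its own copy of any address another cache updates.
</p>

```python
# Toy bus-snooping sketch: each cache watches writes on a shared "bus";
# when another cache writes an address, snoopers invalidate their copy.

class SnoopingCache:
    def __init__(self, bus):
        self.lines = {}
        self.bus = bus
        bus.append(self)  # join the shared bus so others can be snooped

    def write(self, addr, value, memory):
        memory[addr] = value  # write-through, for simplicity
        for cache in self.bus:
            if cache is not self:
                cache.lines.pop(addr, None)  # snoop: drop stale copies
        self.lines[addr] = value

    def read(self, addr, memory):
        if addr not in self.lines:
            self.lines[addr] = memory[addr]  # miss: load from main memory
        return self.lines[addr]

bus, memory = [], {0x10: 5}
cpu0 = SnoopingCache(bus)
cpu1 = SnoopingCache(bus)

assert cpu1.read(0x10, memory) == 5  # CPU 1 caches the value
cpu0.write(0x10, 7, memory)          # CPU 0 updates it; CPU 1's copy is dropped
assert cpu1.read(0x10, memory) == 7  # CPU 1 re-reads the fresh value
```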
<h2>Sparse addressing in virtual memory</h2>
<p>
	The need for virtual memory is pretty clear.
	In order for a process to be fully isolated, it must be given a virtual memory address space that can in no way touch or be touched by other processes.
	This prevents the process from harming the operation of another process, while at the same time preventing that same process from being harmed by other processes.
	But why are sparse addresses used in virtual memory instead of clumping all the in-memory data together?
	As it turns out, two main structures need space for resizing (Arpaci-Dusseau &amp; Arpaci-Dusseau, 2016).
	The heap and the stack need to grow and shrink as a process runs, and if the two were right up against one another in their virtual memory, one of them would be caught between the edge (or more likely, the virtual memory that holds the program itself) and the other structure.
	Unable to grow, it&apos;d have to be split into multiple pieces.
	Of course, the second piece would block the other structure from growing, so if it needed to grow, it&apos;d likewise need to be split.
	That just adds up to a mess.
	Instead, sparse addressing places the heap at one end of the virtual memory and the stack at the other.
	Both can then grow in opposite directions, each approaching the other and consuming some of the free memory between the two.
	Efficiency is achieved and data structures don&apos;t need to be fragmented.
	This becomes difficult when multiple threads share a virtual memory space, though.
	As the virtual $a[RAM] has only two ends but each thread brings its own stack, a less-efficient placement of these structures must be used.
	Still, leaving spaces between the structures allows them space to grow and shrink, so sparse addressing is still used in such situations.
</p>
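<p>
	The layout described above can be modelled in a few lines (a hypothetical sketch of my own in Python; the start addresses are made up for illustration): the heap grows up from the low end, the stack grows down from the high end, and neither needs to fragment until the free gap between them is exhausted.
</p>

```python
# Toy sparse address space: heap grows up, stack grows down, and the
# sparse gap between them absorbs growth from either side.

SPACE_TOP = 0x10000  # made-up size of the virtual address space

class AddressSpace:
    def __init__(self):
        self.heap_end = 0x1000      # heap begins just above the program code
        self.stack_end = SPACE_TOP  # stack begins at the very top

    def grow_heap(self, size):
        if self.heap_end + size > self.stack_end:
            raise MemoryError("heap would collide with the stack")
        self.heap_end += size
        return self.heap_end

    def grow_stack(self, size):
        if self.stack_end - size < self.heap_end:
            raise MemoryError("stack would collide with the heap")
        self.stack_end -= size
        return self.stack_end

a = AddressSpace()
a.grow_heap(0x2000)
a.grow_stack(0x1000)
assert a.heap_end == 0x3000 and a.stack_end == 0xF000
assert a.stack_end - a.heap_end == 0xC000  # the sparse gap left to grow into
```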
<h2>Process states</h2>
<p>
	At any point in time, a process may be in any of three states: running, ready, or blocked (Arpaci-Dusseau &amp; Arpaci-Dusseau, 2016).
	When a process is running, it has control of the processor.
	It doesn&apos;t have full access to everything though, as it&apos;s running in user mode, not kernel mode.
	It is in this state that a process gets its work done.
</p>
<p>
	A process becomes blocked when it asks the operating system for something that can&apos;t be instantly completed.
	For example, if the program tries to perform an {$a['I/O']} operation, the process becomes blocked.
	When the system finishes providing what the process asked for (for example, getting input from a user, writing something to disc, or retrieving data from the network), the process moves from the blocked state to the ready state.
</p>
<p>
	In the ready state, a process has everything it needs to run, but isn&apos;t currently the process using the processor.
	When a process is started, it&apos;ll be in this state, and it&apos;ll also be put in this state when leaving the blocked state.
	A running process may not complete before the operating system decides to interrupt the process and run a different one.
	In a case such as that, the process will transition from the running state to the ready state.
</p>
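<p>
	The three states and the transitions between them can be captured as a tiny state machine (a sketch of my own, assuming only the transitions described above are legal):
</p>

```python
# Minimal three-state process model: running, ready, blocked, with only
# the transitions described in the text allowed.

RUNNING, READY, BLOCKED = "running", "ready", "blocked"

LEGAL = {
    (READY, RUNNING): "scheduled",      # scheduler picks the process
    (RUNNING, READY): "descheduled",    # scheduler interrupts it
    (RUNNING, BLOCKED): "starts I/O",   # it asks for something slow
    (BLOCKED, READY): "I/O completes",  # the request is fulfilled
}

class Process:
    def __init__(self):
        self.state = READY  # a newly started process begins in the ready state

    def move(self, new_state):
        if (self.state, new_state) not in LEGAL:
            raise ValueError("illegal transition " + self.state + " -> " + new_state)
        self.state = new_state

p = Process()
p.move(RUNNING)
p.move(BLOCKED)  # e.g. it issues a disc read
p.move(READY)    # the read finishes
assert p.state == READY
```

Note that a blocked process can never jump straight back to running; it must pass through the ready state and wait for the scheduler.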
<h2>Load balancing</h2>
<p>
	There are a couple of different ways to balance the load between processors when the jobs don&apos;t divide evenly among them (Arpaci-Dusseau &amp; Arpaci-Dusseau, 2016), and each has its advantages and disadvantages.
	The most obvious way, in my opinion, is to select some jobs that will remain on their respective processors while migrating others around.
	The advantage of this is quite clear: those jobs that don&apos;t migrate achieve perfect cache affinity.
	For these jobs, memory-related overhead is at its lowest achievable point, and those jobs run smoothly.
	The disadvantage is just as blatantly obvious too: the migrated jobs achieve zero cache affinity and have the highest attainable memory overhead.
	From the user&apos;s point of view, these processes will make inexplicably slow progress, performing poorly compared to the others for no visible reason.
	They&apos;ll likely blame the problem on the software that spawned the processes, instead of placing blame on the operating system, where it belongs.
</p>
<p>
	The other option is to migrate all jobs between processors, slowly over time.
	In this scenario, no process gets unwarranted special treatment.
	Cache affinity is granted to each process for a while, but the process does eventually move to the next processor and allow another process to use the first processor.
	If processes A, B, and C are run on $a[CPU]s 0 and 1, $a[CPU] 0 might run processes A and B for a while, letting $a[CPU] 1 handle only process C, but then after a time, $a[CPU] 0 might be running only process B while $a[CPU] 1 might run processes A and C.
	After a while, process C would move to $a[CPU] 0, and still later, process B could end up on $a[CPU] 1 with process A.
	All three processes ended up running on both processors, though each had stretches of time on a single $a[CPU], long enough to utilise that $a[CPU]&apos;s cache effectively.
	It&apos;s hard to say which method is in total more efficient, but this second option is certainly more fair in its resource distribution.
</p>
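<p>
	The rotating scenario above can be sketched as a toy scheduler (my own illustration in Python; the rotate-and-split policy is one arbitrary way to migrate jobs, not a policy from the reading material): over a few time steps, every job runs on both processors while still getting contiguous stretches on one.
</p>

```python
# Toy migrating scheduler: each time step, rotate the job list by one and
# deal it round-robin across the CPUs, so all jobs visit all CPUs over time.

def migrating_schedule(jobs, cpus, steps):
    """Return, for each job, the set of CPUs it ran on."""
    history = {job: set() for job in jobs}
    for step in range(steps):
        shift = step % len(jobs)
        rotated = jobs[shift:] + jobs[:shift]  # rotate the queue by one each step
        for cpu in range(cpus):
            for job in rotated[cpu::cpus]:     # round-robin split across CPUs
                history[job].add(cpu)
    return history

history = migrating_schedule(["A", "B", "C"], cpus=2, steps=3)
# After three steps, every job has had time on both CPUs.
assert all(ran == {0, 1} for ran in history.values())
```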
<h2>Physical and virtual memory</h2>
<p>
	Physical memory is the memory that actually exists in the machine.
	It&apos;s provided by the $a[RAM] and can only truly be seen by the operating system.
	The operating system virtualises this resource to provide virtual memory (Arpaci-Dusseau &amp; Arpaci-Dusseau, 2016), also known as logical memory, to its processes.
	Each process is given its own dedicated virtual memory, so as not to interfere with, nor be interfered with by, other processes.
	This virtualisation is part of what keeps processes contained and isolated from one another.
</p>
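<p>
	A simple page-table lookup shows the isolation at work (a toy sketch of my own, assuming single-level page tables and a made-up 4&#160;KiB page size): the same virtual address in two processes translates to entirely different physical memory.
</p>

```python
# Toy page-table translation: each process maps its virtual page numbers
# to physical frames, so identical virtual addresses in two processes
# refer to different physical memory.

PAGE_SIZE = 4096

def translate(page_table, virtual_addr):
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)  # split into page and offset
    if vpn not in page_table:
        raise MemoryError("segmentation fault at " + hex(virtual_addr))
    return page_table[vpn] * PAGE_SIZE + offset

proc_a = {0: 7, 1: 3}  # process A: virtual pages 0, 1 -> physical frames 7, 3
proc_b = {0: 2}        # process B: virtual page 0 -> physical frame 2

# The same virtual address lands in different physical frames per process.
assert translate(proc_a, 0x10) == 7 * PAGE_SIZE + 0x10
assert translate(proc_b, 0x10) == 2 * PAGE_SIZE + 0x10
```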
<h2>Conclusions</h2>
<p>
	There&apos;s a lot the operating system has to do to keep everything running smoothly, and we usually take it all for granted until things start going wrong.
	When we&apos;re on a well-constructed operating system, it helps us be more productive, whereas on a less-than-functional one, resources are poorly managed and our productivity falls apart.
	When people complain about their computer behaving poorly when there aren&apos;t any hardware issues, I blame the operating system.
	There are several reasons I choose to use Debian 9, and one of them is that Linux-based systems don&apos;t seem to become bogged down.
	They&apos;re great at resource management, keeping me happy and productive, with the only real bottleneck being my own human limitations.
</p>
<div class="APA_references">
	<h2>References:</h2>
	<p>
		Arpaci-Dusseau, R. H., &amp; Arpaci-Dusseau, A. C. (2016, July 20). Multiprocessor Scheduling (Advanced). Retrieved from <a href="http://pages.cs.wisc.edu/~remzi/OSTEP/cpu-sched-multi.pdf"><code>http://pages.cs.wisc.edu/~remzi/OSTEP/cpu-sched-multi.pdf</code></a>
	</p>
	<p>
		Arpaci-Dusseau, R. H., &amp; Arpaci-Dusseau, A. C. (2016, July 20). The Abstraction: Address Spaces. Retrieved from <a href="http://pages.cs.wisc.edu/~remzi/OSTEP/vm-intro.pdf"><code>http://pages.cs.wisc.edu/~remzi/OSTEP/vm-intro.pdf</code></a>
	</p>
	<p>
		Arpaci-Dusseau, R. H., &amp; Arpaci-Dusseau, A. C. (2016, July 20). The Abstraction: The Process. Retrieved from <a href="http://pages.cs.wisc.edu/~remzi/OSTEP/cpu-intro.pdf"><code>http://pages.cs.wisc.edu/~remzi/OSTEP/cpu-intro.pdf</code></a>
	</p>
</div>
END
);
