<?php
/**
 * <https://y.st./>
 * Copyright © 2017 Alex Yst <mailto:copyright@y.st>
 * 
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 * 
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 * 
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org./licenses/>.
**/

$xhtml = array(
	'<{title}>' => 'Learning Journal',
	'<{subtitle}>' => 'CS 2301: Operating Systems 1',
	'<{copyright year}>' => '2017',
	'takedown' => '2017-11-01',
	'<{body}>' => <<<END
<section id="Unit1">
	<h2>Unit 1</h2>
	<p>
		Virtualisation is used in all major operating systems.
		The two operating systems I use myself at home are Debian 9 and Replicant 4.2.
		Both of these systems provide a virtual system for applications to interact with, making it so hardware-specific programs need not be written (though different versions still need to be compiled for different processors).
		In the case of Debian, the standard Linux $a[API] is available to applications.
		Replicant is a fork of a fork of Android (that&apos;s not a typo; it&apos;s a fork of a fork), so all the free parts of the Android $a[API] are available to work with.
		(In Replicant, the proprietary $a[API] methods made available by Google Play Services aren&apos;t present because Google Play services, thankfully, isn&apos;t included.)
		Android&apos;s $a[API] is based on Java, making Java&apos;s standard methods and classes available as well.
		At work, I deal with a third system that will remain unnamed, but it provides the same type of virtualisation for applications to interact with the hardware without needing to know the specifics of that hardware.
		These are the only three operating systems I interact with, so they&apos;re the only places I come in contact with virtualisation, at least directly.
		I don&apos;t have any other electronics (such as televisions, $a[DVD] players, $a[DVR]s, et cetera) in my home besides my microwave oven, conventional oven, and refrigerator/freezer, so I don&apos;t have as much virtualisation in my home as most people do.
		Internet-based services also tend to run on machines that use virtualisation, so I likely come into indirect contact with virtualisation on countless Linux-based systems, as well as $a[BSD] systems, Windows systems, and OS X systems.
		Using this virtualisation, I&apos;m able to run many programs at once.
		I don&apos;t know what all the computer at work runs, but on my laptop, I need to simultaneously run $a[Tor], my basic desktop software, my text editor (I write in my journal and compose all my assignments in $a[XHTML]), my email client, a Web browser, my file manager, and usually at least two instances of a command line emulator, all at once.
		And that&apos;s my minimal setup, when I&apos;m not doing anything out of the ordinary.
		When I&apos;ve got other things to do, I run additional applications while keeping all my usual stuff running.
		The basic desktop software&apos;s composed of many types of applications as well, which all run at once.
		There&apos;s a lot that needs to happen even with this minimal setup!
	</p>
	<p>
		My name is Alex Yst, and I&apos;m currently working toward my computer science degree here at University of the People.
		I&apos;m agendered, so I&apos;m both an Alexandra and an Alexander (or neither, if you prefer to think of it that way).
		The main language I prefer to use is $a[PHP], but I&apos;m also fluent in Java, Python, $a[XML], $a[XHTML], $a[CSS], and Lua.
		I dabble in JavaScript, but I try to avoid using it when possible for accessibility reasons.
		I maintain a website, and I keep an <a href="https://y.st./en/coursework/">archive of my past coursework</a>.
		I type in English English (as opposed to United States English), so if you&apos;re in the United States, you&apos;ll likely see a lot of my words use an &quot;s&quot; instead of a &quot;z&quot;, and I spell &quot;colour&quot; with a &quot;u&quot;.
		I thought I had an idea of how memory addresses in $a[RAM] were used by applications.
		I thought that the operating system assigned a memory address for the application&apos;s use when the application requested it, and if the application tried to access the memory allocated to another application, the operating system would cause the program to segfault.
		In fact, that&apos;s what I thought a segfault was: an error thrown because an application tried to access memory not allocated to that specific program (or program instance, rather).
		However, it seems the $a[RAM] is virtualised, so there&apos;s no way to even try to access another application instance&apos;s $a[RAM] addresses.
		So what&apos;s a segfault then?
		I&apos;ll need to look into that when time allows, if it&apos;s not covered by the course material.
		The chapter also showed an example of a multi-threaded application coming up with the wrong solution due to sets of instructions not being treated atomically.
		The problem looks very much like my understanding of race conditions.
		I&apos;m really hoping we cover race conditions and how to properly deal with them in our software.
	</p>
</section>
<section id="Unit2">
	<h2>Unit 2</h2>
	<p>
		At the beginning of the school week, my laptop (my only Internet-enabled computer) died on me.
		What slowed that machine down to absolute zero was a fried motherboard, but that doesn&apos;t really count as a process, does it?
		I was going to spend the week as basically a guess on a foreign system, but then I read this week&apos;s learning journal assignment.
		I <strong>*needed*</strong> to have my system running so I could talk about the processes slowing it down!
		So I spent most of the week trying to get ahold of a new laptop from the recycling centre and getting the operating system replaced with my usual one: Debian 9.
		Debian&apos;s actually a pretty lightweight system though, especially considering my use of the $a[LXDE] desktop.
		Of the major desktops available, $a[LXDE] is probably the most minimal, though less-well-known desktops exist that are even more minimal.
		Except when I&apos;m having hardware issues, I almost never have a problem with my system, and the thing runs quicker than I can respond.
		I don&apos;t run a bunch of unnecessary background processes, either.
		I have noticed one application that runs slowly on certain settings: Geany.
		However, it doesn&apos;t slow the whole system down.
		Fixing the speed issue is as easy as disabling the spell check option that prints spelling suggestions to the debug output.
		Spell check can still be (and is) enabled, it just can&apos;t output the spelling suggestions in that particular way without extreme slowness.
		Firefox and Thunderbird are bloated and use more system resources than they should.
		If I had more tasks running and eating up my resources, those two would have more effect on my speed, but as I have resources to spare, these two applications do me no harm.
		Fixing the Firefox issue would be as easy as replacing the Web browser, but it&apos;s difficult to find a decent browser.
		In fact, I&apos;m not convinced Firefox is a decent browser; it&apos;s just less bad than a lot of other browsers.
		Thunderbird is hard to replace though.
		I&apos;ve seen much better email clients, but they aren&apos;t really $a[Tor]-compatible; they leak $a[DNS] requests, if nothing else.
		Thunderbird does this too, but there&apos;s a plugin to make it stop.
		My network connection is slowed down by the $a[Tor] process, which routes all my network traffic through three (or sometimes six) proxies, but there&apos;s no way to fix this.
		Proxies are vital for privacy and security, so removing or replacing $a[Tor] is not an option.
	</p>
	<p>
		As far as hardware abstraction is concerned, I think we pretty much covered the same things this week as last week.
		There wasn&apos;t anything new there.
		However, the information on processes was informative.
		In particular, I find the concept of forking to be interesting.
		I&apos;ve heard of it before, but I&apos;d never known much about it.
		I knew it resulted in more processes, and one has to be careful with these extra processes.
		For example, there&apos;s something known as a &quot;fork bomb&quot;, in which a process can be set up to fork itself indefinitely until all system resources are depleted and the system crashes.
		I think you have to purposely create fork bombs though, they aren&apos;t an accidental thing.
		The <code>exec()</code> system command is also an oddity to me.
		It&apos;s useful, but the fact that it overwrites the current process with a new one is outright bizarre.
		The assignment for the week was a bit vague.
		It asked us to use the <code>fork()</code> system command, but we obviously can&apos;t use it directly.
		We need to use a language to write our code in, so we have to use that language&apos;s $a[API].
		The book&apos;s examples were in C, but we never covered how to write C programs in this course or any of its prerequisites; I don&apos;t know the C language.
		I&apos;m hoping any language will do.
		Personally, I used my native language, $a[PHP], which uses the <code>pcntl_fork()</code> function to access the system&apos;s <code>fork()</code> command.
	</p>
</section>
<section id="Unit3">
	<h2>Unit 3</h2>
	<p>
		The book talks about how a running application isn&apos;t able to jump straight into any part of the kernel code it wants to via a memory address.
		Instead, it must specify a service via a service number, and the hardware looks up what service corresponds to that number in its trap table and executes the corresponding code.
		The book calls this a sort of protection, preventing the bypassing of (for example) permission checks.
		It&apos;s true that this aids in protection of the system, but the benefits don&apos;t stop there.
		Another great aspect of this setup is that the code of the kernel can be more easily changed.
		If even well-behaved applications needed to know what memory address to jump to to achieve the desired results, the kernel code would have to be written such that when updates occur, the the beginning of each system function in the updated code lands on exactly the same memory addresses.
		What a pain!
		It&apos;d be very difficult to expand the system functions to include more (for example) checks, and doing so would require a messy setup of jumping to another memory address in the middle of the function.
		Reducing a system function would result in wasted space between functions.
		Because applications merely need to know the system call number and not the memory address, the memory addresses in the kernel&apos;s $a[API] are allowed to change and be optimised.
		It&apos;s sort of like how in theory, $a[DNS] exists to provide human-readable names for machine addresses, but in reality, it also provides name portability, and this portability may be an even more important feature.
		The book also says that rebooting isn&apos;t hacky, it&apos;s a time-tested way of resetting back to a known-working system state.
		It&apos;s still incredibly hacky that this is even required.
		To say it isn&apos;t hacky is like saying that periodically shredding everything in your filing cabinet and reprinting it isn&apos;t a hacky way to ensure al your files are intact.
		Sure, it&apos;ll work, but there should be a better way.
		Ignoring the immense amount of paper resources consumed, such an approach interrupts workflow horribly, just like rebooting.
		Everyone that has files checked out needs to hand them back in, then all the files need to become inaccessible for a while, then access is restored when the newly-printed copies are made.
		Just like with this filing cabinet example, the system should have a way to check the integrity of the state and recover from such issues.
		In our metaphor, it&apos;s like only reprinting missing or damaged files from the cabinet, after having first taken an inventory of the contents to find the problems.
		Anything short of this is hacky; having the computer drop everything it&apos;s doing and restart is <strong>*undeniably*</strong> hacky, even if it&apos;s the only thing that works with modern software.
		Software just hasn&apos;t improved to the point it should be at yet.
	</p>
	<p>
		Use of virtual machines has improved my efficiency considerably.
		Because of virtual machines, I can run tasks concurrently on my machine.
		Often time, I deal with network slowness.
		While I wait for a page I need to load, either for an assignment submission, assignment instructions, or research material, I can work on a different assignment on another desktop, or even just in another window on the same desktop.
		Likewise, I don&apos;t have to close my plain text editor, which I use to compose my assignment submissions, to open up a Web browser for instructions or research material.
		It&apos;d be a huge hassle and waste of time to have to keep closing the Web browser and the text editor to alternate between the two.
		For that matter, the entire desktop system is only possible because of virtual machines.
		So many processes are needed just to run a desktop environment, and I enjoy the ease and efficiency of my desktop environment because virtualisation allows all those processes to run &quot;at the same time&quot;.
		I also tend to keep a few non-coursework-related things open too, such as my email client, $a[XMPP] client, and $a[IRC] client.
		If not for virtual machines, I would have to become completely unreachable to the outside world whenever i was working on my coursework!
		That&apos;d be a major hassle as well.
		There are countless ways that hardware virtualisation impacts me daily, but for the most part, all those ways boil down to simple concurrency; I don&apos;t go overboard with running huge amounts of software at once like some people do, but I do gain much from being free to run several applications together.
	</p>
</section>
<section id="Unit4">
	<h2>Unit 4</h2>
	<p>
		I wasn&apos;t sure what to think of this week&apos;s discussion assignment.
		It asked about single- versus mulit-<stong>*queue*</stong> scheduling, but that&apos;s what we covered in <strong>*last*</strong> week&apos;s reading material.
		This week, we covered single- versus multi-<strong>*processor*</strong> scheduling.
		Both in the learning guide and in the weekly forum, the topic was about queues and not processors, so I had to assume it wasn&apos;t a typo.
		With that in mind, I decided to complete this week&apos;s reading material, but then go back and discuss last week&apos;s material.
		Thankfully, in this week&apos;s material, I found a new concept of multi-queue scheduling that&apos;s different than last week&apos;s concept.
		I guess the differently-levelled queues are being treated as a single &quot;queue&quot; in this week&apos;s context, so there&apos;s potential to have multiple sets of these levelled queues.
		I also had an issue with the essay topic this week, which asked about the differences between physical, virtual, and logical memory.
		Nowhere in the reading material was logical memory mentioned, as far as I could find.
		I&apos;m guessing that logical memory is a synonym for virtual memory, but I couldn&apos;t find anything on the Web to back that up or dispute it.
	</p>
	<p>
		Scheduling in the processor context is done all day throughout my life.
		At home, I work on coursework on my computer, and the scheduling determines which processes get run when.
		Because the scheduling (at least seems to me to be) efficient on my Debian 9 system, all the work I need done gets done quickly; my own personal limitations act as a bottleneck, but the scheduling on of the processor isn&apos;t.
		We recently installed a new and unnecessary computerised system at work, too.
		The thing slows us down even when its working as intended.
		However, it often locks up for seconds to minutes at a time.
		This system must also perform process scheduling for $a[CPU] time.
		The frequent system lock-ups could be due to poor scheduling, though they might be caused by something else entirely.
		I wouldn&apos;t know.
		In a non-processor-related context, scheduling is used to tell me (and my coworkers) when to show up for duty.
		The head manager <del>guesses</del> <ins>estimates</ins> the customer load that we&apos;ll be dealing with each day, and with that load, how much work will need to be done.
		Like with processor scheduling, it&apos;s a matter of resource efficiency.
		If an employee is given too much clock time, their pay will be higher than it needs to be and the store will lose money.
		If an employee is given too little time, not enough work will get done and the employees on the clock will have too much work load.
		It&apos;s not quite like giving a process too much or too little processor time, but it&apos;s still an important function of resource management.
		Off the clock, I have to manage time in a process similar to scheduling as well.
		I don&apos;t use a formal schedule to tell me when to do what in most cases, but I do need to worry about over- and under-utilisation of time.
		If I spend too much time on coursework for example, I get burnt out and don&apos;t actually get everything done like I should.
		If I spend too little time on it, it also doesn&apos;t get done.
		A nice balance needs to be found.
		I also have to dedicate time to personal upkeep, including eating, sleeping, and showering.
	</p>
</section>
<section id="Unit5">
	<h2>Unit 5</h2>
	<p>
		Memory in the computer sense is used in my daily life both for school and for work.
		We use computers to connect to the school website and complete our assignments, and those computers need to store the information associated with our running software.
		At work, we recently installed a computerised order- and inventory-management system.
		Every employee there hates it because the system is incredibly broken and doesn&apos;t take the reality that real-world stores deal with into consideration.
		Corporate only made us install it so they could sell us their management software.
		That system obviously uses memory as well, but it also segue nicely to non-computerised memory.
		I&apos;m highly in favour of computers when well-implemented, but this system is so broken that we can&apos;t even retrieve the information we need from it.
		Yet we still have to enter the information anyway because the system needs it for other things.
		So, what we have to do, is write out separate order tickets on paper after entering orders into the system.
		These paper tickets act as a low-tech memory that allows us to store and recall information.
		If we were storing them long-term, I&apos;d say these order tickets are like a hard drive, but we don&apos;t.
		We simple store them long enough for the process (the filling of the order) to complete, which usually takes no more than twenty minutes even if we have to cook new menu items from scratch, then we throw them out, thus clearing that spot on the order rack (like freeing memory).
		For years, I&apos;ve also kept a paper and pen with me for taking notes on things I want to write about in my journal when I get home.
		Again, I&apos;m using temporary data storage, similar to memory, just long enough to make it home.
	</p>
	<p>
		I was horrified to see the assignment for this week.
		It can only be completed on a Windows-based system!
		I don&apos;t have access to any computers running Windows.
		This really isn&apos;t my week.
		In this course, I have an assignment that I can&apos;t complete because the assignment depends on students having a particular, expensive operating system that I don&apos;t use and can&apos;t afford to buy, and in my other course, the assignment is ambiguous because the website it depends on has changed since the assignment instructions were constructed.
		I immediately wrote to my professor asking what to do about the Windows-based assignment that I can&apos;t complete.
		I could easily complete the assignment if it said to find those stats using software of our choosing.
		However, it demanded we use Process Explorer, which is only available for a single operating system.
		We were given zero warning that we&apos;d need to have Windows for this course, so I wasn&apos;t able to prepare before registering.
		I&apos;m not the only student here that doesn&apos;t have Windows, either.
		Several other students expressed being in the same bind, though they seemed less stressed out about it.
		The problem is that we can&apos;t predict the grading instructions.
		If the grading instructions turn out to specify that things all be done with Process Explorer, everyone that used an alternative will get a grade of zero, despite the fact that we put in <strong>*more*</strong> effort than those that used Process Explorer.
		After all, we had to actually figure out what software would give us the stats we needed on our own, while Process Explorer users were just handed their software!
		I never did get a response from our professor, and I spent the whole week stressing about this, waiting each day hoping that the essay I wrote without Process Explorer would be fine.
		On the night before the final day, Mostapha Ramadan offered to lend me a Windows system remotely, so my essay was saved.
		However, I had work the final night, so I had to spend the second-to-final night integrating Process Explorer information into my essay instead of sleeping.
		I haven&apos;t had sleep in over twenty-four hours, and now I&apos;ve got to go to work.
		On a lighter topic, it looks like covered what segfaults are finally.
		Yay!
		Since <a href="https://y.st./en/coursework/CS2301/#Unit1">Unit 1</a> when it was hinted that segfaults aren&apos;t what I thought they were, I&apos;d been hoping we&apos;d cover what they actually are.
		Memory leaks are pretty much what I expected they were, but I hadn&apos;t heard of dangling pointers before.
		The problem of not allocating enough memory seems somewhat similar to the Heartbleed bug in OpenSSL, though I think it&apos;s not quite the same.
		In Heartbleed, the server is given a value and told the value is larger than it is.
		It seems to get the correct amount of memory allocated and written to, but then when it tries to read back the memory, it reads back way more than the actual value.
		In a way though, it&apos;s not allocating enough memory to be read back, though the real problem is that it&apos;s not sanity checking its redundant input.
		The translation of virtual memory addresses to physical memory addresses in the hardware was a surprise to me.
		I find it amazing how much hardware is designed specifically to provide the operating system with the tools it needs.
		Memory can even be set to read-only for some processes, allowing the use of shared memory containing instruction code.
		It&apos;s certainly a lot more complex than I&apos;d imagined.
		I don&apos;t understand non-executable segments of memory though.
		If a shared segment can be read but not executed, the process could simply copy the data to an executable segment, then execute it.
		So how is protection against this provided?
		The two-paragraph limit for these journal assignments is a bit of a pain.
		I can&apos;t break my ideas up into separate paragraphs like I&apos;m used to doing.
	</p>
</section>
<section id="Unit6">
	<h2>Unit 6</h2>
	<p>
		If the virtual memory space is larger than the physical $a[RAM] supports, the memory likely simply can&apos;t be paged in all at once.
		Let&apos;s assume that paging is being used, not the variable-width segmentation technique discussed last week.
		First, it&apos;s unlikely a process will need to allocate that much of the address space.
		By not keeping invalid pages in the $a[RAM], the problem is solved just about every time.
		However, it&apos;s possible the process will actually need quite a bit of memory for some reason.
		In this case, hopefully the memory manager can make good estimates as to which pages to keep loaded.
		Keep in mind too that some $a[RAM] will need to be dedicated to the $a[OS] even when the $a[OS] isn&apos;t the active process, as control will need to be able to switch back to the $a[OS].
		Assuming that this one process is the only one using lots of memory, it might be possible to get by fairly well with good estimation as to which pages to keep loaded.
		Without good estimation, the process will cause a lot of page faults, and memory will need to be paged in and out.
		Each page fault will cause a trap to the $a[OS] to be called, so the needed page can be paged in and another paged out.
		If done too much, this will cause what&apos;s known as thrashing: constant page reads from and writes to the hard disk.
		For the sake of argument, we could also have two such processes using more memory than the $a[RAM] has space for.
		In this case, nearly the entire $a[RAM] (everything except that which is reserved by the operating system) will likely get paged in and out at each context switch between the two processes.
		This situation is certainly going to cause thrashing.
		As for a segmented memory system, it&apos;s not likely such a system would be able to support full use of the address space if the address space is larger than what can fit in the section of $a[RAM] not allocated to the $a[OS].
		In such a case, when the process tries to request more memory from the $a[OS] than the $a[OS] has $a[RAM], the $a[OS] won&apos;t be able to comply and will have to terminate the process.
		Here, we see that while the paged solution to this problem is ugly and involves thrashing, at least it functions.
		The segmented solution using base and bounds registers, however, can&apos;t do what it takes to keep such a memory-consuming process alive.
	</p>
	<p>
		In response to my last journal entry, you said I should&apos;ve downloaded a gratis copy of Windows and used that in a hypervisor.
		Microsoft doesn&apos;t just give away copies of Windows though, and neither does University of the People.
		The only way to download a gratis copy would be to pirate it, which would be illegal.
		You&apos;re not recommending that I break the law, are you?
		On the topic of this week&apos;s work, I was overwhelmed when I saw the main assignment for this week.
		Given addresses of a given size and pages of a given size, how many bits are in the offset and how many bits are in the page number?
		That seemed like nearly-impossible-to-grasp knowledge, and I worried I wouldn&apos;t get it; it seemed like to calculate that, I&apos;d need more information than was given.
		It wasn&apos;t anywhere near the overwhelming feeling I got from last week&apos;s work though, and I hoped it was a simpler concept than it appeared on the surface.
		Thankfully, I wasn&apos;t disappointed.
		There&apos;s a number of bits needed to specify a memory location within a page, so the remaining bits of the virtual address are used to specify the page number.
		It&apos;s simple reverse exponentiation and subtraction.
		While reading about translation-lookaside buffers, the first thing that came to mind was context switches.
		After all, every process has its own address space, so how in the world is an address-translation cache supposed to function like we need it to?
		That was covered as well though, so everything made sense again.
	</p>
</section>
<section id="Unit7">
	<h2>Unit 7</h2>
	<p>
		My coursework is, as always, done on my laptop, so there&apos;s physical memory being used there.
		At work, we have a computerised inventory-management system, so gain, we have physical memory present.
		My mobile also has physical memory, though I don&apos;t use my mobile often.
		And finally, I&apos;m pretty sure my microwave oven has physical memory, which it uses to store the number of seconds left of the cooking time.
		I haven&apos;t used my microwave oven in a few weeks though, so I guess that doesn&apos;t qualify as a place physical memory was used in my everyday life this week.
		If we look at the question of where physical memory is present literally, there&apos;s not much to talk about.
		All computerised objects have it, while all non-computerised objects don&apos;t.
		However, I often think of the human brain similarly to how I think of a computer.
		When it takes me a couple moments to remember a piece of information, I think of it being like having to load information from a file on disk into $a[RAM], and I&apos;ve even used such metaphors in my everyday speech.
		While it can take a second or two to remember something you haven&apos;t worked with for a while, once you have thought of it, you can continue using that information without needing to repeatedly take that few seconds to think of it again.
		It&apos;s just like loading a file.
		On the other hand, if someone distracts you while you&apos;re doing something, it can take a bit to remember what you were doing, but it doesn&apos;t take quite as long as before.
		It&apos;s like the thought was paged out of physical memory and into the swap partition.
		Recalling it again is like paging it back in.
		In past weeks, I was able to compare the requested computer components with physical objects, but physical memory is a little different.
		Physical memory is what&apos;s being used <strong>*right now*</strong>.
		If you write something down, that&apos;s not like physical memory; it&apos;s more like something on disk.
		Likewise, anything else in your environment that you manipulate to store data is likewise more of a hard drive than it is physical memory.
		If you&apos;re using something other than your short-term memory to store the data, it&apos;s not physical memory.
		That&apos;s what physical memory is: short-term, volatile memory that is erased when the system is powered down or (in the case of people) you just dump it.
	</p>
	<p>
		Lovely.
		Another Windows-only assignment.
		These kinds of assignments are stacked against those of us that don&apos;t have the Windows operating system; in other words, those of us that are either too poor to afford Windows and those of us that are too well-informed to choose Windows as our computer&apos;s operating system.
		In the reading assignment, we covered the information needed for last week&apos;s learning journal assignment.
		For last week&apos;s learning journal assignment, we were simply asked to state how we thought a computer would handle virtual memory spaces that don&apos;t fit into $a[RAM], and this week, we got to see how close our guesses turned out to be.
		It was pretty much as I said, too: memory would need to be paged in as used, but not stored all in $a[RAM] at once.
		I didn&apos;t guess that a process that triggers a page fault would become blocked though.
		I hadn&apos;t considered that a page fault is basically an {$a['I/O']} operation and would be subject to similar rules as any other {$a['I/O']} operation.
		I also found the comparison of $a[RAM] to a cache for pages to be interesting.
		I&apos;m not sure how accurate that depiction technically is, but as far as how to think about which pages to keep loaded, it&apos;s exactly how we should think of the $a[RAM].
	</p>
</section>
<section id="Unit8">
	<h2>Unit 8</h2>
	<p>
		For the sake of this assignment though, if I were for some reason a Windows user, it seems the first step would be to use the permission system to protect the log files.
		Why are the log files not protected by default in Windows?
		I&apos;m not sure; after all, they <strong>*are*</strong> protected by default in Linux.
		If a Windows &quot;domain&quot; is configured for the machine, it can be further configured to prevent access to the logs from those that shouldn&apos;t have it.
		Windows can also be used to log access to log files, so if suspicious log access is noticed, those accessing the log file when they&apos;re not supposed to can be questioned and appropriate action taken.
		The $a[NSA] says the Enhanced Mitigation Experience Toolkit can be used to provide added defence for Windows machines connected to the network, but doesn&apos;t give any details as to what the Enhanced Mitigation Experience Toolkit actually does.
		They also recommend using a single, dedicated machine for gathering all the logs from the machines on the local network.
		&quot;Domain&quot; policies can be put in place to allow the reading of those gathered logs by only those that should be authorised to read them.
		The event collector used to centralise the log files, as recommended by the $a[NSA], opens up a vulnerability.
		(Network services on any system are likely to open up some sort of vulnerability, not just on Windows.)
		To mitigate this vulnerability, firewall rules can be established to only allow the legitimate log collector to access this service on the other machines.
		Furthermore, communication between the logging server and the other machines isn&apos;t encrypted by default.
		It&apos;s possible and highly recommendable to enable encryption.
		Honestly, <strong>*all*</strong> network traffic should be encrypted.
		No exceptions.
		Insecure methods of authentication when dealing with the log server should likewise be disabled.
		Channel-binding tokens can be enabled and used to prevent many man-in-the-middle attacks.
		A trusted host can be set up for when an unencrypted connection is used to communicate with the log server, but this is irrelevant; as stated before, <strong>*all*</strong> connections should be encrypted.
		Anything less than that is not secure.
		The $a[NSA] has advice on which events should be logged and which shouldn&apos;t, but going into details in this learning journal entry would make this already-too-long paragraph even longer, seeing as there&apos;s a two-paragraph cap on this assignment so I can&apos;t break up ideas into their own paragraphs.
		To keep it short, the $a[NSA] recommends logging application whitelisting, application crashes, system/service failures, system update errors, firewall reconfigurations, log modification (such as erasure), software installation, account usage (especially privilege escalation), anything related to the antivirus software (such as modification of the antivirus software or failure of the antivirus software to perform normally), the connection and disconnection of mobile devices (and I&apos;d assume other portable devices such as laptops) to the network, the insertion of $a[USB] storage devices, document printing, remote desktop logins, 
		As the $a[NSA] says, not everything logged will be malicious, but if malicious activity is discovered, logs can be very helpful in tracking down the culprit.
		They recommend reviewing the logs daily, then archiving said logs and starting with a clean log file.
		That way logs can be gone over again later if they need to be, but huge, slow log files aren&apos;t burdening the system.
		Next, the $a[NSA] recommends disabling the remote shell.
		Again, as a member of the Linux community, this seems so very backwards.
		On Linux, our remote shell equivalent, <code>ssh</code>, has to be explicitly installed.
		It&apos;s not installed or enabled by default.
		In other words, Linux takes the secure approach, enabling only the access the administrator explicitly enables, while Windows opens itself up to all access by default, providing a large surface for attackers to work with until an administrator explicitly removes such access.
		The $a[NSA] touts kernel driver signing as something to be aware of and make use of for security, but unless you can trust the people signing the kernel, such a feature doesn&apos;t actually provide any security.
		And as Microsoft has proven themselves untrustworthy countless times in the past, a signature from them is next to worthless.
		Still, if you&apos;re going to be using Windows, you&apos;re deciding to put your trust in Microsoft anyway.
		While a signature from Microsoft is <strong>*next to*</strong> worthless, it&apos;s not <strong>*completely*</strong> worthless.
		You can use such signatures to be sure that the only tampering that occurs to signed software is either performed my Microsoft themself (and is therefore not a tampering, but a regular modification) or performed by someone authorised by Microsoft (such as the $a[NSA]).
		The $a[NSA] talks about avoiding malware infections several times in their paper, but really, the most effective way to avoid malware is to use a system build from code that can be fully audited so security holes can be found and patched by anyone that cares enough to put in the time.
		$a[BSD] and Linux are going to be your go-to systems for <strong>*real*</strong> security.
		When a security hole is found in one of these systems, it gets patched pretty quickly.
		On the other hand, with Windows, when a security hole is found, users are stuck waiting for Microsoft to do something about it.
		It&apos;s also worth noting that Windows 8 and above have built-in spyware that phones home to Microsoft.
		If you want a semi-secure Windows installation (as a very secure Windows installation isn&apos;t possible), you&apos;ll need to use Windows 7 or older.
	</p>
	<p>
		I feel like the learning journal assignment this week is rigged.
		The two Windows-based assignments were bad enough, but with a fellow student lending me a Windows virtual machine remotely, they were possible to complete.
		The week, the topic is Windows security.
		The learning journal assignment is to discuss what security measures I have taken or will take because of what I&apos;ve learned from the course material.
		Using Windows is known to be one of the biggest security risks one can take on a computer though.
		Even if I could afford to use Windows, and I can&apos;t, there&apos;s no reason why I&apos;d use it if I care at all about security.
		And I do care about security.
		Deeply.
		For that very reason, among others, the material this week has had zero effect on how I plan to use my computer.
		I&apos;ll continue not using Windows, and I&apos;ll continue enjoying a security level that is above even the most hardened Windows system conceived.
		None of this material applies to measures I can or would take personally, as I&apos;m too well-informed to ever choose to use the Windows operating system.
		I also noticed that the reading material this week was provided by the $a[NSA].
		That makes it dubious at best, as the $a[NSA] is known to convince proprietary system developers to build back doors into their systems.
		The $a[NSA] doesn&apos;t have our best interests or our security at heart, so what they say about security should be taken with a grain of salt.
		My guess is that any security-boosting tips they have are fine, but that they&apos;re not enough.
		Further hardening would be required for a system to be anywhere near secure.
		Furthermore, if the $a[NSA] says some security measure is unnecessary, there&apos;s a good chance it&apos;s not actually an unnecessary measure.
		Their advice on what to log, if assumed to be only a <strong>*minimal*</strong> level of logging, was probably valuable for any multi-user system.
		As I&apos;m the only human user on my computer though (there are several daemon-run users), most of this logging wouldn&apos;t be beneficial in my current circumstances.
	</p>
</section>
END
);
