\documentclass{article}

\usepackage[latin1]{inputenc}
\usepackage{verbatim}
\usepackage{caption}
\usepackage{listings}
% \usepackage{amsmath, amsthm}
\usepackage{graphicx}

% \author{Timothy Humphries\\
%         \texttt{thum460@cse.unsw.edu.au}}
\title{Stupid Operating System}

\begin{document}
\maketitle

\tableofcontents
\newpage

\section{Overview}

Stupid Operating System is a very, very simple OS personality built
atop the seL4 microkernel, designed to run on the iMX6.

\subsection{Features}

\subsection{Code Overview}
Code can be loosely divided into physical memory management, virtual
memory management, filesystem, clock, and process management. Most of
these are covered below.

\subsubsection{Concurrency policy}
All public interfaces return integer-valued error codes.

All asynchronous interfaces are defined in terms of a callback and a
cookie, e.g. \texttt{typedef void (*frame\_swapin\_cb)(int err, void
  *cookie)}. There is also a SOS task queue, defined in
\texttt{task.h}, that is essentially a list of callbacks and cookies,
triggered in order of registration. High-priority callbacks are
processed immediately after syscalls and before SOS yields, while
low-priority callbacks are processed on kernel ticks and are intended
for long-running operations. Some modules use coroutines internally;
for this reason, all asynchronous functions call back via the task
queue (else the function would return before the coroutine yields).
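The callback-and-cookie pattern can be sketched as follows. The queue
below is a plain FIFO with invented names and sizes; the real
\texttt{task.h} queue also distinguishes high- and low-priority tasks,
which this sketch omits.

```c
#include <stddef.h>
#include <assert.h>

/* Callback-and-cookie shape, matching frame_swapin_cb above */
typedef void (*task_cb)(int err, void *cookie);

#define TASK_MAX 16  /* invented size for this sketch */

struct task { task_cb cb; void *cookie; };

static struct task queue[TASK_MAX];
static size_t head, tail, count;

/* Register a callback; nonzero error on a full queue, per the
 * integer-error-code convention */
int task_register(task_cb cb, void *cookie) {
    if (count == TASK_MAX) return -1;
    queue[tail] = (struct task){ cb, cookie };
    tail = (tail + 1) % TASK_MAX;
    count++;
    return 0;
}

/* Run all pending callbacks in order of registration */
void task_run_all(void) {
    while (count > 0) {
        struct task t = queue[head];
        head = (head + 1) % TASK_MAX;
        count--;
        t.cb(0, t.cookie);
    }
}

/* Demo callback: count successful invocations via the cookie */
static void add_cb(int err, void *cookie) {
    if (err == 0) (*(int *)cookie)++;
}
```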


\subsubsection{Common terminology}

A few heavily-used variable names:

\begin{itemize}
\item \texttt{paddr}: A board-physical address assigned by
  \texttt{ut\_alloc}
\item \texttt{sos\_vaddr}: A virtual address within the SOS window,
  where some frame has been mapped. See frame table section.
\item \texttt{proc\_vaddr}: A virtual address within a process
  namespace. You will generally need to go through the functions in
  \texttt{pagetable.h} to make use of this address.
\end{itemize}

\subsubsection{Libraries used}
We made heavy use of a datastructure library known as \texttt{libut}.
This provides simple macro-based linked lists and hash tables; for
instance, adding a \texttt{UT\_hash\_handle} field to any struct
allows its instances to be stored in a hash table. Likewise, adding
\texttt{next} and \texttt{prev} pointers allows one to use the linked
list macros. The library does not do any memory management. This was
convenient for embedded linked lists in tables, and for other complex
topologies.
\\
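The embedded-pointer pattern those list macros rely on looks roughly
like this. \texttt{list\_append} is a hand-rolled stand-in for the
library macro, not the real expansion, and the convention that the
head's \texttt{prev} points at the tail is an assumption of this
sketch.

```c
#include <stddef.h>
#include <assert.h>

/* Embedded doubly-linked list: next/prev live inside the element
 * itself, so no separate node allocation is needed. */
struct frame_entry {
    int frame_num;
    struct frame_entry *next, *prev;
};

/* Append in O(1) by keeping the tail reachable as head->prev
 * (sketch of the macro's behaviour, not the macro itself) */
void list_append(struct frame_entry **head, struct frame_entry *e) {
    e->next = NULL;
    if (*head == NULL) {
        *head = e;
        e->prev = e;   /* head->prev doubles as the tail pointer */
        return;
    }
    e->prev = (*head)->prev;
    (*head)->prev->next = e;
    (*head)->prev = e;
}
```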

We also made sparse use of a coroutine library, \texttt{picoro}.
This library uses stack splicing in a clever way, such that splices
are retained in a list and reused for future coroutines.
It also allows very simple message passing between coroutines, via
arguments to \texttt{yield} and \texttt{resume}. When a coroutine
returns, its splice is freed, as one would expect.

\subsection{Shortcomings}

Due to personnel issues, not much stress testing was done. For
instance, \texttt{malloc} is still using a fixed range in the SOS
binary, and if this heap runs out, very little can be done.
Likewise, coroutines are used in a number of places; if more than 30
coroutines are started at once, bad things start to happen. 
There are still a few race conditions present, including one involving
\texttt{process\_wait} and an unfair scheduler; deadlines be deadlines.
\\

I'm also really sorry about all the GCC warnings; I promise they're
all word casts and implicit declarations.

\newpage
\section{System Calls}

The basic protocol for SOS syscalls:
\begin{itemize}
\item MR 0 contains the syscall number
\item The rest of the IPC buffer carries syscall-specific information
\end{itemize}

After the OS services the request:
\begin{itemize}
\item MR 0 contains any results, if they were expected
\item MR 1 contains any error code
\end{itemize}

These are all defined in \texttt{libsos/src/sos.c} and
\texttt{syscall.c}. Besides the treatment of filesystem paths, it is
all very straightforward.


\subsection{Filesystem paths}

NFS has a maximum filename length of 255 bytes, so we enforce a
similar length in \texttt{SOS\_PATH\_MAX}, defined as the full IPC
buffer minus three words for bookkeeping. This means a single IPC
message is sufficient to transfer any filename.

\newpage
\section{Physical Memory Management}

Files relating to physical memory can be found in the \texttt{mem/}
directory; see also \texttt{frametable.h} and \texttt{swaptable.h}.
\\

We obtain all physical frames via \texttt{ut}. Every obtained frame is
mapped into the SOS address space at a known offset. This SOS range is
known as the \texttt{SOS\_WINDOW}, and the fixed bounds are defined in
\texttt{vmem\_layout.h}.
\\

Mapping frames into SOS places a hard bound on the memory we are able
to manage, since there is limited room available in the SOS address
space. However, for the iMX6, there is plenty of room.
\\

\texttt{PHYS\_MAX\_FRAMES} in \texttt{vmem\_layout.h} defines the
maximum number of frames usable by the frame table. This can be used
to induce artificial memory pressure, for testing swap. This is also
intentionally set a little below the maximum, in order to keep enough
scratch in ut for seL4 page tables and the like. If we had more time,
we'd have built a simple scratch mechanism to replace the bad malloc
with untracked mapped ut frames.

\subsection{Frame Table}

We manage 4KiB frames and 4KiB pages. No other sizes.

The frame table is initialised once, and mapped into a fixed range
defined in \texttt{vmem\_layout.h}. Frames used for the table cannot
be reused, i.e. we leak frames to bootstrap it.
\\

It was originally designed as a simple array, addressed by some
simple arithmetic on the physical addresses spat out by
\texttt{ut\_alloc}. This is still the case; however, the entries in
the array also behave as a doubly-linked list. The list head is
\texttt{\_swap\_queue}, and this is used to maintain the swap policy,
i.e. choose which frame to swap out next.
\\

When frames are freed, they are returned to \texttt{ut}. We do not use
a free list.

\subsubsection{Bookkeeping}
Here we step through the items tracked in a \texttt{frame\_table\_entry}.
\begin{itemize}
\item \texttt{page}: This is the cap used to map the frame into the
  \texttt{SOS\_WINDOW}.
\item \texttt{mapping}: When the frame is granted to a process, this
  stores all the information needed to revoke or restore the mapping
  (for example, when the frame is swapped out). Cap,
  \texttt{proc\_vaddr}, pointer to PCB.
\item \texttt{swappable}: A bit set when a frame is able to be
  swapped; this is usually when said frame has been granted to a
  process and is not pinned.
\item \texttt{referenced}: Used to administer the second-chance page
  replacement policy. If this bit is set, the frame has been
  referenced by a process recently, and is not a very good swap
  candidate.
\end{itemize}
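A hypothetical C layout for these fields; the names follow the list
above, but the types and the embedded swap-queue links are guesses
rather than the real struct.

```c
#include <stdint.h>
#include <assert.h>

typedef uint32_t fake_cap_t;  /* stand-in for a seL4 cap */

/* Everything needed to revoke or restore a process mapping */
struct frame_mapping {
    fake_cap_t cap;        /* process-side cap for the granted frame */
    uintptr_t proc_vaddr;  /* where the frame is mapped in the process */
    void *pcb;             /* pointer to the owning PCB */
};

struct frame_table_entry {
    fake_cap_t page;              /* cap mapping the frame into SOS_WINDOW */
    struct frame_mapping mapping; /* valid once granted to a process */
    int swappable;                /* granted and not pinned */
    int referenced;               /* second-chance bit */
    /* embedded doubly-linked list links for _swap_queue */
    struct frame_table_entry *next, *prev;
};
```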

\subsubsection{Functions}

Here we step through the most important functions used for the frame
table.

\begin{itemize}
\item \texttt{frame\_grant} / \texttt{frame\_ungrant}: Note that
  \texttt{frame\_alloc} assumes it is SOS who wants the frame. If we
  want to give it to a process, we grant it; this takes care of all
  the mapping and cap-copying.  Ungranting a frame reverses the
  mapping and clears the cap. Granting makes a frame swappable, and
  the caller can quickly call \texttt{frame\_pin} to avoid this.
\item \texttt{frame\_pin} / \texttt{frame\_release}: Pinning a frame
  prevents it from being swapped out. We do this when we need to
  perform a synchronous copy into or out of process memory, for
  example. Releasing a frame renders it swappable again.
  This is necessary to avoid races when performing asynchronous IO.
\item \texttt{frame\_process\_cap}: Provides the process cap for a
  granted frame. Used occasionally when the above abstractions were
  not good enough.
\item \texttt{frame\_alloc}: This allocates a frame, swapping out if
  necessary. A \texttt{sos\_vaddr} is passed back to the caller; this
  is the frame's address within the SOS window.
\item \texttt{frame\_translate\_vaddr} /
  \texttt{frame\_translate\_number}: The pager does not have enough
  bits to store a full \texttt{sos\_vaddr}; instead it stores a 24-bit
  frame number. Translate between the two with these functions.
\item \texttt{frame\_swapin}: A frame has been swapped out, and a
  process wants it back. This requests the restoration of the frame.
\end{itemize}
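The translation functions are simple arithmetic against the window
base; the base address below is invented, the real bounds live in
\texttt{vmem\_layout.h}.

```c
#include <stdint.h>
#include <assert.h>

#define SOS_WINDOW_START 0x20000000u  /* invented for this sketch */
#define PAGE_SIZE        4096u

/* sos_vaddr -> 24-bit frame number (2^24 frames covers far more
 * physical memory than the iMX6 has) */
uint32_t frame_translate_number(uintptr_t sos_vaddr) {
    return (uint32_t)((sos_vaddr - SOS_WINDOW_START) / PAGE_SIZE);
}

/* frame number -> sos_vaddr within the SOS window */
uintptr_t frame_translate_vaddr(uint32_t frame_num) {
    return SOS_WINDOW_START + (uintptr_t)frame_num * PAGE_SIZE;
}
```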

\subsubsection{Problems}

The frame table uses too much memory. It allocates bookkeeping
information for all frames in the range 0 to \texttt{ut\_lo}, even
though it is not possible to obtain addresses that low. Likewise, it
respects \texttt{ut\_hi}, even though we physically constrain the
number of available frames using \texttt{PHYS\_MAX\_FRAMES} in
\texttt{vmem\_layout.h}. The fields are also integers, when a simple
bitfield would be more efficient. Deadlines...

\subsection{Swap Table}
We keep track of swap numbers using a simple swap table. This wastes a
little memory, but allows us to obtain the next available swap cell in
constant time.
\\

We bootstrap the same way as the frame table, leaking memory from ut
and mapping into a range defined in \texttt{vmem\_layout}.
We treat the swap table as an array-backed linked list, where:

\begin{itemize}
\item The first cell points to the next available swap frame
\item If a cell is zeroed out, the adjacent (next-indexed) cell is
  the next available
\item When returning a swap frame to the table, simply push it in as
  the list head, and make its successor the old list head.
\end{itemize}

We assume the swap file could be as large as 2GiB, so linear search
was not acceptable.
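A minimal sketch of this constant-time free-list scheme, with an
invented table size and no bounds checking. Cell 0 serves as the list
head; a zero entry implicitly means "the adjacent cell".

```c
#include <stdint.h>
#include <assert.h>

#define SWAP_CELLS 64  /* tiny, for illustration */

/* Zero-initialised: table[0] is the head pointer, and a zero value
 * in cell i implicitly points at cell i + 1. */
static uint32_t table[SWAP_CELLS];

static uint32_t next_of(uint32_t i) {
    return table[i] ? table[i] : i + 1;
}

/* Obtain the next available swap cell in O(1) */
uint32_t swap_alloc(void) {
    uint32_t cell = next_of(0);
    table[0] = next_of(cell);
    return cell;
}

/* Return cell i to the table by pushing it in as the list head,
 * making its successor the old head */
void swap_free(uint32_t i) {
    table[i] = next_of(0);
    table[0] = i;
}
```

Note that cell 0 is never handed out, since it holds the head itself.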

\newpage
\section{Virtual Memory Management}

All virtual memory behaviour can be found in the \texttt{vm/}
directory. A SOS-public header, \texttt{pagetable.h}, will also be
helpful.

\subsection{Regions}
In order to provide constant-time and synchronous management of
process address spaces, we introduced a simple region abstraction.
Regions have a lower-bound virtual address, a number of pages, and a
set of attributes. All changes to regions ensure that no overlap is
possible. Guard pages are also implemented as VM regions with very
restrictive attributes.
\\

VM faults first hit the region list, which performs a linear search of
all regions. This is effectively a constant-time operation: since we
did not implement \texttt{mmap}, all processes have roughly ten
regions. If the address satisfies a region and the attributes are
permissive, we move on to the pager and potentially map in a frame.
Outside of the pager, SOS modules are only permitted to create
regions or resize the heap (move the brk) - they cannot resize
or destroy arbitrary regions without dismantling the process itself.
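The fault-time region check might look like the following sketch; the
struct fields and attribute bits are illustrative only, and the real
attribute set is richer.

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

#define PAGE_SIZE 4096u

/* Lower-bound address, page count, and a (reduced) attribute set */
struct region {
    uintptr_t base;     /* page-aligned lower bound */
    size_t npages;
    unsigned read : 1, write : 1;
};

/* Linear search over the region list; effectively constant time
 * with ~10 regions per process. Returns NULL when the fault should
 * be treated as a genuine access violation. */
struct region *region_lookup(struct region *regs, size_t n,
                             uintptr_t proc_vaddr, int write) {
    for (size_t i = 0; i < n; i++) {
        struct region *r = &regs[i];
        if (proc_vaddr < r->base) continue;
        if (proc_vaddr >= r->base + r->npages * PAGE_SIZE) continue;
        /* guard pages are regions with no permissive attributes */
        if (write ? !r->write : !r->read) return NULL;
        return r;
    }
    return NULL;
}
```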

\subsection{Page Tables}
We implemented a standard two-level page table, with a 10/10 split.
This allowed the page directory to remain in a single 4KiB page, which
also allows us to use \texttt{frame\_alloc} facilities.
The page directory is obtained at process creation, and all child page
tables are allocated on demand. All seL4 caps and page table objects
built in this process are stored in a list in the PCB, for freeing.
\\

Each page table entry has 8 bits of possible bookkeeping and a 24-bit
unsigned frame number. When the page is backed by a physical frame,
the frame number is one that is understood by
\texttt{frame\_translate\_number}. If the frame has been swapped out,
the \texttt{attrs.swapped} bit is set, and the frame number is a
potentially-larger swap number, understood by \texttt{frame\_swapin}.
\\
The bookkeeping bits track whether the frame has been swapped, whether
it is pinned, and its readable, writeable and executable
attributes. The latter attributes are redundant, as that information
is also tracked by the region system.
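The packing might look like the following; the actual bit positions
in our page table entries may differ, but the 24-bit number and 8
bookkeeping bits share a single 32-bit word as described.

```c
#include <stdint.h>
#include <assert.h>

typedef uint32_t pte_t;

/* Bookkeeping bits in the top byte; positions are assumptions */
enum {
    PTE_SWAPPED = 1u << 24,  /* number is a swap number, not a frame */
    PTE_PINNED  = 1u << 25,
    PTE_R       = 1u << 26,
    PTE_W       = 1u << 27,
    PTE_X       = 1u << 28,
};

#define PTE_NUM_MASK 0x00FFFFFFu  /* low 24 bits: frame/swap number */

pte_t pte_make(uint32_t num, uint32_t attrs) {
    return (num & PTE_NUM_MASK) | attrs;
}

uint32_t pte_number(pte_t pte) { return pte & PTE_NUM_MASK; }
int pte_swapped(pte_t pte)     { return (pte & PTE_SWAPPED) != 0; }
```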

\subsection{Swap policy}
We implement a second-chance page replacement policy, as required.
When a frame is granted to a process, it is appended to the swap
queue. When the frame table hits its frame cap or cannot obtain any
memory from ut, it will traverse the swap queue. Any referenced frame
in the queue uses up its chance: it is mapped out of its process (so
that a future reference faults and re-sets the bit) and added to the
back of the queue. Any unreferenced frame is evicted. If no frames
are swappable, behaviour is undefined; this scenario was not
well-tested.
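Victim selection can be sketched over a small circular queue; the
real policy operates on the embedded frame-table list rather than an
array, and the sizes here are invented.

```c
#include <stddef.h>
#include <assert.h>

#define NFRAMES 4  /* tiny, for illustration */

static int referenced[NFRAMES];                 /* second-chance bits */
static int queue[NFRAMES] = { 0, 1, 2, 3 };     /* frame numbers, FIFO */
static size_t qlen = NFRAMES;

static int pop_front(void) {
    int f = queue[0];
    for (size_t i = 1; i < qlen; i++) queue[i - 1] = queue[i];
    qlen--;
    return f;
}

static void push_back(int f) { queue[qlen++] = f; }

/* Second chance: referenced frames lose their bit and move to the
 * back; the first unreferenced frame is evicted. Assumes at least
 * one unreferenced frame exists (else this loops forever). */
int choose_victim(void) {
    for (;;) {
        int f = pop_front();
        if (!referenced[f]) return f;  /* evict this one */
        referenced[f] = 0;             /* chance used up */
        push_back(f);
    }
}
```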

\subsection{Functions}

This section outlines the various public functions used by the rest of
SOS to manipulate virtual memory. Functions with the
\texttt{\_nofault} suffix are fully synchronous, and will not perform
any page fault (i.e. will report an error rather than force any
asynchronous behaviour).

\begin{itemize}
\item \texttt{sos\_page\_pin} / \texttt{sos\_page\_release}: These are
  asynchronous operations. When \texttt{page\_pin} calls back, one
  can be certain that there is a frame backing the given
  \texttt{proc\_vaddr}, and that it will not be swapped until
  \texttt{page\_release} has been called. These combine
  \texttt{page\_fault\_handle} with \texttt{frame\_pin} /
  \texttt{frame\_release}.
\item \texttt{sos\_page\_map} / \texttt{sos\_page\_unmap}: These
  functions are used by the frame table to control the mapping or
  unmapping of a frame into process address space. These manipulate
  capabilities and seL4-level mappings, but do not allocate or free
  any frames.
\item \texttt{sos\_page\_swapin} / \texttt{sos\_page\_swapout}: These
  notify the pager that a frame has been swapped out to disk, or
  swapped back in. As these are simple notifications, they are
  completely synchronous operations.
\item \texttt{sos\_page\_lookup\_nofault}: If a physical frame is
  currently backing \texttt{proc\_vaddr}, this function will return its
  SOS window address (\texttt{sos\_vaddr}) and, optionally, its
  permissions. Often called after \texttt{page\_pin}.
\item \texttt{sos\_copyin\_nofault} / \texttt{sos\_copyout\_nofault}:
  Supervised copying into and out of process memory.
  If there is a physical frame currently backing \texttt{proc\_vaddr},
  these functions will copy at most one frame worth of data into or
  out of that frame. Used by filesystem and ELF loading, in
  combination with \texttt{page\_pin}.
\end{itemize}

% \subsection{Difficulties}
%  
% Transitioning from a fully synchronous design to one with a high
% degree of asynchronous behaviour proved very difficult.
%  
% The easiest way around this was to make heavy use of the pinning
% functions when reaching into process memory.

\newpage
\section{Process Management}

SOS process control blocks and the relevant process management
functions are declared in \texttt{process.h}, and defined in the
\texttt{process/} directory.
\\

\subsection{PID policy}

When creating a new process, we first try a PID one higher than the
last, and wrap back around to a linear search when we hit
\texttt{SOS\_PID\_MAX}. This is not ideal, but works (deadlines).
Since we have at most 65536 processes at a time, the pathological case
completes faster than a single 10ms kernel tick.
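The allocation strategy can be sketched as follows, with
\texttt{SOS\_PID\_MAX} shrunk for illustration; trying
\texttt{last + 1} first is simply the \texttt{i = 0} iteration of the
wrapping scan.

```c
#include <assert.h>

#define SOS_PID_MAX 8  /* 65536 in the real system */

static int pid_used[SOS_PID_MAX];
static int last_pid = 0;

/* Try one higher than the last PID, then fall back to a linear
 * search with wraparound. Returns -1 when the table is full. */
int pid_alloc(void) {
    for (int i = 0; i < SOS_PID_MAX; i++) {
        int pid = (last_pid + 1 + i) % SOS_PID_MAX;
        if (!pid_used[pid]) {
            pid_used[pid] = 1;
            last_pid = pid;
            return pid;
        }
    }
    return -1;
}
```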

\subsection{Process deletion}

Many race conditions were made apparent when arbitrary process
deletion became a possibility in M7. This was hacked around in a
couple of ways:

\begin{itemize}
\item When freeing up all used frames, any pinned frames were not
  returned to the frame table. We assume that whoever pinned the frame
  can clean it up.
\item For filesystem operations, we made sure that closing a FD
  mid-operation would be detected, and any pinned frames handed back.
\item When deleting a process with some arbitrary syscall currently in
  progress, we do not free the PCB; rather, we mark the \texttt{abort}
  flag in the PCB, and free all other resources. When the operation
  tries to complete, it detects the abort flag and cleans up.
  This helps avoid a lot of segfaults in the VM subsystem, which held
  only PCB pointers.
\item If a process is killed while waiting for another process, this
  leaves a dangling PCB pointer in the waiting queue. The PCB will
  dangle until that waiting queue is traversed.
\end{itemize}

\newpage
\section{File System}

The filesystem is largely defined in \texttt{vfs.[ch]},
\texttt{console.[ch]}, and \texttt{nfs.[ch]}.

\subsection{VFS layer}

The VFS we've submitted is still a little half-baked, but was designed
to allow very easy extension with additional filesystems and multiple
mountpoints. A filesystem supplies a mount function, which returns a
struct of function pointers implementing the various FS operations.
The VFS was supposed to use a prefix trie to determine the correct
mountpoint, but deadlines happened, so a string match intercepts
accesses to the console instead.

The NFS and console modules both provide a struct of filesystem
operations.

\subsection{FD policy}

Processes are given public-facing, integer-valued FDs called
\texttt{proc\_fd}s. These are local to each process, and are stored in
a FD table in the PCB. The VFS selects the lowest available FD.
This can be linear time in the number of FDs.
\\

Within the VFS, each \texttt{proc\_fd} will point to a
filesystem-specific FD, \texttt{fs\_fd} (usually a pointer) as well as
a pointer to the correct filesystem. This keeps SOS memory private,
avoiding weird side-channels, and allows flexibility in filesystem
implementation. The filesystem may choose to store an internal FD
table, or not.
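The two-level FD arrangement and lowest-available selection can be
sketched like so; the struct names are illustrative, not the real
definitions.

```c
#include <stddef.h>
#include <assert.h>

#define PROC_FD_MAX 16  /* invented per-process limit */

struct filesystem;  /* struct of FS function pointers, per the text */

/* One slot in the per-process FD table stored in the PCB */
struct fd_entry {
    struct filesystem *fs;  /* NULL when the slot is free */
    void *fs_fd;            /* filesystem-specific handle */
};

/* Lowest-available selection: linear in the number of FDs */
int fd_alloc(struct fd_entry *table, struct filesystem *fs, void *fs_fd) {
    for (int i = 0; i < PROC_FD_MAX; i++) {
        if (table[i].fs == NULL) {
            table[i].fs = fs;
            table[i].fs_fd = fs_fd;
            return i;  /* this index is the proc_fd */
        }
    }
    return -1;  /* table full */
}
```

Since \texttt{fs\_fd} is opaque to processes, nothing about SOS's
internal pointers leaks across the syscall boundary.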

\subsection{NFS module}

The NFS module makes heavy use of \texttt{picoro} coroutines. Each
operation spawns a coroutine, which is woken up by the various NFS
callback functions. We use a mix of linked lists and \texttt{yield} /
\texttt{resume} to pass information around; this is suboptimal,
however, deadlines. We also use an internal FD table.

Fortunately, all NFS operations are interrupt-driven, so no desperate
measures were needed to maintain liveness; we simply set up an NFS
operation whenever we want one, and process it when it comes back.

\subsection{Console module}

The console module also makes heavy use of \texttt{picoro}
coroutines. When a reader is attached, the reader callback simply
wakes up an attached coroutine and passes it another character.
For writing, however, things are not nice and interrupt-driven.
In order to avoid dominating the system, we write a chunk, then place
our coroutine onto the SOS \texttt{task\_ready\_queue}, described in
\texttt{task.h}. High-priority tasks are run immediately after
syscalls, while low-priority tasks are run during kernel ticks. 
Long-running console writes are low-priority.

Characters read from the console are delivered straight to the
process, without double-buffering. However, when the coroutine hits a
page boundary, it will attempt to pin the next page; this may take
some time, and input may be lost in the meantime.

\newpage
\section{Clock Driver}

Defined in \texttt{clock.c} and \texttt{queue.[ch]}.

We enable interrupts on the GPT in countdown mode, and have it fire
every \texttt{KERNEL\_TICK} microseconds. This is used as our SOS
kernel tick, and is the lower bound on timer precision; by default, it
is set to 10ms. We also use the EPIT1 for timestamp precision, such
that our timestamps are microsecond-accurate.

When registering a timer, we use a simple ordered doubly-linked list,
and suffer O(N) registrations as a result (one needs to walk the list
and figure out where to insert). This does give us O(1) timer
dispatch, as desired. We fire off all timers below the relevant
threshold on every kernel tick.
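The registration and dispatch logic, sketched over a sorted array
rather than the real doubly-linked list; the shifting loop stands in
for the list walk, and the callbacks themselves are omitted.

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

#define MAX_TIMERS 8  /* invented bound for this sketch */

static uint64_t timers[MAX_TIMERS];  /* expiry timestamps, ascending */
static size_t ntimers;

/* O(N) registration: walk to the insertion point, keeping the
 * earliest expiry at the front */
int timer_register(uint64_t expiry) {
    if (ntimers == MAX_TIMERS) return -1;
    size_t i = ntimers;
    while (i > 0 && timers[i - 1] > expiry) {
        timers[i] = timers[i - 1];
        i--;
    }
    timers[i] = expiry;
    ntimers++;
    return 0;
}

/* On each kernel tick, fire everything at or below `now`; dispatch
 * is O(1) per fired timer since they sit at the front. Returns the
 * number of timers fired. */
size_t timer_tick(uint64_t now) {
    size_t fired = 0;
    while (fired < ntimers && timers[fired] <= now)
        fired++;  /* the real code invokes the timer callback here */
    for (size_t i = fired; i < ntimers; i++)
        timers[i - fired] = timers[i];
    ntimers -= fired;
    return fired;
}
```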



\end{document}