\chapter{Sleep and Wakeup}
\label{CH:SLEEP}
%% 

Scheduling and locks help conceal the actions of one thread
from another,
but we also need abstractions that help
threads intentionally interact.
For example, the reader of a pipe in xv6 may need to wait
for a writing process to produce data;
a parent's call to \lstinline{wait} may need to
wait for a child to exit; and
a process reading the disk needs to wait
for the disk hardware to finish the read.
The xv6 kernel uses a mechanism called sleep and wakeup
in these situations (and many others).
Sleep allows a kernel thread to
wait for some condition to be true; another thread
or an interrupt handler can cause the condition
to be true (typically by modifying some variable(s))
and then call wakeup
to indicate that threads waiting for the condition should resume.
Sleep and wakeup are often called 
\indextext{sequence coordination}
or 
\indextext{conditional synchronization}
mechanisms.

Before proceeding, please read the functions {\tt sleep()} and {\tt
  wakeup()} in {\tt kernel/proc.c}, and all of file {\tt kernel/pipe.c}.

\section{Overview}

The sleep/wakeup interface looks like:

\begin{lstlisting}
void sleep(void *chan, struct spinlock *lk)
void wakeup(void *chan)
\end{lstlisting}

\lstinline{sleep()} marks the calling process as \lstinline{SLEEPING}
(not \lstinline{RUNNABLE}) and releases the CPU by context-switching 
to the scheduler, so that other processes can run.
The \lstinline{chan} argument is called the \indextext{wait channel}.
\lstinline{wakeup(chan)} wakes up all processes (if any)
that have called \lstinline{sleep(chan, ...)} with the same \lstinline{chan}
value. \lstinline{sleep} and \lstinline{wakeup} treat \lstinline{chan}
as an opaque 64-bit value; the only thing they do with it is compare for
equality. The usual pattern is for callers to pass the address of
some convenient object as the \lstinline{chan} argument.

Kernel code calls \lstinline{sleep} to wait for some
\indextext{condition} to become true. For example, the kernel code
that reads from a pipe calls \lstinline{sleep} if the pipe buffer is
currently empty; the condition in this case is the pipe buffer
becoming non-empty (due to another process writing to the pipe).
\lstinline{sleep} and \lstinline{wakeup} do not know what the
condition is: only the calling code knows. The usual pattern is for
the caller to first check the condition, and call \lstinline{sleep}
if it is not true; code that later makes the
condition true calls \lstinline{wakeup}.

Here's a sketch of how the xv6 kernel pipe code uses
\lstinline{sleep} and \lstinline{wakeup}:

\begin{lstlisting}
piperead(pipe){
  acquire(&pipe->lock);
  while(there's no data in pipe->buffer){
    (*@\textcolor{blue}{// ZZZ}@*)
    sleep(&pipe, &pipe->lock);
  }
  remove the data from the pipe;
  release(&pipe->lock);
}

pipewrite(pipe){
  acquire(&pipe->lock);
  append data to pipe->buffer;
  wakeup(&pipe);
  release(&pipe->lock);
}
\end{lstlisting}

This code uses the address of the pipe data structure as
the wait channel.

What is the \lstinline{lk} argument to \lstinline{sleep}?
In all uses of \lstinline{sleep}/\lstinline{wakeup}
the condition involves shared data, used by both the
thread that sleeps and the thread that calls \lstinline{wakeup}, so
there always turns out to be a lock that protects the condition. That lock is
called the \indextext{condition lock}. In the pipe code above, both
functions use the pipe and its buffer while holding the pipe lock,
which in this case is also the condition lock. It's a rule that any code
that calls \lstinline{sleep} or \lstinline{wakeup} must hold the
condition lock, and that the lock must be passed to \lstinline{sleep}
as the second argument.

The reason that the condition lock must be held when \lstinline{sleep}
is called, and that it must be passed to \lstinline{sleep}, is to
prevent the possibility that another thread might call
\lstinline{wakeup} between the check of the condition 
and the call to \lstinline{sleep}. A call to
\lstinline{wakeup} at that point would find no sleeping process to
wake up; the \lstinline{wakeup} would simply return.
But then the call to \lstinline{sleep} might never wake up,
since the \lstinline{wakeup} intended for it has already happened.
This undesirable situation is called a \indextext{lost wake-up}.

In the pipe example above, the lost wake-up being avoided is the
possibility that a thread on another CPU might call \lstinline{pipewrite}
at the point marked {\tt ZZZ},
between \lstinline{piperead}'s check of the condition and its call to
\lstinline{sleep}. The fact that \lstinline{piperead} holds the pipe
lock during the time between when it checks the condition and calls
\lstinline{sleep} prevents \lstinline{pipewrite} from executing, and
thus prevents a lost wake-up.

{\tt sleep()} releases the condition lock so that the code calling
{\tt wakeup()} can proceed. {\tt sleep()} also context-switches to the
scheduler in order to let other threads run while it is waiting. The
implementation performs these two steps in a way that is atomic
(indivisible) with respect to {\tt wakeup()}, to prevent lost
wake-ups.
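The atomic release-and-block step is not unique to xv6: POSIX threads'
{\tt pthread\_cond\_wait()} makes the same guarantee in user space.
The following stand-alone sketch (not xv6 code; the variable names
and the value 42 are purely illustrative) shows the analogous pattern:
the mutex plays the role of the condition lock, and
{\tt pthread\_cond\_wait()} releases it and blocks atomically, so the
producer's signal cannot arrive in the gap between the condition check
and the sleep.

\begin{lstlisting}
// A user-space analogue of xv6's sleep/wakeup, using POSIX threads.
// pthread_cond_wait, like xv6's sleep(), atomically releases the
// mutex (the "condition lock") and blocks.
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int data = 0;   // the condition: data != 0 means "ready"

static void *producer(void *arg)
{
  pthread_mutex_lock(&lock);    // hold the condition lock...
  data = 42;                    // ...make the condition true...
  pthread_cond_signal(&cond);   // ...then wake sleepers (xv6's wakeup)
  pthread_mutex_unlock(&lock);
  return 0;
}

int main(void)
{
  pthread_t t;
  pthread_mutex_lock(&lock);
  pthread_create(&t, 0, producer, 0);
  while(data == 0)              // re-check the condition in a loop
    pthread_cond_wait(&cond, &lock); // atomically: unlock, sleep, relock
  printf("consumed %d\n", data);
  pthread_mutex_unlock(&lock);
  pthread_join(t, 0);
  return 0;
}
\end{lstlisting}

Note that, just as with xv6's {\tt sleep()}, the wait sits inside a
{\tt while} loop that re-tests the condition after waking.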

%% 
\section{Code: Sleep and wakeup}
%% 

Xv6's
\indexcode{sleep}
\lineref{kernel/proc.c:/^sleep/}
and
\indexcode{wakeup}
\lineref{kernel/proc.c:/^wakeup/}
implement the interface used in the example above.
The basic idea is to have
\lstinline{sleep}
mark the current process as
\indexcode{SLEEPING}
and then call
\indexcode{sched}
to release the CPU;
\lstinline{wakeup}
looks for a process sleeping on the given wait channel
and marks it as 
\indexcode{RUNNABLE}.

\lstinline{sleep}
acquires 
\indexcode{p->lock}
\lineref{kernel/proc.c:/DOC: sleeplock1/}
and {\it only then} releases
the condition lock
\lstinline{lk}.
The fact that \lstinline{sleep}
holds one or the other of these locks at all times is what
prevents a concurrent \lstinline{wakeup}
(which must acquire and hold both) from acting,
and thus prevents a lost wake-up.
Now that
\lstinline{sleep}
holds just
\lstinline{p->lock},
it can put the process to sleep by recording
the wait channel,
changing the process state to \texttt{SLEEPING},
and calling
\lstinline{sched}
\linerefs{kernel/proc.c:/chan.=.chan/,/sched/}.
In a moment it will be clear why it's critical that
\lstinline{p->lock} is not released (by \lstinline{scheduler}) until after
the process is marked \texttt{SLEEPING}.

At some point, a process will acquire the condition lock,
set the condition that the sleeper is waiting for,
and call \lstinline{wakeup(chan)}.
It's important that \lstinline{wakeup} is called
while holding the condition lock\footnote{%
%
Strictly speaking it is sufficient if
\lstinline{wakeup}
merely follows the
\lstinline{acquire}
(that is, one could call
\lstinline{wakeup}
after the
\lstinline{release}).%
%
}.
\lstinline{wakeup}
loops over the process table
\lineref{kernel/proc.c:/^wakeup\(/}.
It acquires the
\lstinline{p->lock}
of each process it inspects.
When \lstinline{wakeup} finds a process in state
\indexcode{SLEEPING}
with a matching
\indexcode{chan},
it changes that process's state to
\indexcode{RUNNABLE}.
The next time \lstinline{scheduler} runs, it will
see that the process is ready to be run.
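To make the state transition concrete, here is a toy, single-threaded
model of \lstinline{wakeup}'s scan over the process table. It is not
the real xv6 code: it ignores \lstinline{p->lock} and the scheduler
entirely, and the name \lstinline{toy_wakeup} and the table size are
invented. It only illustrates the \lstinline{SLEEPING}-to-\lstinline{RUNNABLE}
transition and the channel comparison.

\begin{lstlisting}
#include <assert.h>
#include <stdio.h>

enum state { UNUSED, SLEEPING, RUNNABLE };

struct proc {
  enum state state;
  void *chan;     // wait channel: an opaque address, compared for equality
};

#define NPROC 4
static struct proc proc[NPROC];

// Mark every SLEEPING process with a matching channel as RUNNABLE.
static void toy_wakeup(void *chan)
{
  for(int i = 0; i < NPROC; i++)
    if(proc[i].state == SLEEPING && proc[i].chan == chan)
      proc[i].state = RUNNABLE;
}

int main(void)
{
  int pipe_a, pipe_b;                  // two distinct channel addresses
  proc[0] = (struct proc){SLEEPING, &pipe_a};
  proc[1] = (struct proc){SLEEPING, &pipe_b};
  proc[2] = (struct proc){SLEEPING, &pipe_a};

  toy_wakeup(&pipe_a);                 // wakes proc[0] and proc[2] only
  assert(proc[0].state == RUNNABLE);
  assert(proc[1].state == SLEEPING);   // different channel: still asleep
  assert(proc[2].state == RUNNABLE);
  printf("ok\n");
  return 0;
}
\end{lstlisting}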

\begin{figure}[t]
  \input{fig/sleep.tex}
  \caption{Overlapping locks to avoid lost wake-up}
    \label{fig:overlap}
\end{figure}

Why do the locking rules for 
\lstinline{sleep}
and
\lstinline{wakeup}
ensure that a process that's going to sleep
won't miss a concurrent wakeup?
The going-to-sleep process holds either
the condition lock or its own
\lstinline{p->lock} 
or both from {\it before} it checks the condition
until {\it after} it has marked itself as \texttt{SLEEPING};
see Figure~\ref{fig:overlap}.
The process calling \texttt{wakeup} needs to acquire \textit{both}
locks.
The waker might acquire the locks first, which means
it will make the condition true before the consuming
thread checks the condition, and the consuming thread
won't need to call {\tt sleep()};
or the waker's {\tt acquire()}s might have to wait until the consuming
thread has completely finished going to sleep and
releases the locks, in which case the waker will then see
that the consuming thread is marked {\tt SLEEPING}
and will wake it up.

Sometimes multiple processes are sleeping
on the same channel; for example, more than one process
reading from a pipe.
A single call to 
\lstinline{wakeup}
will wake them all up.
One of them will run first and acquire the lock that
\lstinline{sleep}
was called with, and (in the case of pipes) read whatever
data is waiting.
The other processes will find that, despite being woken up,
there is no data to be read.
From their point of view the wakeup was ``spurious,'' and
they must sleep again.
For this reason \lstinline{sleep} is always called inside a loop that
re-checks the condition, as in \lstinline{piperead} above.

No harm is done if two uses of sleep/wakeup accidentally
choose the same channel: they will see spurious wakeups,
but looping as described above will tolerate this problem.
Much of the charm of sleep/wakeup is that it is both
lightweight (no need to create special data
structures to act as wait channels) and provides a layer
of indirection (callers need not know which specific process
they are interacting with).
%% 
\section{Code: Pipes}
%% 
xv6's pipes are an example of code that uses
\lstinline{sleep}
and \lstinline{wakeup} to synchronize producers and
consumers.
We saw the interface for pipes in Chapter~\ref{CH:UNIX}:
bytes written to one end of a pipe are copied
to an in-kernel buffer and then can be read from
the other end of the pipe.
Future chapters will examine the file descriptor support
surrounding pipes, but let's look now at the
implementations of 
\indexcode{pipewrite}
and
\indexcode{piperead}.

Each pipe
is represented by a 
\indexcode{struct pipe},
which contains
a 
\lstinline{lock}
and a 
\lstinline{data}
buffer.
The fields
\lstinline{nread}
and
\lstinline{nwrite}
count the total number of bytes read from
and written to the buffer.
The buffer wraps around:
the next byte written after
\lstinline{buf[PIPESIZE-1]}
is 
\lstinline{buf[0]}.
The counts do not wrap.
This convention lets the implementation
distinguish a full buffer 
(\lstinline{nwrite}
\lstinline{==}
\lstinline{nread+PIPESIZE})
from an empty buffer
(\lstinline{nwrite}
\lstinline{==}
\lstinline{nread}),
but it means that indexing into the buffer
must use
\lstinline{buf[nread}
\lstinline{%}
\lstinline{PIPESIZE]}
instead of just
\lstinline{buf[nread]} 
(and similarly for
\lstinline{nwrite}).
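The counting convention can be exercised in isolation. The following
stand-alone program is illustrative only: \lstinline{PIPESIZE} is set
to a small value, and \lstinline{put}/\lstinline{get} are invented
names for the core of \lstinline{pipewrite}/\lstinline{piperead} with
all locking and sleeping stripped out. It demonstrates the full test,
the empty test, and the wrapping indices.

\begin{lstlisting}
#include <assert.h>
#include <stdio.h>

#define PIPESIZE 4

static char buf[PIPESIZE];
static unsigned long nread, nwrite;   // total bytes ever read/written

static int put(char c)
{
  if(nwrite == nread + PIPESIZE)      // full: pipewrite would sleep
    return -1;
  buf[nwrite++ % PIPESIZE] = c;
  return 0;
}

static int get(char *c)
{
  if(nwrite == nread)                 // empty: piperead would sleep
    return -1;
  *c = buf[nread++ % PIPESIZE];
  return 0;
}

int main(void)
{
  char c;
  assert(get(&c) == -1);              // empty at the start
  for(char x = 'a'; x < 'a' + PIPESIZE; x++)
    assert(put(x) == 0);
  assert(put('z') == -1);             // full: a fifth byte won't fit
  assert(get(&c) == 0 && c == 'a');   // FIFO order, wrapping indices
  assert(put('z') == 0);              // one slot freed by the read
  printf("ok\n");
  return 0;
}
\end{lstlisting}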

Let's suppose that calls to
\lstinline{piperead}
and
\lstinline{pipewrite}
happen simultaneously on two different CPUs.
\lstinline{pipewrite}
\lineref{kernel/pipe.c:/^pipewrite/}
begins by acquiring the pipe's lock, which
protects the counts, the data, and their
associated invariants.
\lstinline{piperead}
\lineref{kernel/pipe.c:/^piperead/}
then tries to acquire the lock too, but cannot.
It spins in
\lstinline{acquire}
\lineref{kernel/spinlock.c:/^acquire/}
waiting for the lock.
While
\lstinline{piperead}
waits,
\lstinline{pipewrite}
loops over the bytes being written
(\lstinline{addr[0..n-1]}),
adding each to the pipe in turn
\lineref{kernel/pipe.c:/nwrite\+\+/}.
During this loop, it could happen that
the buffer fills
\lineref{kernel/pipe.c:/DOC: pipewrite-full/}.
In this case, 
\lstinline{pipewrite}
calls
\lstinline{wakeup}
to alert any sleeping readers to the fact
that there is data waiting in the buffer
and then sleeps on
\lstinline{&pi->nwrite}
to wait for a reader to take some bytes
out of the buffer.
\lstinline{sleep}
releases 
the pipe's lock
as part of putting
\lstinline{pipewrite}'s
process to sleep.

\lstinline{piperead}
now acquires the pipe's lock and enters its critical section:
it finds that
\lstinline{pi->nread}
\lstinline{!=}
\lstinline{pi->nwrite}
\lineref{kernel/pipe.c:/DOC: pipe-empty/}
(\lstinline{pipewrite}
went to sleep because
\lstinline{pi->nwrite}
\lstinline{==}
\lstinline{pi->nread}
\lstinline{+}
\lstinline{PIPESIZE}
\lineref{kernel/pipe.c:/pipewrite-full/}),
so it falls through to the 
\lstinline{for}
loop, copies data out of the pipe
\lineref{kernel/pipe.c:/DOC: piperead-copy/},
and increments 
\lstinline{nread}
by the number of bytes copied.
That much space in the buffer is now available for writing, so
\lstinline{piperead}
calls
\lstinline{wakeup}
\lineref{kernel/pipe.c:/DOC: piperead-wakeup/}
to wake any sleeping writers
before it returns.
\lstinline{wakeup}
finds a process sleeping on
\lstinline{&pi->nwrite},
the process that was running
\lstinline{pipewrite}
but stopped when the buffer filled.
It marks that process as
\indexcode{RUNNABLE}.

The pipe code uses separate wait channels for reader and writer
(\lstinline{pi->nread}
and
\lstinline{pi->nwrite});
this might make the system more efficient in the unlikely
event that there are lots of
readers and writers waiting for the same pipe.
The pipe code sleeps inside a loop checking the
sleep condition; if there are multiple readers
or writers, all but the first process to wake up
will see the condition is false and sleep again.
%% 
\section{Code: Wait, exit, and kill}

Please read the code for functions {\tt kwait()}, {\tt kexit()},
and {\tt kkill()} in {\tt kernel/proc.c}; these are the
internal implementations of the corresponding system calls.

%% 
\lstinline{sleep}
and
\lstinline{wakeup}
can be used for many kinds of waiting.
An interesting example, introduced in Chapter~\ref{CH:UNIX},
is the interaction between a child's \indexcode{exit}
and its parent's \indexcode{wait}.
At the time of the child's death, the parent may already
be sleeping in {\tt wait}, or may be doing something else;
in the latter case, a subsequent call to {\tt wait} must
observe the child's death, perhaps long after it calls {\tt exit}.
The way that xv6 records the child's demise until {\tt wait}
observes it is for {\tt exit} to put the caller into the \indexcode{ZOMBIE}
state, where it stays until the parent's {\tt wait} notices it, changes
the child's state to {\tt UNUSED}, copies the child's exit status,
and returns the child's process ID to the parent.
If the parent exits before the child, the 
parent gives the child to the
\lstinline{init}
process, which perpetually calls {\tt wait};
thus
every child has a parent to clean up after it.
A challenge is
to avoid races and deadlock between
simultaneous parent 
\lstinline{wait}
and child
\lstinline{exit},
as well as simultaneous
\lstinline{exit} and \lstinline{exit}.

\lstinline{kwait}, the kernel implementation
for {\tt wait}, starts by acquiring
\lstinline{wait_lock}
\lineref{kernel/proc.c:/^kwait/},
which acts as the condition
lock that helps ensure that \lstinline{kwait} doesn't miss a \lstinline{wakeup}
from an exiting child.
Then \lstinline{kwait} scans the process table.
If it finds a child in \texttt{ZOMBIE} state,
it frees that child's resources and
its \lstinline{proc} structure, copies
the child's exit status to the address supplied to \lstinline{wait}
(if it is not 0),
and returns the child's process ID.
If 
\lstinline{kwait}
finds children but none have exited,
it calls
\lstinline{sleep}
to wait for any of them to exit
\lineref{kernel/proc.c:/DOC: wait-sleep/},
then scans again.
\lstinline{kwait} often holds two locks,
\lstinline{wait_lock} and some process's \lstinline{pp->lock};
the deadlock-avoiding order is first \lstinline{wait_lock}
and then \lstinline{pp->lock}.

\lstinline{kexit} \lineref{kernel/proc.c:/^kexit/} records the exit
status, frees some resources, calls \lstinline{reparent} to give any
children to the \lstinline{init} process, wakes up the parent in case
it is in \lstinline{wait}, marks the caller as a zombie, and
permanently yields the CPU. \lstinline{kexit} holds both
\lstinline{wait_lock} and \lstinline{p->lock} during this
sequence.
It holds \lstinline{wait_lock} because 
it's the condition
lock for the \lstinline{wakeup(p->parent)}, preventing a parent in
\lstinline{wait} from losing the wakeup. \lstinline{kexit} must hold
\lstinline{p->lock} for this sequence also, to prevent a parent in
\lstinline{wait} from seeing that the child is in state
\lstinline{ZOMBIE} before the child has finally called
\lstinline{swtch}. \lstinline{kexit} acquires these locks in
the same order as \lstinline{kwait} to avoid deadlock.

It may look incorrect for \lstinline{kexit} to wake up the parent
before setting its state to \lstinline{ZOMBIE}, 
but that is safe:
although
\lstinline{wakeup}
may cause the parent to run,
the loop in the parent's
\lstinline{kwait}
cannot examine the child until the child's
\lstinline{p->lock}
is released by {\tt scheduler},
so
\lstinline{kwait}
can't look at
the exiting process until after
\lstinline{kexit}
has set its state to
\lstinline{ZOMBIE}
\lineref{kernel/proc.c:/state.=.ZOMBIE/}.

While
\lstinline{exit} 
allows a process to terminate itself,
the \lstinline{kill} system call
\lineref{kernel/proc.c:/^kkill/} 
lets one process request that another terminate.
It would be too complex for
\lstinline{kill}
to directly destroy the victim process, since the victim
might be executing on another CPU, perhaps
in the middle of a sensitive sequence of updates to kernel data structures.
Thus
\lstinline{kkill}
does very little: it just sets the victim's
\indexcode{p->killed}
and, if it is sleeping, wakes it up.
Eventually the victim will enter or leave the kernel,
at which point code in
\lstinline{usertrap}
will call
\lstinline{kexit}
if
\lstinline{p->killed}
is set
(it checks by calling
\lstinline{killed}
\lineref{kernel/proc.c:/^killed/}).
If the victim is running in user space, it will 
see that it has been killed the next time it enters
the kernel by making a system call or because the timer (or
some other device) interrupts.

If the victim process is in
\lstinline{sleep},
\lstinline{kkill}'s call to
\lstinline{wakeup}
will cause the victim to return from
\lstinline{sleep}.
This is potentially dangerous because 
the condition being waited for may not be true.
However, xv6 calls to
\lstinline{sleep}
are always wrapped in a
\lstinline{while}
loop that re-tests the condition after
\lstinline{sleep}
returns.
Some calls to
\lstinline{sleep}
also test
\lstinline{p->killed}
in the loop, and abandon the current activity if it is set.
This is only done when such abandonment would be correct.
For example, the pipe read and write code
\lineref{kernel/pipe.c:/killed.pr/} 
returns if the killed flag is set; eventually the
code will return to the trap handler, which will again
check \lstinline{p->killed} and exit.
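The pattern of a sleep loop that also tests the killed flag can be
sketched in miniature. This is a toy, not xv6 code:
\lstinline{toy_sleep} merely simulates being woken by a kill, and all
the names are invented. The point is the shape of the loop: re-check
the condition, and bail out early if the process has been killed.

\begin{lstlisting}
#include <assert.h>
#include <stdio.h>

static int killed, data_ready;

// Stand-in for sleep(); here it just simulates a wakeup caused by kill.
static void toy_sleep(void)
{
  killed = 1;                    // pretend kkill ran and woke us up
}

static int wait_for_data(void)
{
  while(!data_ready){
    if(killed)
      return -1;                 // abandon the current system call
    toy_sleep();
  }
  return 0;
}

int main(void)
{
  assert(wait_for_data() == -1); // killed before the data arrived
  killed = 0;
  data_ready = 1;
  assert(wait_for_data() == 0);  // condition already true: no sleep
  printf("ok\n");
  return 0;
}
\end{lstlisting}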

Some xv6 
\lstinline{sleep}
loops do not check
\lstinline{p->killed} 
because the code is in the middle of a multi-step
system call that should be atomic (i.e., would be
incorrect if abandoned midway through).
The virtio driver
\lineref{kernel/virtio\_disk.c:/sleep.b/} 
is an example: it does not check
\lstinline{p->killed}
because a disk operation may be one of a set of
writes that are all needed in order for the file system to
be left in a correct state.
A process that is killed while waiting for disk I/O won't
exit until it completes the current system call and
\lstinline{usertrap} sees the killed flag.

\section{Process Locking}

The lock associated with each process (\lstinline{p->lock}) is the
most complex lock in xv6.
A simple way to think about \lstinline{p->lock} is
that it must be held while reading or writing any of the following
\lstinline{struct proc} fields:
\lstinline{p->state},
\lstinline{p->chan},
\lstinline{p->killed},
\lstinline{p->xstate},
and
\lstinline{p->pid}.
These fields can be used by other processes, or by scheduler
threads on other CPUs, so it's natural that they
must be protected by a lock.

However, most uses of \lstinline{p->lock} are protecting higher-level
invariants of xv6's process data structures and algorithms. Here's
the full set of things that \lstinline{p->lock} does:

% it's hard to know how to phrase these things in a uniform way.
% should the discussion be about invariants? protecting data? avoiding
% races? ensuring atomicity? avoiding a specific danger?

% the "atomic" phrasing often doesn't really say what the danger is.

\begin{itemize}

\item Along with \lstinline{p->state}, it prevents races in allocating
  \lstinline{proc[]} slots for new processes.
% makes check for UNUSED atomic with allocation (or something).

\item It conceals a process from view while it is being created
or destroyed.
% makes all the steps of allocation, and destruction, atomic.

\item It prevents a parent's \lstinline{wait} from collecting a
process that has set its state to \lstinline{ZOMBIE} but has
not yet yielded the CPU.
% it atomicizes the setting of the state to ZOMIE and the yielding
% of the CPU.

\item It prevents another CPU's scheduler from deciding to run
a yielding process after it sets its state to \lstinline{RUNNABLE} but
before it finishes \lstinline{swtch}.
% it atomizes the setting of p->state and swtch().

\item It ensures that only one CPU's scheduler decides to run a
  \lstinline{RUNNABLE} process.
% it atomicizes a scheduler's check for RUNNABLE and actually
% running the process. or setting state to RUNNING.

\item It prevents a timer interrupt from causing a process to
yield while it is in \lstinline{swtch}.
% it makes swtch (or really the whole sequence) atomic w.r.t. timer interrupts

\item Along with the condition lock, it helps prevent \lstinline{wakeup}
from overlooking a process that is calling \lstinline{sleep} but has not
finished yielding the CPU.
% avoids a race between conditionCheck+sleep and wakeup?
% makes condition check atomic with yield?

\item It prevents the victim process of \lstinline{kill} from exiting
and perhaps being re-allocated between \lstinline{kkill}'s check of 
\lstinline{p->pid} and setting \lstinline{p->killed}.
% it makes the pid check atomic with setting killed.

\item It makes \lstinline{kkill}'s check and write of \lstinline{p->state}
atomic.

\end{itemize}


The \lstinline{p->parent} field is protected by the global lock
\lstinline{wait_lock} rather than by \lstinline{p->lock}.
Only a process's parent modifies \lstinline{p->parent}, though
the field is read both by the process itself and by other
processes searching for their children. The purpose of 
\lstinline{wait_lock} is to act as the condition lock when
\lstinline{wait} sleeps waiting for any child to exit. An
exiting child holds either \lstinline{wait_lock} or \lstinline{p->lock}
until after it has set its state to \lstinline{ZOMBIE}, woken
up its parent, and yielded the CPU. \lstinline{wait_lock} also
serializes concurrent \lstinline{exit}s by a parent and child,
so that the \lstinline{init} process (which inherits the child)
is guaranteed to be woken up from its \lstinline{wait}.
\lstinline{wait_lock} is a global lock rather than a per-process
lock in each parent, because, until a process acquires it,
it cannot know who its parent is.

%% 
\section{Real world}
%% 

\lstinline{sleep}
and
\lstinline{wakeup}
are a simple and effective synchronization method,
but there are many others;
semaphores~\cite{dijkstra65} are an example.
The first challenge in all of them is to
avoid the ``lost wakeups'' problem we saw at the
beginning of the chapter.
The original Unix kernel's
\lstinline{sleep}
simply disabled interrupts,
which sufficed because Unix ran on a single-CPU system.
Because xv6 runs on multiprocessors,
it adds an explicit lock to
\lstinline{sleep}.
FreeBSD's
\lstinline{msleep}
takes the same approach.
Plan 9's 
\lstinline{sleep}
uses a callback function that runs with the scheduling
lock held just before going to sleep;
the function serves as a last-minute check
of the sleep condition, to avoid lost wakeups.
The Linux kernel's
\lstinline{sleep}
uses an explicit process queue, called a wait queue, instead of
a wait channel; the queue has its own internal lock.

Scanning the entire set of processes in
\lstinline{wakeup}
is inefficient.  A better solution is to
replace the
\lstinline{chan}
in both
\lstinline{sleep}
and
\lstinline{wakeup}
with a data structure that holds
a list of processes sleeping on that structure,
such as Linux's wait queue.
Plan 9's
\lstinline{sleep}
and
\lstinline{wakeup}
call that structure a rendezvous point.
Many thread libraries refer to the same
structure as a condition variable;
in that context, the operations
\lstinline{sleep}
and
\lstinline{wakeup}
are called
\lstinline{wait}
and
\lstinline{signal}.
All of these mechanisms share the same
flavor: the sleep condition is protected by
some kind of lock dropped atomically during sleep.

xv6's
\lstinline{wakeup}
wakes up all processes that are waiting on a particular wait channel.
If more than one process is waiting, all of them will try to acquire
the condition lock and re-check the condition; in many cases only
one will be able to do anything useful (e.g., read all the
data waiting in a pipe). The rest will find
the condition is no longer true and go back to sleep;
it was a waste of CPU time to wake them up.
As a result,
most condition variable designs provide two primitives:
\lstinline{signal},
which wakes up one of the processes waiting for the condition variable, and
\lstinline{broadcast},
which wakes up all of them.
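The distinction can be seen with POSIX condition variables. The sketch
below (illustrative names and counts, not xv6 code) uses
{\tt pthread\_cond\_broadcast()} to wake every waiter, much as xv6's
\lstinline{wakeup} does; replacing it with {\tt pthread\_cond\_signal()}
would wake at least one.

\begin{lstlisting}
#include <pthread.h>
#include <stdio.h>

#define NWAITERS 3

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int go = 0, woken = 0;

static void *waiter(void *arg)
{
  pthread_mutex_lock(&lock);
  while(!go)                       // re-check: wakeups may be spurious
    pthread_cond_wait(&cond, &lock);
  woken++;
  pthread_mutex_unlock(&lock);
  return 0;
}

int main(void)
{
  pthread_t t[NWAITERS];
  for(int i = 0; i < NWAITERS; i++)
    pthread_create(&t[i], 0, waiter, 0);

  pthread_mutex_lock(&lock);
  go = 1;
  pthread_cond_broadcast(&cond);   // like xv6's wakeup: wake them all
  pthread_mutex_unlock(&lock);

  for(int i = 0; i < NWAITERS; i++)
    pthread_join(t[i], 0);
  printf("woken %d\n", woken);
  return 0;
}
\end{lstlisting}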

Forcibly killing processes poses some problems.
For example, a killed
process may be deep inside the kernel sleeping, and unwinding its
stack requires care, since each function on the call stack
may need to do some clean-up.  Some languages help out by providing
an exception mechanism, but not C.
Furthermore, there are other events that can cause a sleeping process to be
woken up, even though the event it is waiting for has not happened yet.  For
example, when a Unix process is sleeping, another process may send a 
\indexcode{signal}
to it.  In this case, the
process will return from the interrupted system call with the value -1 and with
the error code set to EINTR. The application can check for these values and
decide what to do.  Xv6 doesn't support signals and this complexity doesn't arise.

Xv6's support for
\lstinline{kill}
is not entirely satisfactory: there are sleep loops
which probably should check for
\lstinline{p->killed}.
A related problem is that, even for 
\lstinline{sleep}
loops that check
\lstinline{p->killed},
there is a race between 
\lstinline{sleep}
and
\lstinline{kill};
the latter may set
\lstinline{p->killed}
and try to wake up the victim just after the victim's loop
checks
\lstinline{p->killed}
but before it calls
\lstinline{sleep}.
If this problem occurs, the victim won't notice that
\lstinline{p->killed} is set
until the condition it is waiting for occurs. This may be quite a bit later
or even never
(e.g., if the victim is waiting for input from the console, but the user
doesn't type any input).

%% 
\section{Exercises}
%% 

\begin{enumerate}

\item Implement counting semaphores in xv6.
Choose a few of xv6's uses of sleep and wakeup and
replace them with semaphores.
Judge the result.

\item Can you implement a variant of {\tt sleep()} that
takes just one argument, the channel, and doesn't need
a lock argument?

\item Fix the race mentioned above between
\lstinline{kill}
and 
\lstinline{sleep},
so that a
\lstinline{kill}
that occurs after the victim's sleep loop checks
\lstinline{p->killed}
but before it calls
\lstinline{sleep}
results in the victim abandoning the current system call.
% Answer: a solution is to check in sleep if p->killed is set before setting
% the processes's state to sleep. 

\item Design a plan so that every sleep loop checks 
\lstinline{p->killed}
so that, for example, a process that is in the virtio driver can return quickly from the while loop
if it is killed by another process.
% Answer: this is difficult.  Moderns Unixes do this with setjmp and longjmp and very carefully programming to clean any partial state that the interrupted systems call may have built up.

\end{enumerate}
