%------------------------------------------------------------------------------
% Copyright (c) 1991-2014, Xavier Leroy and Didier Remy.  
%
% All rights reserved. Distributed under a creative commons
% attribution-non-commercial-share alike 2.0 France license.
% http://creativecommons.org/licenses/by-nc-sa/2.0/fr/
%
% Translation by Daniel C. Buenzli
%------------------------------------------------------------------------------

\chapter{\label{sec/pipes}Classical inter-process communication: pipes}
\cutname{pipes.html}

So far, we have learned how to manage processes and how they can communicate
with the environment by using files. In the remainder of the
course we see how processes running in parallel can cooperate by
communicating among themselves.

\section{Pipes}

Regular files are not a satisfactory communication medium for processes
running in parallel. Take for example a reader/writer situation in
which one process writes data and the other reads them. If a file is used
as the communication medium, the reader can detect that the file
does not grow any more (\ml+read+ returns zero), but it does not know
whether the writer is finished or simply busy computing
more data. Moreover, the file keeps track of all the data transmitted,
requiring needless disk space.

Pipes provide a mechanism suitable for this kind of communication. A
pipe is made of two file descriptors. The first one
represents the pipe's output. The second one represents
the pipe's input. Pipes are created by the system call
\syscall{pipe}:
%
\begin{listingcodefile}{tmpunix.mli}
val $\libvalue{Unix}{pipe}$ : unit -> file_descr * file_descr
\end{listingcodefile}
%
The call returns a pair \ml+(fd_in, fd_out)+ where \ml+fd_in+ is a
file descriptor open in \emph{read mode} on the pipe's output and
\ml+fd_out+ is a file descriptor open in \emph{write mode} on the pipe's
input. The pipe itself is an internal object of the kernel that can
only be accessed via these two descriptors. In particular, it has no
name in the file system.

%% To draw fd and pipes
\tikzset{
  fd/.style={draw,rectangle,inner sep=3mm, rounded corners,
             text width=1.2cm, text centered},
  pipe/.style={draw,cylinder,minimum size=6mm,minimum height=18mm,
               anchor=shape center}}


\begin{myimage}[width="60\%"]
\begin{tikzpicture}
\node (pipe) [pipe] {};
\node (out) [fd, left=of pipe] {\texttt{fd\_out}};
\node (in) [fd, right=of pipe] {\texttt{fd\_in}};
\draw [->] (out) to (pipe);
\draw [->] (pipe) to (in);
\end{tikzpicture}
\end{myimage}

A pipe behaves like a queue (\emph{first-in, first-out}). The first
thing written to the pipe is the first thing read from the pipe.
Writes (calls to \indexvalue{write} on the pipe's input
descriptor) fill the pipe and block when the pipe is full. They block
until another process reads enough data at the other end of the pipe
and return when all the data given to \ml+write+ have been
transmitted. Reads (calls to \indexvalue{read} on the pipe's output
descriptor) drain the pipe. If the pipe is empty, a call to \ml+read+
blocks until at least a byte is written at the other end. It then
returns immediately without waiting for the number of bytes requested
by \ml+read+ to be available.
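
The FIFO discipline can be observed directly. The following sketch (a
minimal experiment of our own; it assumes \ml+Unix.write_substring+,
available in recent versions of the \ml+Unix+ library) writes a few
bytes and reads them back in the same process, which is safe here
because the data is far below the pipe's capacity:
%
\begin{lstlisting}
let () =
  let (fd_in, fd_out) = Unix.pipe () in
  (* Three bytes fit easily in the kernel buffer: write does not block. *)
  ignore (Unix.write_substring fd_out "abc" 0 3);
  let buf = Bytes.create 10 in
  let n = Unix.read fd_in buf 0 10 in
  (* The bytes come out in the order they were written. *)
  print_string (Bytes.sub_string buf 0 n);
  Unix.close fd_in;
  Unix.close fd_out
\end{lstlisting}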

Pipes are useless if they are written and read by the same process
(such a process will likely block forever on a substantial write or on
a read from an empty pipe). Hence they are usually read and written by
different processes. Since a pipe has no name, one of these processes
must be created by forking the process that created the pipe. Indeed,
the two file descriptors of the pipe, like any other file descriptors,
are duplicated by the call to \indexvalue{fork} and thus refer to the
same pipe in the parent and the child process.
\begin{example} The following snippet of code is typical.
%
\begin{lstlisting}
let (fd_in, fd_out) = pipe () in
match fork () with
| 0 -> close fd_in; ... write fd_out buffer1 offset1 count1 ...
| pid -> close fd_out; ... read fd_in buffer2 offset2 count2 ...
\end{lstlisting}
% 
After the \ml+fork+ there are two descriptors open on the pipe's
input, one in the parent and the other in the child. The same
holds for the pipe's output.
%
\begin{myimage}[width="45\%"]
\begin{tikzpicture}
\node (pipe) at (0,0) [pipe] {};
\node (outfather) at (-2,1.5) [fd] {\texttt{fd\_out} parent};
\node (outchild) at (-2,-1.5) [fd] {\texttt{fd\_out} child};
\node (infather) at (2,1.5) [fd] {\texttt{fd\_in} parent};
\node (inchild) at (2,-1.5) [fd] {\texttt{fd\_in} child};
\draw [->] (outfather.south) to [bend right=30] (pipe.west);
\draw [->] (outchild.north) to [bend left=30] (pipe.west);
\draw [->] (pipe.east) to [bend right=30] (infather.south);
\draw [->] (pipe.east) to [bend left=30] (inchild.north);
\end{tikzpicture}
\end{myimage}
% 
In this example the child becomes the writer and the parent the
reader. Consequently the child closes its descriptor \ml+fd_in+ on the
pipe's output (to save descriptors and to avoid programming
errors). This leaves the parent's descriptor \ml+fd_in+ unchanged,
since descriptors are allocated in process memory and, after the fork,
the parent's and the child's memories are disjoint. The pipe,
allocated in system memory, remains alive because the parent's
descriptor \ml+fd_in+ is still open in read mode on the pipe's
output. Following the same reasoning, the parent closes its descriptor
on the pipe's input. The result is as follows:
%
\begin{myimage}[width="45\%"]
\begin{tikzpicture}
\node (pipe) at (0,0) [pipe] {};
\node (outchild) at (-2,-1.5) [fd] {\texttt{fd\_out} child};
\node (infather) at (2,1.5) [fd] {\texttt{fd\_in} parent};
\draw [->] (outchild.north) to [bend left=30] (pipe.west);
\draw [->] (pipe.east) to [bend right=30] (infather.south);
\end{tikzpicture}
\end{myimage}
% 
Data written by the child on \ml+fd_out+ is transmitted to \ml+fd_in+
in 
the parent.
\end{example}
When all the descriptors on a pipe's input are closed and the pipe is
empty, a call to \ml+read+ on its output returns zero:
end of file. And when all the descriptors on a pipe's output are
closed, a call to \ml+write+ on its input kills the writing
process. More precisely, the kernel sends the signal \ml+sigpipe+ to
the process calling \ml+write+, and the default handler of this signal
terminates the process. If the handler of \ml+sigpipe+ has been
changed, the call to \ml+write+ fails with an \ml+EPIPE+ error.
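Both rules can be checked with a short sketch (our own illustration,
assuming \ml+Unix.write_substring+): after the only descriptor on the
pipe's input is closed, \ml+read+ first drains the remaining data and
then reports end of file:
%
\begin{lstlisting}
let () =
  let (fd_in, fd_out) = Unix.pipe () in
  ignore (Unix.write_substring fd_out "x" 0 1);
  Unix.close fd_out;                     (* no writer left *)
  let buf = Bytes.create 8 in
  assert (Unix.read fd_in buf 0 8 = 1);  (* drains the byte still in the pipe *)
  assert (Unix.read fd_in buf 0 8 = 0)   (* empty pipe, no writers: end of file *)
\end{lstlisting}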
\newpage

\section{\label{ex/crible}Complete example: parallel sieve of Eratosthenes}

This is a classic example of parallel programming. The task of the
program is to enumerate the prime numbers and display them
interactively as they are found. The idea of the algorithm is as
follows. A process enumerates on its output the integers from 2 onwards. We
connect this process to a \quotes{filter} process that reads an
integer $p$ on its input and displays it.

\tikzset{process/.style={draw,rectangle,inner sep=2mm, rounded corners,
             text width=1cm, minimum height=1cm, text
             centered,font=\small},
         output/.style={font=\small,above}}

\begin{myimage}[width="38\%"]
\begin{tikzpicture}
\node (intgen) at (0,0) [process] {ints};
\node (read) at (3.5,0) [process] {read $p$};
\draw [->] (intgen) to node [output] {2, 3, 4, \ldots} (read);
\end{tikzpicture}
\end{myimage}
%
Therefore, the first filter process reads $p=2$. Then it creates a new
filter process connected to its output and filters out the multiples
of $p$ it gets on its input; all numbers it reads that are not a
multiple of $p$ are rewritten on its output.
%
\begin{myimage}[width="65\%"]
\begin{tikzpicture}
\node (intgen) at (0,0) [process] {ints};
\node (filter2) at (3.5,0) [process] {filter $2n$};
\node (read) at (7,0) [process] {read $p$};
\draw [->] (intgen) to node [output] {2, 3, 4, \ldots} (filter2);
\draw [->] (filter2) to node [output] {3, 5, 7, \ldots} (read);
\end{tikzpicture}
\end{myimage}
%
Hence the next process reads $p=3$, which it displays and then starts
to filter multiples of 3, and so on.
%
\begin{myimage}[width="100\%"]
\begin{tikzpicture}
\node (intgen) at (0,0) [process] {ints};
\node (filter2) at (3.5,0) [process] {filter $2n$};
\node (filter3) at (7,0) [process] {filter $3n$};
\node (ldots) at (10,0) {\ldots};
\node (read) at (11,0) [process] {read $p$};
\draw [->] (intgen) to node [output] {2, 3, 4, \ldots} (filter2);
\draw [->] (filter2) to node [output] {3, 5, 7, \ldots} (filter3);
\draw [->] (filter3) to node [output] {5, 7, 11, \ldots} (ldots);
\end{tikzpicture}
\end{myimage}
% 
This algorithm cannot be directly implemented in Unix because it
creates too many processes (the number of primes already found, plus
one). Most Unix systems limit the number of processes to a few dozen.
Moreover, on a uniprocessor machine, too many simultaneously active
processes can bring the system to its knees, because of the high cost
of switching process contexts. In the following implementation, each
process first reads $n$ primes $p_1, \ldots, p_n$ on its input before
transforming itself into a filter that eliminates the multiples of
$p_1, \ldots, p_n$. In practice, $n = 1000$ slows process creation
down to a reasonable rate.

%% the index $k$ is used below as the limit on the whole sieve; I chose
%% $n$ to use for the # of primes stored in each filter stage.

We start with the process that enumerates integers from 2 to $k$.
%
\begin{listingcodefile}{sieve.ml}
open Unix;;

let input_int = input_binary_int
let output_int = output_binary_int

let generate k output =
  let rec gen m =
    output_int output m;
    if m < k then gen (m+1)
  in 
  gen 2;;
\end{listingcodefile}
To output and input the integers, the following functions are used:
%
\begin{lstlisting}
val $\libvalue{Pervasives}{output\_binary\_int}$ : out_channel -> int -> unit
val $\libvalue{Pervasives}{input\_binary\_int}$ : in_channel -> int
\end{lstlisting}
%
The function \ml+output_binary_int+ from the standard library writes a
four-byte binary representation of an integer on an
\ml+out_channel+. The integer can be read back by the function
\ml+input_binary_int+ on an \ml+in_channel+. Using these functions
from the standard library has two advantages: first, there is no need to
code the function converting integers to a bytewise
representation\footnote{The representation used by these functions is
  unspecified but it is guaranteed to be platform-independent for a
  particular version of the language.}; second, since
these functions use buffered \io, fewer system calls are
performed, which results in better performance. The following functions
create an \ml+in_channel+ or \ml+out_channel+ to buffer the
\io{} on the given descriptor:
%
\begin{listingcodefile}{tmpunix.mli}
val $\indexlibvalue{Unix}{in\_channel\_of\_descr}$ : file_descr -> in_channel
val $\indexlibvalue{Unix}{out\_channel\_of\_descr}$ : file_descr -> out_channel
\end{listingcodefile}
%
They allow a program to perform buffered \io{} on descriptors acquired
indirectly or that are not the result of opening a file. These
functions are not meant as a way to mix buffered \io{} with
non-buffered \io; doing so is possible but very brittle and highly
discouraged~---~particularly for input. Note also that it is possible
but very risky to create more than one \ml+in_channel+ (for example)
on the same descriptor.
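
As a sketch of how these channels combine with pipes and \ml+fork+
(our own minimal example, following the same pattern as the sieve
below): each process wraps its end of the pipe in a channel, and
closing the \ml+out_channel+ flushes its buffer before closing the
descriptor:
%
\begin{lstlisting}
let () =
  let (fd_in, fd_out) = Unix.pipe () in
  match Unix.fork () with
  | 0 ->
      Unix.close fd_in;
      let oc = Unix.out_channel_of_descr fd_out in
      output_binary_int oc 42;
      close_out oc                  (* flushes, then closes fd_out *)
  | _ ->
      Unix.close fd_out;
      let ic = Unix.in_channel_of_descr fd_in in
      print_int (input_binary_int ic);
      print_newline ()
\end{lstlisting}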

We now continue with the filter process. It uses the auxiliary function
\ml+read_first_primes+. A call to \ml+read_first_primes input count+
reads \ml+count+ prime numbers from \ml+input+ (an \ml+in_channel+),
eliminating multiples of the primes already read. These \ml+count+
primes are displayed as soon as they are found and returned in a list.
%
\begin{listingcodefile}[style=numbers]{sieve.ml}
let print_prime n = print_int n; print_newline ()

let read_first_primes input count =
  let rec read_primes first_primes count =
    if count <= 0 then first_primes else
    let n = input_int input in
    if List.exists (fun m -> n mod m = 0) first_primes then
      read_primes first_primes count
    else begin
      print_prime n;
      read_primes (n :: first_primes) (count - 1)
    end 
  in
  read_primes [] count$\label{prog:pprime}$;;
\end{listingcodefile}
%
And here is the concrete filter function:  
%
\begin{listingcodefile}[style=numbers]{sieve.ml}
let rec filter input =
  try 
    let first_primes = read_first_primes input 1000 in
    let (fd_in, fd_out) = pipe () in
    match fork () with $\label{prog:sievefilterfork}$
    | 0 ->
        close fd_out;
        filter (in_channel_of_descr fd_in)
    | p ->
        close fd_in;
        let output = out_channel_of_descr fd_out in
        while true do $\label{prog:sievefilterwhile}$
          let n = input_int input in
          if List.exists (fun m -> n mod m = 0) first_primes then ()
          else output_int output n
        done $\label{prog:sievefilterdone}$
  with End_of_file -> ();;
\end{listingcodefile}
%
The filter starts by calling \ml+read_first_primes+ to read the first
1000 prime numbers on its input (the \ml+input+ argument of type
\ml+in_channel+). Then we create a pipe and clone the process with
\ml+fork+. The child starts to filter the output of this pipe.  The
parent reads numbers on its input and writes each one to the pipe if it
is not a multiple of one of the 1000 primes it initially read.

Finally, the main program just connects the integer generator to the
first filter process with a pipe. Invoking the program \ml+sieve k+
enumerates the primes smaller than \ml+k+. If \ml+k+ is omitted (or
not an integer), it defaults to \ml+max_int+.
%
\begin{listingcodefile}[style=numbers]{sieve.ml}
let sieve () =
  let len = try int_of_string Sys.argv.(1) with _ -> max_int in
  let (fd_in, fd_out) = pipe () in
  match fork () with $\label{prog:sievefork}$
  | 0 ->
      close fd_out;
      filter (in_channel_of_descr fd_in)
  | p ->
      close fd_in;
      generate len (out_channel_of_descr fd_out);; $\label{prog:gen}$

handle_unix_error sieve ();;
\end{listingcodefile}
%

In this example we do not wait for the child before stopping the
parent. The reason is that parent processes are \emph{generators} for
their children.

When \ml+k+ is given, the parent terminates first and closes the
descriptor on the input of the pipe connected to its child. Since
{\ocaml} flushes the buffers of descriptors open in write mode when a
process terminates, the child process will read the last integer
provided by the parent. After that, the child also stops, {\etc} Thus,
in this program, children become orphans and are temporarily attached
to the process \ml+init+ before they die in turn.

If \ml+k+ is not given, all processes continue indefinitely until one or
more are killed. The death of a process results in the death of its child
as described above. It also closes the output of the pipe connected to
its parent. This will in turn kill the parent at the next write on the
pipe (the parent will receive a \ml+sigpipe+ signal whose default
handler terminates the process).

\begin{exercise}
What needs to be changed so that the parent waits on the termination
of its children?
\end{exercise}
\begin{answer}
Of course, the parent must wait on its child. However, before doing
so, the parent must close the input of the pipe on which the child
reads; otherwise the child would wait indefinitely for more integers
from the parent, resulting in a deadlock. No data is lost: closing the
channel flushes the buffer before closing the underlying
descriptor. Concretely, line~\ref{prog:gen} of the \ml+sieve+
function needs to be replaced by:
\begin{lstlisting}
let output = out_channel_of_descr fd_out in
generate len output;
close_out output;
ignore(waitpid [] p);;
\end{lstlisting}
Accordingly, we enclose the 
lines~\ref{prog:sievefilterwhile}--\ref{prog:sievefilterdone} of the 
\ml+filter+ function (represented by \ml+...+ below) with the
following lines:
\begin{lstlisting}
try 
  ...
with End_of_file -> 
  close_out output;
  ignore (waitpid [] p)
\end{lstlisting}
\end{answer}

\begin{exercise}
Whenever a prime is found, the function \ml+print_prime+ evaluates
\ml+print_newline ()+. This performs a system call to empty the standard
output buffer and artificially limits the execution speed of the program.
In fact \ml+print_newline ()+ executes \ml+print_char '\n'+
followed by \ml+flush Pervasives.stdout+. What can happen if
\ml+print_newline ()+ is replaced by \ml+print_char '\n'+? What needs
to be added to solve the problem?
\end{exercise}
\begin{answer}
Since the child process is an exact copy of the parent, the \io{}
buffers of the standard library are duplicated when \ml+fork+ is
executed. If the buffers are not flushed after each write, they must
be flushed explicitly just before the call to \ml+fork+. All that is
needed is to add \ml+flush Pervasives.stdout+ after
line~\ref{prog:pprime} of the function \ml+read_first_primes+.
\end{answer}
%
\begin{codefile}{finalsieve.ed}
f finalsieve.ml
r sieve.ml
/let print_prime/s/print_newline *()/print_char '\\n'/
/    let (fd_in, fd_out)/a
    flush Pervasives.stdout;
.
/while true do/,/done/c
        try 
          while true do
            let n = input_int input in
            if List.exists (fun m -> n mod m = 0) first_primes then ()
            else output_int output n
          done;
        with End_of_file -> 
          close_out output;
          ignore (waitpid [] p)
.
/generate len (out_channel_of_descr/c
      let output = out_channel_of_descr fd_out in
      generate len output;
      close_out output;
      ignore(waitpid [] p);;
.
wq
\end{codefile}

\section{Named pipes}

On some Unix systems (System~V, SunOS, Ultrix, Linux, \textsc{bsd})
pipes with a name in the file system can be created. These \emph{named
  pipes} (also known as \emph{fifos}) allow processes to communicate
even if they are not in a parent/child relationship. This contrasts
with regular pipes, which restrict communication to the pipe's creator
and its descendants.

The system call \syscall{mkfifo} creates a named pipe: 
%
\begin{listingcodefile}{tmpunix.mli}
val $\libvalue{Unix}{mkfifo}$ : string -> file_perm -> unit
\end{listingcodefile}
%
The first argument is the name of the pipe, and the second one represents the
requested access permissions.

Named pipes are opened with a call to \libvalue{Unix}{openfile} like any
regular file. Reads and writes on a named pipe have the same semantics
as those on regular ones. Opening a named pipe in read-only mode
(resp. write-only mode) blocks until the pipe is opened by another
process for writing (resp. reading); if this has already happened,
there's no blocking. Blocking can be avoided altogether by opening the
pipe with the flag \ml+O_NONBLOCK+, but in this case reads and writes
on the pipe won't block either. After the
pipe is opened, the function \ml+clear_nonblock+ will change this flag to make further
reads or writes on the pipe blocking. Alternatively,
\ml+set_nonblock+ will make reads and writes non-blocking.
%
\begin{listingcodefile}{tmpunix.mli}
val $\indexlibvalue{Unix}{clear\_nonblock}$ : file_descr -> unit
val $\indexlibvalue{Unix}{set\_nonblock}$ : file_descr -> unit
\end{listingcodefile}
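
For illustration, here is a sketch of a writer on a named pipe (our
own example; the path \ml+/tmp/fifo+ is an arbitrary choice). Any
unrelated process~---~for instance \ml+cat /tmp/fifo+ run from a
shell~---~can then read what it writes:
%
\begin{lstlisting}
let () =
  (* Create the fifo, ignoring the error if it already exists. *)
  (try Unix.mkfifo "/tmp/fifo" 0o644
   with Unix.Unix_error (Unix.EEXIST, _, _) -> ());
  (* Blocks here until another process opens the fifo for reading. *)
  let fd = Unix.openfile "/tmp/fifo" [Unix.O_WRONLY] 0 in
  ignore (Unix.write_substring fd "hello\n" 0 6);
  Unix.close fd
\end{lstlisting}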

\section{Descriptor redirections}

So far, we still do not know how to connect the standard input and
output of processes with a pipe as the shell does to execute
commands like \ml+cmd1 | cmd2+. Indeed, the descriptors we get on the
ends of a pipe with a call to \ml+pipe+ (or to \ml+openfile+ on a
named pipe) are \emph{new} descriptors, distinct from \ml+stdin+,
\ml+stdout+ or \ml+stderr+.

To address this problem, Unix provides the system call \syscall{dup2} 
(read: \quotes{\emph{dup}licate a descriptor \emph{to} another
  descriptor}) that gives one file descriptor another one's meaning.
This
can be done because there is a level of indirection between a file
descriptor (an object of type \libtype{Unix}{file\_descr}) and the object in the
kernel called a \emph{file table entry} that points to the actual
file or pipe and maintains its current read/write position.
%
\begin{listingcodefile}{tmpunix.mli}
val $\libvalue{Unix}{dup2}$ : file_descr -> file_descr -> unit
\end{listingcodefile}
%
The effect of \ml+dup2 fd1 fd2+ is to update the descriptor \ml+fd2+
to refer to the file table entry pointed to by \ml+fd1+. After the
call, these two descriptors refer to the same file or pipe, at the
same read/write position.
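
The sharing of the read/write position can be checked with a sketch
(our own; the file \ml+data+ is hypothetical and assumed to contain at
least four bytes):
%
\begin{lstlisting}
let () =
  (* Two openfile calls: two distinct file table entries. *)
  let fd1 = Unix.openfile "data" [Unix.O_RDONLY] 0 in
  let fd2 = Unix.openfile "data" [Unix.O_RDONLY] 0 in
  Unix.dup2 fd1 fd2;
  (* Now both descriptors share fd1's entry: reading on fd1
     also advances the position seen through fd2. *)
  ignore (Unix.read fd1 (Bytes.create 4) 0 4);
  assert (Unix.lseek fd2 0 Unix.SEEK_CUR = 4)
\end{lstlisting}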

\begin{myimage}[width="80\%"]
\begin{tikzpicture}
[ft/.style={draw,rectangle,text width=1.5cm, inner sep=2mm,text centered}]
\node at (1.5,3.5) {Before \texttt{dup2 fd1 fd2}};
\node (fd1) at (0,2) [fd] {\texttt{fd1}};
\node (fd2) at (0,0) [fd] {\texttt{fd2}};
\node (ft1) at (3,2) [ft] {file table entry 1};
\node (ft2) at (3,0) [ft] {file table entry 2};
\draw [->] (fd1) to (ft1);
\draw [->] (fd2) to (ft2);

\node at (7.5,3.5) {After \texttt{dup2 fd1 fd2}};
\node (fd1) at (6,2) [fd] {\texttt{fd1}};
\node (fd2) at (6,0) [fd] {\texttt{fd2}};
\node (ft1) at (9,2) [ft] {file table entry 1};
\node (ft2) at (9,0) [ft] {file table entry 2};
\draw [->] (fd1) to (ft1);
\draw [->] (fd2.east) .. controls +(left:-1cm) and +(right:-1cm) .. (ft1.west);
\end{tikzpicture}
\end{myimage}

\begin{example} 
Standard input redirection.
%
\begin{lstlisting}
let fd = openfile "foo" [O_RDONLY] 0 in
dup2 fd stdin;
close fd;
execvp "bar" [|"bar"|]
\end{lstlisting}
% 
After the call to \ml+dup2+, the descriptor \ml+stdin+ points to the
file \ml+foo+. Any read on \ml+stdin+ will read from the file \ml+foo+
(so does any read on \ml+fd+; but since we won't use it, we close it
immediately). This setting on \ml+stdin+ is preserved by \ml+execvp+
and  the program \ml+bar+ will execute with its standard input
connected to the file \ml+foo+. This is the way the shell executes
commands like \ml+bar < foo+.
\end{example}

\begin{example} 
Standard output redirection.
%
\begin{lstlisting}
let fd = openfile "foo" [O_WRONLY; O_TRUNC; O_CREAT] 0o666 in
dup2 fd stdout;
close fd;
execvp "bar" [|"bar"|]
\end{lstlisting}
% 
After the call to \ml+dup2+, the descriptor \ml+stdout+ points to
the file \ml+foo+. Any write on \ml+stdout+ will write to the file
\ml+foo+ (so does any write on \ml+fd+; but since we won't use it we
close it immediately). This setting on \ml+stdout+ is preserved by
\ml+execvp+ and the program \ml+bar+ will execute with its standard output
connected to the file \ml+foo+. This is the way the shell executes
commands like \ml+bar > foo+.
\end{example}

\begin{example} Connecting the output of a program to the input of another.
%
\begin{lstlisting}
let (fd_in, fd_out) = pipe () in
match fork () with
| 0 -> 
       dup2 fd_in stdin;
       close fd_out;
       close fd_in;
       execvp "cmd2" [|"cmd2"|]
| _ -> 
       dup2 fd_out stdout;
       close fd_out;
       close fd_in;
       execvp "cmd1" [|"cmd1"|]
\end{lstlisting}
%
The program \ml+cmd2+ is executed with its standard input connected to
the output of the pipe. In parallel, the program \ml+cmd1+ is executed
with its standard output connected to the input of the pipe. Therefore
whatever \ml+cmd1+ writes on its standard output is read by \ml+cmd2+
on its standard input.

What happens if \ml+cmd1+ terminates before \ml+cmd2+? When \ml+cmd1+
terminates, all its open descriptors are closed. This means that there
is no longer any open descriptor on the input of the pipe. When
\ml+cmd2+ has read all the data waiting in the pipe, the next read
returns an end of file; \ml+cmd2+ will then do whatever it is supposed
to do when it reaches the end of its standard input~---~for example,
terminate.

%% I commented out these examples, the problem is that you need
%% more assumptions on foo bar gee files for them to be understandable.

%% Here's an example:
%% %
%% \begin{lstlisting}
%% cat foo bar gee | grep buz
%% \end{lstlisting}
%% %
Now,  if \ml+cmd2+ terminates before \ml+cmd1+, the last descriptor on
the output of the pipe is closed and \ml+cmd1+ will get
a signal (which by default kills the process) the next time
it tries to write on its standard output. 

%% See comment above.

%% Here's an example, type 
%% %
%% \begin{lstlisting}
%% grep buz gee | more
%% \end{lstlisting}
%% %
%% and quit \ml+more+ before \ml+grep+ ends by typing a \ml+q+. At that moment 
%% \ml+grep+ ends prematurely without reaching the end of \ml+gee+.
\end{example}

\begin{exercise}
Implement some of the other redirections provided by the shell
\ml+sh+. Namely: 
%
\begin{lstlisting}
>>      2>      2>>     2>&1    <<
\end{lstlisting}
%
\end{exercise}
\begin{answer}
\begin{itemize}
\item For \ml+>>+, the answer is similar to the \ml+>+ redirection, except that the
file  is opened with the flags \ml+[O_WRONLY; O_APPEND; O_CREAT]+.
%
\item For \ml+2>+, the answer is similar to the \ml+>+ redirection, except that
\ml+dup2 fd stderr+ is executed instead of \ml+dup2 fd stdout+
%
\item For \ml+2>&1+, we must call \ml+dup2 stdout stderr+ before executing
the command.
%
\item For \ml+<<+, the shell \ml+sh+ must create a temporary file in
\ml+/tmp+ containing the lines that follow \ml+<<+ and execute the 
command with its standard input redirected from this file. Another 
solution is to connect the command's standard input to the output of a 
pipe and let a child process write the lines following \ml+<<+ on the
input of that pipe.
\end{itemize}
\end{answer}

Swapping two descriptors requires care. The naive sequence
\ml+dup2 fd1 fd2;+ \ml+dup2 fd2 fd1+ does not work. Indeed, the second
redirection has no effect, since after the first one both descriptors
\ml+fd1+ and \ml+fd2+ already point to the same file table entry; the
entry initially pointed to by \ml+fd2+ is lost. This is like swapping
the contents of two reference cells: a temporary variable is needed to
save one of the two values. Here, we can save one of the descriptors
by copying it with the system call \syscall{dup}.
%
\begin{listingcodefile}{tmpunix.mli}
val $\libvalue{Unix}{dup}$ : file_descr -> file_descr
\end{listingcodefile}
%
The call \ml+dup fd+ returns a new descriptor pointing to the same
file table entry as \ml+fd+. For example, we can swap \ml+stdout+ and
\ml+stderr+ with:
%
\begin{codefile}{dup.ml}
open Unix;;
let exchange () = 
\end{codefile}
%
% There is an error in the original: tmp should be the dup of
% stdout, not stderr.  As originally written, both stdout and stderr
% will point to the standard error port.
\begin{listingcodefile}{dup.ml}
let tmp = dup stdout in
dup2 stderr stdout; 
dup2 tmp stderr;
close tmp;;
\end{listingcodefile}
% 
After the swap, do not forget to close the temporary descriptor
% The original referred to this as a memory leak, but it is more
% accurate to call it a descriptor leak, since the symptoms of the two
% are different.
\ml+tmp+ to prevent a descriptor leak.

\section{Complete example: composing $N$ commands}

We program a command \ml+compose+ such that 
\begin{lstlisting}
compose cmd$\(_1\)$ cmd$\(_2\)$ ... cmd$\(_n\)$ 
\end{lstlisting}
behaves like the shell command:
\begin{lstlisting}
cmd$\(_1\)$ | cmd$\(_2\)$ | ... | cmd$\(_n\)$
\end{lstlisting}
\begin{listingcodefile}[style=numbers]{compose.ml}
open Sys;;
open Unix;;

let compose () =
  let n = Array.length Sys.argv - 1 in
  for i = 1 to n - 1 do $\label{prog:composefor}$
    let (fd_in, fd_out) = pipe () in
    match fork () with
    | 0 ->
        dup2 fd_out stdout;
        close fd_out;
        close fd_in;
        execv "/bin/sh" [| "/bin/sh"; "-c"; Sys.argv.(i) |]
    | _ ->
        dup2 fd_in stdin;
        close fd_out;
        close fd_in
  done;
  match fork () with
  | 0 -> execv "/bin/sh" [|"/bin/sh"; "-c"; Sys.argv.(n) |]
  | _ ->
      let rec wait_for_children retcode =
        try
          match wait () with
          | (pid, WEXITED n) -> wait_for_children (retcode lor n)
          | (pid, _)         -> wait_for_children 127
        with
          Unix_error(ECHILD, _, _) -> retcode in
      exit (wait_for_children 0)
;;
handle_unix_error compose ();;
\end{listingcodefile}
% 
The bulk of the work is done by the \ml+for+ loop starting at
line~\ref{prog:composefor}. For each command except the last one, we
create a new pipe and a child process. The child connects the pipe's
input to its standard output and executes the command. After the
\ml+fork+ it inherits the standard input of its parent. The main
process (the parent) connects the pipe's output to its standard input
and continues the loop. Suppose (induction hypothesis) that at the
beginning of the $i$th iteration, the situation is as follows:
%
\tikzset{
fd/.style={draw,ellipse,font=\small},
pipe/.style={draw,cylinder,minimum size=4mm,minimum
  height=10mm,anchor=shape center},
process/.style={draw,rectangle,inner sep=1mm, rounded corners,
                text width=1.3cm, minimum height=1cm, text
                centered,font=\small}}
\begin{myimage}[width="80\%"]
\begin{tikzpicture}
\node (stdin) at (0,0)[fd] {\texttt{stdin}};
\node (cmd1) at (2.5,0) [process] {\texttt{cmd}$_1$};
\node (pipe1) at (4.5,0) [pipe] {};
\node (cmd2) at (6.5,0) [process] {\texttt{cmd}$_2$};
\node (ldots1) at (8.25,0) {\ldots};
\draw[->] (stdin) to (cmd1);
\draw[->] (cmd1) to (pipe1);
\draw[->] (pipe1) to (cmd2);
\draw[->] (cmd2) to (ldots1);

\node (ldots2) at (0.75,-1.5) {\ldots};
\node (cmd) at (2.5,-1.5) [process] {\texttt{cmd}$_{i-1}$};
\node (pipe2) at (4.5,-1.5) [pipe] {};
\node (compose) at (6.5,-1.5) [process] {\texttt{compose}};
\node (stdout) at (9,-1.5) [fd] {\texttt{stdout}};
\draw[->] (ldots2) to (cmd);
\draw[->] (cmd) to (pipe2);
\draw[->] (pipe2) to (compose);
\draw[->] (compose) to (stdout);
\end{tikzpicture}
\end{myimage}
Rounded boxes represent processes. Their standard input is on the
left, their standard output on the right. The ellipses represent
the initial standard input and output of the \ml+compose+ process.
Just after the call to \ml+pipe+ and \ml+fork+ we have:
%
\begin{myimage}[width="100\%"]
\begin{tikzpicture}
\node (ldots) at (0, 0) {\ldots};
\node (cmd) at (1.5, 0) [process] {\texttt{cmd}$_{i-1}$};
\node (pipe1) at (3.5, 0) [pipe] {};
\node (composec) at (5.5,0) [process] {\texttt{compose} \scriptsize{(child)}};
\node (pipe2) at (7.5, 0) [pipe] {};
\node (composef) at (9.5,0) [process] {\texttt{compose} \scriptsize{(parent)}};
\node (stdout) at (12,0) [fd] {\texttt{stdout}};
\draw[->] (cmd) to (pipe1);
\draw[->] (pipe1) to (composec);
\draw[->] (pipe1.east) to [bend right=45] (composef.west);
\draw[->] (composec.east) to [bend right=45] (stdout.west);
\draw[->] (composef) to (stdout);
\end{tikzpicture}
\end{myimage}
%
When the parent calls \ml+dup2+, we get:
%
\begin{myimage}[width="100\%"]
\begin{tikzpicture}
\node (ldots) at (0, 0) {\ldots};
\node (cmd) at (1.5, 0) [process] {\texttt{cmd}$_{i-1}$};
\node (pipe1) at (3.5, 0) [pipe] {};
\node (composec) at (5.5,0) [process] {\texttt{compose} \scriptsize{(child)}};
\node (pipe2) at (7.5, 0) [pipe] {};
\node (composef) at (9.5,0) [process] {\texttt{compose} \scriptsize{(parent)}};
\node (stdout) at (12,0) [fd] {\texttt{stdout}};
\draw[->] (cmd) to (pipe1);
\draw[->] (pipe1) to (composec);
\draw[->] (composec.east) to [bend right=45] (stdout.west);
\draw[->] (pipe2) to (composef);
\draw[->] (composef) to (stdout);
\end{tikzpicture}
\end{myimage}
%
When the child calls \ml+dup2+ and \ml+execv+, we get:
\begin{myimage}[width="100\%"]
\begin{tikzpicture}
\node (ldots) at (0, 0) {\ldots};
\node (cmd) at (1.5, 0) [process] {\texttt{cmd}$_{i-1}$};
\node (pipe1) at (3.5, 0) [pipe] {};
\node (cmd2) at (5.5,0) [process] {\texttt{cmd}$_{i}$};
\node (pipe2) at (7.5, 0) [pipe] {};
\node (composef) at (9.5,0) [process] {\texttt{compose}};
\node (stdout) at (12,0) [fd] {\texttt{stdout}};
\draw[->] (cmd) to (pipe1);
\draw[->] (pipe1) to (cmd2);
\draw[->] (cmd2) to (pipe2);
\draw[->] (pipe2) to (composef);
\draw[->] (composef) to (stdout);
\end{tikzpicture}
\end{myimage}
%
and everything is ready for the next iteration. 

The last command is forked after the loop because there's no need to
create a new pipe: the process \ml+compose+ already has the right
standard input (the output of the next to last command) and output
(the one initially given to the command \ml+compose+) for the
child. Hence it is sufficient to \ml+fork+ and \ml+exec+. The parent then
waits for its children to terminate: it calls \ml+wait+ repeatedly
until the error \ml+ECHILD+ (no child to wait for) is raised. The
children's return codes are combined with a bitwise \quotes{or}
(\ml+lor+ operator) to create a meaningful return code for
\ml+compose+: zero if all the children returned zero, nonzero
otherwise.
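This final wait loop can be sketched as follows (our own sketch, not
the book's exact code; the function name \ml+wait_all+ is ours):
%
\begin{lstlisting}
(* Reap every child, or-ing their exit codes together.  A child
   killed or stopped by a signal contributes a nonzero code.  We
   stop when wait raises ECHILD, i.e. no child is left. *)
let rec wait_all acc =
  match Unix.wait () with
  | (_, Unix.WEXITED c) -> wait_all (acc lor c)
  | (_, _) -> wait_all (acc lor 127)
  | exception Unix.Unix_error (Unix.ECHILD, _, _) -> acc
\end{lstlisting}
%
The process then terminates with \ml+exit (wait_all 0)+, which is
zero exactly when every child exited with code zero.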

Note that we execute commands through the shell \ml+/bin/sh+. This
prevents us from having to parse complex commands into tokens as
in the following invocation:
%
\begin{lstlisting}
compose "grep foo" "wc -l"
\end{lstlisting}
%
Adding this functionality to our program would complicate it needlessly.

\section{Input/output multiplexing}

In all the examples so far, processes communicate \emph{linearly}:
each process reads data coming from at most one other process. In this
section we highlight and solve the problems occurring whenever a
process needs to read data coming from \emph{many} processes.

Consider the example of a multi-windowed terminal emulator. Suppose we
have a computer, called the client, connected to a Unix machine by a
serial port. We want to emulate, on the client, many terminal windows
connected to different user processes on the Unix machine. For example,
one window can be connected to a shell and another to a text
editor. Outputs from the shell are displayed in the first window and
those from the editor in the other. If the first window is
active, keystrokes from the client's keyboard are sent to the input of
the shell and if the second window is active they are sent to the
input of the editor.

Since there's only a single physical link between the client and the
Unix machine, we need to multiplex the virtual connections between
windows and processes by interleaving the data transmissions.
Here's the protocol we are going to use. On the serial port, we send
messages with the following structure: 
%
\begin{itemize}
\item One byte indicating the process number or window number of the receiver.
\item One byte indicating the number $N$ of bytes that follow.
\item $N$ bytes of data to send to the receiver.
\end{itemize}
% 
On the Unix machine, user processes (shell, editor, \etc) are
connected by a pipe to one or more auxiliary processes that read and
write on the serial port and (de)multiplex the data. The serial port
is a special file (\ml+/dev/ttya+, for example), on which the
auxiliary processes \ml+read+ and \ml+write+ to communicate with the 
client.
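Encoding and decoding such messages is straightforward. As a sketch
(the function names are ours, and we use the \ml+Bytes+ API of recent
{\ocaml} versions rather than mutable strings):
%
\begin{lstlisting}
(* Build a message: one receiver byte, one length byte, then the
   data itself.  Both values must fit in a byte. *)
let encode_message receiver data =
  let n = String.length data in
  assert (receiver < 256 && n < 256);
  let msg = Bytes.create (n + 2) in
  Bytes.set msg 0 (Char.chr receiver);
  Bytes.set msg 1 (Char.chr n);
  Bytes.blit_string data 0 msg 2 n;
  msg

(* Inverse operation: split a message into (receiver, data). *)
let decode_message msg =
  let receiver = Char.code (Bytes.get msg 0) in
  let n = Char.code (Bytes.get msg 1) in
  (receiver, Bytes.sub_string msg 2 n)
\end{lstlisting}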

Demultiplexing (transmission from the client to the user processes)
does not pose any particular problem. We just need a process that reads
messages on the serial port and writes the extracted data on the pipe
connected to the standard input of the receiving user process.
%
\tikzset{
fd/.style={draw,ellipse,font=\small},
pipe/.style={draw,cylinder,minimum size=4mm,minimum
  height=10mm,anchor=shape center},
process/.style={draw,rectangle,inner sep=1mm, rounded corners,
                text width=1.4cm, minimum height=1cm, text
                centered,font=\small}}

\begin{myimage}[width="55\%"]
\begin{tikzpicture}
\node (dev) at (0,0) [fd] {\texttt{/dev/ttya}};
\node (demux) at (3,0) [process] {demulti\-plexer};
\node (shell) at (5.5,0.75) [process] {\texttt{shell}};
\node (emacs) at (5.5,-0.75) [process] {\texttt{emacs}};
\draw[->] (dev) to (demux);
\draw[->] (demux) to (shell.west);
\draw[->] (demux) to (emacs.west);
\end{tikzpicture}
\end{myimage}
% 
Multiplexing (transmission from user processes to the client) is
more tricky. Let us try to mimic the demultiplexer: a process reads
sequentially from the pipes connected to the standard output of the
user processes and writes the data it reads as messages on the serial
port, prepending the receiving window number and the length of the
data.
%
\begin{myimage}[width="100\%"]
\begin{tikzpicture}
\node (dev) at (0,0) [fd] {\texttt{/dev/ttya}};
\node (demux) at (3,0) [process] {demulti\-plexer};
\node (shell) at (5.5,0.75) [process] {\texttt{shell}};
\node (emacs) at (5.5,-0.75) [process] {\texttt{emacs}};
\node (mux) at (8,0) [process] {multi\-plexer};
\node (dev2) at (11,0) [fd] {\texttt{/dev/ttya}};
\draw[->] (dev) to (demux);
\draw[->] (demux) to (shell.west);
\draw[->] (demux) to (emacs.west);
\draw[->] (shell.east) to (mux);
\draw[->] (emacs.east) to (mux);
\draw[->] (mux) to (dev2);
\end{tikzpicture}
\end{myimage}
% 
This does not work, because reading from a pipe can block. For
example, if we try to read the output of the shell but it has nothing
to display at that moment, the multiplexer process blocks, and
characters waiting to be transmitted from the editor are ignored.
There is no way to know in advance on which pipes data is waiting to
be read. (In parallel algorithms, the situation where a process is
perpetually denied access to a shared resource is called
\emph{starvation}.)

Here is another approach: we associate with each user process a
\emph{repeater} process. The repeater reads the output of the pipe
connected to the standard output of the user process, transforms the
data into messages and writes the result directly on the serial port
(each repeater process opens \ml+/dev/ttya+ in write mode).
%
\begin{myimage}[width="100\%"]
\begin{tikzpicture}
\node (dev) at (0,0) [fd] {\texttt{/dev/ttya}};
\node (demux) at (3,0) [process] {demulti\-plexer};
\node (shell) at (5.5,0.75) [process] {\texttt{shell}};
\node (emacs) at (5.5,-0.75) [process] {\texttt{emacs}};
\node (rep1) at (8,0.75) [process] {repeater};
\node (rep2) at (8,-0.75) [process] {repeater};
\node (dev2) at (11,0) [fd] {\texttt{/dev/ttya}};
\draw[->] (dev) to (demux);
\draw[->] (demux) to (shell.west);
\draw[->] (demux) to (emacs.west);
\draw[->] (shell.east) to (rep1);
\draw[->] (emacs.east) to (rep2);
\draw[->] (rep1.east) to (dev2.175);
\draw[->] (rep2.east) to (dev2.185);
\end{tikzpicture}
\end{myimage}
% 

Since each user process has its output transmitted independently,
blocking problems are solved. However, the protocol may not be
respected. Two repeaters may try to write a message at the same time
and the Unix kernel does not guarantee the atomicity of writes, \ie{} 
that they are performed in a single uninterruptible operation.
Thus the kernel may choose to write only a part of a message from a
repeater to \ml+/dev/ttya+, then write a full message from another
repeater and finally write the remaining part of the first message.
This will utterly confuse the demultiplexer on the client: 
it will interpret the second message as part of the data of the
first and then interpret the rest of the data as a new message header.

To avoid this, repeater processes must synchronize so that at any time
at most one of them is writing on the serial port (in parallel
algorithms we say that we need to enforce the mutual exclusion of
repeaters on the access to the serial link). Technically, this can be
done with concepts we have already seen so far: repeaters can create a
specific file (the \quotes{lock}) with the \ml+O_EXCL+ flag before
sending a message and destroy it after they are done writing to the
serial port. However, this technique is not very efficient because the
lock creation and destruction costs are too high.
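For reference, the lock-file technique can be sketched as follows
(the retry delay is ours, and \ml+acquire_lock+ and
\ml+release_lock+ are illustrative names):
%
\begin{lstlisting}
(* Acquire the lock by creating the file with O_EXCL: creation
   fails with EEXIST if another repeater holds the lock, in which
   case we retry after a short delay. *)
let rec acquire_lock path =
  match Unix.openfile path
          [Unix.O_CREAT; Unix.O_EXCL; Unix.O_WRONLY] 0o600 with
  | fd -> Unix.close fd
  | exception Unix.Unix_error (Unix.EEXIST, _, _) ->
      Unix.sleepf 0.01;
      acquire_lock path

let release_lock path = Unix.unlink path
\end{lstlisting}
%
Every message sent thus costs at least two extra system calls plus
file-system traffic, which is why the technique is deemed too
expensive here.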

A better solution is to take the first approach (a single
multiplexer process) and set the output of the pipes connected to the
standard output of user processes in non-blocking mode with
\ml+set_nonblock+. A read on an empty pipe will not block but return
immediately by raising the error \ml+EAGAIN+ or \ml+EWOULDBLOCK+. We
just ignore this error and try to read the output of the next user
process. This will prevent starvation and avoid any mutual exclusion
problem. However, it is a very inefficient solution: the multiplexer
process performs what is called \quotes{busy waiting}, using
processing time even when no process is sending data. This can be
alleviated by introducing calls to
\ml+sleep+ in the reading loop; unfortunately, it is very difficult to find
the right frequency. Short \ml+sleep+s cause needless processor load when there
is little data, and long \ml+sleep+s introduce perceptible delays when there is a lot of data.
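The polling approach just described could look like this (our own
sketch; \ml+handle+ stands for the code that builds and sends the
message on the serial port):
%
\begin{lstlisting}
(* One polling pass: try to read every pipe, skipping the empty
   ones.  The descriptors must previously have been put in
   non-blocking mode with set_nonblock. *)
let scan_once pipes handle =
  let buffer = Bytes.create 256 in
  Array.iteri
    (fun i fd ->
       match Unix.read fd buffer 0 256 with
       | n -> if n > 0 then handle i buffer n
       | exception
           Unix.Unix_error ((Unix.EAGAIN | Unix.EWOULDBLOCK), _, _) ->
           ())
    pipes

(* The busy-waiting loop itself; the sleep duration is a
   compromise with no good value. *)
let poll_loop pipes handle =
  Array.iter Unix.set_nonblock pipes;
  while true do
    scan_once pipes handle;
    Unix.sleepf 0.05
  done
\end{lstlisting}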

This is a serious problem. To solve it, the designers of \textsc{bsd}
Unix introduced a new system call, \ml+select+, which is now
available on most Unix variants. A call to \ml+select+ allows a
process to wait (passively) on one or more input/output events.
An event can be:
%
\begin{itemize}
\item A read event: there is data to read on that descriptor.

\item A write event: it is possible to write on that descriptor
  without blocking.

\item An exceptional event: an exceptional condition is
  true on that descriptor. For example, on certain network connections
  high-priority data (\emph{out-of-band data}) can be sent that
  overtakes normal data waiting to be sent. Receiving this kind of 
  high-priority data is an exceptional condition.
\end{itemize}
%
The system call \syscall{select} has the following signature:
%
\begin{listingcodefile}{tmpunix.mli}
val $\libvalue{Unix}{select}$ : 
    file_descr list -> file_descr list -> file_descr list -> 
      float -> file_descr list * file_descr list * file_descr list
\end{listingcodefile}
%
The first three arguments are sets of descriptors represented by
lists: the first argument is the set of descriptors to watch for read
events; the second argument is the set of descriptors to watch for
write events; the third argument is the set of descriptors to watch
for exceptional events. The fourth argument is a timeout in
seconds. If it is positive or zero, the call to \ml+select+ returns
after at most that time, even if no event occurred. If it is
negative, \ml+select+ waits indefinitely until one of the requested
events occurs.

The \ml+select+ call returns a triplet of descriptor lists: the first
component is the list of descriptors ready for reading, the second
component those ready for writing and the third one those on which an
exceptional condition occurred. If the timeout expires before any
event occurs, the three lists are empty.

\begin{example} 
The code below watches read events on the descriptors \ml+fd1+ and 
\ml+fd2+ and returns after at most 0.5 seconds. 
\begin{lstlisting}
match select [fd1; fd2] [] [] 0.5 with
| [], [], [] -> (* the 0.5s timeout expired *)
| fdl, [], [] ->
    if List.mem fd1 fdl then
         (* read from fd1 *);
    if List.mem fd2 fdl then
         (* read from fd2 *)
\end{lstlisting}
\end{example}
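The same pattern can be exercised on an actual pipe (a
self-contained variant of the fragment above, our own):
%
\begin{lstlisting}
(* Write one byte into a pipe; select must then report the read
   end as ready, well before the 0.5 second timeout expires. *)
let () =
  let (fd_in, fd_out) = Unix.pipe () in
  ignore (Unix.write_substring fd_out "x" 0 1);
  match Unix.select [fd_in] [] [] 0.5 with
  | [fd], [], [] when fd = fd_in ->
      print_endline "data ready on fd_in"
  | [], [], [] -> print_endline "timeout expired"
  | _ -> assert false
\end{lstlisting}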

\begin{example} 
The following \ml+multiplex+ function is central to the
multiplexer/demultiplexer of the multi-windowed terminal emulator
described above.

To simplify, the multiplexer just sets the receiver of messages
according to their provenance and the demultiplexer redirects data
directly to the receiver number. In other words, we assume that either
each sender talks to a receiver with the same number, or that the
correspondence between them is magically established in the middle of
the serial link by rewriting the receiver number.

The \ml+multiplex+ function takes a descriptor open on the serial port
and two arrays of descriptors of the same size, one containing pipes
connected to the standard input of the user processes, the other
containing pipes connected to their standard output.
%
\begin{listingcodefile}{multiplex.ml}
open Unix;;

let rec really_read fd buff start length =
  if length <= 0 then () else
    match read fd buff start length with
    | 0 -> raise End_of_file
    | n -> really_read fd buff (start+n) (length-n);;

let buffer = String.create 258;;

let multiplex channel inputs outputs =
  let input_fds = channel :: Array.to_list inputs in
  try
    while true do
      let (ready_fds, _, _) = select input_fds [] [] (-1.0) in
      for i = 0 to Array.length inputs - 1 do
        if List.mem inputs.(i) ready_fds then begin
          let n = read inputs.(i) buffer 2 255 in
          buffer.[0] <- char_of_int i;
          buffer.[1] <- char_of_int n;
          ignore (write channel buffer 0 (n+2));
          ()
        end
      done;
      if List.mem channel ready_fds then begin
        really_read channel buffer 0 2;
        let i = int_of_char(buffer.[0])
        and n = int_of_char(buffer.[1]) in
        if n = 0 then close outputs.(i) else 
        begin
          really_read channel buffer 0 n;
          ignore (write outputs.(i) buffer 0 n);
          ()
        end
      end
    done
  with End_of_file -> () ;;
\end{listingcodefile}

The \ml+multiplex+ function starts by constructing a set of
descriptors (\ml+input_fds+) that contains the input descriptors
(those connected to the standard output of the user processes) and the
descriptor of the serial port. On each iteration of the 
\ml+while+ loop we call \ml+select+ to watch for pending reads in
\ml+input_fds+. We do not watch for any write or exceptional event and
we do not limit the waiting time. When \ml+select+ returns, we test whether
there is data waiting on an input descriptor or on the serial port.

If there is data on an input descriptor we \ml+read+ this input into a
buffer, add a message header and write the result on the serial
port. If \ml+read+ returns zero this indicates that the corresponding
pipe was closed. The terminal emulator on the client will receive a
message with zero bytes, signaling that the user process
with that number died; it can then close the corresponding window.

If there is data on the serial port, we read the two-byte message
header which gives us the number \ml+i+ of the receiver and the number
\ml+n+ of bytes to read. We then read \ml+n+ bytes on the channel and
write them on the output \ml+i+ connected to the standard input of the
corresponding user process. However, if \ml+n+ is 0, we close the
output \ml+i+. The idea is that the terminal emulator at the other end
sends a message with \ml+n = 0+ to indicate an end of file on the
standard input of the receiving user process.

We get out of the loop when \ml+really_read+ raises the exception
\ml+End_of_file+, which indicates an end of file on the
serial port.
\end{example}

\section{\label{single_write}Miscellaneous: \texttt{write}}

The function \ml+write+ of the \ml+Unix+ module iterates the system call
\syscall{write} until all the requested bytes are effectively written.
\begin{listingcodefile}{tmpunix.mli}
val $\libvalue{Unix}{write}$ : file_descr -> string -> int -> int -> int
\end{listingcodefile}
% 
However, when the descriptor is a pipe (or a socket, see
chapter~\ref{sec/sockets}), writes may block and the system call
\ml+write+ may be interrupted by a signal. In this case the {\ocaml}
call to \ml+Unix.write+ is interrupted and the error \ml+EINTR+ is
raised. The problem is that some of the data may already have been
written by a previous system call to \ml+write+ but the actual size
that was transferred is unknown and lost. This renders the function 
\ml+write+ of the \ml+Unix+ module useless in the presence of signals.
 
To address this problem, the \ml+Unix+ module also provides the
\quotes{raw} system call \ml+write+ under the name
\ml+single_write+. 
\begin{listingcodefile}{tmpunix.mli}
val $\libvalue{Unix}{single\_write}$ : file_descr -> string -> int -> int -> int
\end{listingcodefile}
With \ml+single_write+, if an error is raised it is guaranteed that no
data is written.

The rest of this section shows how to implement this
function. Fundamentally, it is just a matter of interfacing {\ocaml} with
C (more information about this topic can be found in the relevant
section of the {\ocaml} manual). The following code is written in the file
\ml+single_write.c+:
%
\begin{listingcodefile}[style=numbers]{single_write.c}
#include <errno.h>
#include <string.h>
#include <caml/mlvalues.h>
#include <caml/memory.h>
#include <caml/signals.h>
#include <caml/unixsupport.h>

CAMLprim value caml_single_write
        (value fd, value buf, value vofs, value vlen) {
  CAMLparam4(fd, buf, vofs, vlen);
  long ofs, len;
  int numbytes, ret;
  char iobuf[UNIX_BUFFER_SIZE];
  ofs = Long_val(vofs);
  len = Long_val(vlen);
  ret = 0;
  if (len > 0) {
    numbytes = len > UNIX_BUFFER_SIZE ? UNIX_BUFFER_SIZE : len;
    memmove (iobuf, &Byte(buf, ofs), numbytes);
    caml_enter_blocking_section (); $\label{prog:enterbs}$
    ret = write(Int_val(fd), iobuf, numbytes);
    caml_leave_blocking_section (); $\label{prog:leavebs}$
    if (ret == -1) uerror("single_write", Nothing);
  }
  CAMLreturn (Val_int(ret));
}
\end{listingcodefile}
% 
The first two lines include standard C headers. The following four
lines include C headers specific to {\ocaml} installed by the
distribution. The \ml+unixsupport.h+ header defines reusable C
functions of the {\ocaml} Unix library.

The most important line is the call to \ml+write+. Since the call may
block (if the descriptor is a pipe or a socket) we need to release the
global lock on the {\ocaml} runtime immediately before the call
(line~\ref{prog:enterbs}) and reacquire it right after
(line~\ref{prog:leavebs}). This makes the function compatible with the
\ml+Thread+ module (see chapter~\ref{sec/coprocessus}): it allows
other threads to execute during the blocking call. 

During the system call {\ocaml} may perform a garbage collection and
the address of the {\ocaml} string \ml+buf+ may move in memory. To
solve this problem we copy \ml+buf+ into the C string \ml+iobuf+.
This has an additional cost, but only in the order of magnitude of
10\% (and not 50\% as one might think) because the overall cost of the
function is dominated by the system call. The size of this C string is
defined in \ml+unixsupport.h+. If an error occurs during the system
call (indicated by a negative return value) it is propagated to
{\ocaml} by the function \ml+uerror+, defined in the {\ocaml} Unix library.

To access this code from {\ocaml}, the file \ml+write.mli+ declares:
%
\begin{codefile}{write.mli}
open Sys
open Unix
val single_write : file_descr -> string -> int -> int -> int 
(** Same as [write] but does not attempt to write all data. Return after
the first successful partial transfer. *)
\end{codefile}
%
\begin{codefile}{write.ml}
open Sys
open Unix
\end{codefile}
%
\begin{listingcodefile}{write.ml}
external unsafe_single_write :
  file_descr -> string -> int -> int -> int = "caml_single_write"
\end{listingcodefile}
%
But in practice we verify the arguments before calling the function: 
\begin{listingcodefile}{write.ml}
let single_write fd buf ofs len =
  if ofs < 0 || len < 0 || ofs > String.length buf - len
  then invalid_arg "Unix.write"
  else unsafe_single_write fd buf ofs len
\end{listingcodefile}
%
This function has been available in the \ml+Unix+ module since version
\texttt{3.08}. But if we had written the program above ourselves we would
need to compile it as follows to use it (assuming the {\ocaml} code is
in the files \ml+write.mli+ and \ml+write.ml+):
%
\begin{lstlisting}
ocamlc -c single_write.c write.ml
ocamlc -custom -o prog unix.cma single_write.o write.cmo mod1.ml mod2.ml
\end{lstlisting}
%
It is often more practical to build a library \ml+write.cma+ containing
both the C and the {\ocaml} code:
%
\begin{lstlisting}
ocamlc -custom -a -o write.cma single_write.o write.cmo
\end{lstlisting}
%
The library \ml+write.cma+ can then be used like \ml+unix.cma+:
%
\begin{lstlisting}
ocamlc -o main.byte unix.cma write.cma main.ml
\end{lstlisting}

The semantics of \ml+single_write+ is as close as possible to the
system call \ml+write+. The only remaining difference is when the
original string is very long (greater than \ml+UNIX_BUFFER_SIZE+); the
call may then not write all the data and must be iterated.  The
atomicity of \ml+write+ (guaranteed for regular files) is thus not
guaranteed for long writes. This difference is generally insignificant but one should
be aware of it.

On top of this function we can implement a higher-level function
\ml+really_write+, analogous to the function \ml+really_read+ of the
multiplexer example, that writes exactly the requested amount of data
(but not atomically).
%
\begin{codefile}{misc.mli}
val really_write : file_descr -> string -> int -> int -> unit
(** Like [single_write], but restarts on [EINTR] until all bytes have
been written. When an error occurs, an unknown number of bytes may
already have been written; hence, an error should in general be
considered fatal. *)
\end{codefile}
%
\begin{listingcodefile}{misc.ml}
let rec really_write fd buffer offset len =
  let n = restart_on_EINTR (single_write fd buffer offset) len in
  if n < len then really_write fd buffer (offset + n) (len - n);;
\end{listingcodefile}
%
\begin{codefile}{copyintr.ml}
open Sys
open Unix

let buffer_size = 10240

let copy fdin fdout = 
  let buffer = String.create buffer_size in
  let rec copy ()  =
    let len = 1 + Random.int (buffer_size - 1) in
    let n = Misc.restart_on_EINTR (read fdin buffer 0) len in
    if n > 0 then
      begin
        Misc.really_write fdout buffer 0 n;
        copy ()
      end in
  copy ()

let main () =
  let eintr _ = () in
  let _ = signal sigalrm (Signal_handle eintr) in
  let _ = setitimer ITIMER_REAL { it_interval = 1e-5; it_value = 1e-5; } in
  copy stdin stdout;;

handle_unix_error main ()
\end{codefile}
%
\begin{codefile}{copyintr.test}
COP=./copyintr.byte 
$COP < $COP | $COP | $COP | diff --brief - $COP 
\end{codefile}

