<!DOCTYPE html>
            
<HTML>
<HEAD>
<meta name="booktitle" content="Developing Applications With Objective Caml" >
 <meta charset="ISO-8859-1"><meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=0">
<META name="GENERATOR" content="hevea 1.05-7 of 2000-02-24">
<META NAME="Author" CONTENT="Christian.Queinnec@lip6.fr">
<LINK rel=stylesheet type="text/css" href="videoc-ocda.css">
<script language="JavaScript" src="videoc.js"><!--
//--></script>
<TITLE>
 Presentation of Part IV
</TITLE>
</HEAD>
<BODY class="regularBody">
<A HREF="book-ora161.html"><IMG SRC ="previous_motif.gif" ALT="Previous"></A>
<A HREF="index.html"><IMG SRC ="contents_motif.gif" ALT="Contents"></A>
<A HREF="book-ora163.html"><IMG SRC ="next_motif.gif" ALT="Next"></A>
<HR>

<H2> Presentation of Part IV</H2>The fourth part introduces the concepts of parallel programming and
presents models for shared and distributed memory. It is not necessary
to have access to a parallel supercomputer to express concurrent algorithms
or to implement distributed applications. In this preamble we define 
the different terms used in the following chapters.<BR>
<BR>
In sequential programming, one instruction is executed after another:
this is called causal dependency. Sequential programs have
the property of being deterministic: for a given input, a program
either always terminates or never terminates, and when it terminates,
it always produces the same result. Determinism means that
the same circumstances always lead to the same effects.
The only exceptions, which we have already met in Objective CAML,
are functions taking external information as input,
such as the function <TT>Sys.time</TT>.<BR>
<BR>
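A minimal illustration of such an exception: two successive calls to <TT>Sys.time</TT> may return different values, since the result depends on the processor time consumed so far rather than on any argument.

```ocaml
(* Sys.time returns the processor time (in seconds) used so far by
   the program: its result depends on external information, so the
   values printed below vary from one run to another. *)
let () =
  let t1 = Sys.time () in
  ignore (Array.init 100_000 (fun i -> i * i));  (* consume some time *)
  let t2 = Sys.time () in
  Printf.printf "t1 = %f, t2 = %f\n" t1 t2;
  assert (t2 >= t1)  (* time never decreases, but exact values vary *)
```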
In parallel programming, a program is split into several active processes.
Each process is sequential, but several instructions, belonging to
different processes, are executed in parallel, ``at the same time.''
Sequence is replaced by concurrency. This is called
causal independence. The same circumstances may lead to different,
mutually exclusive effects (only one of them is produced). An immediate
consequence is the loss of determinism: the same program with
the same input may or may not terminate and, in the case of
termination, may produce different results.<BR>
<BR>
In order to control the execution of a parallel program, it is necessary
to introduce two new notions:
<UL>
<LI>
  synchronization, which makes several processes wait for
a condition;
 
<LI> communication, through the passing of messages between processes.
</UL>
From the point of view of causality, synchronization ensures that
several independent circumstances must all occur before an
effect can take place. Communication obeys a temporal constraint:
a message cannot be received before it is sent. Communication
comes in several variants: directed from
one process to another (point-to-point) or as a distribution
(one-to-all, or all-to-all).<BR>
<BR>
The two models of parallel programming described by figure
<A HREF="book-ora162.html#fig-mod-mem">17.13</A> differ in their execution control,
that is, in their forms of synchronization and communication.
<BLOCKQUOTE><DIV ALIGN=center><HR WIDTH="80%" SIZE=2></DIV>
<DIV ALIGN=center>
<IMG SRC="book-ora078.gif"> 
</DIV>
<BR>
<DIV ALIGN=center>Figure 17.13: Models of parallelism</DIV><BR>

<A NAME="fig-mod-mem"></A>
<DIV ALIGN=center><HR WIDTH="80%" SIZE=2></DIV></BLOCKQUOTE>Each <I>P</I><SUB><FONT SIZE=2><I>i</I></FONT></SUB> is a sequential process. The set of
these processes, interacting via a shared memory (<B>M</B>) or via a
medium, constitutes a parallel application.<BR>
<BR>

<H5> Shared Memory Model</H5>Communication is implicit in the shared memory model.
A piece of information is passed by writing it into a zone of
the shared memory; it is received when another process
reads this zone. Synchronization, in contrast, has
to be explicit, using constructs for mutual exclusion and
condition waiting.<BR>
<BR>
This model is used when shared resources are accessed concurrently.
The construction of operating systems can be cited as an example.<BR>
<BR>
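As a minimal sketch of this model (assuming the <TT>Thread</TT> and <TT>Mutex</TT> modules presented in chapter 19), two threads communicate implicitly through a shared counter while synchronizing explicitly with a mutual exclusion lock:

```ocaml
(* Shared memory model: communication is implicit (both threads read
   and write the shared reference [counter]); synchronization is
   explicit (the mutual exclusion lock [m]). *)
let counter = ref 0
let m = Mutex.create ()

let add_n n =
  for _i = 1 to n do
    Mutex.lock m;        (* enter the critical section *)
    incr counter;
    Mutex.unlock m       (* leave the critical section *)
  done

let () =
  let t1 = Thread.create add_n 1000
  and t2 = Thread.create add_n 1000 in
  Thread.join t1; Thread.join t2;
  Printf.printf "counter = %d\n" !counter   (* prints: counter = 2000 *)
```

Without the lock, the two unprotected read-modify-write sequences could interleave and lose increments; with it, the final value is always 2000.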

<H5> Distributed Memory Model</H5>In this model each sequential process <I>P</I><SUB><FONT SIZE=2><I>i</I></FONT></SUB> has a private memory
<I>M</I><SUB><FONT SIZE=2><I>i</I></FONT></SUB>, to which no other process has access. The processes have
to communicate through a medium
in order to transmit information. The difficulties
in this model arise from the implementation of the medium. The
programs in charge of it are called <EM>protocols</EM>.<BR>
<BR>
Protocols are organized in layers. The higher-level protocols
implement more elaborate services, using the lower-level services.<BR>
<BR>
There are several types of communication. They depend on the capability
of the medium to store information and on the blocking or
non-blocking character of sender and receiver. We speak of
synchronous communication when the transfer of information is
not possible before a global synchronization between sender and
receiver takes place. In this case both sender and receiver may be blocked.<BR>
<BR>
If the medium has storage capabilities, it can keep
messages for later transmission. The
communication can then be asynchronous and non-blocking. It may be
necessary to specify the storage capacity of the medium,
the order of transmission, the delay, and the reliability
of transmissions.<BR>
<BR>
Finally, if the transmission is non-blocking and the medium is
not able to store messages, communication is volatile:
only the receiving processes that are ready receive
the sent message, which is lost for the other processes.<BR>
<BR>
In the distributed memory model, communication is explicit,
but synchronization is implicit (synchronization is
produced by communication). It is the dual of the shared memory model.<BR>
<BR>

<H5> Physical and Logical Parallelism</H5>The distributed memory model is valid for both
physical and logical parallelism. Physical parallelism refers,
for example, to a computer network. Examples of logical parallelism
are Unix processes communicating via pipes, or lightweight
processes communicating via channels. There are no common global
values known to all processes, such as a global clock.<BR>
<BR>
The distributed memory model is closer to physical parallelism,
where there is no effectively shared memory. Nevertheless,
shared memory can be simulated across a computer network.<BR>
<BR>
The fourth part shows how to construct parallel applications
with Objective CAML using the two models presented. It relies on the
<TT>Unix</TT> library, which interfaces Unix system calls to
Objective CAML, and on the <TT>Thread</TT> library, which implements
lightweight processes. A major part of the <TT>Unix</TT> library is ported
to Windows, especially the functions on file
descriptors. These are used to read from and write to files,
but also for communication pipes and for network sockets.<BR>
<BR>
Chapter <A HREF="index.html#chap-PS">18</A> describes the essential concepts of the <TT>Unix</TT>
library. It concentrates on the communication of a process with its
environment and with other processes. The notion of process in this chapter
is that of a ``heavyweight process,'' as in Unix. Such processes are created
by the <TT>fork</TT> system call, which duplicates the execution context
and the data memory, producing a chain of parent and child processes.
The interaction between processes is implemented by signals or by
communication pipes.<BR>
<BR>
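To make this concrete, here is a minimal sketch (using functions of the <TT>Unix</TT> library detailed in chapter 18) in which a child process created by <TT>fork</TT> sends a line of text to its parent through a communication pipe:

```ocaml
(* A child created by Unix.fork inherits a copy of the parent's data;
   the two heavyweight processes then exchange bytes through a pipe. *)
let () =
  let fd_in, fd_out = Unix.pipe () in
  match Unix.fork () with
  | 0 ->
      (* child process: write a message and exit *)
      Unix.close fd_in;
      let msg = "hello from the child\n" in
      ignore (Unix.write_substring fd_out msg 0 (String.length msg));
      exit 0
  | pid ->
      (* parent process: read the child's message, then wait for it *)
      Unix.close fd_out;
      let buf = Bytes.create 64 in
      let n = Unix.read fd_in buf 0 64 in
      print_string (Bytes.sub_string buf 0 n);
      ignore (Unix.waitpid [] pid)
```

The program must be linked with the <TT>Unix</TT> library; on Windows only part of this interface is available.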
Chapter <A HREF="index.html#chap-PC">19</A> concentrates on the notion of lightweight processes
provided by the <TT>Thread</TT> library. In contrast to the heavyweight processes
mentioned before, they duplicate nothing but the execution context
of an existing process; the memory is shared between the creator
and the thread. Depending on the programming style,
Objective CAML's lightweight processes support either the shared
memory model of parallelism (imperative style) or the distributed
memory model (purely functional style). The <TT>Thread</TT> library contains
several modules for starting and stopping threads,
managing locks for mutual exclusion, waiting for a condition,
and communicating between threads via channels. In this model
there is no gain in execution time, not even on multi-processor machines,
but the formulation of parallel algorithms is made easier.<BR>
<BR>
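For instance, communication between threads over a typed channel of the <TT>Event</TT> module can be sketched as follows; <TT>Event.sync</TT> blocks until sender and receiver meet, so synchronization is obtained through communication:

```ocaml
(* Message passing between two lightweight processes: the producer
   sends three integers on a channel; each send blocks until the main
   thread performs the matching receive. *)
let c : int Event.channel = Event.new_channel ()

let producer () =
  for i = 1 to 3 do
    Event.sync (Event.send c i)      (* rendezvous with the receiver *)
  done

let () =
  let t = Thread.create producer () in
  for _i = 1 to 3 do
    Printf.printf "received %d\n" (Event.sync (Event.receive c))
  done;
  Thread.join t
(* prints: received 1, received 2, received 3 *)
```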
Chapter <A HREF="index.html#chap-PD">20</A> is devoted to the construction of distributed
Internet applications. The Internet is presented from the point
of view of its low-level protocols. With the help of communication sockets,
several processes running on different machines are able to communicate
with each other. Communication through sockets is asynchronous
point-to-point communication. The roles of the processes taking
part in the communication of a distributed application are in general
asymmetric. This is the case for client-server architectures:
the server is a process that accepts requests and tries to respond to them,
while the client sends a request to the server and
waits for a response. Many services accessible on the Internet
follow this architecture.<BR>
<BR>
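A client-server exchange can be sketched in a single program, with a server thread accepting one connection over a socket and the main thread playing the client; the port number below is an arbitrary choice for the example:

```ocaml
(* Minimal client-server exchange: a server thread accepts one
   connection and sends a greeting; the main thread connects as the
   client and reads it.  Port 9099 is a hypothetical choice. *)
let port = 9099

let server () =
  let sock = Unix.socket Unix.PF_INET Unix.SOCK_STREAM 0 in
  Unix.setsockopt sock Unix.SO_REUSEADDR true;
  Unix.bind sock (Unix.ADDR_INET (Unix.inet_addr_loopback, port));
  Unix.listen sock 1;
  let fd, _ = Unix.accept sock in          (* wait for the client *)
  let msg = "hello client\n" in
  ignore (Unix.write_substring fd msg 0 (String.length msg));
  Unix.close fd; Unix.close sock

let () =
  let t = Thread.create server () in
  Thread.delay 0.1;                        (* let the server start *)
  let addr = Unix.ADDR_INET (Unix.inet_addr_loopback, port) in
  let ic, _oc = Unix.open_connection addr in
  print_endline (input_line ic);           (* prints: hello client *)
  Thread.join t
```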
Chapter <A HREF="index.html#chap-PART4-Applications">21</A> presents a library and two
complete applications. The library makes it possible to define the communication
between clients and servers starting from a given protocol.
The first application revisits the robots of chapter
<A HREF="index.html#chap-PART3-Applications">17</A> to give a distributed version.
The second application constructs an HTTP server managing a
request form, taking up again the management of associations
presented in chapter <A HREF="index.html#chap-PART1-Applications">6</A>.<BR>
<BR>
<BR>
<BR>
<HR>
<A HREF="book-ora161.html"><IMG SRC ="previous_motif.gif" ALT="Previous"></A>
<A HREF="index.html"><IMG SRC ="contents_motif.gif" ALT="Contents"></A>
<A HREF="book-ora163.html"><IMG SRC ="next_motif.gif" ALT="Next"></A>
</BODY>
</HTML>
