\documentclass{article}

\usepackage{color}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{amssymb}
\usepackage{latexsym}
\usepackage{amsfonts}
%\usepackage{times}
\usepackage{url}
%\usepackage{bibspacing}
%\setlength{\bibspacing}{\baselineskip}
\usepackage{hyperref}
\usepackage{xspace}
\usepackage{graphicx}
\usepackage{alltt}

 \definecolor{grey}{rgb}{0.9,0.9,0.9}



\newtheorem{thm}{Theorem}[section]
\newtheorem{lem}[thm]{Lemma}
\newtheorem{cor}[thm]{Corollary}
\newtheorem{propo}[thm]{Proposition}
\newtheorem{fct}[thm]{Fact}
\newtheorem{defn}[thm]{Definition}
\newtheorem{exmp}[thm]{Example}
\newtheorem{assm}[thm]{Assumption}
\newtheorem{clm}[thm]{Claim}
\newtheorem{techclm}[thm]{Technical Claim}
\newtheorem{rem}[thm]{Remark}
\newtheorem{cons}[thm]{Construction}

\newenvironment{theorem}{\begin{thm}\begin{rm}}%
{\end{rm}\end{thm}}
\newenvironment{lemma}{\begin{lem}\begin{rm}}%
{\end{rm}\end{lem}}
\newenvironment{corollary}{\begin{cor}\begin{rm}}%
{\end{rm}\end{cor}}
\newenvironment{proposition}{\begin{propo}\begin{rm}}%
{\end{rm}\end{propo}}
\newenvironment{fact}{\begin{fct}\begin{rm}}%
{\end{rm}\end{fct}}
\newenvironment{definition}{\begin{defn}\begin{em}}%
{\end{em}\end{defn}}
\newenvironment{example}{\begin{exmp}\begin{em}}%
{\end{em}\end{exmp}}
\newenvironment{assumption}{\begin{assm}\begin{em}}%
{\end{em}\end{assm}}
\newenvironment{claim}{\begin{clm}\begin{rm}}%
{\end{rm}\end{clm}}
\newenvironment{techclaim}{\begin{techclm}\begin{rm}}%
{\end{rm}\end{techclm}}
\newenvironment{remark}{\begin{rem}\begin{em}}%
{\end{em}\end{rem}}
\newenvironment{construction}{\begin{cons}\begin{em}}%
{\end{em}\end{cons}}



\newcommand{\Succ}{\mathsf{Succ}}
\newcommand{\Adv}{\mathbf{Adv}}
 \newcommand{\Exp}{\mathbf{Exp}}
\begin{document}

\section{Introduction}
In this first part we present a model of distributed computing for message-passing systems with no failures. We consider two main timing models, synchronous and asynchronous, and define time complexity and message complexity.\\
We then present a few simple algorithms for message-passing systems, both synchronous and asynchronous.

\subsection{System}
We represent a distributed system as an undirected graph in which each node represents a processor and an edge is present between two nodes if and only if there is a channel between the corresponding processors.\\
Formally, a system or algorithm consists of $n$ processors (nodes) $p_0, p_1, \ldots, p_{n-1}$. Each processor $p_i$ is modeled as a (possibly infinite) state machine with state set $Q_i$. Each state of processor $p_i$ contains $2d$ (where $d$ is the degree of $p_i$) special components, \textit{$outbuf_i[l]$} and \textit{$inbuf_i[l]$}, $1 \leq l \leq d$: $outbuf_i[l]$ holds the messages that $p_i$ has sent to its $l$-th neighbor that have not yet been delivered, and $inbuf_i[l]$ holds the messages that have been delivered to $p_i$ from its $l$-th neighbor but not yet processed. The state set $Q_i$ contains a subset of initial states, in which every $inbuf_i[l]$ must be empty but the $outbuf_i[l]$ need not be.
The state of $p_i$ excluding the $outbuf_i[l]$ components comprises the accessible state of $p_i$. The transition function of $p_i$ takes as input a value for the accessible state of $p_i$. It produces as output a value for the accessible state of $p_i$ in which each $inbuf_i[l]$ is empty, together with at most one message to be sent to each neighbor $l$.\\
A configuration is a vector $C=(q_0, \ldots, q_{n-1})$ where $q_i$ is a state of $p_i$.\\
For message-passing systems we consider two kinds of events: computation events $comp(i)$, representing a computation step of $p_i$ in which $p_i$'s transition function is applied to its current accessible state, and delivery events $del(i,j,m)$, representing the delivery of a message $m$ from $p_i$ to $p_j$.
The behavior of a system is modeled as an execution, which is a sequence of configurations alternating with events. This sequence must satisfy a number of conditions, depending on the system. We classify them as:
\begin{itemize}
\item a \textbf{safety condition} must hold in every finite prefix of the sequence, e.g., every step of processor $p_i$ is followed by a step of $p_j$. 
\item a \textbf{liveness condition} must hold a certain number of times, possibly infinitely often. For example, the condition ``eventually $p_1$ terminates'' requires that termination happen once.
\end{itemize}
We call any sequence satisfying all the safety conditions an \textbf{execution}. If it also satisfies all required liveness conditions, we call it \textbf{admissible}.

\subsection{Asynchronous systems}
A system is asynchronous if there is no fixed upper bound on how long it takes for a message to be delivered or how much time elapses between consecutive steps of a processor; e-mail is a familiar example.\\

In the asynchronous model an execution is admissible if each processor has an infinite number of computation events and every message sent is eventually delivered. These requirements model the fact that the processors do not fail.\\
Termination of an algorithm is captured by the transition function not changing the processor's state after a certain point.

\subsection{Synchronous System}
In this model processors execute in lockstep. The execution is partitioned into rounds, and in each round every processor can send a message to each neighbor, the messages are delivered, and every processor computes based on the messages just received.\\
An execution is admissible if it is infinite. Because of the round structure, this implies that every processor takes an infinite number of steps and every message sent is eventually delivered.\\

Note that in a synchronous system with no failures, once the algorithm is fixed, 
the only relevant aspect of executions that can differ is the initial configuration. In an 
asynchronous system, there can be many different executions of the same algorithm, 
even with the same initial configuration and no failures, because the interleaving of 
processor steps and the message delays are not fixed. 


\subsection{Complexity}
We are interested in two complexity measures for distributed algorithms: the number of messages and the amount of time they require.
To define them we need the notion of an algorithm terminating.
We assume that each processor's state set includes a subset of terminated states and each 
processor's transition function maps terminated states only to terminated states. We 
say that the system has terminated when all processors are in terminated 
states and no messages are in transit.
The message complexity of an algorithm for either a synchronous or an asynchronous message-passing system is the maximum, over all admissible executions of 
the algorithm, of the total number of messages sent. The time complexity of an algorithm for 
a synchronous message-passing system is the maximum number of rounds, in any 
admissible execution of the algorithm, until the algorithm has terminated.
Measuring time in an asynchronous system is less straightforward. A common 
approach, and the one we will adopt, is to assume that the maximum message delay in 
any execution is one unit of time and then calculate the running time until termination. 
We define the delay of a message to be the time that elapses between the 
computation event that sends the message and the computation event that processes the 
message. In other words, it consists of the amount of time that the message waits in 
the sender's outbuf together with the amount of time that the message waits in the 
recipient's inbuf. 
The time complexity of an asynchronous algorithm is the maximum time until 
termination among all timed admissible executions in which every message delay is 
at most one. 
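As a small worked example (the three-processor line here is our own illustration, not taken from the definitions above): let $p_0$, $p_1$, $p_2$ lie on a line, let $p_0$ send a message $M_1$ to $p_1$, and let $p_1$, upon processing $M_1$, send $M_2$ to $p_2$. In a timed admissible execution in which every message delay is at most one, $M_1$ is processed by time $1$ and $M_2$ by time $2$, so the time until $p_2$ receives the information is bounded by
\[
\underbrace{1}_{\text{delay of } M_1} + \underbrace{1}_{\text{delay of } M_2} = 2,
\]
which is the distance from $p_0$ to $p_2$.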
 

 

\section{Simple distributed systems}

\subsection{Broadcast}
Suppose we want to broadcast a single message through a spanning tree to all the nodes in the network. We designate a processor $p_r$ as the root of the spanning tree; it wants to send a message M to all the other processors.\\
The spanning tree rooted at $p_r$ is maintained in 
a distributed fashion: each processor has a distinguished channel that leads to its 
parent in the tree as well as a set of channels that lead to its children in the tree. 
The code of the algorithm follows:
\begin{alltt}
Algorithm 1

Initially (M) is in transit from \(p_r\) to all its children
in the spanning tree.
 
Code for \(p_r\): 
	1: upon receiving no message: // first computation event by \(p_r\) 
	2: terminate
	 
Code for \(p_i\), \(0 \leq i \leq n-1\), \(i \neq r\): 
	3: upon receiving (M) from parent: 
	4: send (M) to all children 
	5: terminate 
\end{alltt}
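The round structure can be sanity-checked with a small simulation. The following Python sketch is our own illustration (the example tree, the function name, and the counters are assumptions, not part of the algorithm's specification): it counts one message per tree edge and one synchronous round per tree level.

```python
# Round-by-round simulation of the broadcast (Algorithm 1) in the
# synchronous model. Illustrative sketch only.

def broadcast(children, root):
    """Return (messages_sent, rounds_used) for broadcast down the tree."""
    messages = 0
    rounds = 0
    frontier = [root]                  # processors holding M this round
    while frontier:
        next_frontier = []
        for p in frontier:
            for c in children.get(p, []):
                messages += 1          # M crosses one tree edge
                next_frontier.append(c)
        if next_frontier:
            rounds += 1                # one synchronous round per level
        frontier = next_frontier
    return messages, rounds

# Example: a rooted spanning tree with n = 6 nodes and depth d = 2.
tree = {0: [1, 2], 1: [3, 4], 2: [5]}
msgs, rnds = broadcast(tree, 0)        # n - 1 = 5 messages, d = 2 rounds
```

On this example the simulation reports $n-1 = 5$ messages and $d = 2$ rounds, matching the analysis below.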
Note that this algorithm is correct whether the system is synchronous or asynchronous. Furthermore, as we discuss now, the message and time complexities of the 
algorithm are the same in both models. 
What is the message complexity of the algorithm? Clearly, the message (M) is 
sent exactly once on each channel that belongs to the spanning tree (from the parent to 
the child) in both the synchronous and asynchronous cases. That is, the total number 
of messages sent during the algorithm is exactly the number of edges in the spanning 
tree rooted at $p_r$. Recall that a spanning tree of $n$ nodes has exactly $n-1$ edges; 
therefore, exactly $n-1$ messages are sent during the algorithm. \\
Let us now analyze the time complexity of the algorithm. It is easier to perform 
this analysis when communication is synchronous and time is measured in rounds. \\

\begin{lemma}
In every admissible execution of the broadcast algorithm in the synchronous model, every processor at distance $t$ from $p_r$ in the spanning tree receives the message M in round $t$. 
\end{lemma}

\begin{theorem}
There is a synchronous broadcast algorithm with message complexity 
$n-1$ and time complexity $d$, when a rooted spanning tree with depth $d$ is known in 
advance. 
\end{theorem}

A similar analysis applies when communication is asynchronous. 


%\subsection{Convergcast}
%The broadcast problem requires one-way communication, from the root, $p_r$, to all the 
%nodes of the tree. Consider now the complementary problem, called convergecast, 
%of collecting information from the nodes of the tree to the root. For simplicity, we 
%consider a specific variant of the problem in which each processor $p_i$ starts with 
%a value $x_i$ and we wish to forward the maximum value among these values to the 
%root $p_r$. 

\subsection{Flooding and building a spanning tree - BFS}
Let us now consider the slightly more complicated problem of broadcast 
without a preexisting spanning tree, starting from a distinguished processor $p_r$. First 
we consider an asynchronous system. \\
The algorithm, called flooding, starts from $p_r$, which sends the message M to all 
its neighbors. When processor $p_i$ receives M for the first time, from some neighboring processor $p_j$, it sends M to all its 
neighbors except $p_j$.\\
Clearly, a processor will not send M more than once on any communication 
channel. Thus M is sent at most twice on each communication channel (once 
by each endpoint of the channel). Therefore the total number of messages sent is at most $2m \leq n(n-1)$, where $m$ is the number of edges in the network.\\
Effectively, the flooding algorithm induces a spanning tree.\\
To explicitly construct a spanning tree, we modify the algorithm slightly, at the cost of some extra message overhead. We add $\langle parent \rangle$ messages, with which a node informs the neighbor from which it first received M (nodes that respond with this message are recorded as children, and the node responded to is recorded as the parent), and $\langle already \rangle$ messages, which indicate that the responding node is already in the tree.\\

The pseudocode of the algorithm follows:

\begin{alltt}
Algorithm 2

Initially parent = \(\bot\), children = \(\emptyset\), and other = \(\emptyset\)

1: upon receiving no message: 
2: if \(p_i = p_r\) and parent = \(\bot\) then // root has not yet sent M 
3: send M to all neighbors 
4: parent = \(p_i\) 

5: upon receiving M from neighbor \(p_j\): 
6: if parent = \(\bot\) then // \(p_i\) has not received M before 
7: parent = \(p_j\) 
8: send \(\langle parent \rangle\) to \(p_j\) 
9: send M to all neighbors except \(p_j\) 
10: else send \(\langle already \rangle\) to \(p_j\) 

11: upon receiving \(\langle parent \rangle\) from neighbor \(p_j\): 
12: add \(p_j\) to children 
13: if \(children \cup other\) contains all neighbors except parent then 
14: terminate 

15: upon receiving \(\langle already \rangle\) from neighbor \(p_j\): 
16: add \(p_j\) to other 
17: if \(children \cup other\) contains all neighbors except parent then 
18: terminate 

\end{alltt} 
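As an illustration, the following Python sketch simulates Algorithm 2 under FIFO message delivery (which mimics the synchronous case). The example graph, the function name, and the exact message accounting are our own assumptions; the point is that every node acquires a parent and the message count stays within $O(m)$.

```python
# Simulation sketch of Algorithm 2 (modified flooding). Assumes FIFO
# delivery; the graph and counters are illustrative only.
from collections import deque

def build_spanning_tree(adj, root):
    """Return (parent, total_messages) for modified flooding from root."""
    parent = {root: root}
    messages = len(adj[root])                      # initial M to all neighbors
    queue = deque((root, nb) for nb in adj[root])  # pending (sender, receiver)
    while queue:
        sender, receiver = queue.popleft()
        messages += 1                        # <parent> or <already> reply
        if receiver not in parent:           # first time receiver sees M
            parent[receiver] = sender
            for nb in adj[receiver]:
                if nb != sender:
                    queue.append((receiver, nb))
                    messages += 1            # forward M
    return parent, messages

# Example graph with n = 4 nodes and m = 5 edges.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
parent, msgs = build_spanning_tree(adj, 0)
# The resulting tree has n - 1 = 3 parent edges; msgs is at most 4m = 20.
```

Every node ends up with exactly one parent edge, so the parent pointers form a spanning tree rooted at node 0.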

\begin{lemma}
In every admissible execution in the asynchronous model, Algorithm 2 
constructs a spanning tree of the network rooted at $p_r$ (not necessarily a BFS tree, since message delays may vary). 
\end{lemma}

\begin{theorem}
There is an asynchronous (synchronous) algorithm to find a spanning tree of a network 
with m edges and diameter D, given a distinguished node, with message complexity 
$O(m)$ and time complexity $O(D)$. 
\end{theorem}

\begin{lemma}
In every admissible execution in the synchronous model, Algorithm 2 
constructs a BFS tree of the network rooted at $p_r$. 
\end{lemma}

\subsection{DFS}
Another basic algorithm constructs a depth-first search (DFS) tree of the communication network, rooted at a particular node. A DFS tree is constructed by adding one 
node at a time, more gradually than the spanning tree constructed by the previous algorithm, 
which attempts to add all the nodes at the same level of the tree concurrently. \\
The pseudocode follows:
\begin{alltt}
Algorithm 3: DFS

Initially parent = \(\bot\), children = \(\emptyset\), unexplored = all neighbors of \(p_i\) 

1: upon receiving no message: 
2: if \(p_i = p_r\) and parent = \(\bot\) then 
3: parent = \(p_i\) 
4: explore() 

5: upon receiving M from \(p_j\): 
6: if parent = \(\bot\) then 
7: parent = \(p_j\) 
8: remove \(p_j\) from unexplored 
9: explore() 
10: else 
11: send \(\langle already \rangle\) to \(p_j\) 
12: remove \(p_j\) from unexplored 

13: upon receiving \(\langle already \rangle\) from \(p_j\): 
14: explore() 

15: upon receiving \(\langle parent \rangle\) from \(p_j\): 
16: add \(p_j\) to children 
17: explore() 

18: procedure explore(): 
19: if \(unexplored \neq \emptyset\) then 
20: let \(p_k\) be a processor in unexplored 
21: remove \(p_k\) from unexplored 
22: send M to \(p_k\) 
23: else 
24: if \(parent \neq p_i\) then send \(\langle parent \rangle\) to parent 
25: terminate // DFS subtree rooted at \(p_i\) has been built 

\end{alltt}
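Since M behaves as a token that visits one node at a time, the traversal can be mimicked sequentially. The following Python sketch is our own illustration (the graph, function name, and counters are assumptions): it tracks each node's unexplored set, charges one message for every M and one for every reply, and stays within the $4m$ bound.

```python
# Sequential sketch of Algorithm 3: M acts as a token performing a
# depth-first traversal. Illustrative only.

def dfs_tree(adj, root):
    """Return (parent, total_messages) for the DFS token traversal."""
    parent = {root: root}
    unexplored = {v: list(adj[v]) for v in adj}
    messages = 0

    def explore(p):
        nonlocal messages
        while unexplored[p]:
            nb = unexplored[p].pop(0)
            messages += 1                    # M from p to nb
            if nb not in parent:
                parent[nb] = p
                unexplored[nb].remove(p)     # nb will not probe p later
                explore(nb)                  # token descends into nb's subtree
                messages += 1                # <parent> from nb back to p
            else:
                if p in unexplored[nb]:
                    unexplored[nb].remove(p) # nb will not probe p later
                messages += 1                # <already> from nb back to p

    explore(root)
    return parent, messages

# Example: a triangle with n = 3 nodes and m = 3 edges.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
parent, msgs = dfs_tree(adj, 0)              # msgs stays within the 4m bound
```

On the triangle the token traverses the path 0, 1, 2, probes the one non-tree edge, and returns, using 6 messages, well under $4m = 12$.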

\begin{lemma}
In every admissible execution in the asynchronous model, Algorithm 3 
constructs a DFS tree of the network rooted at $p_r$. 
\end{lemma}

To calculate the message complexity of the algorithm, note that each processor 
sends (M) at most once on each of its adjacent edges; also, each processor generates 
at most one message (either $\langle already \rangle$ or $\langle parent \rangle$) in response to receiving (M) on 
each of its adjacent edges. Therefore, at most $4m$ messages are sent by Algorithm 3. 
The time complexity is also $O(m)$, since the algorithm is essentially sequential: at any time only one message (M or a reply) is in transit.

\begin{theorem}
There is an asynchronous algorithm to find a depth-first search spanning tree of a network with m edges and n nodes, given a distinguished node, with 
message complexity $O(m)$ and time complexity $O(m)$. 
\end{theorem}
%\bibliographystyle{plain}
%\bibliography{crypto}
\end{document}