%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                                  %
% Title   : m_c.tex                %
% Subject : Maintenance manual of  %
%           Scotch                 %
%           Code explanations      %
% Author  : Francois Pellegrini    %
%                                  %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Code explanations}
\label{sec-code}

This section explains some of the most complex algorithms implemented
in \scotch\ and \ptscotch.

\subsection{\texttt{dgraphCoarsenBuild()}}

The \texttt{dgraphCoarsenBuild()} routine creates a coarse distributed
graph from a fine distributed graph, using the result of a distributed
matching. The result of the matching is available on all MPI processes
as follows:
\begin{itemize}
\iteme[\texttt{coardat.\lbt multlocnbr}]
  The number of local coarse vertices to be created.
\iteme[\texttt{coardat.\lbt multloctab}]
  The local multinode array. For each local coarse vertex to be
  created, it contains two values. The first one is always positive,
  and represents the global number of the first local fine vertex to
  be mated. The second number can be either positive or negative. If
  it is positive, it represents the global number of the second local
  fine vertex to be mated. If it is negative, its opposite, minus two,
  represents the local edge number pointing to the remote vertex to be
  mated.
\iteme[\texttt{coardat.\lbt procgsttax}]
  Array (restricted to ghost vertices only) that records, for each
  ghost fine vertex, the process on which it is located.
\end{itemize}

\subsubsection{Creating the fine-to-coarse vertex array}

In order to build the coarse graph, one should create the array that
provides the coarse global vertex number for all fine vertex ends
(local and ghost). This information will be stored in the
\texttt{coardat.\lbt coargsttax} array.

Hence, a loop on local multinode data fills
\texttt{coardat.\lbt coargsttax}. The first local multinode vertex
index is always local, by nature of the matching algorithm.
If the second vertex is local too, \texttt{coardat.\lbt coargsttax} is
filled immediately. Otherwise, a request for the global coarse vertex
number of the remote vertex is forged in the \texttt{vsnddattab}
array, indexed by the current index \texttt{coarsndidx} extracted from
the neighbor process send index table \texttt{nsndidxtab}. Each
request comprises two numbers: the global fine number of the remote
vertex for which the coarse number is sought, and the global number of
the coarse multinode vertex into which it will be merged.
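The fill loop can be sketched as follows, on assumed simplified data:
plain arrays stand in for the actual distributed graph structures, and
\texttt{edgeglbtab}, a hypothetical helper array, maps each local edge
to the global number of its end vertex.
\begin{lstlisting}
/* Sketch only: filling of coargsttax from local multinode data, and
   forging of requests for remotely mated vertices. All names and
   types are simplified assumptions, not the actual Scotch code.     */

typedef struct {
  long                multloctab[4];   /* Multinode pairs (2 cells each) */
  long                multlocnbr;      /* Number of local multinodes     */
} CoarsenMatch;

/* Returns the number of values appended to the request array. */
long
coarsenFill (
const CoarsenMatch * const  matchptr,
const long                  vertglbbas, /* Global number of first local fine vertex   */
const long * const          edgeglbtab, /* Global end vertex number, per local edge   */
const long                  coarglbbas, /* Global number of first local coarse vertex */
long * const                coargsttax, /* Fine-to-coarse array, local part           */
long * const                vsnddattab) /* Request send buffer                        */
{
  long                coarsndnbr = 0;
  long                multlocnum;

  for (multlocnum = 0; multlocnum < matchptr->multlocnbr; multlocnum ++) {
    long                coarglbnum  = coarglbbas + multlocnum;
    long                vertglbnum0 = matchptr->multloctab[2 * multlocnum];
    long                vertglbnum1 = matchptr->multloctab[2 * multlocnum + 1];

    coargsttax[vertglbnum0 - vertglbbas] = coarglbnum; /* First mate is always local */

    if (vertglbnum1 >= 0)                     /* Positive: second mate is local too  */
      coargsttax[vertglbnum1 - vertglbbas] = coarglbnum;
    else {                                    /* Negative: second mate is remote     */
      long                edgelocnum = - vertglbnum1 - 2; /* Opposite, minus two     */
      vsnddattab[coarsndnbr ++] = edgeglbtab[edgelocnum]; /* Remote fine number      */
      vsnddattab[coarsndnbr ++] = coarglbnum;             /* Coarse number to record */
    }
  }
  return (coarsndnbr);
}
\end{lstlisting}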

Then, an all-to-all-v data exchange takes place, using either the
\texttt{dgraph\lbt Coarsen\lbt Build\lbt Ptop()} or
\texttt{dgraph\lbt Coarsen\lbt Build\lbt Coll()} routines. Apart
from the type of communication they implement (either point-to-point
or collective), these routines perform the same task: they process the
pairs of values sent from the \texttt{vsnddattab} array. For each pair
(the order of processing is irrelevant), the \texttt{coargsttax} array
of the receiving process is filled in with the global multinode value
of the remotely mated vertex. Hence, at the end of this phase, all
processes have a fully valid local part of the \texttt{coargsttax}
array; no value should remain negative (as set by default). Also, the
\texttt{nrcvidxtab} array is filled in with, for each neighbor
process, the number of data items it has sent. This number is
preserved, as it will serve to determine the number of adjacency data
items to be sent back to each neighbor process.

Then, data arrays for sending edge adjacency are filled-in. The
\texttt{ercvdsptab} and \texttt{ercvcnttab} arrays, of size
\texttt{procglbnbr}, are computed according to the data stored in
\texttt{coardat.\lbt dcntglbtab}, regarding the number of vertex- and
edge-related data to exchange.

By way of a call to \texttt{dgraphHaloSync()}, the ghost data of the
\texttt{coargsttax} array are exchanged.

Then, \texttt{edgelocnbr}, an upper bound on the number of local
edges, is computed, as well as \texttt{ercvdatsiz} and
\texttt{esnddatsiz}, the edge receive and send array sizes,
respectively.

Then, all data arrays for the coarse graph are allocated, plus the
main adjacency send array \texttt{esnddattab}, its receive counterpart
\texttt{ercvdattab}, and the index send arrays \texttt{esnddsptab} and
\texttt{esndcnttab}, among others.

Then, the adjacency send arrays are filled in. This is done by
performing a loop on all processes, within which only neighbor
processes are actually considered, while index data in
\texttt{esnddsptab} and \texttt{esndcnttab} is set to $0$ for
non-neighbor processes. For each neighbor process, and for each local
vertex which was remotely mated by this neighbor process, the vertex
degree is written into the \texttt{esnddattab} array, plus optionally
its load, plus the edge data for each of its neighbor vertices: the
coarse number of its end, obtained through the \texttt{coargsttax}
array, plus optionally the edge load. At this stage, two edges linking
to the same coarse multinode are not merged together, because this
would have required a hash table on the send side. The actual merging
will be performed once, on the receive side, in the next stage of the
algorithm.
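The layout of one vertex's contribution to the adjacency send array
can be sketched as follows; this is a hypothetical, simplified packing
routine, where the flag \texttt{loadflag} stands for the presence of
vertex and edge loads.
\begin{lstlisting}
/* Sketch only: packing of the adjacency of one remotely mated vertex.
   Returns the number of values written.                              */
long
adjacencyPack (
const long                  vertdegval,  /* Degree of the vertex           */
const long                  veloval,     /* Its load                       */
const int                   loadflag,    /* Whether loads are present      */
const long * const          coarendtab,  /* Coarse number of each edge end */
const long * const          edloendtab,  /* Load of each edge              */
long * const                esnddattab)  /* Send data array (sketch)       */
{
  long                esndnbr = 0;
  long                edgenum;

  esnddattab[esndnbr ++] = vertdegval;            /* Vertex degree first     */
  if (loadflag != 0)
    esnddattab[esndnbr ++] = veloval;             /* Optionally, vertex load */
  for (edgenum = 0; edgenum < vertdegval; edgenum ++) {
    esnddattab[esndnbr ++] = coarendtab[edgenum]; /* Coarse end number       */
    if (loadflag != 0)
      esnddattab[esndnbr ++] = edloendtab[edgenum]; /* Optionally, edge load */
  }
  return (esndnbr);
}
\end{lstlisting}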

\subsection{\texttt{dgraphFold()} and \texttt{dgraphFoldDup()}}

The \texttt{dgraph\lbt Fold()} routine creates a ``folded''
distributed graph from the input distributed graph. The folded graph
is such that it spans across only one half of the processing elements
of the initial graph (either the first half, or the second half). The
purpose of this folding operation is to preserve a minimum average
number of vertices per processing element, so that communication cost
is not dominated by message start-up time. In case of an odd number
of input processing elements, the first half is always bigger than
the second.

The \texttt{dgraph\lbt Fold\lbt Dup()} routine creates two folded
graphs: one for each half. Hence, each processing element hosting the
initial graph will always participate in hosting a new graph, which
will depend on the rank of the processing element. When the MPI
implementation supports multi-threading, and multi-threading is
activated in \scotch, both folded graphs are created concurrently.

The folding routines are based on the computation of a set of
(supposedly efficient) point-to-point communications between the
\textit{sender processes}, which will not retain any graph data, and
the \textit{receiver processes}, which will host the folded
graph. However, in case of unbalanced vertex distributions, overloaded
receiver processes (called \textit{sender receiver processes}) may
also have to send their extra vertices to underloaded receiver
processes. A receiver process may receive several chunks of vertex
data (including their adjacency) from several sender processes. Hence,
folding amounts to a redistribution of vertex indices across all
receiver processes. In particular, end vertex indices have to be
renumbered according to the global order in which the chunks of data
are exchanged. This is why the computation of these exchanges, by way
of the \texttt{dgraph\lbt Fold\lbt Comm()} routine, has to be fully
deterministic and reproducible across all processing elements, to
yield consistent communication data. The result of this computation
is a list of point-to-point communications (either all sends or
receives) to be performed by the calling process, and an array of
sorted global vertex indices, associated with vertex index adjustment
values, to convert global vertex indices in the adjacency of the
initial graph into global vertex indices in the adjacency of the
folded graph. This array can be used, by way of dichotomy search, to
find the proper adjustment value for any end vertex number.
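The dichotomy search over the sorted index array can be sketched as
follows; the names \texttt{vertadjtab} and \texttt{vertdlttab} follow
the text, but the simplified signature is an assumption, not the
actual \scotch\ implementation.
\begin{lstlisting}
/* Sketch only: dichotomy search for the index adjustment to apply to
   a global end vertex number. vertadjtab is sorted in ascending
   order, and vertadjtab[0] is the smallest valid global index.      */
long
vertGlbAdjust (
const long * const          vertadjtab,  /* Sorted chunk start indices  */
const long * const          vertdlttab,  /* Per-chunk adjustment values */
const long                  vertadjnbr,  /* Number of chunks            */
const long                  vertglbnum)  /* Global index to adjust      */
{
  long                minidx = 0;
  long                maxidx = vertadjnbr - 1;

  while (minidx < maxidx) {              /* Find last chunk start <= vertglbnum */
    long                medidx = (minidx + maxidx + 1) / 2;
    if (vertadjtab[medidx] <= vertglbnum)
      minidx = medidx;
    else
      maxidx = medidx - 1;
  }
  return (vertglbnum + vertdlttab[minidx]);
}
\end{lstlisting}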

To date, the \texttt{dgraph\lbt Redist()} routine is not based on a
set of point-to-point communications, but collectives. It could well
be redesigned to re-use the mechanisms implemented here, with relevant
code factorization.

\subsubsection{\texttt{dgraphFoldComm()}}

The \texttt{dgraphFoldComm()} routine is at the heart of the folding
operation. It computes the sets of point-to-point communications
required to move vertices from the sending half of processing elements
to the receiving half, trying to balance the folded graph as much as
possible in terms of number of vertices. For receiver processes, it
also computes the data needed for the renumbering of the adjacency
arrays of the graph chunks received from sender (or sender receiver)
processes.

It is to be noted that the end user and the \scotch\ algorithms may
have divergent objectives regarding balancing: in the case of a
weighted graph representing a computation, where some vertices bear a
higher load than others, the user may want to balance the load of its
computations, even if it results in some processing elements having
fewer vertices than others, provided the sums of the loads of these
vertices are balanced across processing elements. On the contrary, the
algorithms implemented in \scotch\ operate on the vertices themselves,
irrespective of the load values that are attached to them (save for
taking them into account when computing balanced partitions). Hence,
what matters to \scotch\ is that the number of vertices is balanced
across processing elements. Whenever \scotch\ is provided with an
unbalanced graph, it will try to rebalance it in subsequent
computations (\eg, folding). However, the bulk of the work, on the
initial graph, will be unbalanced according to the user's
distribution.

During a folding onto one half of the processing elements, the
processing elements of the other half will be pure
senders, that need to dispose of all of their vertices and
adjacency. Processing elements of the first half will likely be
receivers, that will take care of the vertices sent to them by
processing elements of the other half. However, when a processing
element in the first half is overloaded, it may behave as a
sender rather than a receiver, disposing of its extra vertices by
sending them to an underloaded peer.

The essential data that is produced by the \texttt{dgraph\lbt Fold\lbt
Comm()} routine for the calling processing element is the following:
\begin{itemize}
\iteme[\texttt{commmax}]
  The maximum number of point-to-point communications that can be
  performed by any processing element. The higher this value, the
  higher the probability to spread the load of a highly overloaded
  processing element to (underloaded) receivers. In the extreme case
  where all the vertices are located on a single processing element,
  $(\mbox{\texttt{procglbnbr}} - 1)$ communications would be
  necessary. To prevent such a situation, the number of communications
  is bounded by a small number, and receiver processing elements can
  be overloaded by an incoming communication. The algorithm strives to
  provide a \textit{feasible} communication scheme, where the current
  maximum number of communications per processing element suffices to
  send the load of all sender processing elements. When the number of
  receivers is smaller than the number of senders (in practice, only
  by one, in case of folding from an odd number of processing
  elements), at least two communications have to take place on some
  receiver, to absorb the vertices sent. The initial maximum number of
  communications is defined by \texttt{DGRAPH\lbt FOLD\lbt COMM\lbt
  NBR};
\iteme[\texttt{commtypval}]
  The type of communication and processing that the processing element
  will have to perform: either as a sender, a receiver, or a sender
  receiver. Sender receivers will keep some of their vertex data, but
  have to send the rest to other receivers. Sender receivers do send
  operations only, and never receive data from a sender;
\iteme[\texttt{commdattab}]
  A set of slots, of type \texttt{Dgraph\lbt Fold\lbt Comm\lbt Data},
  that describe the point-to-point communications that the processing
  element will initiate on its side. Each slot contains the number of
  vertices to send or receive, and the target or source process index,
  respectively;
\iteme[\texttt{commvrttab}]
  A set of values associated with each slot in \texttt{comm\lbt dat\lbt
  tab}, each of which contains the global index number of the first
  vertex of the graph chunk that will be transmitted;
\iteme[\texttt{proccnttab}]
  For receiver processes only, the count array of same name of the
  folded distributed graph structure;
\iteme[\texttt{vertadjnbr}]
  For receiver processes only, the number of elements in the dichotomy
  array \texttt{vert\lbt adj\lbt tab};
\iteme[\texttt{vertadjtab}]
  A sorted array of global vertex indices. Each value represents the
  global start index of a graph chunk that will be exchanged (or
  which will remain in place on a receiver processing element);
\iteme[\texttt{vertdlttab}]
  The value which has to be added to the indices of the vertices in
  the corresponding chunk represented in \texttt{vert\lbt adj\lbt
  tab}. Together, these two arrays serve to find, by dichotomy, the
  chunk to which an end vertex belongs, and to modify its global vertex
  index in the edge array of the receiver processing element. Although
  \texttt{vert\lbt adj\lbt tab} and \texttt{vert\lbt dlt\lbt tab}
  contain strongly related information, they are separate arrays, for
  the sake of memory locality. Indeed, \texttt{vert\lbt adj\lbt tab}
  will be subject to a dichotomy search, involving many memory reads,
  before the proper index is found and a single value is retrieved
  from the \texttt{vert\lbt dlt\lbt tab} array.
\end{itemize}

The first stage of the algorithm consists in sorting a global process
load array in ascending order, in two parts: the sending half, and the
receiving half. These two sorted arrays will contain the source
information which the redistribution algorithm will use. Because the
receiver part of the sort array can be modified by the algorithm, it
is recomputed whenever \texttt{commmax} is incremented. The same holds
for \texttt{sort\lbt snd\lbt bas}, the index of the first non-empty
sender in the sort array.
\\

In a second stage, the algorithm will try to compute a valid
communication scheme for vertex redistribution, using as many as
\texttt{commmax} communications (either sends or receives) per
processing element. During this outermost loop, if a valid
communication scheme cannot be created, then \texttt{commmax} is
incremented and the communication scheme creation algorithm is
restarted. The initial value for \texttt{commmax} is
\texttt{DGRAPH\lbt FOLD\lbt COMM\lbt NBR}.

The construction of a valid communication scheme is performed within
an intermediate loop. At each step, a candidate sender process is
searched for: either a sender process which has to dispose of all of
its vertices, or an overloaded receiver process, depending on which
has the biggest number of vertices to send. If candidate senders can
no longer be found, the stage has succeeded with the current value of
\texttt{commmax}; if a candidate sender has been found but a candidate
receiver has not, the outermost loop is restarted with an incremented
\texttt{commmax} value, so as to balance loads better.

Every time a sender has been found and one or more candidate receivers
exist, an inner loop creates as many point-to-point communications as
needed to spread the vertices, in chunks, across one or more available
receivers, depending on their capacity (\ie, the number of vertices
they can accept). If the selected sender is a sender receiver, the
inner loop will try to interleave small communications from pure
senders with communications of vertex chunks from the selected
sender receiver. The purpose of this interleaving is to reduce the
number of messages per process: a big message from a sender receiver
is likely to span across several receivers, which will then perform
only a single receive communication. By interleaving a small
communication on each of the receivers involved, the latter will only
have to perform one more communication (\ie, two communications only),
and the interleaved small senders will be removed from the list,
reducing the probability that many small messages will afterwards be
sent to the same (possibly eventually underloaded) receiver.
\\

In a third stage, all the data related to chunk exchange, which was
recorded in a temporary form in the \texttt{vertadjtab},
\texttt{vertdlttab} and \texttt{slotsndtab} arrays, is compacted to
remove empty slots and to form the final \texttt{vertadjtab} and
\texttt{vertdlttab} arrays to be used for dichotomy search.
\\

The data structures that are used during the computation of
vertex global index update arrays are the following:
\begin{itemize}
\iteme[\texttt{vertadjtab} and \texttt{vertdlttab}]
  These two arrays have been presented above. They are created only
  for receiver processes, and will be filled concurrently. They are of
  size $((\mbox{\texttt{commmax}} + 1) * \mbox{\texttt{orgprocnbr}})$,
  because in case a process is a sender receiver, it has to use a
  first slot to record the vertices it will keep locally, plus
  \texttt{commmax} for outbound communications.  During the second
  stage of the algorithm, for some slot \texttt{i},
  \texttt{vertadjtab[i]} holds the start global index of the chunk of
  vertices that will be kept, sent or received, and
  \texttt{vertdlttab[i]} holds the number of vertices that will be
  sent or received.  During the third stage of the algorithm, all this
  data will be compacted, to remove empty slots. After this,
  \texttt{vertadjtab} will be an array of global indices used for
  dichotomy search in \texttt{dgraph\lbt Fold()}, and
  \texttt{vertdlttab[i]} will hold the adjustment value to apply to
  vertices whose global indices are comprised between
  \texttt{vertadjtab[i]} and \texttt{vertadjtab[i+1]}.
\iteme[\texttt{slotsndtab}]
  This array only has cells for receiver-side slots, hence a size of
  $((\mbox{\texttt{commmax}} + 1) * \mbox{\texttt{procfldnbr}})$
  items. During the second stage of the algorithm, it is filled so
  that, for any non-empty communication slot \texttt{i} in
  \texttt{vertadjtab} and \texttt{vertdlttab}, representing a receive
  operation, \texttt{slotsndtab[i]} is the slot index of the
  corresponding send operation. During the third stage of the
  algorithm, it is used to compute the accumulated vertex indices
  across processes.
\end{itemize}

Here are some examples of redistributions that are computed by the
\texttt{dgraph\lbt Fold\lbt Comm()} routine.

\begin{lstlisting}
orgvertcnttab = { 20, 20, 20, 20, 20, 20, 20, 1908 }
partval = 1
vertglbmax = 1908
Proc [0] (SND) 20 -> 0 : { [4] <- 20 }
Proc [1] (SND) 20 -> 0 : { [5] <- 20 }
Proc [2] (SND) 20 -> 0 : { [6] <- 20 }
Proc [3] (SND) 20 -> 0 : { [6] <- 20 }
Proc [4] (RCV) 20 -> 512 : { [0] -> 20 }, { [7] -> 472 }
Proc [5] (RCV) 20 -> 512 : { [1] -> 20 }, { [7] -> 472 }
Proc [6] (RCV) 20 -> 512 : { [2] -> 20 }, { [7] -> 452 }, { [3] -> 20 }
Proc [7] (RSD) 1908 -> 512 : { [4] <- 472 }, { [5] <- 472 }, { [6] <- 452 }
commmax = 4
commsum = 14
\end{lstlisting}
We can see in the listing above that some interleaving took place
on the first receiver (proc.~4) before the sender receiver (proc.~7)
did its first communication towards it.

\begin{lstlisting}
orgvertcnttab = { 0, 0, 0, 20, 40, 40, 40, 100 }
partval = 1
vertglbmax = 100
Proc [0] (SND) 0 -> 0 : 
Proc [1] (SND) 0 -> 0 : 
Proc [2] (SND) 0 -> 0 : 
Proc [3] (SND) 20 -> 0 : { [4] <- 20 }
Proc [4] (RCV) 40 -> 60 : { [3] -> 20 }
Proc [5] (RCV) 40 -> 60 : { [7] -> 20 }
Proc [6] (RCV) 40 -> 60 : { [7] -> 20 }
Proc [7] (RSD) 100 -> 60 : { [5] <- 20 }, { [6] <- 20 }
commmax = 4
commsum = 6
\end{lstlisting}
In the latter case, one can see that the pure sender that has been
interleaved (proc.~3) sufficed to fill-in the first receiver
(proc.~4), so the first communication of the sender receiver (proc.~7)
was towards the next receiver (proc.~5).

\subsection{\texttt{dmeshDgraphDual()}}

The \texttt{dmeshDgraphDual()} routine creates a dual distributed
graph of type \texttt{Dgraph} from a distributed mesh of type
\texttt{Dmesh}. It can be seen as the distributed-memory version of
the \texttt{meshGraphDual()} routine. An edge will be created between
two elements only if these elements have at least \texttt{noconbr}
nodes in common.

For the time being, the \texttt{Dmesh} data structure only stores the
adjacency from local element vertices to node vertices, using their
global, based, numbering. Consequently, building the
element-to-element connectivity operates in three phases:
firstly, to redistribute element-to-node edge information so as to
build the node-to-element adjacency of each node; secondly, to provide
relevant node adjacencies to processes requiring them (possibly
duplicating the same adjacency on multiple processes); this will
allow, in a third phase, to build the element-to-element adjacency of
each local element.

\subsubsection{Determining the node vertex range}

In a preliminary sweep over every local element-to-node edge array,
the local maximum global node index \texttt{vnodlocmax} is
computed. Then, by way of an all-reduce-max operation, the global
maximum global node index \texttt{vnodglbmax} is obtained. If the node
global indices are all used, then the global number of vertex nodes,
\texttt{vnodglbnbr}, is equal to $\mbox{\texttt{vnodglbmax}} -
\mbox{\texttt{baseval}} + 1$, as valid node vertex global indices
range from \texttt{baseval} to \texttt{vnodglbmax}, included.
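A minimal sketch of this preliminary computation, on assumed
simplified data (a flat element-to-node edge array; the intermediate
all-reduce-max operation is elided):
\begin{lstlisting}
/* Sketch only: local sweep computing the local maximum global node
   index over the element-to-node edge array.                        */
long
vnodLocMax (
const long * const          enodloctab,  /* Local element-to-node edges */
const long                  enodlocnbr)  /* Number of local edges       */
{
  long                vnodlocmax = -1;
  long                enodlocnum;

  for (enodlocnum = 0; enodlocnum < enodlocnbr; enodlocnum ++)
    if (enodloctab[enodlocnum] > vnodlocmax)
      vnodlocmax = enodloctab[enodlocnum];
  return (vnodlocmax);
}

/* After an (elided) all-reduce-max yielding vnodglbmax, the global
   number of node vertices follows from the valid index range.       */
long
vnodGlbNbr (
const long                  vnodglbmax,
const long                  baseval)
{
  return (vnodglbmax - baseval + 1);
}
\end{lstlisting}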

In debug mode, the local minimum global node index \texttt{vnodlocmin} is
also computed, and all-reduced-min into \texttt{vnodglbmin}, which
should be equal to \texttt{baseval}.

Knowing the global node vertex index range is necessary to evenly
distribute node vertex data across all processes, assuming node
vertices will have an equivalent number of neighbors overall. The
absence of some node vertex indices in this range will not break the
algorithm (isolated node vertices will be created in the first phase,
which will not be propagated anywhere in the second phase), but may
cause load imbalance when handling the node vertices on each process.

\subsubsection{Creating node adjacencies}

In order to build node vertex adjacencies across all processes, some
all-to-all communication must take place, to send
element-to-node edge data to the processes that will host the given
node vertices, turning the gathered data into node-to-element
data. All-to-all communication of edges will be controlled by four
arrays of \texttt{int}'s, of size \texttt{procglbnbr} each:
\texttt{esnd\lbt cnt\lbt tab}, the edge send count array;
\texttt{esnd\lbt dsp\lbt tab}, the edge send displacement array;
\texttt{ercv\lbt cnt\lbt tab}, the edge receive count array; and
\texttt{ercv\lbt dsp\lbt tab}, the edge receive displacement array.
The edge data to be sent will be placed into \texttt{esnddattab}, the
edge send data array, while the received edge data will be available
in \texttt{ercvdattab}, the edge receive data array.

In order to determine how many edges have to be sent to each process,
per-process singly linked lists are built, by way of two arrays:
\texttt{prfr\lbt loc\lbt tab} (``\mbox{(per-)}process first (index),
local array''), of size \texttt{proc\lbt glb\lbt nbr} since there must
be as many lists as there are destination processes, and
\texttt{eene\lbt loc\lbt tax} (``element edge next (index), local
based array''), of size $(2 * \texttt{eelm\lbt loc\lbt nbr})$ since
each of the local element-to-node edges has to be chained to (only)
one list, to be sent to the relevant process, and each chaining
requires two data items: the global element number (which could not
otherwise be retrieved in $O(1)$ time), and the edge index of the next
edge in the chaining (which will be the sentinel value \texttt{-1} at
the end of the list).

All the cells of \texttt{prfr\lbt loc\lbt tab} are initialized with
\texttt{-1}, the end-of-list sentinel, and all cells of
\texttt{esnd\lbt cnt\lbt tab} are initialized to $0$, as this array
will be used to count the number of edges to send to each process.

Then, the adjacencies of all local element vertices are traversed. For
each element-to-node edge of index $e$, the index $p$ of the process
which will hold the node vertex is computed in $O(1)$ time, using the
\texttt{dmesh\lbo Dgraph\lbt Dual\lbt Proc\lbo Num\,()} routine.
The edge data is then chained at the head of the linked list for this
process: $\texttt{prfr\lbt loc\lbt tab[}p\texttt{]}$ stores the index
of the edge, while $\texttt{eene\lbt loc\lbt tax[}2 * e\texttt{]}$
stores the element global index, and $\texttt{eene\lbt loc\lbt tax[}2
* e + 1 \texttt{]}$ receives the old value of
$\texttt{prfr\lbt loc\lbt tab[}p\texttt{]}$, to maintain the forward
chaining. Also, $\texttt{esnd\lbt cnt\lbt tab[}p\texttt{]}$ is
increased by $2$, since two more data items will be sent to $p$ in the
upcoming all-to-all exchange.
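The head insertion into the per-process lists can be sketched as
follows, on assumed simplified, un-based (\texttt{tab} instead of
\texttt{tax}) arrays:
\begin{lstlisting}
/* Sketch only: chaining of element-to-node edge e, destined to
   process p, at the head of the per-process linked list.            */
void
edgeListChain (
const long                  p,           /* Destination process index          */
const long                  e,           /* Local element-to-node edge index   */
const long                  elemglbnum,  /* Global number of the element       */
long * const                prfrloctab,  /* List head per process; -1 = empty  */
long * const                eeneloctab,  /* 2 cells per edge: element, next    */
int * const                 esndcnttab)  /* Number of data to send per process */
{
  eeneloctab[2 * e]     = elemglbnum;    /* Record element global number */
  eeneloctab[2 * e + 1] = prfrloctab[p]; /* Chain previous head after it */
  prfrloctab[p]         = e;             /* Edge becomes the new head    */
  esndcnttab[p]        += 2;             /* Two more data to send to p   */
}
\end{lstlisting}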

Then, the contents of \texttt{esnd\lbt cnt\lbt tab} are all-to-all
exchanged to fill-in \texttt{ercv\lbt cnt\lbt tab}, which indicates
the amount of edge data to be received from each process; the sum of
its cells gives \texttt{ercv\lbt dat\lbt siz}, which amounts to twice
the number of local node-to-element edges to be created. The
\texttt{vnod\lbt loc\lbt tax} and \texttt{enod\lbt loc\lbt tax} arrays
can then be allocated, to hold the node vertex indices and edge
adjacency, respectively. Then, from \texttt{esnd\lbt cnt\lbt tab} and
\texttt{ercv\lbt cnt\lbt tab} are derived the displacement arrays
\texttt{esnd\lbt dsp\lbt tab} and
\texttt{ercv\lbt dsp\lbt tab}, respectively. Then, the
\texttt{esnd\lbt dat\lbt tab} and
\texttt{ercv\lbt dat\lbt tab} temporary arrays can be allocated,
after those that will remain in memory longer.

Then, the per-process linked lists are traversed, and the
element-to-node edge data, now turned into node-to-element edge data,
is copied into the \texttt{esnd\lbt dat\lbt tab} array, after which an
all-to-allv data exchange makes it available in the
\texttt{ercv\lbt dat\lbt tab} array of each process.

Then, the received edge array is traversed, to count in
\texttt{vnod\lbt loc\lbt tax} the number of edges per node
vertex. Once this counting is done, the
\texttt{vnod\lbt loc\lbt tax} array is turned into a displacement
array, which will be used to place node-to-element edges at their
proper place in \texttt{enod\lbt loc\lbt tax}. After this, the
received edge array is traversed again to record the node-to-element
edges in
\texttt{enod\lbt loc\lbt tax}, and the contents of
\texttt{vnod\lbt loc\lbt tax} are restored.
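This count / displacement / place pattern can be sketched as follows,
on assumed simplified, un-based arrays, with the received data taken
as (node, element) pairs:
\begin{lstlisting}
/* Sketch only: building the compact node-to-element adjacency from
   received (node, element) pairs, then restoring the displacements. */
void
nodeAdjBuild (
const long * const          ercvdattab,  /* Received (node, element) pairs   */
const long                  ercvdatsiz,  /* Number of received values        */
const long                  vnodlocnbr,  /* Number of local node vertices    */
long * const                vnodloctab,  /* Size vnodlocnbr + 1, zero-filled */
long * const                enodloctab)  /* Size ercvdatsiz / 2              */
{
  long                i;
  long                vnodlocnum;
  long                edgelocsum = 0;

  for (i = 0; i < ercvdatsiz; i += 2)     /* Count edges per node            */
    vnodloctab[ercvdattab[i]] ++;
  for (vnodlocnum = 0; vnodlocnum < vnodlocnbr; vnodlocnum ++) { /* Turn counts into displacements */
    long                edgelocnbr = vnodloctab[vnodlocnum];
    vnodloctab[vnodlocnum] = edgelocsum;
    edgelocsum += edgelocnbr;
  }
  vnodloctab[vnodlocnbr] = edgelocsum;
  for (i = 0; i < ercvdatsiz; i += 2)     /* Place node-to-element edges     */
    enodloctab[vnodloctab[ercvdattab[i]] ++] = ercvdattab[i + 1];
  for (vnodlocnum = vnodlocnbr; vnodlocnum > 0; vnodlocnum --) /* Restore displacements */
    vnodloctab[vnodlocnum] = vnodloctab[vnodlocnum - 1];
  vnodloctab[0] = 0;
}
\end{lstlisting}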

\subsubsection{Making node adjacencies available to concerned elements}

To create element-to-element adjacencies from element-to-node
adjacencies, the node-to-element adjacencies of all nodes used as
neighbors of some element vertex have to be copied to the process
owning this element vertex. Hence, the same node adjacency may have to
be sent to several processes at the same time. In order to determine
to which process the adjacency of some node vertex has to be sent, one
can take advantage of the order in which edge data have been received
in the \texttt{ercv\lbt dat\lbt tab} array: the adjacency of a node
has to be sent to some process $p$ if the global node index of this node
vertex appears in the sub-array of \texttt{ercv\lbt dat\lbt tab}
starting from index $\texttt{ercv\lbt dsp\lbt tab[}p\texttt{]}$ and
ending before index $\texttt{ercv\lbt dsp\lbt tab[}p+1\texttt{]}$ (or
\texttt{ercv\lbt dat\lbt siz} for the last sub-array). However, a
node vertex adjacency needs only be sent once to any process, even if
more than one of its local elements need it. To do so, a local node
vertex flag array, \texttt{vnfl\lbt loc\lbt tax}, of size
\texttt{vnod\lbt loc\lbt nbr}, will contain the most recent process
number requesting the node vertex. Hence, a node vertex adjacency will
only be copied once to the node adjacency send data array for this
process. All cells of the flag array are initially set to \texttt{-1},
an invalid process number.

In a first pass across the \texttt{ercv\lbt dat\lbt tab} array, the
number of node data to be sent to each process is computed, and stored
in the relevant cell of the \texttt{nsnd\lbt cnt\lbt tab} (``node
(data) send count'') array. For each concerned node vertex, the number
of data items to be sent is equal to two (the global number of the
node, and its degree), plus the number of element neighbors of the
node vertex. A node vertex $v$ will be accounted for, for a given
process $p$, only if $\texttt{vnfl\lbt loc\lbt tax[}v\texttt{]} < p$,
and once the node vertex is accounted for, it is flagged by setting
$\texttt{vnfl\lbt loc\lbt tax[}v\texttt{]}$ to $p$.

Then, the contents of the \texttt{nsnd\lbt cnt\lbt tab} array are
all-to-all exchanged, to produce the \texttt{nrcv\lbt cnt\lbt tab}
array. From these two can be derived the \texttt{nsnd\lbt dsp\lbt tab}
and \texttt{nrcv\lbt dsp\lbt tab} send and receive displacement arrays,
respectively, and \texttt{nsnd\lbt dat\lbt siz} and
\texttt{nrcv\lbt dat\lbt siz}, the overall number of data to be sent
and received, respectively. The two node data send and receive arrays,
\texttt{nsnd\lbt dat\lbt tab} and \texttt{nrcv\lbt dat\lbt tab}, can be
allocated with these prescribed sizes. The send array will be
allocated last, since it will be freed first, as soon as the
data exchange completes.

In a second pass across the \texttt{ercv\lbt dat\lbt tab} array, the
adjacencies of the nodes that are encountered for the first time in
this pass are copied to the \texttt{nsnd\lbt dat\lbt tab} array, one
process after the other, using the start indices contained in the
\texttt{nsnd\lbt dsp\lbt tab} array. In order not to have to reset the
flag array between the two passes, a node vertex $v$ will be accounted
for, for a given process $p$, only if
$\texttt{vnfl\lbt loc\lbt tax[}v\texttt{]} <
(\texttt{proc\lbt glb\lbt nbr} + p)$, and once the node vertex is
accounted for, it is flagged by setting
$\texttt{vnfl\lbt loc\lbt tax[}v\texttt{]}$ to
$(\texttt{proc\lbt glb\lbt nbr} + p)$.
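The flagging trick used in both passes can be sketched as a single
hypothetical helper (the actual code inlines these tests), where
\texttt{passnum} is $0$ for the counting pass and $1$ for the copying
pass:
\begin{lstlisting}
/* Sketch only: returns 1 if node vnodlocnum must be accounted for
   process p in pass passnum, and flags it; returns 0 if it has
   already been accounted for in this pass. Since processes are
   considered in ascending order, and all pass-1 flags are smaller
   than procglbnbr, no reset is needed between the two passes.       */
int
nodeFlagCheck (
long * const                vnflloctax,  /* Flag array, initialized to -1 */
const long                  vnodlocnum,  /* Local node vertex index       */
const long                  p,           /* Current process index         */
const long                  procglbnbr,  /* Number of processes           */
const int                   passnum)     /* 0: count pass; 1: copy pass   */
{
  long                flagval = p + (long) passnum * procglbnbr;

  if (vnflloctax[vnodlocnum] >= flagval) /* Already accounted for p */
    return (0);
  vnflloctax[vnodlocnum] = flagval;      /* Flag it                 */
  return (1);
}
\end{lstlisting}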

Then, an all-to-allv data exchange makes the node adjacency data
available in the \texttt{nrcv\lbt dat\lbt tab} array of each process.

It is now necessary to make node adjacency available in $O(1)$
time. This is made possible through a hash table \texttt{hnodtab} of
type \texttt{Dmesh\lbo Dgraph\lbt Dual\lbt Hash\lbo Node}, which, for
each concerned node vertex, will point to the start of this node data
(that is, the node degree and node-to-element adjacency) in the
\texttt{nrcv\lbt dat\lbt tab} array. Since this hash table will be
static (that is, read-only and of immutable size) and must contain all
the local nodes, its maximum load capacity is set to $50\,\%$
(and not $25\,\%$ as usually done in \scotch\ degree-related hash
tables). Once this hash table array is allocated, the
\texttt{nrcv\lbt dat\lbt tab} is traversed to populate it.

\subsubsection{Creating the element-to-element adjacencies}

The last phase of the algorithm is the building of element-to-element
adjacencies. This is performed through a second hash table,
\texttt{helmtab}, of type
\texttt{Dmesh\lbo Dgraph\lbt Dual\lbt Hash\lbo Edge}. Since the
maximum degree of element-to-element adjacencies cannot be known in
advance, this hash table may be resized dynamically, and will be
loaded at $25\,\%$ capacity to minimize collisions. Its functioning,
including resizing, is described in Section~\ref{sec-type-hash-table}
of this manual.

The local distributed adjacency data for the dual graph will be placed
into the \texttt{vert\lbt loc\lbt tax} and \texttt{edge\lbt loc\lbt tax}
arrays. Hence, prior to building the element-to-element adjacency,
these arrays are allocated. Since the distributed graph will be
compact, \texttt{vert\lbt loc\lbt tax} is of size
$(\texttt{velm\lbt loc\lbt nbr} + 1)$. Since the number of
edges cannot be estimated in advance, the size of the 
\texttt{edge\lbt loc\lbt tax} array, starting from a plausible size,
may have to be dynamically increased during its filling-in, each time
by $25\,\%$ more.

For each local element, the preexisting element-to-node adjacency is
traversed and, for each of the neighbor nodes, the node-to-element
adjacency is traversed in turn, being read from
\texttt{nrcv\lbt dat\lbt tab} from the index provided by
\texttt{hnodtab}.

If the neighbor element is not yet present in \texttt{helmtab} for the
current local element, it is added to the element hash table, with a
neighbor count in the hash table equal to $(\texttt{noconbr} - 1)$,
since one common node has already been found. If the neighbor element
is already present in \texttt{helmtab} for the current local element,
and its neighbor count in the hash table is strictly greater than
zero, the neighbor count is decremented. If, in any of the two above
cases, the neighbor count reaches zero, the neighbor element is added
to the adjacency list of the current element in
\texttt{edge\lbt loc\lbt tax}; this latter array is enlarged whenever
full. It will be downsized to its exact final size once all the edges
have been created.
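The countdown on common nodes can be sketched as below; for brevity,
the \texttt{helmtab} hash table is replaced by a plain counter array
indexed by neighbor element number, which is a simplifying assumption.
\begin{lstlisting}
/* Sketch only: countdown on common nodes for one current local
   element. Counter cell semantics: 0 = candidate not seen yet;
   k > 0 = k more common nodes to find; -1 = edge already created.   */
long
elemAdjBuild (
const long * const          nghbelemtab, /* Neighbor element occurrences    */
const long                  nghbelemnbr, /* Number of occurrences           */
const long                  noconbr,     /* Number of common nodes required */
long * const                counttab,    /* Counter array, zero-filled      */
long * const                edgeloctab)  /* Output adjacency                */
{
  long                edgelocnbr = 0;
  long                i;

  for (i = 0; i < nghbelemnbr; i ++) {
    long                elemnum = nghbelemtab[i];

    if (counttab[elemnum] == 0)          /* First common node found          */
      counttab[elemnum] = noconbr - 1;   /* (noconbr - 1) more still to find */
    else if (counttab[elemnum] > 0)      /* Seen, but edge not yet created   */
      counttab[elemnum] --;
    else
      continue;                          /* Edge already created             */
    if (counttab[elemnum] == 0) {        /* Enough common nodes found        */
      edgeloctab[edgelocnbr ++] = elemnum;
      counttab[elemnum] = -1;            /* Mark edge as created             */
    }
  }
  return (edgelocnbr);
}
\end{lstlisting}
Note that, in this simplified form, the current element itself would
also have to be filtered out of the occurrence stream, since it
appears in the adjacency of each of its own nodes.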

Once the \texttt{vert\lbt loc\lbt tax} and
\texttt{edge\lbt loc\lbt tax} arrays are complete, the
\texttt{dgraphBuild2\,()} routine is called, to finalize the
construction of the distributed dual graph.
