\documentclass[a4paper,%
11pt,%
DIV=12,
headsepline,%
headings=normal,
]{scrartcl}

\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[automark]{scrlayer-scrpage}
\usepackage{graphicx}
\usepackage{lmodern} 
\usepackage{url}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{booktabs}
\usepackage{listings}
\usepackage{subfig}
\usepackage{hyperref}
\usepackage{array,xcolor,colortbl} 

\hypersetup{
    colorlinks,
    linkcolor={red!50!black},
    citecolor={blue!50!black},
    urlcolor={blue!50!black}
} 

\lstset{
  basicstyle=\ttfamily\footnotesize,
  frame=single
}

\newcounter{curex}
\setcounter{curex}{0}
\newcommand{\exercise}[1]{\section*{Exercise #1}\setcounter{curex}{#1}}
\newcommand{\answer}[1]{\subsection*{Answer \arabic{curex}.#1}}

\begin{document}

\noindent
\vspace*{1ex}
\begin{minipage}[t]{.45\linewidth}
\strut\vspace*{-\baselineskip}\newline
\includegraphics[height=.9cm]{./figs/Inf-Logo_black_en-eps-converted-to.pdf}
\includegraphics[height=.9cm]{./figs/par-logo}
\end{minipage}
\hfill
\begin{minipage}[t]{.5\linewidth}
\flushright{
Research Group for Parallel Computing\\%
Faculty of Informatics\\%
TU Wien}
\end{minipage}
\vspace*{1ex}

\hrule 

\vspace*{2ex}

\begin{center}
{\LARGE\textbf{Parallel Computing}}\\
{\large{}%
  2022S\\
  Exercise Sheet 1\\
}
\end{center}

\hrule 
\vspace*{1ex}

\noindent
1: First name, Last name, matriculation number\\
2: First name, Last name, matriculation number\\
3: First name, Last name, matriculation number

\vspace*{1ex}
\hrule 

\exercise{1}

\begin{lstlisting}
void mv(int m, int n, double M[m][n], double V[n], double W[m])
{
  int i, j;

  for (i = 0; i < m; i++) {
    W[i] = 0.0;
    for (j = 0; j < n; j++) {
      W[i] += M[i][j] * V[j];
    }
  }
}
\end{lstlisting}

\answer{1}

\begin{lstlisting}
void mv(int m, int n, double M[m][n], double V[n], double W[m])
{
  int i, j;

  par (i = 0; i < m; i++) {
    W[i] = 0.0;
    for (j = 0; j < n; j++) {
      W[i] += M[i][j] * V[j];
    }
  }
}
\end{lstlisting}

With this solution, each parallel branch writes only to its own element of the output vector $W$, while concurrent read accesses are used to query data from the shared source vector $V$. A CREW PRAM is therefore required.

\answer{2}

\begin{lstlisting}
double reduce(int n, double M_i[n], double V[n], int actual);
void mv(int m, int n, double M[m][n], double V[n], double W[m])
{
  int i, j;

  par (i = 0; i < m; i++) {
    W[i] = reduce(n, M[i], V, 0);
  }
}

double reduce(int n, double* M_i, double* V, int actual) {
  // recursion tail, if indices coincide computation is trivial
  if (n == 1) {
    return M_i[0] * V[0];
  }

  // halve workload using a recursive call, with parallel for O(log n)
  // time steps
  double sum[2];
  par (int i = 0; i < 2; i++) {
    sum[i] = reduce(
            n / 2 + (i * (n % 2)),
            &M_i[(n / 2) * i],
            &V[(n / 2) * i],
            actual + (n / 2) * i
    );
  }

  return sum[0] + sum[1];
}
\end{lstlisting}

Because multiple read accesses to $V$ can happen simultaneously, but each write access to the elements of $W$ is done only once, we need a CREW PRAM in this example as well.

Each base case of the recursion in the \texttt{reduce} function, which performs the elementary multiplication in this example, is executed exactly once.

Setting up the recurrence relation for the total work of the \texttt{reduce} function yields:

$$
W_{reduce}(n) = W_{reduce}(n/2) + W_{reduce}(n/2) + O(1) = 2 W_{reduce}(n/2) + O(1) = O(n)
$$

The total work then amounts to:

$$
W_{total}(n, m) = O(m) W_{reduce}(n) = O(m) O(n) = O(n m)
$$

The resulting $O(n m)$ matches the work of the given sequential algorithm; the implementation is thus work-optimal with respect to it.
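The index arithmetic of the recursive halving can be checked with a plain sequential C version (a sketch; the two \texttt{par} branches are simply executed one after the other):

\begin{lstlisting}
// Sequential C sketch of the recursive halving from the answer above:
// the left half receives n/2 elements, the right half the remaining
// n - n/2, matching the n/2 + (i * (n % 2)) expression in the listing.
double reduce_seq(int n, const double *M_i, const double *V)
{
  if (n == 1)
    return M_i[0] * V[0];

  int h = n / 2; // left half: h elements, right half: n - h elements
  double left  = reduce_seq(h, M_i, V);
  double right = reduce_seq(n - h, &M_i[h], &V[h]);
  return left + right;
}
\end{lstlisting}

For a row $M_i = (1, 2, 3)$ and $V = (4, 5, 6)$ this computes $4 + 10 + 18 = 32$, the same dot product as the sequential loop.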

\answer{3}

Because the issue that forces us onto a CREW PRAM is the concurrent reads of $V$ caused by the outer \texttt{par} construct, one possibility is to expand $V$ into a matrix containing one copy of $V$ for each of the $m$ parallel processors.

This way, every processor has a private copy of $V$, eliminating the need for concurrent reads; an EREW PRAM then suffices.

Another possibility would be to guard the read access to $V$ with a critical section; the only such access occurs in the base case of \texttt{reduce}:

\begin{lstlisting}
return M_i[0] * V[0];
\end{lstlisting}

\exercise{2}

\begin{lstlisting}
for (i=0; i<n; i++) {
  int count = 0;
  for (j=0; j<i; j++) {
    if (a[j]<=a[i]) count++;
  }
  j++;
  for (; j<n; j++) {
    if (a[j]<a[i]) count++;
  }
  b[count] = a[i];
}
for (i=0; i<n; i++) a[i] = b[i];
\end{lstlisting}

\answer{1}

The algorithm sorts the array \texttt{a[n]} by counting, for each element \texttt{a[i]}, how many other elements \texttt{a[j]} are smaller. This count is then used as the index into the destination array \texttt{b[n]}, since it is exactly the position the element must occupy in the sorted order: \texttt{b[count] = a[i]}.

In order for this sorting algorithm to be stable, the comparison must use \texttt{<=} when the compared element \texttt{a[j]} has a smaller index than \texttt{a[i]}, and \texttt{<} otherwise. The property in question is thus sorting stability: same-valued elements keep their original relative order.
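To make the stability observable, the same enumeration sort can be written over keys that carry their origin index (a self-contained C sketch; the \texttt{item} struct and \texttt{rank\_sort} name are illustrative, not part of the given code):

\begin{lstlisting}
// Enumeration ("rank") sort from above, with origin indices attached
// so that the stable placement of equal keys can be checked.
typedef struct { int key; int orig; } item;

void rank_sort(int n, const item a[], item b[])
{
  for (int i = 0; i < n; i++) {
    int count = 0;
    for (int j = 0; j < i; j++)      // earlier elements: compare <=
      if (a[j].key <= a[i].key) count++;
    for (int j = i + 1; j < n; j++)  // later elements: compare <
      if (a[j].key < a[i].key) count++;
    b[count] = a[i];
  }
}
\end{lstlisting}

Sorting $\{(2,0), (1,1), (2,2)\}$ yields $\{(1,1), (2,0), (2,2)\}$: the two keys equal to $2$ keep their original order.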

\answer{2}

Inside the outer loop, all $n - 1$ elements other than the current element \texttt{a[i]} are compared against \texttt{a[i]}, plus a constant amount of bookkeeping. The outer loop runs over all $n$ elements, so the total work is the product of the outer iteration count and the per-iteration work.

Afterwards, the destination array \texttt{b[n]} is copied in $n$ steps.

Therefore, we can describe the work of the algorithm as follows:

$$
W(n) = n ((n - 1) + O(1)) + n = O(n^2)
$$

\answer{3}

\begin{lstlisting}
void sort(int n, int a[n]) {
    int b[n];

    #pragma omp parallel for shared (a, b)
    for (int i = 0; i < n; i++) {
        int j;
        int count = 0;

        for (j = 0; j < i; j++) {
            if (a[j] <= a[i]) count++;
        }
        j++;
        for (; j < n; j++) {
            if (a[j] < a[i]) count++;
        }
        b[count] = a[i];
    }

    #pragma omp parallel for shared (a, b)
    for (int i=0; i<n; i++)
        a[i] = b[i];
}
\end{lstlisting}

\answer{4}

Assuming $n \geq p$, the iterations of the parallelizable part (the outer for loop) are distributed over $p$ threads, so each thread performs $O(\frac{n}{p})$ iterations, each of which takes $O(n)$ sequential time steps (refer to the reasoning in the previous answer). The total asymptotic parallel running time of the algorithm thus resolves to $O(\frac{n^2}{p})$.

\answer{5}

The relative speedup is given by comparing the algorithm against itself assuming $p = 1$:
$$
S_{rel}(p) = \frac{T_{par}(n, 1)}{T_{par}(n, p)} = \frac{n^2}{\frac{n^2}{p}} = p
$$

While the absolute speedup is given by comparing the algorithm against the best known sequential algorithm, which for comparison-based sorting runs in $O(n \log n)$:

$$
S_{abs}(p) = \frac{T_{seq}(n)}{T_{par}(n, p)} = \frac{n \log n}{\frac{n^2}{p}} = \frac{p}{n} \log n
$$

\exercise{3}

\answer{1}

\texttt{a} stores which thread (by id) executed which of the $n$ loop iterations. \texttt{t} counts, per thread (with the thread id as index), how many of the $n$ iterations were executed by that thread.
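A hypothetical reconstruction of the driver loop (the actual program was given with the exercise) shows how \texttt{a} and \texttt{t} could be filled; $N = 19$ iterations and $P = 6$ threads match the tables below:

\begin{lstlisting}
#include <omp.h>

#define N 19
#define P 6

int a[N];  // a[i] = id of the thread that executed iteration i
int t[P];  // t[id] = number of iterations executed by thread id

void record(void)
{
  #pragma omp parallel for schedule(runtime) num_threads(P)
  for (int i = 0; i < N; i++) {
    int id = omp_get_thread_num();
    a[i] = id;
    // each thread only touches its own slot, so atomic is not strictly
    // required; it merely keeps the sketch safe under any schedule
    #pragma omp atomic
    t[id]++;
  }
}
\end{lstlisting}

Setting \texttt{OMP\_SCHEDULE} to \texttt{static,2}, \texttt{dynamic,1}, and so on then reproduces the rows of the tables below (up to the inherent nondeterminism of \texttt{dynamic} and \texttt{guided}).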

\answer{2}

\begin{tabular}{rrrrrrrrrrrrrrrrrrrr}
  \toprule
 case /  $i$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18  \\
  \midrule
\texttt{static} & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 2 & 2 & 2 & 3 & 3 & 3 & 4 & 4 & 4 & 5 & 5 & 5 \\
\texttt{static,2} & 0 & 0 & 1 & 1 & 2 & 2 & 3 & 3 & 4 & 4 & 5 & 5 & 0 & 0 & 1 & 1 & 2 & 2 & 3 \\
\texttt{static,5} & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 2 & 2 & 2 & 2 & 2 & 3 & 3 & 3 & 3 \\
\texttt{static,6} & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 2 & 2 & 2 & 2 & 2 & 2 & 3 \\
\texttt{dynamic,1} & 2 & 5 & 0 & 1 & 4 & 3 & 0 & 5 & 2 & 3 & 1 & 4 & 0 & 2 & 5 & 0 & 4 & 1 & 2 \\
\texttt{dynamic,2} & 2 & 2 & 4 & 4 & 3 & 3 & 1 & 1 & 5 & 5 & 0 & 0 & 2 & 2 & 0 & 0 & 2 & 2 & 3 \\
\texttt{guided,3} & 1 & 1 & 1 & 1 & 3 & 3 & 3 & 0 & 0 & 0 & 4 & 4 & 4 & 5 & 5 & 5 & 2 & 2 & 2 \\
\bottomrule
\end{tabular}\\

\begin{tabular}{rrrrrrr}
  \toprule
case / $t$             & $t[0]$ & $t[1]$ & $t[2]$ & $t[3]$ & $t[4]$ & $t[5]$ \\  
  \midrule
\texttt{static} & 4 & 3 & 3 & 3 & 3 & 3 \\
\texttt{static,2} & 4 & 4 & 4 & 3 & 2 & 2 \\
\texttt{static,5} & 5 & 5 & 5 & 4 & 0 & 0 \\
\texttt{static,6} & 6 & 6 & 6 & 1 & 0 & 0 \\
\texttt{dynamic,1} & 4 & 3 & 4 & 2 & 3 & 3 \\
\texttt{dynamic,2} & 4 & 2 & 6 & 3 & 2 & 2 \\
\texttt{guided,3} & 3 & 4 & 3 & 3 & 3 & 3 \\
\bottomrule
\end{tabular}

\answer{3}

A common problem with parallelization that can also occur here is false sharing: several elements of \texttt{t} can lie within the same cache block, forcing the cores to invoke the cache-coherence protocol on every update and thus incurring a performance penalty. The solution is to lay the counters out in memory so that no two of them share a cache line.
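One way to implement such a layout is to pad each counter to a full cache line (a sketch; the 64-byte line size is an assumption about the hardware):

\begin{lstlisting}
#include <stdalign.h>

// Assumed cache-line size; the real value is hardware-dependent.
#define CACHE_LINE 64

// Each counter occupies (and is aligned to) a full cache line, so
// updates by different threads never invalidate each other's lines.
typedef struct {
  alignas(CACHE_LINE) int value;
  char pad[CACHE_LINE - sizeof(int)];
} padded_counter;

padded_counter t_padded[6]; // one counter per thread, as in the exercise
\end{lstlisting}

With this layout, an increment of \texttt{t\_padded[id].value} by one thread no longer forces coherence traffic for the counters of the other threads.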

\exercise{4}

\begin{lstlisting}
i = 0; j = 0; k = 0; 
while (i<n&&j<m) {
   C[k++] = (A[i]<=B[j]) ? A[i++] : B[j++]; 
}
while (i<n) C[k++] = A[i++]; 
while (j<m) C[k++] = B[j++];
\end{lstlisting}

\answer{1}

For simplicity of argument, we assume that \texttt{A[i]} is loaded first when the comparison \texttt{A[i] <= B[j]} is made. In the first iteration (when determining \texttt{C[0]}), the cache line holding \texttt{A[0..15]} is loaded into cache line $L_0$ and \texttt{A[0]} is moved into a register. Next, the cache line holding \texttt{B[0..15]} is loaded into $L_0$, evicting the previous contents, and \texttt{B[0]} is moved into a register.

For the best possible performance, we want to minimize the number of cache evictions. This can be achieved by keeping \texttt{B[0..15]} in the cache: if all elements of \texttt{B[0..15]} are smaller than all elements of \texttt{A[0..15]}, only values \texttt{B[j]} are consumed, and the cache line $L_0$ remains resident until all of \texttt{B[0..15]} have been copied.

Because the cache initially holds neither \texttt{A} nor \texttt{B}, loading both arrays entirely causes $\lceil\frac{n}{16}\rceil$ compulsory misses each, $2\lceil\frac{n}{16}\rceil$ in total. Our constructed best case adds exactly one further miss: the initially loaded cache line $L_0$ is evicted once so that \texttt{B[0..15]} can be loaded, and must be fetched again.

The total number of misses therefore amounts to:

$$
\text{misses} = 2\left\lceil\frac{n}{16}\right\rceil + 1
$$
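The best-case formula can be packaged as a small helper (a sketch; the block size of 16 doubles is taken from the exercise):

\begin{lstlisting}
// Best-case miss count from above: one compulsory sweep over each
// array in blocks of 16 doubles, plus the single eviction of L_0.
long best_case_misses(long n)
{
  long blocks = (n + 15) / 16;  // ceil(n / 16)
  return 2 * blocks + 1;
}
\end{lstlisting}

For example, $n = 32$ gives $2 \cdot 2 + 1 = 5$ misses.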

\answer{2}

For this example we introduce some notation to illustrate the worst case. \texttt{cl array[start-end]} denotes that the cache line holding the given element range has to be loaded, \texttt{rs array[index]} denotes which array element is then read into a register, and \texttt{st array[index]} denotes that the element is stored into the array \texttt{C}.

\begin{lstlisting}
cl A[0-15] rs A[0]
cl B[0-15] rs B[0]
st A[0]
cl A[0-15] rs A[1]
st B[0]
cl B[0-15] rs B[1]
st A[1]
cl A[0-15] rs A[2]
st B[1]
cl B[0-15] rs B[2]
st B[2]
...
\end{lstlisting}

The trace illustrates that for each element copied into \texttt{C}, a cache miss occurs because the required cache line has been evicted in the meantime. On top of that, one additional compulsory miss occurs at the very beginning when \texttt{A[0]} and \texttt{B[0]} are loaded for the first time.

The total number of misses therefore amounts to:

$$
\text{misses} = 2n + 1
$$

This worst case is achieved, for example, when \texttt{A} and \texttt{B} contain equal values at every index, so that the merge alternates between the two arrays on every step.

\exercise{5}

\answer{1}

Our solution to this problem is:

\begin{lstlisting}
int rank(double x, double X[], int n) {
  int L = 0;
  int R = n;
  while (L < R) {
    int i = (L + R) / 2;
    if (X[i] < x) {
      L = i + 1;
    } else {
      R = i;
    }
  }
  return L;
}

void merge(double a[], long n, double b[], long m, double c[]) {
    int chunk_size;
    #pragma omp parallel
    {
        #pragma omp master
        {
            chunk_size = n / omp_get_num_threads();
            if (chunk_size < 1) chunk_size = 1; // guard against p > n
        }
    }

    #pragma omp parallel for schedule(static, chunk_size)
    for (int i = 0; i < n; i++) {
      c[i + rank(a[i], b, m)] = a[i];
    }

    #pragma omp parallel for schedule(static, chunk_size)
    for (int j = 0; j < m; j++) {
      //c[j + rank_non_strict(b[j], a, n)] = b[j];
      c[j + rank(b[j], a, n)] = b[j];
    }
}
\end{lstlisting}

The sequential algorithm chosen for computing the rank is binary search, which takes at most a logarithmic number of steps to find the rank of an element in a sorted array.

Using this rank operation, we can find the correct position of an element of the first array in the target array by adding, as an offset, the number of elements of the second array that are smaller.

Because the rank operation takes a logarithmic number of steps, and because the $n$ elements of array $A$ and the $m$ elements of array $B$ are processed by $p$ threads in parallel, the asymptotic running time amounts to:

$$
O \left(\frac{n}{p} \log m + \frac{m}{p} \log n \right)
$$

\answer{2}

\begin{figure}
    % first plots
    \centering
    \subfloat[\centering runtime plot]{\includegraphics[width=.45 \textwidth]{figs/ex5/I1runtime.png}}
    \qquad
    \subfloat[\centering speedup plot]{\includegraphics[width=.45 \textwidth]{figs/ex5/I1speedup.png}}
    \caption{resulting plots of first test case}
    \label{fig:figs-ex5-1}

    % second plots
    \subfloat[\centering runtime plot]{\includegraphics[width=.45 \textwidth]{figs/ex5/I2runtime.png}}
    \qquad
    \subfloat[\centering speedup plot]{\includegraphics[width=.45 \textwidth]{figs/ex5/I2speedup.png}}
    \caption{resulting plots of second test case}
    \label{fig:figs-ex5-2}

    % third plots
    \subfloat[\centering runtime plot]{\includegraphics[width=.45 \textwidth]{figs/ex5/I3runtime.png}}
    \qquad
    \subfloat[\centering speedup plot]{\includegraphics[width=.45 \textwidth]{figs/ex5/I3speedup.png}}
    \caption{resulting plots of third test case}
    \label{fig:figs-ex5-3}
\end{figure}

The various running times and speedup gains can be examined in figures \ref{fig:figs-ex5-1}, \ref{fig:figs-ex5-2} and \ref{fig:figs-ex5-3}. As can be seen, with increasing problem sizes, similar runtimes can be maintained by using more threads. This is most obvious in the speedup plots, which are close to linear and of similar magnitude for all problem sizes.

\answer{3}

Because the rank operation used in our solution checks whether the current element of the first array is strictly smaller than the current element of the second array, equal values are ranked identically in both directions, so two elements would be written to the same position of the target array. If the elements of exactly one of the two arrays are instead ranked non-strictly (comparing with less-than-or-equal), the equal elements of that array are placed one position higher than they otherwise would be, avoiding the collision.

Thus the solution is to add a flag, or a second copy of the rank function, in which the comparison is non-strict, and to use that variant for all elements of exactly one of the arrays (\texttt{B}, for example).
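A sketch of such a non-strict variant (the commented-out call to \texttt{rank\_non\_strict} in the listing of Answer 5.1 hints at this name):

\begin{lstlisting}
// Variant of rank() that counts elements <= x instead of < x.
// Using it for the elements of B places an element of B that is equal
// to some element of A one position after it, resolving the collision.
int rank_non_strict(double x, double X[], int n)
{
  int L = 0;
  int R = n;
  while (L < R) {
    int i = L + (R - L) / 2;
    if (X[i] <= x)   // note: <= instead of <
      L = i + 1;
    else
      R = i;
  }
  return L;
}
\end{lstlisting}

For $X = (1, 2, 2, 3)$, \texttt{rank\_non\_strict(2, X, 4)} returns $3$ (the number of elements $\leq 2$), whereas the strict \texttt{rank} would return $1$.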

\exercise{6}

\begin{lstlisting}
void merge_corank(double A[], int n, double B[], int m, double C[])
{
  int t; // number of blocks (threads)
  int i;
  
  int coj[t+1];
  int cok[t+1];
  
  for (i=0; i<t; i++) {
    corank(i*(n+m)/t,A,n,&coj[i],B,m,&cok[i]);
  }
  coj[t] = n;
  cok[t] = m;
  
  for (i=0; i<t; i++) {
    merge(&A[coj[i]],coj[i+1]-coj[i],
	  &B[cok[i]],cok[i+1]-cok[i],
	  &C[i*(n+m)/t]);
  }
}
\end{lstlisting}

\answer{1}

Because the \texttt{corank} method takes a number of steps logarithmic in $n + m$, and because it is invoked $t$ times (once per block), the asymptotic complexity of the first for loop in \texttt{merge\_corank} is as follows:

$$
O(t (\log(n + m)))
$$

The next for loop serves to merge the ranges of elements assigned to a specific thread sequentially. Essentially, it goes through all elements and copies them to their respective place in the target array. The resulting asymptotic complexity is:

$$
O(n + m)
$$

Because those two loops are processed one after another, the total (sequential) asymptotic complexity amounts to:

$$
O(t (\log(n + m)) + n + m)
$$

As one can see from the resulting term, $t \log(n + m)$ must lie in $O(n + m)$ for the algorithm to be efficient, in which case the total complexity reduces to $O(n + m)$. This holds, for example, if $t$ is constant, since the term $t \log(n + m)$ then becomes asymptotically negligible.

\answer{2}

\begin{lstlisting}
#define MIN(x, y) ((x) < (y) ? (x) : (y))
#define MAX(x, y) ((x) > (y) ? (x) : (y))

void corank(int i, double A[], int n, int *j, double B[], int m, int *k) {
    *j = MIN(i, n);
    *k = i - (*j);
    int jlow = MAX(0, i - m);
    int klow = 0;
    int d = 0;
    int done = 0;

    do {
        if ((*j) > 0 && (*k) < m && A[(*j) - 1] > B[*k]) {
            d = (1 + (*j) - jlow) / 2;
            klow = *k;
            *j -= d;
            *k += d;
        } else if ((*k) > 0 && (*j) < n && B[(*k) - 1] >= A[*j]) {
            d = (1 + (*k) - klow) / 2;
            jlow = *j;
            *k -= d;
            *j += d;
        } else
            done = 1;
    } while (done == 0);
}

void merge(double a[], long n, double b[], long m, double c[]) {
    int p = omp_get_max_threads();

    int coj[p + 1];
    int cok[p + 1];

    #pragma omp parallel shared(coj, cok)
    {
        p = omp_get_num_threads();
	    int i = omp_get_thread_num();

	    coj[i] = cok[i] = 0;

        corank(
            i * (n + m) / p,
            a, n, &coj[i],
            b, m, &cok[i]);

        #pragma omp single
        {
            coj[p] = n;
            cok[p] = m;
        }

        int h = i * (n + m) / p;

        int j = coj[i];
        int jj = coj[i + 1];
        int k = cok[i];
        int kk = cok[i + 1];
        while (j < jj && k < kk)
            c[h++] = (a[j] <= b[k]) ? a[j++] : b[k++];

        while (j < jj)
            c[h++] = a[j++];

        while (k < kk)
            c[h++] = b[k++];
    }
}
\end{lstlisting}
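The defining property of the co-rank, $j + k = i$ with \texttt{A[0..j)} and \texttt{B[0..k)} together containing the $i$ smallest elements, can be spot-checked on a standalone copy of \texttt{corank} (repeated from above, with fully parenthesized macros):

\begin{lstlisting}
#define MIN(x, y) ((x) < (y) ? (x) : (y))
#define MAX(x, y) ((x) > (y) ? (x) : (y))

// corank() as in the listing above, repeated so this check is
// self-contained.
void corank(int i, double A[], int n, int *j, double B[], int m, int *k)
{
  *j = MIN(i, n);
  *k = i - (*j);
  int jlow = MAX(0, i - m);
  int klow = 0;
  int d = 0;
  int done = 0;

  do {
    if ((*j) > 0 && (*k) < m && A[(*j) - 1] > B[*k]) {
      d = (1 + (*j) - jlow) / 2;
      klow = *k;
      *j -= d;
      *k += d;
    } else if ((*k) > 0 && (*j) < n && B[(*k) - 1] >= A[*j]) {
      d = (1 + (*k) - klow) / 2;
      jlow = *j;
      *k -= d;
      *j += d;
    } else
      done = 1;
  } while (done == 0);
}
\end{lstlisting}

For $A = (1, 3, 5, 7)$ and $B = (2, 4, 6, 8)$, \texttt{corank(4, ...)} yields $j = k = 2$: the four smallest elements $\{1, 2, 3, 4\}$ are exactly \texttt{A[0..2)} together with \texttt{B[0..2)}.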

\answer{3}

\begin{figure}
    % first plots
    \centering
    \subfloat[\centering runtime plot]{\includegraphics[width=.45 \textwidth]{figs/ex6/I1runtime.png}}
    \qquad
    \subfloat[\centering speedup plot]{\includegraphics[width=.45 \textwidth]{figs/ex6/I1speedup.png}}
    \caption{resulting plots of first test case}
    \label{fig:figs-ex6-1}

    % second plots
    \subfloat[\centering runtime plot]{\includegraphics[width=.45 \textwidth]{figs/ex6/I2runtime.png}}
    \qquad
    \subfloat[\centering speedup plot]{\includegraphics[width=.45 \textwidth]{figs/ex6/I2speedup.png}}
    \caption{resulting plots of second test case}
    \label{fig:figs-ex6-2}

    % third plots
    \subfloat[\centering runtime plot]{\includegraphics[width=.45 \textwidth]{figs/ex6/I3runtime.png}}
    \qquad
    \subfloat[\centering speedup plot]{\includegraphics[width=.45 \textwidth]{figs/ex6/I3speedup.png}}
    \caption{resulting plots of third test case}
    \label{fig:figs-ex6-3}
\end{figure}

The various running times and speedup gains can be examined in figures \ref{fig:figs-ex6-1}, \ref{fig:figs-ex6-2} and \ref{fig:figs-ex6-3}. This algorithm achieves a significantly higher speedup than the previous implementation, but the speedup is not linear for a large number of threads. This might be explained by differences in workload distribution among the threads at high thread counts.

Even though the speedup is not linear, this implementation should still be preferred over the first one, as its efficiency is significantly higher for any reasonable choice of thread count.

\answer{4}

As can be seen in the code of Answer 6.2, the individual blocks of work are processed by multiple threads, each of which computes its co-ranks and then merges its block sequentially. Because the co-ranks delimit which array elements a given thread processes, every thread needs two co-rank values: its own and the next one. Each thread therefore uses the next entries of the auxiliary arrays \texttt{coj} and \texttt{cok} to delimit its sequential merge, except for the last thread, whose upper bounds are \texttt{n} and \texttt{m} instead.

Because of this, a thread cannot immediately proceed to the part where the array values are merged sequentially into the target array: it first has to wait until the thread with the next-higher thread number has computed its co-rank.

Since the delimiting co-rank values for the last block have to be set explicitly to \texttt{n} and \texttt{m}, this is done inside an \texttt{omp single} construct. The implicit barrier at the end of \texttt{single} makes all threads synchronize after the co-rank computation before proceeding to sequentially merge their blocks of work.

The alternative solution is provided as \texttt{merge2}:

\begin{lstlisting}
void merge2(double a[], long n, double b[], long m, double c[]) {
    // replace this by a parallel merge algorithm
    //seq_merge1(a, n, b, m, c);

    #pragma omp parallel
    {
        int p = omp_get_num_threads();

        int j1, j2, k1, k2;
        #pragma omp for private(j1, j2, k1, k2) nowait
        for (int i = 0; i < p; i++) {
            corank(
                i * (n + m) / p,
                a, n, &j1,
                b, m, &k1);

            corank(
                (i + 1) * (n + m) / p,
                a, n, &j2,
                b, m, &k2);

            int h = i * (n + m) / p;

            while (j1 < j2 && k1 < k2)
                c[h++] = (a[j1] <= b[k1]) ? a[j1++] : b[k1++];

            while (j1 < j2)
                c[h++] = a[j1++];

            while (k1 < k2)
                c[h++] = b[k1++];
        }
    }
}
\end{lstlisting}

In this alternative solution, every thread operates on its own private indices \texttt{j1}, \texttt{j2}, \texttt{k1} and \texttt{k2}. Because every thread computes both co-rank values it needs by itself, no synchronization is required (hence the \texttt{nowait}), and the threads can immediately proceed to sequentially merging their blocks of work.

\exercise{7}

A parallel solution could look like:

\begin{lstlisting}
void merge_divconq(double A[], int n, double B[], int m, double C[])
{
  int i;

  if (n == 0) { // task parallelize for large n
    for (i = 0; i < m; i++) C[i] = B[i];
  } else if (m == 0) { // task parallelize for large m
    for (i = 0; i < n; i++) C[i] = A[i];
  } else if (n + m < CUTOFF) {
    merge(A, n, B, m, C); // sequential merge for small problems
  } else {
    int r = n / 2;
    int s = rank(A[r], B, m);
    C[r + s] = A[r];
    merge_divconq(A, r, B, s, C);
    merge_divconq(&A[r + 1], n - r - 1, &B[s], m - s, &C[r + s + 1]);
  }
}
\end{lstlisting}

\answer{1}

\begin{lstlisting}
int rank(double x, double X[], int n) {
  int L = 0;
  int R = n;
  while (L < R) {
    int i = (L + R) / 2;
    if (X[i] < x) {
      L = i + 1;
    } else {
      R = i;
    }
  }
  return L;
}

void merge_divconq(double a[], long n, double b[], long m, double c[], long cutoff) {
  int i;

  if (n == 0) { // A is exhausted, copy the rest of B
    for (i = 0; i < m; i++) c[i] = b[i];
  } else if (m == 0) { // B is exhausted, copy the rest of A
    for (i = 0; i < n; i++) c[i] = a[i];
  } else if (n + m <= cutoff) {
    seq_merge1(a, n, b, m, c);
  } else {
    int r = n / 2;
    int s = rank(a[r], b, m);
    c[r + s] = a[r];
    #pragma omp task
    merge_divconq(a, r, b, s, c, cutoff);
    #pragma omp task
    merge_divconq(&a[r + 1], n - r - 1, &b[s], m - s, &c[r + s + 1], cutoff);
  }
}

void merge(double a[], long n, double b[], long m, double c[]) {
  #pragma omp parallel
  #pragma omp master
  {
    if (n >= m)
      merge_divconq(a, n, b, m, c, (n + m) / omp_get_max_threads());
    else
      merge_divconq(b, m, a, n, c, (n + m) / omp_get_max_threads());
  }
}
\end{lstlisting}

\answer{2}

The cutoff determines how many recursion layers there are, because the cutoff point decides whether to divide and conquer once more or to process the remaining elements sequentially.

Too small a cutoff leads to an excessive number of recursive calls, imposing a significant overhead: the threads would mostly be busy creating new tasks instead of merging the arrays. Too large a cutoff leads to mostly sequential execution, leaving the available threads unused.

A good initial choice is a sufficiently large constant, so that small inputs incur no recursion overhead at all. Better still is to try to match the number of tasks to the number of threads: dividing the total number of elements by the number of threads yields an acceptable cutoff. This remains imprecise, because while the number of elements from array \texttt{a} is halved in each step, the number of elements from array \texttt{b} is determined by the rank function, which need not produce an even split.
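Such a cutoff choice can be sketched as follows (the lower bound of 64 is an assumed constant that would need tuning):

\begin{lstlisting}
// Possible cutoff following the reasoning above: roughly one leaf
// task per thread, but never below a small constant so that tiny
// inputs do not recurse at all.
long choose_cutoff(long n, long m, int num_threads)
{
  long per_thread = (n + m) / num_threads;
  long min_cutoff = 64; // assumed lower bound, would need tuning
  return per_thread > min_cutoff ? per_thread : min_cutoff;
}
\end{lstlisting}

For example, $n = m = 1000$ with $8$ threads yields a cutoff of $250$, while tiny inputs fall back to the constant lower bound.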

As a side note, because the array passed as \texttt{a} is always the one that is halved while the other is only ranked, there is a general performance gain in always passing the larger array as \texttt{a}: the workload is then usually split more evenly.

\answer{3}

\begin{figure}
    % first plots
    \centering
    \subfloat[\centering runtime plot]{\includegraphics[width=.45 \textwidth]{figs/ex7/I1runtime.png}}
    \qquad
    \subfloat[\centering speedup plot]{\includegraphics[width=.45 \textwidth]{figs/ex7/I1speedup.png}}
    \caption{resulting plots of first test case}
    \label{fig:figs-ex7-1}

    % second plots
    \subfloat[\centering runtime plot]{\includegraphics[width=.45 \textwidth]{figs/ex7/I2runtime.png}}
    \qquad
    \subfloat[\centering speedup plot]{\includegraphics[width=.45 \textwidth]{figs/ex7/I2speedup.png}}
    \caption{resulting plots of second test case}
    \label{fig:figs-ex7-2}

    % third plots
    \subfloat[\centering runtime plot]{\includegraphics[width=.45 \textwidth]{figs/ex7/I3runtime.png}}
    \qquad
    \subfloat[\centering speedup plot]{\includegraphics[width=.45 \textwidth]{figs/ex7/I3speedup.png}}
    \caption{resulting plots of third test case}
    \label{fig:figs-ex7-3}
\end{figure}

The various running times and speedup gains can be examined in figures \ref{fig:figs-ex7-1}, \ref{fig:figs-ex7-2} and \ref{fig:figs-ex7-3}. This solution tackles the problem with a more elaborate divide-and-conquer scheme; here it is paramount to choose a good cutoff, as a careless choice can severely affect performance.

This is reflected in the results, where it can be seen that for all problem sizes, using $8$ threads works particularly well with the cutoff function chosen in our solution. In general this approach is hard to tune and, in our measurements, did not achieve better results than the previous scheme for large problem sizes.

A clear advantage, however, is that this solution requires no advanced mathematical machinery such as the co-rank function, and it is considerably more readable. The parallelization with OpenMP directives is also easier to get right, so this particular solution remains a good way to parallelize the computation.

\pagebreak

\exercise{8}

\answer{1}

An implemented solution to this problem could look as follows:

\begin{lstlisting}
void mv(base_t **A, int nrows, int ncols, int nrows_a_loc, int ncols_a_loc,
        base_t *x, int nrows_x_loc,
        base_t *b, int ncols_b_loc)
{
    int comm_rank, comm_size, ret;

    // query communicator parameters
    MPI_Comm_rank(MPI_COMM_WORLD, &comm_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_size);

    // formula for dimensions of matrix mult:
    // h x i * i x j = h x j
    // we need to use the amount of matrix columns (total amount of
    // x vector rows!)
    base_t x_full[ncols];

    // actually only local b is to be computed
    // we need to use the amount of matrix rows (total amount of
    // b vector rows!)
    //base_t b_full[nrows];

    // assume there is no equal split among elements, like n/p for all p_i
    // except for the last process p_{comm_rank - 1}, such that counts are
    // not known without prior communication
    int send_counts[comm_size];
    ret = MPI_Allgather(
        // send buffer, count and type
        &nrows_x_loc, 1, MPI_INT,
        // recv buffer, count and type
        send_counts, 1, MPI_INT,
        // communicator
        MPI_COMM_WORLD
    );

    int recv_displacements_loc;
    int recv_displacements[comm_size];
    // perform exclusive prefix sum to get absolute displacements
    ret = MPI_Exscan(
        // send buffer
        &nrows_x_loc,
        // receive buffer
        &recv_displacements_loc,
        // count and type
        1, MPI_INT,
        // operation
        MPI_SUM,
        // communicator
        MPI_COMM_WORLD
    );

    // 0th exscan value is undefined, but should be 0!
    if (comm_rank == 0)
        recv_displacements_loc = 0;

    // because exscan does not sync all sums to all processes, we need to
    // perform an additional communication round
    ret = MPI_Allgather(
        // send buffer, count and type
        &recv_displacements_loc, 1, MPI_INT,
        // recv buffer, count and type
        recv_displacements, 1, MPI_INT,
        // communicator
        MPI_COMM_WORLD
    );

    // make call to allgatherv to gather x vectors for every process
    ret = MPI_Allgatherv(
        // send buffer, count and type
        x, nrows_x_loc, MPI_DOUBLE,
        // recv buffer, count(s), displacements and type
        x_full, send_counts, recv_displacements, MPI_DOUBLE,
        // communicator
        MPI_COMM_WORLD
    );

    // perform the actual local matrix-vector computation
    for (int i = 0; i < ncols_b_loc; i++) {
        b[i] = 0.0; // result entries must be zeroed before accumulating
        for (int j = 0; j < ncols; j++) {
            b[i] += A[i][j] * x_full[j];
        }
    }
}
\end{lstlisting}

\answer{2}

The complexity can again be computed by examining the sequential statements of the program. In this case, the communication operations and the final multiplication loop contribute the dominant asymptotic terms.

The first \texttt{MPI\_Allgather} operation synchronizes a single integer value between all processes; with a message size of $n = 1$, its complexity evaluates to:

$$
O(\log p)
$$

The next \texttt{MPI\_Exscan} operation computes the exclusive prefix-sum in order to evaluate the displacement values for the upcoming call to \texttt{MPI\_Allgatherv}. Because only one value is reduced to a sum, $n$ is again $1$ and the complexity evaluates to:

$$
O(\log p)
$$

Another call to \texttt{MPI\_Allgather} then communicates the locally computed exclusive prefix-sum value from each process to all others. This is necessary because \texttt{MPI\_Exscan} delivers to every process only its own prefix value, while here all processes need to know the prefix values of all other processes, hence the additional communication round.

An interesting tradeoff might be to instead compute the exclusive prefix-sum values locally within each process, as the necessary information (the counts) has already been synchronized.

The complexity of this additional communication round evaluates to:

$$
O(\log p)
$$
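The tradeoff mentioned above can be sketched as follows: since \texttt{send\_counts} has already been gathered to every process, each process can derive all displacements locally by an exclusive prefix sum, making both the \texttt{MPI\_Exscan} and the second \texttt{MPI\_Allgather} unnecessary (a sketch, not the measured implementation):

\begin{lstlisting}
// Derive the Allgatherv displacements locally from the already
// gathered counts: an exclusive prefix sum over send_counts.
void local_displacements(int comm_size, const int send_counts[],
                         int recv_displacements[])
{
  recv_displacements[0] = 0;
  for (int r = 1; r < comm_size; r++)
    recv_displacements[r] =
        recv_displacements[r - 1] + send_counts[r - 1];
}
\end{lstlisting}

This trades $O(\log p)$ communication for $O(p)$ local work, which is favorable as long as the prefix sum over $p$ counts is cheaper than two collective rounds.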

Next up is the call to \texttt{MPI\_Allgatherv}, which is necessary because the send and receive counts are not uniform in this task. It distributes the contents of the local pieces of the vector $x$ from all processes to all processes, with a per-process vector of receive counts. Assuming, as in this analysis, only small differences between the local row counts, each process contributes about $\frac{n}{p}$ elements, so the total complexity of this operation amounts to:

$$
O\left(\frac{n}{p} + \log p\right)
$$

Finally, everything is set up for all processes to compute their local parts of the result vector $b$. Because the two loops span the number of local matrix rows in one dimension and the total number of matrix columns in the other, the total complexity evaluates to:

$$
O\left(\frac{m}{p} \cdot n \right)
$$

We now have everything we need to compute the total complexity of our implementation of the \texttt{mv} function:

$$
\begin{aligned}
T_{par}(n, m, p) &= O(\log p) + O(\log p) + O(\log p) + O \left ( \frac{n}{p} + \log p \right ) + O \left ( \frac{m}{p} \cdot n \right ) \\
&= O \left ( \frac{n}{p} + \log p \right ) + O \left ( \frac{m}{p} \cdot n \right ) \\
&= O \left ( \frac{n}{p} + \log p + \frac{m}{p} \cdot n \right ) \\
&= O \left ( \frac{n}{p} ( 1 + m ) + \log p \right ) \\
&= O \left ( \frac{m n}{p} + \log p \right ) \\
\end{aligned}
$$

We can then begin to calculate the absolute speed-up against the best known sequential algorithm:

$$
\begin{aligned}
S_{abs}(n, m, p) &= \frac{T_{seq}(n, m)}{T_{par}(n, m, p)} \\
&= \frac{m n}{\frac{m n}{p} + \log p} \\
&= \frac{m n p}{m n + p \log p} \\
\end{aligned}
$$
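To make the derived speed-up expression concrete, the following sketch evaluates the model for sample sizes (the function names are ours; the constants hidden by the $O$-notation are ignored, and $p$ is assumed to be a power of two):

\begin{lstlisting}
/* Binary logarithm for powers of two (avoids linking libm). */
static double log2_pow2(double p)
{
    double l = 0.0;
    while (p > 1.0) { p /= 2.0; l += 1.0; }
    return l;
}

/* Speed-up model S_abs(n, m, p) = m n p / (m n + p log p);
 * asymptotic model only, constant factors are ignored. */
double speedup_model(double n, double m, double p)
{
    return (m * n * p) / (m * n + p * log2_pow2(p));
}
\end{lstlisting}

For $m = n = 1000$ and $p = 16$ the model predicts a speed-up of almost $16$, i.e.\ near-linear, whereas for tiny matrices the $p \log p$ term dominates and the speed-up collapses well below $p$.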

We can also compute the parallel efficiency function:

$$
\begin{aligned}
E(n, m, p) &= \frac{T_{seq}(n, m)}{p T_{par}(n, m, p)} \\
&= \frac{m n}{m n + p \log p} \\
\end{aligned}
$$

\answer{3}

\begin{figure}
    % first plots
    \centering
    \subfloat[\centering runtime (1st test case) \label{fig:figs-ex8-1}]{\includegraphics[width=.45 \textwidth]{figs/ex8/I1runtime.png}}
    \qquad
    \subfloat[\centering runtime (2nd test case) \label{fig:figs-ex8-2}]{\includegraphics[width=.45 \textwidth]{figs/ex8/I2runtime.png}}

    % second plots
    \subfloat[\centering runtime (3rd test case) \label{fig:figs-ex8-3}]{\includegraphics[width=.45 \textwidth]{figs/ex8/I3runtime.png}}
    \qquad
    \subfloat[\centering runtime (4th test case) \label{fig:figs-ex8-4}]{\includegraphics[width=.45 \textwidth]{figs/ex8/I4runtime.png}}
    \caption{plots of various running times}
\end{figure}

The various running times for each of the test cases are shown in figures \ref{fig:figs-ex8-1}, \ref{fig:figs-ex8-2}, \ref{fig:figs-ex8-3} and \ref{fig:figs-ex8-4}.

The first data point in each of these plots represents the running time when execution is kept local within a single process on a single node. This configuration wins whenever the communication overhead is too large, which is the case for small problem sizes. In general, too high a number of processes leads to inefficiencies that degrade overall runtime.

In more concrete terms, we can see that the configuration with the lowest number of distributed processes wins out for small problem sizes, while the one with the highest number of distributed processes wins out for large problem sizes. There is a clear interplay between problem size and number of processes; as such, it might be interesting to dynamically launch an appropriate number of processes for the given problem size.

\exercise{9}

\answer{1}

A solution to this particular problem could look as follows (debug statements removed):

\begin{lstlisting}
void mv(base_t **A, int nrows, int ncols, int nrows_a_loc, int ncols_a_loc,
        base_t *x, int nrows_x_loc,
        base_t *b, int ncols_b_loc)
{
    int comm_rank, comm_size, ret;

    // query communicator parameters
    MPI_Comm_rank(MPI_COMM_WORLD, &comm_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_size);

    // we need to use the amount of matrix rows (total amount of
    // b vector rows!)
    base_t b_full[nrows];
    for (int i = 0; i < nrows; i++) {
        b_full[i] = 0;
    }

    for (int i = 0; i < ncols_a_loc; i++) {
        for (int j = 0; j < nrows_a_loc; j++) {
            b_full[j] += A[j][i] * x[i];
        }
    }

    // synchronize receive counts for all processes before using
    // collective reduce scatter operation
    int recv_counts[comm_size];
    ret = MPI_Allgather(
        // send buffer, count and type
        &ncols_b_loc, 1, MPI_INT,
        // recv buffer, count and type
        recv_counts, 1, MPI_INT,
        // communicator
        MPI_COMM_WORLD
    );

    // use reduce scatter to reduce all b_full vectors together and then
    // distribute the b_full vector with varying counts to all processes
    ret = MPI_Reduce_scatter(
        // send buffer
        b_full,
        // recv buffer, count and type
        b, recv_counts, MPI_DOUBLE,
        // reduction operation
        MPI_SUM,
        // communicator
        MPI_COMM_WORLD
    );
}
\end{lstlisting}
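Since the correctness of the distributed variant is not obvious at a glance, it can be validated against a straightforward sequential reference. The kernel below matches \texttt{mv} from Exercise 1; the flattened row-major matrix layout is our own assumption for the sketch:

\begin{lstlisting}
/* Sequential reference W = M V for validating the distributed
 * result; M is stored flattened in row-major order
 * (m rows, n columns). */
void mv_seq(int m, int n, const double *M, const double *V,
            double *W)
{
    for (int i = 0; i < m; i++) {
        W[i] = 0.0;
        for (int j = 0; j < n; j++) {
            W[i] += M[i * n + j] * V[j];
        }
    }
}
\end{lstlisting}

Gathering the distributed $b$ on one process and comparing it entry-wise against \texttt{mv\_seq}'s output makes for a simple end-to-end test.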

\answer{2}

Again we compute the asymptotic running time by first examining the different sequential parts of the algorithm and determining their combined complexity. The first term is contributed by the matrix-vector multiplication that every process performs on its local values. Because all rows are examined, but only a local (and evenly distributed) share of the columns is used by each process, the complexity reduces to:

$$
O(m \cdot \frac{n}{p})
$$

As before, we synchronize all recv count values in an additional communication round amongst all processes using the \texttt{MPI\_Allgather} operation. Since each process sends only a single integer, the complexity reduces to:

$$
O(\log p)
$$

Finally, \texttt{MPI\_Reduce\_scatter} is used to first reduce the locally computed values of $b$ into a full-length vector containing all reduced values, and to then scatter the results such that each process ends up with the data that is relevant to it. Here the amount of data that is transferred is $m$; as such, the complexity of the operation reduces to:

$$
O(m + \log p)
$$
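The semantics that this cost estimate rests on can be emulated sequentially. In the sketch below (for illustration only; the name \texttt{b\_full\_all} and the side-by-side layout of the $p$ local \texttt{b\_full} vectors are our own construction), the element-wise reduction over all processes is followed by handing each rank its block of \texttt{recv\_counts[rank]} entries:

\begin{lstlisting}
/* Emulate MPI_Reduce_scatter with MPI_SUM for p processes:
 * element-wise sum of the p local b_full vectors (each of
 * length m, stored side by side in b_full_all), then the
 * block of recv_counts[rank] entries belonging to rank. */
void reduce_scatter_block(const double *b_full_all, int p, int m,
                          const int *recv_counts, int rank,
                          double *b_out)
{
    int offset = 0;
    for (int r = 0; r < rank; r++)
        offset += recv_counts[r];
    for (int k = 0; k < recv_counts[rank]; k++) {
        double sum = 0.0;
        for (int r = 0; r < p; r++)
            sum += b_full_all[r * m + offset + k];
        b_out[k] = sum;
    }
}
\end{lstlisting}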

Now we can compute the total complexity as follows:

$$
\begin{aligned}
T_{par}(n, m, p) &= O \left ( m \cdot \frac{n}{p} \right ) + O(\log p) + O(m + \log p) \\
&= O \left ( m \cdot \frac{n}{p} \right ) + O(m + \log p) \\
&= O \left ( m \cdot \frac{n}{p} + m + \log p \right ) \\
&= O \left ( m \left ( \frac{n}{p} + 1 \right ) + \log p \right ) \\
&= O \left ( \frac{m n}{p} + \log p \right ) \\
\end{aligned}
$$

As can be observed, the complexity reduces to exactly the same term we already derived in answer 8.2. We therefore restate the previous results:

$$
\begin{aligned}
S_{abs}(n, m, p) &= \frac{m n p}{m n + p \log p} \\
\end{aligned}
$$

$$
\begin{aligned}
E(n, m, p) &= \frac{m n}{m n + p \log p} \\
\end{aligned}
$$

Comparing the two asymptotic analyses, the data-size term appearing in the collective operations was $n$ in task 8, while in this task it is $m$. This might make for an interesting trade-off, depending on what kind of matrices one has to work with.

\answer{3}

\begin{figure}
    % first plots
    \centering
    \subfloat[\centering runtime (1st test case) \label{fig:figs-ex9-1}]{\includegraphics[width=.45 \textwidth]{figs/ex9/I1runtime.png}}
    \qquad
    \subfloat[\centering runtime (2nd test case) \label{fig:figs-ex9-2}]{\includegraphics[width=.45 \textwidth]{figs/ex9/I2runtime.png}}

    % second plots
    \subfloat[\centering runtime (3rd test case) \label{fig:figs-ex9-3}]{\includegraphics[width=.45 \textwidth]{figs/ex9/I3runtime.png}}
    \qquad
    \subfloat[\centering runtime (4th test case) \label{fig:figs-ex9-4}]{\includegraphics[width=.45 \textwidth]{figs/ex9/I4runtime.png}}
    \caption{plots of various running times}
    \label{fig:figs-ex9}
\end{figure}

The various running times for each of the test cases are shown in figures \ref{fig:figs-ex9-1}, \ref{fig:figs-ex9-2}, \ref{fig:figs-ex9-3} and \ref{fig:figs-ex9-4}.

For this variant, where reductions over columns of local data are used instead of gather operations over rows of local data, we see a fairly similar picture. This was already reflected in the asymptotic complexity analysis, where we argued that performance might depend more on either $m$ or $n$. Because we only look at square matrices, we would expect the runtimes of the previous solution and this one to be comparable, which indeed seems to be the case. Any additional runtime overhead observed here might be explained by MPI implementation details of the \texttt{MPI\_Reduce\_scatter} operation, as the reduce step might require additional communication rounds.

\end{document}
