\documentclass{article}

\usepackage{graphicx}
\usepackage{listings}
\usepackage{hyperref}
\usepackage{cleveref}
\usepackage{chngpage}

\begin{document}

\title{Project in Advanced Multiprocessor Programming}
\author{
  lastname, name\\
  matr. number
  \and
  lastname, name\\
  matr. number
}
\maketitle

\section{Abstract}

This report provides a comprehensive analysis of the research paper ``A Lock-Free Algorithm for Concurrent Bags''.
The paper introduces a novel algorithm that employs a thread-local storage approach, significantly reducing synchronization overhead between threads.
The algorithm's unique features, including its lock-free nature, linearizability, and a notification system for verifying total emptiness of the bag, are critically examined.
The report extends the original paper's performance analysis to a 64-core machine, aiming to test the algorithm's scalability and efficiency on a more powerful system.
Another aim is to verify the claims of linearizability, lock-freeness and correctness of the proposed concurrent bag data structure.
We also aim to show the speedup that is achievable against a simple sequential data structure that is made concurrent via the use of coarse-grained locking.

\section{Introduction}

This report is set to critically analyze and evaluate the research paper ``A Lock-Free Algorithm for Concurrent Bags''.
The paper introduces an innovative algorithm that leverages a thread-local storage (TLS) approach, where each thread maintains its own unique linked list of arrays, significantly reducing the synchronization overhead between threads.

\subsection{Paper Implementation Details}

The algorithm proposed in the paper uses linked lists of arrays as its underlying data structure.
Each thread primarily operates on its own linked list of arrays, the size of which is determined by a pre-set capacity.
This approach allows each thread to execute the \texttt{Add(...)} and \texttt{TryRemoveAny()} operations without the need for global synchronization.
When a thread's own linked list is exhausted, it tries to steal elements from the linked lists of other threads.

Nodes are marked as empty on the fly using the Compare-and-Swap (\texttt{CAS}) operation. This is done by marking the pointers to the nodes, indicating to others that the node is empty and that they can help to unlink and subsequently delete the node.
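As an illustration, such marking can be realized by tagging the least significant bit of an (aligned) block pointer. The following sketch uses invented helper names, not the paper's exact code:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helpers: we tag bit 0 of a block pointer to mark the
 * pointed-to node as logically empty.  Block pointers are at least
 * word-aligned, so bit 0 is always free for the mark. */
#define MARK_BIT ((uintptr_t)1)

static inline void *set_mark(void *p)   { return (void *)((uintptr_t)p | MARK_BIT); }
static inline void *clear_mark(void *p) { return (void *)((uintptr_t)p & ~MARK_BIT); }
static inline bool  is_marked(void *p)  { return ((uintptr_t)p & MARK_BIT) != 0; }

/* Atomically mark a next-pointer; fails if another thread already
 * marked it or changed it concurrently. */
static inline bool try_mark(void *_Atomic *next)
{
    void *old = atomic_load(next);
    if (is_marked(old))
        return false;
    return atomic_compare_exchange_strong(next, &old, set_mark(old));
}
```

Once a pointer is marked, other threads observing the mark know the node is empty and may help to unlink and delete it.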

To verify the total emptiness of the bag, a notification system is implemented.
When an \texttt{Add(...)} operation is initiated on a block, it resets a corresponding bit array to alert all subscribed threads about the impending insert.
After a comprehensive scan of all elements to confirm their null status, and continuous monitoring for any pending inserts, it can be confirmed that the bag is truly empty.
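A minimal sketch of such a notification scheme follows; the names and layout are illustrative assumptions, not the paper's code:

```c
#include <stdatomic.h>
#include <stdbool.h>

#define MAX_THREADS 64

/* Hypothetical notification bits, one per scanning thread.  An Add
 * clears every bit; a thread checking for emptiness sets its own bit,
 * scans all blocks, and only trusts the scan if its bit survived. */
static atomic_bool notify_bit[MAX_THREADS];

/* Called at the start of Add: alert all subscribed threads. */
void notify_all_of_add(void)
{
    for (int i = 0; i < MAX_THREADS; i++)
        atomic_store(&notify_bit[i], false);
}

/* Called before an emptiness scan. */
void subscribe(int tid)
{
    atomic_store(&notify_bit[tid], true);
}

/* True iff no Add has run since this thread subscribed. */
bool no_pending_add(int tid)
{
    return atomic_load(&notify_bit[tid]);
}
```

If a scan finds only null slots and `no_pending_add` still holds afterwards, the bag can be declared empty.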

\subsection{Paper Claims and Benchmarks}

The paper makes two key claims about the algorithm: its lock-free nature and its linearizability.
The algorithm's performance has been benchmarked against several state-of-the-art concurrent implementations of common data structures, such as stacks and queues, on a 24-core machine.
The benchmarks used include variations of the well-known Producer-Consumer Problem and a randomized workload composed of a 50/50 split of additions and removals.

\subsection{Aims of this Report}

The aim of this report is to extend the authors' performance analysis to a 64-core machine, thereby testing the algorithm's scalability and efficiency on a more powerful system, while also determining if the algorithm can maintain its lock-free and linearizable properties.

To achieve this, we provide our implementation in a framework that offers a setup similar to that of the paper.
After providing our own claims, we discuss our results and compare them against those of the paper.
Another aim is to show the speedup that is achievable against a simple sequential data structure that is made concurrent via the use of coarse-grained locking.

\section{Implementation}

During the implementation of the concurrent bag, various design decisions had to be made.

\begin{itemize}
\item{\textbf{Decision } The use of C11 atomics and OpenMP in order to implement the data structure.}
\item[]{\textbf{Reason } Both of us had prior experience programming in C and little to no experience programming in C++; one of us had also worked with OpenMP before.}
\item{\textbf{Decision } Adopting the algorithms in the paper.}
\item[]{\textbf{Reason } Our task is to check the paper.}
\item{\textbf{Decision } We merged our own implementation with the implementation provided in the paper, deciding to use only the \texttt{block\_t} structure rather than both \texttt{block\_t} and \texttt{blockp\_t}.
We also wrote our own macros and helper functions, for example to get and set the marks of block pointers.}
\item[]{\textbf{Reason } Time limitations and frustration with debugging the concurrent program; we also thought this would let us check the claims of the paper in more detail.}
\item[]{\textbf{Tradeoff } We believe that you learn from mistakes, which are manifestations of false beliefs.
We found a few whilst debugging.}
\end{itemize}

Our implementation is faithful to the original implementation.
This lets us compare results generated using our implementation more meaningfully against those of the paper, because the implementation provided in the paper was used to generate their results.

We have also implemented, by hand, a simple stack based on coarse-grained locking, which we use as a comparison data structure.
The stack was chosen for its simplified implementation and access semantics, as the stack only needs to maintain a pointer to the topmost element.

The only primitive provided by C11 atomics that is used in our implementation is the \texttt{atomic\_compare\_exchange\_strong} operation.
For this to work correctly, we have also annotated respective parts of the program with the \texttt{\_Atomic} type specifier.
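As a hedged illustration of how this primitive interacts with the \texttt{\_Atomic} qualifier (the type and function names here are invented for the example, not taken from our source):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* An array slot holding an item pointer must carry the _Atomic
 * qualifier for the C11 generic atomic functions to apply to it. */
typedef void *_Atomic slot_t;

/* Attempt to take `expected` out of the slot, replacing it with NULL.
 * Returns true exactly when this thread won the race; on failure,
 * atomic_compare_exchange_strong updates `expected` with the value
 * actually observed. */
bool try_take(slot_t *slot, void *expected)
{
    return atomic_compare_exchange_strong(slot, &expected, NULL);
}
```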

As for OpenMP, we have kept our usage of this powerful framework simple, using it only where necessary to facilitate correct operation of our benchmark.
We use the framework to launch multiple threads, time the implementation, synchronize execution and to lock accesses to the simple stack that is our comparison data structure.

\section{Claims}

We have run our implementation for close to 2 hours on the 64-core system without any crashes, leading us to believe that the implementation does not crash.

\subsection{Correctness}

Our initial step towards ensuring correctness involved checking that each thread successfully adds and removes its assigned workload.
This was achieved by letting each thread work through their assigned workload in a for loop, while also managing thread-local counters that we can then use to verify correct operation of the application.
After each test run, these counters were aggregated and we asserted that the number of successful \texttt{Add(...)} operations equals the number of successful \texttt{TryRemoveAny()} operations.
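A simplified sketch of this bookkeeping (illustrative names, thread launching omitted):

```c
#include <stddef.h>

#define MAX_THREADS 64

/* Hypothetical per-thread counters; each thread only writes its own
 * slot, so no synchronization is needed until final aggregation. */
static long adds_done[MAX_THREADS];
static long removes_done[MAX_THREADS];

/* After all threads have joined: for a run in which every added item
 * is eventually removed, the totals must match. */
int workload_balanced(int nthreads)
{
    long adds = 0, removes = 0;
    for (int t = 0; t < nthreads; t++) {
        adds    += adds_done[t];
        removes += removes_done[t];
    }
    return adds == removes;
}
```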

To further our pursuit of correctness, in the earlier stages of development we strategically placed print statements within the code, wrapped in a \texttt{TRACE} macro.
This macro toggles the print statements, such that they are optimized away entirely when tracing is disabled.
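Such a macro can be sketched as follows (the guard name is illustrative):

```c
#include <stdio.h>

/* Sketch of a toggleable tracing macro: with TRACE_ENABLED undefined
 * the macro expands to a no-op, so the compiler removes the print
 * statements entirely. */
#ifdef TRACE_ENABLED
#define TRACE(...) fprintf(stderr, __VA_ARGS__)
#else
#define TRACE(...) ((void)0)
#endif
```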
These statements offered real-time visibility into the operations and state of the bag, thereby facilitating manual verification of the algorithm's behavior across various scenarios.

To bolster our claim of correctness, there is an extended verification mode that can also be toggled via a preprocessor variable \texttt{\#define VERIFY}.
Because of the way our extended correctness checks operate, they affect performance results of our implementation.
To prevent these undesirable effects on performance, all extended correctness checks, as well as any assertions that affect performance, can be disabled via this preprocessor variable.

As part of this extended verification mode, we added and removed integers in the range from $0$ to $items-1$ from the bag.
We ensured that every item added was also removed by keeping an array of flags that indicate whether an individual element was handled correctly.
The workload was then distributed among the threads executing the \texttt{Add(...)} operation, with an array indicating the starting point of the items each thread should add.
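A simplified sketch of this per-item bookkeeping (sizes and names are illustrative assumptions):

```c
#include <stdatomic.h>
#include <stdbool.h>

#define ITEMS 1024

/* Hypothetical VERIFY-mode flags: one per item value.  A remover
 * claims the flag of the value it took; a duplicate removal shows up
 * as a failed claim, a lost item as an unclaimed flag. */
static atomic_bool removed[ITEMS];

/* Returns false if the item was already removed once (a duplicate). */
bool claim_removed(int item)
{
    bool expected = false;
    return atomic_compare_exchange_strong(&removed[item], &expected, true);
}

/* After the run: every value in [0, items) must have been claimed. */
bool all_removed(int items)
{
    for (int i = 0; i < items; i++)
        if (!atomic_load(&removed[i]))
            return false;
    return true;
}
```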


\subsection{Lock-freeness}

Regarding the lock-free characteristic, the book gives the following definition: \textit{A method is lock-free if it guarantees that some call always finishes in a finite number of steps.}

The implementation contains no loops that wait for synchronization conditions to take effect, and it uses CAS operations to swap items in or out of the data structure.
We can therefore check, for every operation, whether its calls fulfill this property.

For the \texttt{Add(...)} method we can give an upper bound on the number of steps: the operations needed to allocate a new block node and to add the item to it.

Regarding the \texttt{TryRemoveAny()} method, as we will point out in \cref{section:complexity}, it is possible for it to loop forever, but only when repeated add operations are carried out whose items are removed by other threads.
This is no problem for lock-freeness since some calls, in this case the \texttt{Add(...)} call and other \texttt{TryRemoveAny()} calls finish.
If we now consider the case where no other thread is executing an \texttt{Add(...)} call and a \texttt{TryRemoveAny()} call is attempting to steal an element, the upper bound of steps to finish the \texttt{TryRemoveAny()} call is given by the number of operations to finish looping $n$ times over the blocks and observing no change in the notification bits, where $n$ is the number of threads.

\subsection{Linearizability}

Regarding this property the book states: \textit{The usual way to show that a concurrent object implementation is linearizable is to identify for each method a linearization point where the method takes effect.}

The authors provide such linearization points:

\begin{quote}
\textit{The Add operation, takes effect at the write statement in line 6 of Algorithm 1. […]}
\end{quote}

\begin{lstlisting}
Algorithm 1 Add(item)
1: if threadHead has reached end of array then
2:     Allocate new block, add it in the linked
       list before threadBlock and set
       threadBlock to it
3:     threadHead <- 0
4: end if
5: threadBlock[threadHead] <- item
6: threadHead <- threadHead + 1
\end{lstlisting}

We do not agree with this linearization point, since the item already becomes visible to other threads in line 5, where it is written into the array.
Overall, however, we agree that the \texttt{Add(...)} method is linearizable, with its linearization point in line 5 of Algorithm 1.
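For concreteness, Algorithm 1 can be sketched in C roughly as follows. This is a simplified illustration with invented names, restricted to a single thread-local list and omitting marking, stealing, and memory management; the store into the array corresponds to the line-5 write we argue is the true linearization point:

```c
#include <stdatomic.h>
#include <stdlib.h>

#define BLOCK_SIZE 256

typedef struct block {
    void *_Atomic items[BLOCK_SIZE];
    struct block *next;
} block_t;

/* Each thread operates on its own list head and insertion index. */
static _Thread_local block_t *thread_block = NULL;
static _Thread_local int thread_head = BLOCK_SIZE;

void bag_add(void *item)
{
    if (thread_head == BLOCK_SIZE) {        /* line 1: array exhausted  */
        block_t *b = calloc(1, sizeof *b);  /* line 2: allocate block   */
        b->next = thread_block;             /*         link it in front */
        thread_block = b;
        thread_head = 0;                    /* line 3: reset index      */
    }
    /* line 5: the item becomes visible to stealing threads here */
    atomic_store(&thread_block->items[thread_head], item);
    thread_head++;                          /* line 6 */
}
```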

\begin{quote}
\textit{The TryRemoveAny operation returning an item, takes effect at the successful CAS statement in line 10 of Algorithm 2 or line 7 of Algorithm 3. […]}
\end{quote}

Both of the listed lines contain CAS operations that, if successful, swap the item out of the thread's own block or out of the block that the thread is stealing from.
These indeed are the points at which the removal manifests itself on the data structure, since other threads can no longer observe the existence of the items in the bag.

\begin{quote}
\textit{The TryRemoveAny operation returning NULL, takes effect at the read statement of the set notification bit in line 7 of Algorithm 5 during one of the repetitions. […]}
\end{quote}

We agree with this linearization point, since at this point all pending add operations (where the notification bits have been cleared) have been carried out and have also been read by other threads (since in order to reach line 7 of Algorithm 5 the block must have been empty).

In order for the \texttt{TryRemoveAny()} operation to return NULL, there must have been no items in the part of the data structure managed by the thread itself.
In this case the thread attempts to steal, so we restrict our analysis to this case.
The \texttt{TryRemoveAny()} operation loops for $\#thread$ times, keeping track of the rounds that were executed.
At every iteration a thread could have added something.
An item is removed only when the \texttt{TryRemoveAny()} method comes to a thread's block where it can steal from.
If while the thread is searching for an item to steal, an item is inserted in some other part of the bag and removed again by some other thread, the round counter gets reset (by the \texttt{Add} operation).
Thus if an item is added, the thread begins searching the entire bag again, either finding the element and returning it, or looping again $\#thread$ rounds.
If the thread completes $\#thread$ rounds, no item has been added in the meantime and we can conclude that the bag is empty, returning NULL.

\subsection{Complexity}
\label{section:complexity}

The \texttt{Add(...)} operation, which inserts an item into an array block, is $O(1)$ because array index access takes constant time.
If a new block must be created, appending it to the block linked list is also an $O(1)$ operation.
Even in a multithreaded context, the use of atomic operations and Compare-And-Swap (CAS) mechanism ensures the \texttt{Add(...)} operation remains $O(1)$, making it highly efficient.

The time complexity of the \texttt{TryRemoveAny()} operation is largely influenced by the concurrent activities of other threads.
In an optimal scenario, it completes in constant time, while the worst-case requires an entire scan of the data structure.
If a thread almost completes this scan, and another thread performs an \texttt{Add(...)} then a \texttt{TryRemoveAny()} operation, the first thread must restart its scan.
This makes the worst-case time complexity for \texttt{TryRemoveAny()} potentially unbounded, as the operation could be caught in an indefinite restart loop.
While such a scenario is unlikely, it prevents us from defining a worst-case time complexity for \texttt{TryRemoveAny()}.

\section{Benchmark}

Using the benchmarks stated below, we aim to show how our concurrent bag implementation performs in comparison to the paper's and to a simple sequential data structure with locks.
We also aim to give an outlook on the performance of the algorithm on a 64-core machine, extending the paper's results.

\subsection{Typical Use Cases}

We have thought about how bags could be used and came up with two main use-cases: 

\begin{itemize}
\item{\textbf{Work-Stealing } In the field of parallel computing, the concurrent bag is ideally suited for work-stealing strategies.
This lock-free data structure serves as an efficient task pool facilitating task distribution, while mitigating deadlock or contention risks inherent in lock-based alternatives.}
\item{\textbf{Producer-Consumer model } For producer/consumer models, the concurrent bag serves as an effective shared buffer.
The lock-free design allows for concurrent, safe addition and removal of items (which may be requests or other data), enhancing system throughput, particularly in high-concurrency environments.}
\end{itemize}

\subsection{Our Benchmarks}

We have decided to stick mostly to the paper's original benchmarks, modelling the Producer-Consumer problem in various configurations.

There is one additional benchmark added to test whether our implementation has any bottlenecks or inefficiencies for a best case scenario, referred to as the pure operations benchmark.
This best case scenario for the \texttt{Add(...)} is modelled as a stream of additions distributed evenly among all threads and the metric is the amount of time necessary to process this workload.
Once all the additions have completed, we re-use this pre-populated data structure to test the best case scenario for the \texttt{TryRemoveAny()} operation, by issuing as many removals as there were additions before and testing how long it takes for all threads to process this workload.
The additions are considered independent of the removals.
We call this a best-case scenario because there is potential for perfect parallelism: all threads can handle their workloads completely independently of one another, exploiting in particular the thread-local hierarchy of the bag data structure.
If the performance of this benchmark were poor, that would reveal obvious and important flaws in our implementation.

\begin{itemize}
\item{\textbf{Pure Operation Benchmarking} This benchmark, as mentioned, measures the raw performance of \texttt{Add(...)} and \texttt{TryRemoveAny()} operations on empty or prefilled bags.}
\item{\textbf{$n/2$ Producer, $n/2$ Consumer Benchmarking} This benchmark recreates a scenario where task production and consumption are evenly matched.
It allows us to assess the effectiveness of the concurrent bag when dealing with a constant flow of tasks in a controlled environment, demonstrating its functionality in balanced producer/consumer systems.}
\item{\textbf{$n-1$ Producer, $1$ Consumer Benchmarking} This benchmark aims to simulate an extreme producer/consumer problem where a single consumer contends with multiple producers.
The findings from this scenario will underscore the bag's robustness in handling potential task overloads and preventing bottlenecks, highlighting its adaptability in diverse use-case scenarios.}
\item{\textbf{$1$ Producer, $n-1$ Consumer Benchmarking} This benchmark probes the concurrent bag's efficiency in distributing data amongst multiple tasks.
The results should shed light on the bag's ability to handle high contention, because all threads are operating on one thread's linked list.}
\end{itemize}

The Producer-Consumer benchmarks have also been chosen in this way, to be able to compare our performance to the paper's.

\subsection{Sequential Baseline (Stack)}

A stack with coarse-grained locking was chosen as a baseline against the bag data structure, since the stack and the bag support fundamental operations of adding and removing items.
The coarse-grained locking strategy guarantees mutual exclusion and is simple to implement, making it a potentially quick solution for programmers needing thread-safe data structures.
Since the order of additions and removals does not matter for the use cases the bag itself is intended for, the stack is suitable for this task.

The stack was implemented from scratch, with the \texttt{Push(...)} and \texttt{Pop()} operations renamed to \texttt{Add(...)} and \texttt{TryRemoveAny()} to fit the interface of our benchmark.
The coarse-grained lock was implemented using the OpenMP \texttt{\#pragma omp critical} directive around every access to the stack.
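A sketch of such a coarse-grained stack (illustrative names, no error handling) looks as follows; without OpenMP the pragma is simply ignored and the code runs single-threaded:

```c
#include <stdlib.h>

/* Baseline sketch: a linked-list stack whose operations are wrapped
 * in an OpenMP critical section and renamed to the bag's interface. */
typedef struct node {
    void *item;
    struct node *next;
} node_t;

static node_t *top = NULL;

void stack_add(void *item)              /* Push */
{
    node_t *n = malloc(sizeof *n);
    n->item = item;
    #pragma omp critical(stack_lock)
    {
        n->next = top;
        top = n;
    }
}

void *stack_try_remove_any(void)        /* Pop; NULL when empty */
{
    void *item = NULL;
    #pragma omp critical(stack_lock)
    {
        node_t *n = top;
        if (n) {
            top = n->next;
            item = n->item;
            free(n);
        }
    }
    return item;
}
```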

Since this was the first thing we implemented, we initially struggled a bit while refamiliarizing ourselves with OpenMP.

\section{Benchmarking Method}

Our benchmarking process is designed to accurately gauge the performance of both successful Add and TryRemoveAny operations, as well as the latter's failure rate when the bag is empty.
Here's a step-by-step breakdown of our method:

\begin{itemize}
\item{\textbf{Establishing Baseline } We initiate our benchmarks by capturing a timestamp with \texttt{omp\_get\_wtime()} to establish the start time.}
\item{\textbf{Initiating Workloads } We launch the threads, each assigned with a balanced amount of Add or TryRemoveAny operations.
The division of operations is uniform across all threads.}
\item{\textbf{Completion Check } Once the successful operation counter aligns with the number of assigned operations, indicating all threads have completed their tasks, we capture a final timestamp.}
\item{\textbf{Time Calculation } The total time taken to process the entire workload is calculated as the difference between the initial and final timestamps.}
\end{itemize}
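The steps above can be condensed into a timing skeleton like the following. This is a sketch with invented names, with the workload body reduced to a callback; a \texttt{clock()}-based fallback stands in when OpenMP is unavailable:

```c
#ifdef _OPENMP
#include <omp.h>
#define now() omp_get_wtime()
#else
#include <time.h>
#define now() ((double)clock() / CLOCKS_PER_SEC)
#endif
#include <stddef.h>

/* Run nops repetitions of op and return the elapsed wall time.
 * op may be NULL for an empty (baseline) workload. */
double time_workload(void (*op)(void), long nops)
{
    double start = now();               /* establish baseline      */
    for (long i = 0; i < nops; i++)     /* initiate workload       */
        if (op)
            op();
    return now() - start;               /* time calculation        */
}
```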

For our \textbf{Pure Operations} benchmark, we evenly distribute operations across all threads.
As for the \textbf{$N$ Consumer, $\#threads-N$ Producer} benchmarks, the \texttt{Add(...)} operations are shared equally among the producer threads, while the \texttt{TryRemoveAny()} operations are allocated to the consumers.

We implement these benchmarks in C, optimizing with \texttt{-O3} level and compiling into binary shared library files (\texttt{*.so}).
We utilize a Python script that leverages the \texttt{ctypes} module to execute the compiled benchmarking functions, which offers an intuitive interface for our benchmarking needs.
This script was kindly provided by the Advanced Multiprocessor Course TU Wien Team and was subsequently customized to suit our requirements.

The benchmarks run on the target system Nebula via the workload manager slurm, with two Makefile targets \texttt{slurm-bench} and \texttt{slurm-small-bench} available to automatically submit the respective jobs.

Compared to the paper, which was tested against a dual Intel Xeon X5660 setup totalling 24 cores, with 6 cores per processor with support for 2 threads per core, the Nebula system employs a dual-socketed AMD EPYC 7551 setup totalling 64 cores, with 32 cores per socket.

To ensure robust and meaningful outcomes, each data point (time taken to process $k$ items) is derived from the average of several benchmark runs.
These runs were conducted for both the concurrent bag implementation and the simple stack, with thread counts ranging from 2 to 64.

Our benchmarks were split in the Makefile target to allow each individual benchmark to run as many repetitions as possible within the maximum 10 minute time limit afforded by slurm jobs on the target system Nebula.
This means that for each of the two implementations (cbag and simple), for each of the four benchmark setups, for each of the three workloads (item counts), a single task is submitted to slurm to run the benchmark as often as possible.
This leads to some benchmark results having higher variance, as some benchmarks are repeated 16 times, others 8, 4, or 2 times, or even just once.
Nevertheless, it is our belief that more benchmark runs are strictly better, so we feel justified in not using a uniform but low repetition count and instead allocating as many runs as is possible to benchmark using slurm.
It was also more important, in our view, to include benchmarks with very large item counts, even if they cannot be repeated as often.

Despite Python's typical limitation of a Global Interpreter Lock (GIL) which permits only one thread to execute Python bytecodes at a time, our benchmarking strategy successfully circumvents this constraint by employing Python as a mere wrapper to call our independently running C functions.
As a result, our C-based routines, including our benchmarking functions, operate uninhibited by Python's GIL, thereby enabling true multithreading within our benchmarks.\footnote{\href{https://stackoverflow.com/questions/67338017/does-calling-a-c-function-via-ctypes-in-python-release-the-gil-during-execution}{https://stackoverflow.com/questions/67338017/does-calling-a-c-function...}} \footnote{\href{https://docs.python.org/library/ctypes.html}{https://docs.python.org/library/ctypes.html}}

\section{Results and Discussion}

In this section we first present comparison results and subsequently discuss them.

\subsection{Paper Comparison}
\label{subsection:paper-comparison}

Since the benchmarks only report the time in milliseconds to complete the additions and removals of a number of items, a conversion to throughput had to be carried out.
The results of the runs with 1\,000\,000 items have been chosen for the comparison with the paper, since no larger benchmarks were made.
A run that takes $t$ ms to process $10^6$ items corresponds to a throughput of $\frac{10^3}{t} \cdot \frac{10^6\;items}{s}$.

\begin{figure}[ht!]
  \centering
  \includegraphics[width=1\linewidth]{res/by-hand.jpg}
  \caption{Rough extraction of values by hand.}
  \label{fig:by-hand}
\end{figure}

In order to be able to compare the results of our benchmark against the paper, the paper's values were extracted by hand, as visible in \cref{fig:by-hand}.
These values are therefore not 100\% exact, but should provide sufficient accuracy to see performance trends of the paper's implementation in comparison to ours.

In the tables below, values marked with a * in the Paper column are extrapolated using a gradient similar to that of our results; the corresponding values in the Ours column were extrapolated in the same way.

The comparison between the paper and our implementation for the $n/2$ Producer, $n/2$ Consumer benchmark can be seen in \cref{table:prod-cons-comp} and is plotted in \cref{fig:prod-cons-comp}.

As for the comparison between the paper and our implementation for the $1$ Producer, $n-1$ Consumer benchmark, they can be found in \cref{table:1-prod-comp} and in \cref{fig:1-prod-comp}.

Finally, the comparison between the paper and our implementation for the $n-1$ Producer, $1$ Consumer benchmark can be found in \cref{table:1-cons-comp} and in \cref{fig:1-cons-comp}.

\newpage

\begin{table}[ht]
  \centering
  \begin{tabular}{l|l|l}
      \# threads & Paper & Ours \\
      \hline
      2  & 91 & 2.6 \\
      4  & 102 & 4.8 \\
      8  & 63 & 5.1 \\
      16 & 41 & 4.1 \\
      24 & 32 & 3.0* \\
      32 & 29* & 2.2 \\
      64 & 20* & 1.0 \\
  \end{tabular}
  \caption{\label{table:prod-cons-comp}$n/2$ Producer, $n/2$ Consumer throughput comparison (in $10^6$ items/s).}
  \vspace{1ex}
\end{table}

\begin{figure}[ht!]
  \centering
  \includegraphics[width=1\linewidth]{res/prod-cons.png}
  \caption{$n/2$ Producer, $n/2$ Consumer throughput comparison.}
  \label{fig:prod-cons-comp}
\end{figure}

\newpage

\begin{table}[ht]
  \centering
  \begin{tabular}{l|l|l}
      \# threads & Paper & Ours \\
      \hline
      2  & 81 & 2.6 \\
      4  & 101 & 1.7 \\
      8  & 45 & 1.1 \\
      16 & 20 & 0.8 \\
      24 & 17 & 1.0* \\
      32 & 15* & 0.7 \\
      64 & 7* & 0.2 \\
  \end{tabular}
  \caption{\label{table:1-prod-comp}$1$ Producer, $n-1$ Consumer throughput comparison (in $10^6$ items/s).}
  \vspace{1ex}
\end{table}

\begin{figure}[ht!]
  \centering
  \includegraphics[width=0.94\linewidth]{res/1-prod.png}
  \caption{$1$ Producer, $n-1$ Consumer comparison.}
  \label{fig:1-prod-comp}
\end{figure}

\newpage

\begin{table}[ht]
  \centering
  \begin{tabular}{l|l|l}
      \# threads & Paper & Ours \\
      \hline
      2  & 89 & 1.9 \\
      4  & 77 & 6.7 \\
      8  & 73 & 6.9 \\
      16 & 46 & 8.1 \\
      24 & 53 & 8.0* \\
      32 & 50* & 7.9 \\
      64 & 43* & 6.8 \\
  \end{tabular}
  \caption{\label{table:1-cons-comp}$n-1$ Producer, $1$ Consumer throughput comparison (in $10^6$ items/s).}
  \vspace{1ex}
\end{table}

\begin{figure}[ht!]
  \centering
  \includegraphics[width=0.98\linewidth]{res/1-cons.png}
  \caption{$n-1$ Producer, $1$ Consumer comparison.}
  \label{fig:1-cons-comp}
\end{figure}

\subsubsection{Discussion}

The raw benchmark results, which show all execution times for all benchmark instances of the simple stack and concurrent bag benchmarks, can be found together at the end of this document.
They were aggregated at the end of the report because of the sheer number of figures.

Across all benchmarks, we observe a consistent trend of either exponential decay of the form $e^{-T}$ (with $T$ the number of threads) or constant performance after a certain number of threads.
This pattern holds true for both the $n-1$ Producer, $1$ Consumer case and the $1$ Producer $n-1$ Consumers case.
Despite these similarities, the original implementation consistently outperforms ours by at least a factor of 6.6.
Given the slowdown of our implementation after 32 threads, we speculate that the original implementation would maintain a higher level of performance.
Specifically, we predict that the original implementation would achieve a performance of approximately 43e6 items per second for the $1$ Consumer case, 7e6 items per second for the $1$ Producer case and 20e6 items per second for the $n/2$ Producer, $n/2$ Consumer case at 64 threads.

We have several hypotheses as to why the original implementation outperforms ours:

\begin{itemize}
\item{The original implementation includes an intelligent memory management scheme, specifically for the deletion of block nodes, which could contribute to its superior performance.}
\item{It's possible that the original implementation allocates memory beforehand, thereby not accounting for the time taken by malloc and free operations.
We considered changing our implementation to test this, but in the end we decided against it, because a realistic producer and consumer scenario would also have to deal with on-the-fly creation and destruction of items.
It does remain an important limitation to keep in mind and is a point which could be elaborated upon.}
\item{The original implementation may be optimized for the target architecture, whereas our implementation is not optimized at all beyond compiler optimizations.}
\end{itemize}

\subsection{Simple Stack Comparison}

In order to compare the concurrent bag with the simple stack, the plots depicted in figures X-Y were generated.
We then tried to approximate the time increase for the given workload at higher thread counts.
After a certain thread count, we observe clear constant, linear, or exponential increases in time as the number of threads grows.
We then tried to fit the data to the following functions using the Wolfram Fit function:

$$
t_{Linear}(n)=t_0+c\cdot n
$$

$$
t_{Exponential}(n)=t_0e^{c\cdot n}
$$

In order to calculate the relative speedup of the concurrent bag with respect to the simple stack for the various benchmarks, the following formula was used:

$$
S(n)=\frac{t_{stack}(n)}{t_{bag}(n)}
$$

\subsubsection{1 Producer N-1 Consumers}

The simple stack as well as the concurrent bag display exponential increases in the time they take to finish transferring 1e6 items, beginning from 2 threads.
Since the fit functions did not converge, we calculated the relative speedups by hand.
At 32 threads the concurrent bag is $21$ times faster, and at 64 threads $62$ times faster.

\subsubsection{1 Consumer N-1 Producer}

In this benchmark the simple stack took a constant 1100 ms for a workload of 1e6 items after 8 threads, whereas the concurrent bag showed an insignificant linear increase, which for the sake of simplicity was assumed to be constant at 140 ms after 4 threads.
This results in a speedup of $S(n)=7.85$.

\subsubsection{Producer Consumer 50/50}

A linear increase is observable for the simple stack after 16 threads, and a slight exponential increase, which for the sake of simplicity was assumed to be linear, for the concurrent bag after 8 threads.
The calculated relative speedup is $S(n)=\frac{3044.5+14.9 \cdot n}{67.79+9.13 \cdot n}$, which equals $6.13$ at 64 threads.

\subsubsection{Pure operations - Add}

A clear linear growth is observable for the simple stack after 4 threads and for the concurrent bag after 32 threads.
The calculated relative speedup is $S(n)=\frac{760+3.9 \cdot n}{22.7+0.21 \cdot n}$, which equals $28$ at 64 threads.

\subsubsection{Pure operations - TryRemoveAny}

Regarding this benchmark, no attempt was made to calculate a speedup for the concurrent bag, since the items in the pre-filled bag are distributed amongst all threads' thread-local lists of arrays.
We predicted its runtime to decay exponentially with the thread count, since doubling the number of threads should halve the time, up to a sequential overhead.
For the simple stack a linear increase is observable after around 4 threads, which becomes clear after 16.

\subsubsection{Pure operations combined}

Both implementations appear linear for smaller workloads; we assumed this to also hold for the 1e6 item workload.
The simple stack would then show linear growth after 4 threads and the concurrent bag after 32 threads.
The calculated relative speedup after fitting is $S(n)=\frac{1712.35+7.81 \cdot n}{32.75+0.14 \cdot n}$, equalling $53$ at 64 threads.

The speedup functions computed from the fitted functions compare quite well to the actual speedups calculated from the runtimes.
For example, for the combined pure operations $S(64) = 53$, whereas the actual speedup is $\frac{2184.1}{41.7} \approx 52.32$.

\subsubsection{Discussion}

Comparing our concurrent bag implementation against the simple stack, however, yields the welcome result of quite significant improvements.
It is no wonder that the simple stack implementation struggles, in particular as the number of threads increases, because the coarse-grained locking approach is a severe impediment under contention.

This is most obvious when looking at the $1$ Producer, $n-1$ Consumer benchmark, because this is the benchmark where the highest amount of contention occurs, especially when coupled with mutual exclusion access for unsuccessful \texttt{TryRemoveAny()} operations.
One can see the differences in \cref{fig:cbag-1-prod} and \cref{fig:simple-1-prod}.

In fact, all benchmarks display strictly better performance of the concurrent bag implementation compared to the simple stack implementation.
Another extreme example that shows the strengths of the concurrent bag is the set of Pure Operation benchmarks, whose embarrassingly parallel nature is fully exploited by the concurrent bag, whereas the simple stack sequentializes them.
These results can be seen in \cref{fig:cbag-add-tra} (concurrent bag combined performance)/\cref{fig:simple-add-tra} (simple stack combined performance), \cref{fig:cbag-add} (concurrent bag \texttt{Add(...)} performance)/\cref{fig:simple-add} (simple stack \texttt{Add(...)} performance) and \cref{fig:cbag-tra} (concurrent bag \texttt{TryRemoveAny()} performance)/\cref{fig:simple-tra} (simple stack \texttt{TryRemoveAny()} performance).

In the case of the $n-1$ Producer, $1$ Consumer benchmark (\cref{fig:cbag-1-cons} and \cref{fig:simple-1-cons}), we can see both the concurrent bag and the simple stack stagnating past a certain number of threads (around 4).
Yet the concurrent bag is still faster by a considerable factor.
This is a reasonable result, because the contention issue becomes less drastic for the simple stack as the \texttt{TryRemoveAny()} operations have a much higher probability to succeed, while for the concurrent bag the single consumer must rely on stealing from threads that are constantly adding new items.

The last case of $n/2$ Producer, $n/2$ Consumer (\cref{fig:cbag-prod-cons} and \cref{fig:simple-prod-cons}) might be the most interesting of all.
The performance of the simple stack seems to level off as the number of threads is increased.
For the concurrent bag on the other hand, the trend takes on a curious shape.
Adding more threads past a certain point appears to decrease performance, likely because threads end up searching longer for items to steal.

\section{Conclusion}

While we have not managed to replicate the exact performance numbers of the original paper, the figures presented in \cref{subsection:paper-comparison} show very similar trends.
This is to be expected, because of inevitable differences in the respective experimental setups, hardware environments, and more.

Our comparison shows the immense benefits a concurrent data structure such as the concurrent bag can have, in particular for embarrassingly parallel problems but also for dataflows where contention might become an issue.

We conclude that a concurrent bag is a valuable data structure to have as part of a developer's repertoire.

\section{Outlook}

We have thought about verification of linearization points using assertions, but in the end opted against this because of time constraints and because it would likely bring about changes in the implementation that could invalidate our results thus far.
Of course this remains a possibility for future work.

Another interesting approach to verification would have been a counter that keeps track of the actual number of items in the bag.
This would destroy the concurrency because accesses to this counter would require mutual exclusion, but using such a counter we could verify supposed invariants within the execution of the operations.
This counter can be atomically incremented together with the linearization point of the \texttt{Add(...)} operation (when writing the array element).

Before the successful removal of an item we could assert the counter to be greater or equal to 1, either as part of a removal from one's own list or while stealing an element.

But we could also verify the linearization point for an empty bag:
A two-dimensional thread-local array of size $\#threads \cdot \#threads$ would be used to store the current number of items in the bag after reading the set notification bit.
After \texttt{TryRemoveAny()} returns NULL, it should be asserted that at least one position in that array was set to 0, indicating that the number of items in the bag was zero at that point in time.

To make sure that the operations are executed atomically, a global lock would be acquired whenever such an operation is carried out together with a read or write of that counter.
This ensures that if a thread adds an item and freezes before incrementing the counter, no other thread can add items to or remove items from the bag in the meantime.
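A minimal sketch of this counter-based checking, assuming a global lock around each operation. All names are ours, and a plain stack stands in for the actual bag implementation so the sketch stays self-contained:

```cpp
#include <cassert>
#include <mutex>
#include <optional>
#include <stack>

// Hypothetical debug wrapper illustrating the counter-based invariant
// checks described above. A std::stack stands in for the real bag.
class CheckedBag {
    std::mutex global_lock;  // serializes each operation with its counter update
    std::stack<int> items;   // stand-in for the actual bag
    long counter = 0;        // tracked number of items in the bag

public:
    void Add(int v) {
        std::lock_guard<std::mutex> g(global_lock);
        items.push(v);       // linearization point of Add(...)
        ++counter;           // incremented together with it under the lock
    }

    std::optional<int> TryRemoveAny() {
        std::lock_guard<std::mutex> g(global_lock);
        if (items.empty()) {
            assert(counter == 0);  // an empty result implies zero items
            return std::nullopt;
        }
        assert(counter >= 1);      // invariant before a successful removal
        int v = items.top();
        items.pop();
        --counter;
        return v;
    }
};
```

As noted above, this serializes all operations and is therefore only meant for debugging runs, not for performance measurements.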

It might also be interesting to implement the same random 50\%/50\% benchmark presented in the paper.
We decided against including such a benchmark, for we would have had to change the experimental setup to aggregate performance statistics over a timed window to measure the throughput per second more accurately.

Another interesting measure would have been the average time for an element to be processed from start to finish. Implementing this would have required tracking \texttt{Add(...)}/\texttt{TryRemoveAny()} call pairs on the same item and measuring the elapsed time without distorting the actual performance of the data structure.
We determined that this is not easy, but it would be an important measure for use cases such as the distribution of incoming web requests as tasks.

\section{Notes}

It should be noted that the entire undertaking of this project was distributed evenly between the two members of our group.

\section{Raw Results}

In this last section, we present the raw results generated by our benchmarking framework.

\newpage

\begin{figure}[hp!]
  \begin{adjustwidth}{-10cm}{-10cm}
  \centering
  \begin{minipage}{.8\textwidth}
    \centering
    \includegraphics[width=\linewidth]{../plots/cbag_add_tra_avg_plot.pdf}
    \caption{Concurrent Bag combined performance of \texttt{Add(...)} and \texttt{TryRemoveAny()} operations.}
    \label{fig:cbag-add-tra}
  \end{minipage}
  \hspace{0.02\textwidth}
  \begin{minipage}{.8\textwidth}
    \centering
    \includegraphics[width=\linewidth]{../plots/simple_add_tra_avg_plot.pdf}
    \caption{Simple Stack combined performance of \texttt{Add(...)} and \texttt{TryRemoveAny()} operations.}
    \label{fig:simple-add-tra}
  \end{minipage}
  \end{adjustwidth}
\end{figure}

\begin{figure}[hp!]
  \begin{adjustwidth}{-10cm}{-10cm}
  \centering
  \begin{minipage}{.8\textwidth}
    \centering
    \includegraphics[width=\linewidth]{../plots/cbag_add_avg_plot.pdf}
    \caption{Concurrent Bag performance of \texttt{Add(...)} operations.}
    \label{fig:cbag-add}
  \end{minipage}
  \hspace{0.02\textwidth}
  \begin{minipage}{.8\textwidth}
    \centering
    \includegraphics[width=\linewidth]{../plots/simple_add_avg_plot.pdf}
    \caption{Simple Stack performance of \texttt{Add(...)} operations.}
    \label{fig:simple-add}
  \end{minipage}
  \end{adjustwidth}
\end{figure}

\begin{figure}[hp!]
  \begin{adjustwidth}{-10cm}{-10cm}
  \centering
  \begin{minipage}{.8\textwidth}
    \centering
    \includegraphics[width=\linewidth]{../plots/cbag_tra_avg_plot.pdf}
    \caption{Concurrent Bag performance of \texttt{TryRemoveAny()} operations.}
    \label{fig:cbag-tra}
  \end{minipage}
  \hspace{0.02\textwidth}
  \begin{minipage}{.8\textwidth}
    \centering
    \includegraphics[width=\linewidth]{../plots/simple_tra_avg_plot.pdf}
    \caption{Simple Stack performance of \texttt{TryRemoveAny()} operations.}
    \label{fig:simple-tra}
  \end{minipage}
  \end{adjustwidth}
\end{figure}

\begin{figure}[hp!]
  \begin{adjustwidth}{-10cm}{-10cm}
  \centering
  \begin{minipage}{.8\textwidth}
    \centering
    \includegraphics[width=\linewidth]{../plots/cbag_prod_cons_avg_plot.pdf}
    \caption{Concurrent Bag performance of $n/2$ Producer, $n/2$ Consumer benchmark}
    \label{fig:cbag-prod-cons}
  \end{minipage}
  \hspace{0.02\textwidth}
  \begin{minipage}{.8\textwidth}
    \centering
    \includegraphics[width=\linewidth]{../plots/simple_prod_cons_avg_plot.pdf}
    \caption{Simple Stack performance of $n/2$ Producer, $n/2$ Consumer benchmark}
    \label{fig:simple-prod-cons}
  \end{minipage}
  \end{adjustwidth}
\end{figure}

\begin{figure}[hp!]
  \begin{adjustwidth}{-10cm}{-10cm}
  \centering
  \begin{minipage}{.8\textwidth}
    \centering
    \includegraphics[width=\linewidth]{../plots/cbag_1_prod_avg_plot.pdf}
    \caption{Concurrent Bag performance of $1$ Producer, $n-1$ Consumer benchmark}
    \label{fig:cbag-1-prod}
  \end{minipage}
  \hspace{0.02\textwidth}
  \begin{minipage}{.8\textwidth}
    \centering
    \includegraphics[width=\linewidth]{../plots/simple_1_prod_avg_plot.pdf}
    \caption{Simple Stack performance of $1$ Producer, $n-1$ Consumer benchmark}
    \label{fig:simple-1-prod}
  \end{minipage}
  \end{adjustwidth}
\end{figure}

\begin{figure}[hp!]
  \begin{adjustwidth}{-10cm}{-10cm}
  \centering
  \begin{minipage}{.8\textwidth}
    \centering
    \includegraphics[width=\linewidth]{../plots/cbag_1_cons_avg_plot.pdf}
    \caption{Concurrent Bag performance of $n-1$ Producer, $1$ Consumer benchmark}
    \label{fig:cbag-1-cons}
  \end{minipage}
  \hspace{0.02\textwidth}
  \begin{minipage}{.8\textwidth}
    \centering
    \includegraphics[width=\linewidth]{../plots/simple_1_cons_avg_plot.pdf}
    \caption{Simple Stack performance of $n-1$ Producer, $1$ Consumer benchmark}
    \label{fig:simple-1-cons}
  \end{minipage}
  \end{adjustwidth}
\end{figure}

\end{document}
