
\section{Optimizations}
\label{sec:optimization}

%\subsection{}
Our initial, fully functional implementation of the LMT skeleton algorithm did not include any optimizations. As a result, the application had a high polynomial running time that proved unwieldy even for modest data sets of a few hundred points. According to \cite{LargeSubgraph}, a non-optimized implementation of the LMT algorithm runs in $O(n^6)$ time.

The Java garbage collection mechanism may degrade the algorithm's performance significantly over time. To minimize this problem, the garbage collector is invoked explicitly by default before the triangulation algorithm is started. This behavior may be switched off by setting the flag \code{controller.Globals.}\code{USE\_EXPLICIT\_GARBAGE\_COLLECTION} to false. Java's parallel garbage collector\footnote{The parallel garbage collector may be enabled by providing the JVM argument \code{-XX:+UseParallelGC} on application start.} distributes the overhead more evenly over the application's execution time, alleviating much of the problem. Unfortunately, while it does not degrade performance over time, in our case the parallel garbage collector decreased overall performance measurably. 

%diamond property from: Drysdale, McElfresh, Snoeyink: On Exclusion Regions for Optimal Triangulations 
% via: Giri Narasimhan,Michiel Smid: Geometric spanner networks
\subsection{Diamond Property for Minimum Weight Triangulations}
\begin{figure}[h]
\includegraphics[width=14cm]{images/diamondProperty.pdf}
\caption{Diamond Property: The left-hand edge is a candidate for an MWT since at least one of its two adjacent isosceles triangles with base angle $\beta$ is empty. Both diamond triangles of the right-hand edge are non-empty; hence the right-hand edge cannot be a member of an MWT.}
\end{figure}

For Euclidean minimum weight triangulations\footnote{The diamond property as described here applies to minimum \emph{Euclidean} weight triangulations, i.e. triangulations minimizing the sum of the Euclidean lengths of all edges. It may or may not be applicable, with other values for the diamond angle, to other heuristics.}, the diamond property has been proven to be a criterion that excludes a good number of potential edges from the set of candidate edges. For each candidate edge $e$, the two adjacent isosceles triangles $\Delta_{left}$, $\Delta_{right}$ with base angle $\beta$ adjacent to $e$ are considered. It has been shown for $\beta = \sfrac{\pi}{8}\ $ that $e$ cannot be a member of an MWT if \emph{both} $\Delta_{left}$ and $\Delta_{right}$ are non-empty \cite{LargeSubgraph}. In other words, any edge without at least one empty triangle among $\Delta_{left}$ and $\Delta_{right}$ can safely be removed from the list of candidate edges, significantly reducing the size of the problem. \cite{Drysdale2001} later showed that the diamond property holds for an angle of $\beta = \sfrac{\pi}{4.6}\ $, resulting in a much larger diamond area.
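The diamond test can be sketched as follows. The class and method names are hypothetical, and the point-in-triangle check is a standard orientation-based test, not the project's actual code:

```java
// Sketch of the diamond test (hypothetical names): an edge survives if at
// least one of its two isosceles "diamond" triangles with base angle
// beta contains no input point.
public class DiamondTest {
    static final double BETA = Math.PI / 4.6; // angle from Drysdale et al.

    static double cross(double ax, double ay, double bx, double by,
                        double cx, double cy) {
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    }

    // Strict point-in-triangle test via consistent orientation signs.
    static boolean inTriangle(double px, double py,
                              double ax, double ay, double bx, double by,
                              double cx, double cy) {
        double d1 = cross(ax, ay, bx, by, px, py);
        double d2 = cross(bx, by, cx, cy, px, py);
        double d3 = cross(cx, cy, ax, ay, px, py);
        return (d1 > 0 && d2 > 0 && d3 > 0) || (d1 < 0 && d2 < 0 && d3 < 0);
    }

    // True if the edge (x1,y1)-(x2,y2) passes the diamond test, i.e. at
    // least one of the two diamond triangles is empty of the given points.
    public static boolean passesDiamondTest(double x1, double y1,
                                            double x2, double y2,
                                            double[][] points) {
        double mx = (x1 + x2) / 2, my = (y1 + y2) / 2;
        double dx = x2 - x1, dy = y2 - y1;
        // apex height of the isosceles triangle is (|e|/2)*tan(beta),
        // expressed here as a factor on the perpendicular vector (-dy, dx)
        double h = Math.tan(BETA) / 2;
        double axL = mx - dy * h, ayL = my + dx * h;  // apex left of e
        double axR = mx + dy * h, ayR = my - dx * h;  // apex right of e
        boolean leftEmpty = true, rightEmpty = true;
        for (double[] p : points) {
            if (inTriangle(p[0], p[1], x1, y1, x2, y2, axL, ayL)) leftEmpty = false;
            if (inTriangle(p[0], p[1], x1, y1, x2, y2, axR, ayR)) rightEmpty = false;
        }
        return leftEmpty || rightEmpty;
    }
}
```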



\begin{figure}[h]
\includegraphics[width=12cm]{images/diamondPropertyDelaunay.pdf}
\caption{Diamond Property for Delaunay Triangulations}
\end{figure}

\subsection{Diamond Property for Delaunay Triangulations}
A well-known property of Delaunay triangulations is that the circumcircle of each triangle is empty. For each edge $e$ of any Delaunay triangulation, let $c$ be the circle with diameter $|e|$ and with the endpoints $v_1$, $v_2$ of $e$ on its boundary. Let $c_l$ and $c_r$ be the two semi-circles formed by bisecting $c$ along $e$.

\paragraph{Lemma} If $e$ is an edge of a Delaunay triangulation, at least one of the semi-circles $c_l$, $c_r$ must be empty.\ \cite{Drysdale2001}
\paragraph{Proof} Any circle with $v_1$ and $v_2$ on its boundary contains at least one of the two semi-circles $c_l$, $c_r$. If both $c_l$ and $c_r$ are non-empty, every such circle contains a point of the set, so an empty circle with $v_1$ and $v_2$ on its boundary cannot exist. Therefore $e$ cannot be an edge of a triangle with an empty circumcircle.

\paragraph{}The lemma can be relaxed into a diamond property by inscribing in each semi-circle the isosceles triangle with base $e$ and base angle $\beta = \sfrac{\pi}{4}\ $: each diamond triangle is contained in its semi-circle, so if both diamond triangles are non-empty, both semi-circles are non-empty. Hence the diamond property holds for Delaunay triangulations with an angle of $\beta = \sfrac{\pi}{4}\ $.
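As an illustration, the semi-circle test from the lemma can be sketched as follows (hypothetical names, not the project's code):

```java
// Sketch: test whether both semi-circles of the circle with diameter |e|
// contain a point; if so, e cannot be a Delaunay edge (per the lemma).
public class SemicircleTest {
    public static boolean canBeDelaunayEdge(double x1, double y1,
                                            double x2, double y2,
                                            double[][] points) {
        double mx = (x1 + x2) / 2, my = (y1 + y2) / 2;   // center of c
        double r2 = ((x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1)) / 4;
        boolean leftEmpty = true, rightEmpty = true;
        for (double[] p : points) {
            double dx = p[0] - mx, dy = p[1] - my;
            if (dx * dx + dy * dy >= r2) continue;       // outside circle c
            // sign of the cross product decides which semi-circle p is in
            double side = (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1);
            if (side > 0) leftEmpty = false;
            else if (side < 0) rightEmpty = false;
        }
        // e can only be a Delaunay edge if c_l or c_r is empty
        return leftEmpty || rightEmpty;
    }
}
```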

%\TODO{Is the Diamond property applicable for longest base to height ratio, and if yes: what's the angle??? Experimental results suggest it's somewhere around pi/4...}


\subsection{Data Structures}
Candidate edges as well as candidate triangles can be generated on demand during the execution of the LMT algorithm. Another approach is to precompute the sets of edges and triangles beforehand. The downside is an increase in space complexity to $O(n^3)$.

For this project we decided to precompute the sets of candidate edges and triangles, since this allowed us to approach the optimization problem independently of the actual implementation of the LMT algorithm. A data structure \emph{FaceSet} has been implemented that holds all faces (vertices, edges, triangles) necessary for the triangulation algorithms. The FaceSet is initialized with all points to be triangulated; then all candidate edges are precomputed. This step incorporates the diamond property where applicable. Finally, all candidate triangles are generated.

\subsubsection{Triangle Lists}
Each of the remaining candidate edges has to be checked against combinations of adjacent candidate triangles for local minimality, until either the edge has been found not to be locally minimal for any triangle combination, or a combination of triangles has been found that satisfies the criteria for local minimality. The former case triggers the removal of the edge and its adjacent triangles. In the latter case the edge remains in the candidate edges set and may have to be re-checked during the next iteration of the algorithm. None of the combinations that failed the test for local minimality have to be considered again. The last combination checked is particularly interesting, since it is the one combination that passed the local minimality test for this edge during the previous iteration. The same two triangles will still cause the local minimality test for this edge to pass, provided both are still members of the candidate triangles set. If that is the case, the local minimality test for that edge takes constant time. None of the failing triangle combinations have to be tested more than once.

In order to keep track of the last positively tested pair of triangles, two linked lists of incident triangles have been added to each edge for this project: \code{trianglesLeft} and \code{trianglesRight}. The linked list implementation used maintains an index to the last node accessed. If a triangle at an index less than the last retrieved one is removed, the last accessed index is decremented by one; otherwise it remains unchanged. Therefore, if the last retrieved triangle is deleted, the next unchecked triangle moves into the vacated slot.
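The removal semantics described above can be sketched with a minimal stand-in (an array-backed sketch with hypothetical names, not the project's linked list implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a list that keeps an index to the last accessed
// element; removing an element at or before that index shifts the index
// back, so resumed iteration neither skips nor repeats elements.
public class CursorList<T> {
    private final List<T> items = new ArrayList<>();
    private int cursor = -1; // index of the last element returned

    public void add(T t) { items.add(t); }
    public boolean hasNext() { return cursor + 1 < items.size(); }
    public T next() { return items.get(++cursor); }
    public void reset() { cursor = -1; }

    // Remove an element; if it sits at or before the cursor, decrement the
    // cursor so the next unchecked element moves into the vacated slot.
    public void remove(T t) {
        int i = items.indexOf(t);
        if (i < 0) return;
        items.remove(i);
        if (i <= cursor) cursor--;
    }
}
```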

Two nested loops iterate over the \code{trianglesLeft} and \code{trianglesRight} lists, with \code{trianglesLeft} in the outer loop, until a pair satisfies the local minimality criteria\footnote{See Triangulation.minimalityMetrics.AbstractMinimalityMetric.localMinimumExistsForEdge()}. Once the iteration over \code{trianglesLeft} has passed the end of the list, there is no quadrilateral for which the edge is locally minimal; hence the edge can be removed from the candidate edges. Once the iteration over \code{trianglesRight} has reached the end, the inner loop terminates, causing the outer loop to access the next triangle of \code{trianglesLeft}.
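The nested search can be sketched as follows, with the local minimality test abstracted into a predicate and without the cursor bookkeeping described above (all names hypothetical):

```java
import java.util.List;
import java.util.function.BiPredicate;

// Sketch of the nested search: find a pair of a left and a right triangle
// for which the edge is locally minimal. Returning false means the outer
// loop exhausted trianglesLeft, so the edge can be removed.
public class LocalMinimalitySearch {
    public static <T> boolean localMinimumExists(List<T> trianglesLeft,
                                                 List<T> trianglesRight,
                                                 BiPredicate<T, T> locallyMinimal) {
        for (T left : trianglesLeft) {          // outer loop: left triangles
            for (T right : trianglesRight) {    // inner loop: right triangles
                if (locallyMinimal.test(left, right)) {
                    return true;                // certificate found; edge stays
                }
            }
        }
        return false;                           // no pair found; remove edge
    }
}
```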

The linked list used for \code{trianglesLeft} and \code{trianglesRight} is designed to allow other parts of the algorithm to remove triangles without causing problems for the aforementioned iteration. Since the next element is returned using the list's last accessed index, the list may reach its end as a consequence of removals alone. This case is handled transparently by the nested loops, since the list will simply return \code{false} for \code{hasNext()}. If this happens for \code{trianglesLeft}, the outer loop terminates, causing the edge to be removed from the candidate edges. In the case of \code{trianglesRight}, the inner loop terminates and the outer loop advances by accessing the next of the left-hand triangles.

If the last accessed triangle of \code{trianglesLeft} is removed, the iteration over \code{trianglesRight} has to be moved back to the head of the list in order for the full set of possible pairings to be tested\footnote{See datatypes.Edge.removeTriangleFromAdjacentTriangles()}.


\begin{figure}[ht]
\includegraphics[width=13cm]{images/trianglesLeftRight.pdf}
\caption{Incident Triangles Lists}
\end{figure}

\subsubsection{Edge List}
Edges that do not intersect any other candidate edge are members of a minimum weight triangulation; hence they can be removed from the candidate edges and added to the triangulation. Triangles with an edge that has been removed and added to an MWT have to remain in the FaceSet, since they may still be relevant in determining whether another edge is locally minimal.

The remaining candidate edge pairs have to be checked for intersection. However, the intersection test does not have to be performed more than once per pairing. Again, for each edge $e$, the other candidate edges $e_{cand}$ are tested for intersection until either an intersecting edge has been found, or all other edges have been investigated without finding one. If an intersection has been found, the edge in question cannot be added to the MWT yet. The intersecting edge may be removed in a later iteration of the LMT algorithm; therefore it makes sense to test this particular pairing again as the first pair in the next iteration. If the intersecting candidate edge $e_{cand}^i$ is still a member of the candidate edges, the intersection test for $e$ with $e_{cand}^i$ terminates in constant time.
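The pairwise check relies on a segment intersection predicate; a standard orientation-based sketch (hypothetical names, not the project's implementation) looks like this:

```java
// Sketch: orientation-based test for proper intersection of two segments,
// as needed when checking candidate edges against each other.
public class SegmentIntersection {
    static int orientation(double ax, double ay, double bx, double by,
                           double cx, double cy) {
        double d = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
        return d > 0 ? 1 : d < 0 ? -1 : 0;
    }

    // True if segments p1-p2 and q1-q2 cross in their interiors. Segments
    // that merely share an endpoint do not count as intersecting here.
    public static boolean intersectsProperly(double p1x, double p1y,
                                             double p2x, double p2y,
                                             double q1x, double q1y,
                                             double q2x, double q2y) {
        int o1 = orientation(p1x, p1y, p2x, p2y, q1x, q1y);
        int o2 = orientation(p1x, p1y, p2x, p2y, q2x, q2y);
        int o3 = orientation(q1x, q1y, q2x, q2y, p1x, p1y);
        int o4 = orientation(q1x, q1y, q2x, q2y, p2x, p2y);
        return o1 * o2 < 0 && o3 * o4 < 0;
    }
}
```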

Since all candidate edges are held in a linked list inside the FaceSet, a pointer \code{firstEdgeToCheckForIntersection} is maintained to the first intersecting edge found for each edge\footnote{See datatypes.list.NodeImpl.setFirstEdgeToCheckForIntersection(). datatypes.Edge extends NodeImpl.
\lstinputlisting
[numbers=none,
firstline=88,
firstnumber=88,
lastline=93]{../src/datatypes/list/NodeImpl.java}}
. That way the algorithm can pick up and continue testing for intersection using the first intersecting edge without the need to re-test any of the previously investigated non-intersecting edges. If a \code{firstEdgeToCheckForIntersection} is deleted due to not being locally minimal, the \code{firstEdgeToCheckForIntersection} is set to the next untested edge. In order to do so, each edge $e$ maintains a list of \code{stakeholders} containing references to all edges $e_{stakeholder}$ that point to $e$ for their first edge to check for intersection. If $e$ is removed from the candidate edges, all stakeholder edges are notified to set the \code{firstEdgeToCheckForIntersection} to the next untested edge\footnote{See datatypes.list.NodeImpl.remove(). datatypes.Edge extends NodeImpl.
\lstinputlisting
[numbers=none,
firstline=33,
firstnumber=33,
lastline=34]{../src/datatypes/list/NodeImpl.java}
\vspace{-4mm}
\lstinputlisting
[numbers=none,
firstline=38,
firstnumber=38,
lastline=46]{../src/datatypes/list/NodeImpl.java}}. If all edges not yet tested for intersection against a given edge $e$ have been removed, \code{firstEdgeToCheckForIntersection} is null; the intersection test then determines that $e$ is not intersected and adds it to the LMT skeleton.

\subsubsection{KD Tree}
Large parts of this project's implementation depend on efficiently locating points. Candidate triangles as well as the triangles for the diamond test have to be tested for emptiness. In order to avoid having to test all points for each triangle, a kd-tree has been used for this project as the data structure for the set of points. Worst-case time performance of an orthogonal range search in a kd-tree is $O(\sqrt{n} + k)$, with $k$ being the number of reported points. Storage complexity is $O(n)$ \ \cite{deBerg2008}. Other data structures such as range trees or cutting trees allow for faster range searches, but the project team was able to re-use an existing kd-tree implementation from a previous project. The query time is better than $O(\sqrt{n} + k)$ if the query range is relatively small and does not intersect too many of the ranges of the kd-tree's nodes. Integrating the kd-tree has resulted in a significant speedup in building the FaceSet data structure\footnote{It would be interesting to find out how much of a difference using a range tree would make.}.

We have experimented with the kd-tree search algorithm in order to improve performance. One experiment was to use triangular query ranges instead of rectangular ones. This allowed the use of a query method that returns upon finding the first point within the range rather than reporting the whole set of points within it. The idea was that the query can terminate as soon as the first match is found rather than continuing to collect all matching points. Disappointingly, performance improved only marginally. Instead, a rectangular query range is used, and every point within that range is checked for containment in the triangle. The query terminates as soon as the first point inside the triangle is found, returning true immediately; the return value is false if the triangle is empty.
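The resulting emptiness test can be sketched as follows, with a plain list and a bounding-box filter standing in for the kd-tree range query (hypothetical names):

```java
import java.util.List;

// Sketch of the early-exit emptiness test: filter points by the
// triangle's bounding box (the stand-in for the kd-tree range query),
// then return as soon as one point lies strictly inside the triangle.
public class EmptinessTest {
    public static boolean triangleContainsPoint(
            double ax, double ay, double bx, double by, double cx, double cy,
            List<double[]> points) {
        double minX = Math.min(ax, Math.min(bx, cx));
        double maxX = Math.max(ax, Math.max(bx, cx));
        double minY = Math.min(ay, Math.min(by, cy));
        double maxY = Math.max(ay, Math.max(by, cy));
        for (double[] p : points) {
            // bounding-box rejection plays the role of the range query
            if (p[0] < minX || p[0] > maxX || p[1] < minY || p[1] > maxY) continue;
            double d1 = (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax);
            double d2 = (cx - bx) * (p[1] - by) - (cy - by) * (p[0] - bx);
            double d3 = (ax - cx) * (p[1] - cy) - (ay - cy) * (p[0] - cx);
            if ((d1 > 0 && d2 > 0 && d3 > 0) || (d1 < 0 && d2 < 0 && d3 < 0)) {
                return true; // early exit on the first interior point
            }
        }
        return false; // triangle is empty
    }
}
```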

\begin{figure}[h]
\includegraphics[width=10cm]{images/kdSearchRangePretest.pdf}
\caption{Rectangular Range Query With Pretest: First, points are tested within a smaller pretest range. Only if none of these points lies inside the triangle does the full query range have to be used.}
\end{figure}

Another experiment was to adjust the rectangular range query to incorporate a pretest. Instead of using a rectangular range encompassing the full triangle, a smaller query range that lies mostly within the triangle is used initially. If there is a point within that smaller pretest range, the chance is very high that it is also inside the triangle. Only if the pretest does not find a match does the full range query encompassing the triangle have to be executed. The pretest as implemented has approximately doubled the query speed for random point sets. The pretest range used is the axis-aligned rectangle spanned by the vertex not adjacent to the longest edge of the triangle and the midpoint of the longest edge. This does not seem to be ideal, since this range is not necessarily fully within the triangle and may even degenerate to a zero-width or zero-height rectangle. As an alternative, the largest square inside the triangle's incircle has been tried, but calculating this range was computationally too expensive, negating any query time gains.
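The construction of the pretest rectangle described above can be sketched as follows (hypothetical names, not the project's code):

```java
// Sketch: the pretest range is the axis-aligned rectangle spanned by the
// vertex opposite the triangle's longest edge and the midpoint of that
// edge, returned as {minX, minY, maxX, maxY}.
public class PretestRange {
    public static double[] pretestRectangle(double ax, double ay,
                                            double bx, double by,
                                            double cx, double cy) {
        double ab = sq(ax - bx) + sq(ay - by); // squared edge lengths
        double bc = sq(bx - cx) + sq(by - cy);
        double ca = sq(cx - ax) + sq(cy - ay);
        double mx, my, ox, oy; // midpoint of longest edge, opposite vertex
        if (ab >= bc && ab >= ca) { mx = (ax + bx) / 2; my = (ay + by) / 2; ox = cx; oy = cy; }
        else if (bc >= ca)        { mx = (bx + cx) / 2; my = (by + cy) / 2; ox = ax; oy = ay; }
        else                      { mx = (cx + ax) / 2; my = (cy + ay) / 2; ox = bx; oy = by; }
        return new double[] { Math.min(mx, ox), Math.min(my, oy),
                              Math.max(mx, ox), Math.max(my, oy) };
    }

    private static double sq(double d) { return d * d; }
}
```

Note how the rectangle degenerates when the opposite vertex and the midpoint share an $x$ or $y$ coordinate, matching the limitation discussed above.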



\begin{figure}[h]
\hspace{-2cm}
\includegraphics[width=18cm]{images/MWT_benchmarks_graph.pdf}
\caption{Benchmark Results for LMT (shortest edges): 2x 2.8 GHz Intel Xeon E5462, MacOS X 10.6.8. The version used for the presentation was r111.}
\end{figure}

\subsection{Concurrency}
Three parts of the LMT algorithm have been shown experimentally to be computationally expensive\footnote{For an overall view of the execution times, we inserted code that samples timestamps before and after the relevant sections of the algorithm. In order to isolate functions that are promising candidates for optimization, we used the NetBeans profiler.}: generating the candidate edges, testing for local minimality, and testing for intersection. Edge generation and intersection testing have both been adapted for concurrent execution in order to take advantage of modern multi-core processors. Testing for local minimality is still executed in a single thread, because parallelizing this part of the algorithm would have required extensive locking, potentially degrading performance significantly\footnote{We did not verify this assumption experimentally.}.

To avoid the high cost of creating new threads in Java, a cached thread pool provided by the \code{java.util.concurrent.Executors} class has been used. Instead of terminating and garbage collecting finished threads, this thread pool keeps idle threads alive for re-use, thus avoiding the cost of creating a new thread for each task.
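A minimal sketch of the thread pool usage follows; the placeholder task and class name are ours, not the project's:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

// Sketch: a cached thread pool re-uses idle worker threads instead of
// creating a new thread per submitted task.
public class PoolDemo {
    public static int runTasks(int n) {
        ExecutorService pool = Executors.newCachedThreadPool();
        int sum = 0;
        try {
            List<Future<Integer>> futures = new ArrayList<>();
            for (int i = 0; i < n; i++) {
                final int id = i;
                futures.add(pool.submit(() -> id)); // placeholder task
            }
            for (Future<Integer> f : futures) sum += f.get();
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown(); // let idle workers terminate
        }
        return sum;
    }
}
```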

For intersection testing, the list of candidate edges to be tested is divided evenly according to the number of available processor cores. Each thread then tests all edges of its assigned set against all other candidate edges. Removing non-intersecting edges from the candidate edges directly would have required a locking scheme. Instead, the non-intersecting edges are collected into lists during the parallel part of the algorithm and removed once all threads have finished\footnote{This approach also reflects the way the doubly linked list of edges is designed. In order to allow parallel threads to access blocks of edges, the list is indexed on the first access to an edge by index. This indexing takes $O(n)$ time since the list must be fully traversed. Subsequent indexed accesses take constant time until the list is mutated. Postponing the deletion of edges therefore keeps indexed list access very fast.}. This avoids costly synchronization and improves efficiency considerably.
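The deferred-removal scheme can be sketched as follows, with integers standing in for edges and a trivial stand-in for the per-edge test (all names hypothetical):

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of deferred removal: the candidate list is split into one block
// per core, each task collects its removals into a private list, and the
// shared list is only mutated after all tasks have finished.
public class ParallelFilter {
    public static List<Integer> filterParallel(List<Integer> candidates) {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        try {
            int block = (candidates.size() + cores - 1) / cores;
            List<Callable<List<Integer>>> tasks = new ArrayList<>();
            for (int start = 0; start < candidates.size(); start += block) {
                List<Integer> slice = candidates.subList(start,
                        Math.min(start + block, candidates.size()));
                // each task only reads the shared list, writing to its own
                tasks.add(() -> {
                    List<Integer> toRemove = new ArrayList<>();
                    for (int e : slice) {
                        if (e % 2 != 0) toRemove.add(e); // stand-in test
                    }
                    return toRemove;
                });
            }
            List<Integer> removals = new ArrayList<>();
            for (Future<List<Integer>> f : pool.invokeAll(tasks)) {
                removals.addAll(f.get());
            }
            List<Integer> result = new ArrayList<>(candidates);
            result.removeAll(removals); // mutation after all threads finished
            return result;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```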

For edge generation, no locking mechanism was necessary, since there is only read access to the concurrently used points list. Each thread generates its own internal list of edges; these lists are concatenated upon termination of the concurrent tasks.
