\section{Evaluation}
\label{sec:evaluation}
Our evaluation has two aims: (1) to compare the performance of our
implementation against the state-of-the-art XML database BaseX on a
single computer, and (2) to explore the scalability of our
implementation for processing very large XML documents in parallel on
multiple computers.

\subsubsection{Datasets and XPath Queries} 

Table~\ref{tab:datasets} shows statistics of the XML datasets used in the
experiments. For the experiments using a single EC2 instance, two XML datasets
are used: DBLP and xmark100 (generated with factor 100\footnote{The factor
determines the file size of an XMark-generated document and is nearly linear in
it: factor 1 yields about 110\,MB, so xmark100 (factor 100) is about 11\,GB,
while xmark2000 (factor 2000) is about 220\,GB.}). For the experiments using
multiple EC2 instances, xmark2000 (with factor 2000) and UniProtKB are used. The
UniProtKB dataset is well balanced and its root has a large number of children,
while the XMark datasets are not well balanced, since their roots have only six
children. Table~\ref{tab:exp1_queries} shows the queries: XQ1 and UQ1 are long
queries with nested predicates; XQ2, DQ1, DQ2, UQ2, UQ4 and UQ5 are queries with
backward axes; the rest are order-aware queries.
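Order-aware queries such as XQ3 depend on document order through positional predicates, and any standard XPath 1.0 engine can evaluate them on small inputs. The following is a minimal sketch in Java using the standard \texttt{javax.xml.xpath} API; the tiny inline document is our own stand-in for XMark data, not part of the benchmark:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class Xq3Demo {
    // Evaluate XQ3 on a tiny stand-in for an XMark document and
    // return the number of selected <increase> nodes.
    static int countXq3() throws Exception {
        String xml = "<site><open_auctions><open_auction>"
            + "<bidder><increase>1.50</increase></bidder>"
            + "<bidder><increase>3.00</increase></bidder>"
            + "</open_auction></open_auctions></site>";
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        // bidder[1] keeps only the first bidder of each auction, in document order.
        NodeList hits = (NodeList) XPathFactory.newInstance().newXPath().evaluate(
            "/site/open_auctions/open_auction/bidder[1]/increase",
            doc, XPathConstants.NODESET);
        return hits.getLength();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(countXq3()); // prints 1: only the first bidder matches
    }
}
```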

		
		
\begin{table}
	\centering
	\small
	\caption{Statistics of the XML datasets.}
	\label{tab:datasets}
	\begin{tabular}{c|c|c|c|c}
		\hline
		Datasets & dblp & xmark100 & xmark2000 & uniprot \\
		\hline \hline
		Nodes & 43.13M & 163.1M & 3.26B & 7.89B \\
		\hline
		Attributes & 10.89M & 42.26M & 845M & 9.25B \\
		\hline
		Values & 39.64M & 67.25M & 1.34B & 1.49B \\
		\hline
		Total & 93.66M & 272.67M & 5.45B & 18.64B \\
		\hline
		\# of distinct tags & 47 & 77 & 77 & 82 \\
		\hline
		Depth & 6 & 13 & 13 & 7 \\
		\hline
		File size (GB) & 1.78 & 10.95 & 220 & 358 \\
		\hline
	\end{tabular}
	\vspace{10px}
	\caption{Queries used in the experiments.}
	\label{tab:exp1_queries}
	\begin{tabular}{c|c|l}
		\hline \hline
		Name & Dataset & Query  \\
		\hline
		XQ1 & xmark & /site/closed\_auctions/closed\_auction[annotation/ \\
		&&description[text/keyword]]\\
		\hline
		XQ2 & xmark & /site//keyword/ancestor::mail \\
		\hline
		XQ3 & xmark & /site/open\_auctions/open\_auction  \\
		&&/bidder[1]/increase\\
		\hline
		XQ4 & xmark & /site/people/person/name/following-sibling::emailaddress \\
		\hline
		XQ5 & xmark & /site/open\_auctions/open\_auction[bidder\\
		&&/following-sibling::bidder]/reserve\\
		\hline
		DQ1 & dblp & /dblp//i/parent::title\\
		\hline
		DQ2 & dblp & //author/ancestor::article \\
		\hline
		DQ3 & dblp & /dblp//author/following-sibling::author \\
		\hline
		DQ4 & dblp & //author[following-sibling::author] \\
		\hline
		DQ5 & dblp & /dblp/article/title/sub/sup/i/following::author \\
		\hline
		UQ1 & uniprot & /entry[comment/text]/reference[citation \\
		&&/authorList[person]]//person\\
		\hline
		UQ2 & uniprot & /entry//fullName/parent::recommendedName \\
		\hline
		UQ3 & uniprot & /entry//fullName/following::gene \\
		\hline
		UQ4 & uniprot & //begin/ancestor::entry\\
		\hline
		UQ5 & uniprot & //begin/parent::location/parent::feature/parent::entry \\
		\hline
	\end{tabular}
\end{table}

\begin{figure}
	\centering
	\includegraphics[width=.8\linewidth]{figures/parsing1.png}
	\caption{Memory consumption and parsing time on the dblp and xmark100 datasets}
	\label{fig:parsingTime}
\end{figure}

\begin{table}
	\centering
	\caption{Evaluation on 32 EC2 instances}
	\label{tab:multieval}
	\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c}
		\hline \hline
		Dataset	&	\multicolumn{5}{|c|}{xmark2000}     & \multicolumn{5}{c}{uniprot}       \\
		\hline
		Loading (s)	&	\multicolumn{5}{|c|}{210}     & \multicolumn{5}{c}{379}       \\
		\hline
		Memory (GB)	&	\multicolumn{5}{|c|}{173}     & \multicolumn{5}{c}{560}       \\
		\hline
		Query	& XQ1      & XQ2     & XQ3      & XQ4      & XQ5      & UQ1      & UQ2      & UQ3      & UQ4      & UQ5      \\
		\hline
		Time (ms) & 5,951 & 819 & 1,710 & 1,168 & 3,349 & 2,573 & 2,408 & 1,324 & 5,909 & 6,220\\
		\hline
	\end{tabular}
\end{table} 


\subsubsection{Experiment Configuration} 

We used 32 m3.2xlarge instances on Amazon EC2, each equipped with an
Intel Xeon E5-2670 v2 (Ivy Bridge) processor, 30\,GB of memory and
$2 \times 80$\,GB of SSD storage, running Amazon Linux AMI 2016.09.0.
Our prototype was implemented in Java 1.6, running on a 64-bit JVM (build 25.91-b14).
 
\subsubsection{Comparison with BaseX}
For comparison, we selected BaseX 8.5.3 (released on August 15, 2016)
as the baseline. It was tested under two configurations: one keeps all
the XML data and index sets in memory (\emph{BXon}), while the other
keeps only the index sets in memory (\emph{BXoff}). To eliminate the
influence of output printing time, we wrap each query in a count
function, e.g.\ ``count(XQ1)'', and take the time for evaluating this
wrapped query as the execution time. We also set the option INTPARSE
to true, so that BaseX uses its internal XML parser, which is faster,
more fault tolerant, and supports common HTML entities out of the box.
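The count-wrapping trick is independent of BaseX: any XPath engine can evaluate \texttt{count(Q)} as a single number, so only the evaluation itself is timed. A minimal sketch in Java with the standard \texttt{javax.xml.xpath} API, using a tiny inline document as a hypothetical stand-in for dblp:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class CountWrapDemo {
    // Evaluate count(DQ1) so that only a single number is produced,
    // keeping result-printing time out of the measurement.
    static double countDq1() throws Exception {
        String xml = "<dblp>"
            + "<article><title>Plain title</title></article>"
            + "<article><title>With <i>italic</i> text</title></article>"
            + "</dblp>";
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        long start = System.nanoTime();
        Double n = (Double) XPathFactory.newInstance().newXPath().evaluate(
            "count(/dblp//i/parent::title)", doc, XPathConstants.NUMBER);
        long elapsedNs = System.nanoTime() - start;  // execution time of count(DQ1)
        System.out.println(n.longValue() + " matches in " + elapsedNs + " ns");
        return n;
    }

    public static void main(String[] args) throws Exception {
        countDq1();
    }
}
```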

\subsubsection{Evaluating Queries on 1 EC2 Instance}
We first conducted an experiment on a single EC2 instance to compare the
querying performance with BaseX, using the queries listed in
Table~\ref{tab:exp1_queries} on xmark100 and dblp. The experimental results,
shown in Figure~\ref{fig:allresults}, clearly demonstrate that our approach
outperforms BXon and BXoff in most tests, XQ4 being the exception. In most
cases, our implementation is 2--6 times faster than BaseX; in an extreme case
(DQ1), it is over 100 times faster. The reason is that we group nodes with the
same tag name, which avoids evaluating unnecessary nodes, and the parent-child
relationship can be determined in constant time. We also notice that BXon is
2--3 times faster than BXoff, which we attribute simply to BXon loading all
data into memory, as can be seen from Figure~\ref{fig:parsingTime}~(1). The
figure also shows that BXon consumes more memory than our approach on all
datasets; even after deducting the size of the original XML datasets, its
consumption still exceeds ours. As for the parsing time,
Figure~\ref{fig:parsingTime}~(2) shows that our approach is nearly twice as
fast as BXon and four times as fast as BXoff on both datasets.
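The tag-grouping idea behind the DQ1 speedup can be sketched as follows. This is our own simplification for illustration, not the prototype's actual data structures: nodes are stored in preorder, each carrying its parent's id, and node ids are grouped by tag name, so a step like \texttt{//i/parent::title} inspects only the \texttt{<i>} nodes, and each parent test is a single array lookup.

```java
import java.util.*;

public class TagIndexSketch {
    final String[] tagOf;   // tagOf[n]    = tag name of node n (preorder id n)
    final int[] parentOf;   // parentOf[n] = preorder id of n's parent (-1 for root)
    final Map<String, int[]> byTag = new HashMap<>();  // tag -> ids of nodes with that tag

    TagIndexSketch(String[] tagOf, int[] parentOf) {
        this.tagOf = tagOf;
        this.parentOf = parentOf;
        Map<String, List<Integer>> tmp = new HashMap<>();
        for (int n = 0; n < tagOf.length; n++)
            tmp.computeIfAbsent(tagOf[n], k -> new ArrayList<>()).add(n);
        tmp.forEach((t, ids) ->
            byTag.put(t, ids.stream().mapToInt(Integer::intValue).toArray()));
    }

    // Evaluate the step "parent::wanted" from every node tagged `from`:
    // only nodes in the `from` group are visited, and each parent
    // lookup is a constant-time array access.
    List<Integer> parentStep(String from, String wanted) {
        List<Integer> out = new ArrayList<>();
        for (int n : byTag.getOrDefault(from, new int[0])) {
            int p = parentOf[n];              // O(1) parent lookup
            if (p >= 0 && tagOf[p].equals(wanted)) out.add(p);
        }
        return out;
    }

    public static void main(String[] args) {
        // /dblp/article/title/i in preorder: 0=dblp, 1=article, 2=title, 3=i
        TagIndexSketch idx = new TagIndexSketch(
            new String[] {"dblp", "article", "title", "i"},
            new int[] {-1, 0, 1, 2});
        System.out.println(idx.parentStep("i", "title")); // prints [2]
    }
}
```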

\subsubsection{Evaluating Queries on 32 EC2 Instances}

In this experiment, we investigated the querying performance on very large XML
documents using 32 EC2 instances, with UniProtKB and xmark2000 (with factor
2000) as the experiment data. The results are shown in
Table~\ref{tab:multieval}. With 31 bytes per node in our design, the memory
consumption should be 157\,GB and 537\,GB for the 5.45 billion and 18.64
billion nodes, respectively. The measured consumptions are 173\,GB and 560\,GB,
close to these estimates; we believe the overheads result from intermediate
data generated during the parsing phase. The loading times in
Table~\ref{tab:multieval} are relatively short considering the very large data
sizes. Query evaluation is also fast: XQ1 to XQ5, for example, each took only a
few seconds, and the throughput of most queries is about 1\,GB/s. One related
study, PP-Transducer~\cite{OgTP13}, achieved a throughput of up to 2.5\,GB/s
with 64 cores. Although it is faster, our approach supports a more expressive
class of queries: PP-Transducer does not support order-aware queries.
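The memory estimates are simple arithmetic over the total node counts in Table~\ref{tab:datasets}, taking a GB here to mean $2^{30}$ bytes; the small remaining gap to the stated figures is rounding:
\begin{align*}
5.45 \times 10^{9} \times 31\,\mathrm{B} &\approx 1.69 \times 10^{11}\,\mathrm{B} \approx 157\,\mathrm{GB} \quad \text{(xmark2000)},\\
1.864 \times 10^{10} \times 31\,\mathrm{B} &\approx 5.78 \times 10^{11}\,\mathrm{B} \approx 538\,\mathrm{GB} \quad \text{(UniProtKB)}.
\end{align*}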

\begin{figure}[t]
	\centering
	\includegraphics[width=.99\linewidth]{figures/query.png}
	\caption{Execution time of queries on the xmark100 and dblp datasets}
	\label{fig:allresults}
\end{figure}
















