%!TEX root = main.tex

\section{Experimental Evaluation}
\label{sec:exp}

We present experiments demonstrating the effectiveness of our techniques for hidden attribute extraction and 
alignment (Section~\ref{sec:eval-metadata}), followed by experiments on labeling the extracted attribute 
columns (Section~\ref{sec:col-label}).

\subsection{Data Set} 

We obtained a corpus of $130$ million Web tables that were filtered from a collection of $14$ billion raw 
HTML tables crawled from the Web. From this corpus, we performed simple stitchable table identification by 
grouping the tables based on their sites and the automatically detected header rows. For the experiments, we 
sampled $20$ large groups, each of which has more than $1000$ tables, from $10$ different
websites\footnote{Examples: www.century21.com, www.britishcycling.org.uk.}. 
For each group, we further sampled $10$ individual tables for evaluation, which is conducted against golden 
attribute values obtained from human evaluators judging from the table context.

\subsection{Hidden Attribute Extraction and Alignment}
\label{sec:eval-metadata}

In this section, we investigate the quality of hidden attribute extraction and alignment. We perform the 
quality analysis from two perspectives, cell-wise accuracy and column-wise accuracy, where the former 
evaluates the extraction while the latter evaluates the alignment. We also empirically show that the different 
segmentation heuristics are complementary and work best in concert. 

\smallskip
\noindent
{\bf Methodology:} We evaluate our approach by comparing different combinations of candidate segments as 
described in Section~\ref{sec:segment}. The parameters in Eq.~\ref{eq:pen} are determined via a grid search 
where each parameter varies from $0.1$ to $1.0$ in increments of $0.1$. We report the performance numbers 
via a leave-one-out experiment.
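The tuning procedure above can be sketched as follows. This is a minimal illustration only: the number of parameters (two here) and the {\tt score\_fn} interface are assumptions standing in for the actual alignment objective of Eq.~\ref{eq:pen}.

```python
from itertools import product


def grid_search_loo(groups, score_fn):
    """Leave-one-out tuning of the penalty parameters.

    `groups` is a list of labeled table groups and `score_fn(params, fit, ev)`
    returns an F1 score; both are hypothetical stand-ins for the alignment
    objective described in the paper. Two parameters are assumed here.
    """
    grid = [round(0.1 * i, 1) for i in range(1, 11)]  # 0.1, 0.2, ..., 1.0
    scores = []
    for i in range(len(groups)):
        train = groups[:i] + groups[i + 1:]   # all groups but the held-out one
        # pick the parameter combination maximizing F1 on the training groups
        best = max(product(grid, repeat=2),
                   key=lambda p: score_fn(p, train, train))
        scores.append(score_fn(best, train, [groups[i]]))  # score on held-out
    return sum(scores) / len(scores)          # averaged leave-one-out F1
```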

\smallskip
\noindent
{\bf Evaluation Metrics:} For cell-wise accuracy, we evaluate how accurate the identified segments are. 
Adopting the standard Precision/Recall/F1 measures,  we deem a predicted segment as a true positive (TP) if 
the prediction matches a labeled segment and a false positive (FP) otherwise. A false negative (FN) is a 
labeled segment that none of the predictions matches. Precision and recall are computed using
$\frac{\#TP}{\#TP+\#FP}$ and $\frac{\#TP}{\#TP+\#FN}$, respectively, and F1 is the harmonic mean of the two. 
The same metrics apply to column-wise accuracy evaluation except that we only deem a column correct when all 
the segments (across rows from different individual tables) from a column are correct.
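As a concrete sketch of the cell-wise metrics, the following computes Precision/Recall/F1 from predicted and labeled segment sets; the {\tt (row, start, end)} tuple encoding is an illustrative assumption, not the system's internal representation.

```python
def segment_prf(predicted, labeled):
    """Cell-wise Precision/Recall/F1 over segments.

    A predicted segment is a true positive iff it exactly matches a labeled
    segment; unmatched predictions are false positives and unmatched labels
    are false negatives. Segments are encoded as (row, start, end) tuples
    purely for illustration.
    """
    pred, gold = set(predicted), set(labeled)
    tp = len(pred & gold)   # predictions matching a labeled segment
    fp = len(pred - gold)   # predictions matching no labeled segment
    fn = len(gold - pred)   # labeled segments that no prediction matches
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```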

\subsubsection{Experimental Results}

{\em Cell-wise:} The cell-wise performance is reported in Table~\ref{tbl:msa}. As expected, alignment with 
candidate segments generated by a single heuristic ({\tt SEP} or {\tt LCS}) identifies hidden attributes with 
moderate accuracy, but its coverage leaves much to be desired. The best strategy combines all 
three segmentation heuristics, {\tt SEP+LCS+WK}, which achieves an F1 $15\%$ higher than the second best, 
{\tt LCS+WK}. This large F1 gain is obtained by significantly improving the recall: 
{\tt SEP+LCS+WK} has over $35\%$ higher recall than the other strategies. This is consistent with our 
expectation because leveraging all available heuristics brings in a large and diverse set of segmentation 
candidates.

\begin{table}[ht]
\begin{center}
\begin{tabular}{c|c|c|c}
\hline
Segments & Precision & Recall & F1 \\ \hline
SEP & 0.458 & 0.260 & 0.332 \\ 
LCS & 0.630 & 0.478 & 0.543 \\
SEP+LCS & 0.551 & 0.484 & 0.516 \\
LCS+WK & 0.650 & 0.516 & 0.575 \\ 
SEP+LCS+WK & 0.627 & 0.703 & 0.663 \\
\hline
\end{tabular}
\caption{\label{tbl:msa}\small Performance on hidden attribute extraction with different combinations of segmentation heuristics: {\tt SEP} is the separator-based heuristic in Section~\ref{sec:segment}, {\tt LCS} is the Longest Common Subsequence heuristic, and {\tt WK} refers to the wikification-based heuristic.}
\end{center}
\end{table}

More importantly, we note that the candidate segments from syntactic ({\tt SEP} and {\tt LCS}) and semantic 
({\tt WK}) heuristics are complementary. Comparing {\tt SEP+LCS} against {\tt SEP+LCS+WK} or {\tt LCS} 
against {\tt LCS+WK}, both precision and recall are improved thanks to the addition of the Wikification-based 
segments. The syntactic heuristics work well in many cases. For instance, assume two sequences ``Location: 
Seattle, WA'' and ``Location: Portland, OR''. The segment ``Location:'' is identified as the common segment 
and therefore the segments ``Seattle, WA'' and ``Portland, OR'' are correctly identified as potential hidden 
attributes. However, in the example of ``Springfield High School'' and ``Jacksonville Elementary School'', 
the syntactic heuristics will naively recognize the token ``School'' as the common segment because they 
largely ignore the semantic meaning of the whole phrase, while Wikification-based segmentation will 
recognize ``Jacksonville Elementary School'' as a single entity. In particular, the Wikification heuristic 
prevents over-segmenting a phrase, and also helps segment large text chunks into meaningful phrases 
where punctuation is not available. For example, ``American Airlines (AA) \#1430'' is a single segment under 
the syntactic heuristics, while a semantic heuristic can break it up into the airline name and the flight number.  
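A minimal sketch of the token-level {\tt LCS} heuristic on the ``Location:'' example above; {\tt difflib.SequenceMatcher} stands in here for the actual longest-common-subsequence implementation.

```python
from difflib import SequenceMatcher


def lcs_segments(a, b):
    """Token-level sketch of the LCS heuristic.

    Token runs shared by both cell strings become common (template) segments,
    and the gaps between them become candidate hidden-attribute segments.
    `difflib.SequenceMatcher` stands in for a real LCS implementation.
    """
    ta, tb = a.split(), b.split()
    matcher = SequenceMatcher(None, ta, tb)
    common, gaps_a, gaps_b = [], [], []
    prev_a = prev_b = 0
    for ia, ib, size in matcher.get_matching_blocks():
        if ta[prev_a:ia]:                      # non-shared run in `a`
            gaps_a.append(" ".join(ta[prev_a:ia]))
        if tb[prev_b:ib]:                      # non-shared run in `b`
            gaps_b.append(" ".join(tb[prev_b:ib]))
        if size:                               # shared run
            common.append(" ".join(ta[ia:ia + size]))
        prev_a, prev_b = ia + size, ib + size
    return common, gaps_a, gaps_b
```

On the two example cells, the shared segment is ``Location:'' and the gaps ``Seattle, WA'' and ``Portland, OR'' surface as candidate hidden attributes.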

{\em Column-wise:} We further examine whether correct hidden attributes are created column-wise. The cell-wise 
evaluation shows the performance of sequence labeling, while the column-wise evaluation measures the quality 
of the entire alignment. We evaluated only the best strategy, {\tt SEP+LCS+WK}, using the same
Precision/Recall/F1 metrics (the other strategies are significantly worse). In summary, our method generated 
$62$ attribute columns across the $20$ table groups, $24$ of which are column-wise correct (i.e., every cell of 
the column matches the labels), and $31$ labeled columns were missed, resulting in an F1 measure of $0.41$. 
If we relax the correctness condition by allowing one wrong cell per column, the F1 measure improves 
to $0.547$. Looking at the table groups individually, $7$ out of the $20$ groups have a perfect match between the 
predictions and the human labels. We note that the content of columns generated incorrectly due to 
segmentation errors could still be useful in search scenarios.
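The strict and relaxed column-wise correctness criteria can be expressed as a simple check; comparing cells as plain strings is an assumption made for illustration.

```python
def column_correct(pred_cells, gold_cells, slack=0):
    """Column-wise correctness check.

    A column counts as correct only if at most `slack` of its cells disagree
    with the human labels: slack=0 is the strict criterion, slack=1 the
    relaxed one. Cells are compared as plain strings for illustration.
    """
    wrong = sum(p != g for p, g in zip(pred_cells, gold_cells))
    return wrong <= slack
```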

\subsection{Hidden Attribute Column Label Prediction}
\label{sec:col-label}

In this section, we examine the effectiveness of automatically labeling the extracted 
attribute columns with types, using a straightforward column labeler based on the cell values. Given an
{\tt isA} database as described in Section~\ref{sec:problem}, each cell value is matched to a list of zero or 
more types. A type is assigned to a column if and only if at least $t\%$ of its cell values have that type. 
Each predicted label is then manually marked as {\tt vital}, {\tt ok}, or {\tt incorrect} depending on how 
accurate and appropriate the label is for the column, as done in \cite{venetis2011recovering}. To 
compute precision, a predicted label is scored $1$ if it is marked as {\tt vital}, $0.5$ if marked as 
{\tt ok}, or $0$ otherwise. On average, there are five relevant labels for a given column.  Thus, for 
fairness, if there is no relevant label marked for a given column, we assume that there are five missed 
relevant labels for computing the recall.
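The thresholded labeling rule above can be sketched as follows. The {\tt isA} database is modeled here as a plain dict from cell value to a list of types, and $t$ is treated as a fraction rather than a percentage; both are toy assumptions for illustration.

```python
from collections import Counter


def label_column(cells, isa, t=0.5):
    """Threshold-based column labeler.

    A type is assigned to the column iff at least a fraction `t` of the cell
    values carry that type in the isA database, modeled here as a dict from
    cell value to a list of types -- a toy stand-in for the real database.
    """
    counts = Counter()
    for cell in cells:
        for typ in set(isa.get(cell, [])):  # de-duplicate types per cell
            counts[typ] += 1
    threshold = t * len(cells)
    return sorted(typ for typ, c in counts.items() if c >= threshold)
```

Sweeping {\tt t} over a range of values and scoring the resulting labels against the human judgments yields the precision-recall trade-off studied below.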

We vary the threshold $t$ from $0.05$ to $1$ in increments of $0.05$ to draw a precision-recall curve 
(shown in Figure~\ref{fig:col-label}). We observe performance comparable to~\cite{venetis2011recovering}: 
the precision ranges from $0.4$ to $1.0$ and the maximum recall is $0.89$. We are thus confident that the 
predicted labels provide a good overview of the hidden attribute columns.

\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.6]{pics/pr.pdf}
%\vspace{-0.1in}
\caption{\label{fig:col-label}\small Precision/Recall diagram of the label
predictions for the hidden attribute columns.}
%\vspace{-0.1in}
\end{center}
\end{figure}
