%!TEX root = main.tex

\section{Hidden Attribute Extraction}
\label{sec:metadata}

In this section, we focus on the main technical challenge of table stitching: extracting hidden attributes 
for the tables from unstructured context on the page. We discuss the sources of context
(Section~\ref{sec:source}) and the techniques for extracting the hidden attributes
(Section~\ref{sec:extract}). We also describe a few heuristics that provide the candidate segmentations 
crucial for the extraction (Section~\ref{sec:segment}).

\subsection{\bf Sources of Context}
\label{sec:source}

%Usually the identified candidate tables are not directly stitch-able, because critical meta-data (i.e., used in the selection conditions when generating the respective views/tables from the underlying big table) are described in the tables' context instead of the tables' content. We need to extract these meta-data from table context before merging the tables. 

The hidden attributes may be embedded in different sources of context. For example,

\begin{itemize}

\item {\bf Web page title}: A title is used to succinctly describe the
  main content of a Web page with necessary details. When the main
  content of a Web page is a table, its title is often useful for the
  extraction.

\item {\bf Text surrounding the table}: The description of a table
  often appears in text 
before the table\footnote{Descriptions occasionally appear after the table as well, but in initial 
experiments we empirically observed little useful text there.}. We can extract such text by looking 
for the closest text node to the table in the page's DOM tree.

\end{itemize}
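As an illustration (not part of the original system), the sibling-then-parent walk for locating the closest preceding text node can be sketched in Python; the {\tt Node} class below is a hypothetical stand-in for a real HTML parser's DOM node type:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Minimal stand-in for a DOM node (hypothetical; a real system
    would operate on an HTML parser's node objects)."""
    tag: str
    text: str = ""
    parent: "Node" = None
    children: list = field(default_factory=list)

    def add(self, child):
        child.parent = self
        self.children.append(child)
        return child

def prev_sibling(node):
    if node.parent is None:
        return None
    i = node.parent.children.index(node)
    return node.parent.children[i - 1] if i > 0 else None

def surrounding_text(table_node):
    """Walk backwards/upwards from the table node and return the text
    of the closest preceding text-bearing node, skipping tables,
    forms, and scripts."""
    p = table_node
    while p is not None:
        sib = prev_sibling(p)
        if sib is not None:
            if sib.tag not in ("table", "form", "script") and sib.text.strip():
                return sib.text.strip()
            p = sib          # keep scanning earlier siblings
        else:
            p = p.parent     # no earlier sibling: climb to the parent
    return None
```

The walk returns the first non-empty text it finds; skipping {\tt table}, {\tt form}, and {\tt script} nodes avoids picking up another table's content or page scripts as context.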

In addition, Web page URLs, HTML caption tags, and navigational menus on Web pages can also serve as 
useful sources of context, and our technique described below could similarly apply to those. As an initial 
study, we chose to focus on the title and surrounding text first, which are empirically the two most important 
sources.

\begin{comment}
\begin{algorithm}[ht]
\SetAlgoLined
\KwIn{Web page DOM Tree ${T}$, a node $n$ in $T$ that represents the Web Table DOM node}
\KwOut{The surrounding text $t$ of the Web table if any.}
initialization: $p = n$, $found=false$, $t = null$ \;
\While{not $found$}{
 \eIf{hasPrevSibling$(p)$}{
  \eIf{prevSibling$(p)$ != `table' \\\hspace{5mm} \emph{and} prevSibling$(p)$ != `form' \\\hspace{5mm} \emph{and} prevSibling$(p)$ != `script' \\ \hspace{5mm} \emph{and} prevSiblingText$(p)$ not empty}{
   t = prevSiblingText$(p)$\;
   found = true\;
  }
  {
   p = prevSibling$(p)$ \;
  }
 }
 {
  p = parent$(p)$\;
 }  
}
\Return{$t$}
\caption{\label{algo:surround}Surrounding Text Extraction}
\end{algorithm}
\end{comment}

\subsection{\bf Extraction by Segmentation and Alignment}
\label{sec:extract}

The context of a table is simply a piece of unstructured text, so the main challenge is to accurately 
identify the set of hidden attributes from those sequences and align them across all tables in the group so 
that the values are semantically coherent. Note that many attribute values are presented as phrases, rather 
than isolated tokens. For instance, the hidden attribute ``Benton County, AR'' carries its full semantic value 
only when all three tokens are present: individual tokens such as ``Benton'' or ``County'' are not useful, 
while the token ``AR'' represents too large an area.

We tackle this task as a sequence labeling problem~\cite{lafferty2001conditional} where the text is 
represented as a sequence of $n$ tokens $T=\langle t_1, \dots,
t_n\rangle$. If a segment (i.e., a consecutive 
subsequence of tokens) represents a meaningful phrase, it will be labeled as a useful attribute value. 
However, our problem differs from the traditional sequence labeling problem in that the extracted segments for 
different tables in the same group need to be aligned so that they can be filled into the implicit columns of 
the resulting union table.

Our solution is inspired by a similar problem in computational biology, namely finding common genetic motifs from 
different DNA sequences~\cite{gusfield1997algorithms}. To identify genetic mutations within a group of 
individuals, scientists compare those individuals' DNA sequences at the same time using the \emph{Multiple 
Sequence Alignment} (MSA) technique.  The context of our tables can be considered as the DNA sequences, and 
thus we can adapt the MSA technique to identify and align useful hidden attribute segments.

However, there are a few further challenges. First, MSA is not directly applicable. In DNA sequence 
alignment, the basic units for alignment are individual nucleobases, namely A, T, C, and G. In our case, the 
segments are the basic units, but we only have tokens as input. We have to solve the segmentation and the 
alignment problems holistically. Second, we do not have enough training data to apply any existing supervised 
segmentation method. It is extremely hard to manually label table context data from the whole Web since each 
site has different characteristics. Therefore, we turn to unsupervised methods. In particular, we design a 
suite of segmentation heuristics. Each heuristic captures some
characteristics of the hidden attributes, but may miss others. 
However, the heuristics work surprisingly well collectively. More 
concretely, to select the segments, we adopt the following strategy: if the segments produced by a particular 
heuristic tend to align across sequences, then that heuristic is more likely to be correct. We 
next introduce the segment-based multiple sequence alignment method. (The heuristics for 
generating candidate segments will be detailed in Section~\ref{sec:segment}.)

Before diving into the details of alignment of $n$ sequences, we first
look at the pairwise case when $n=2$ (Algorithm \ref{algo:psa}). We
have two sequences $T_1$ and $T_2$ (we abuse the symbol $T$ here to
also represent table context). Their candidate segments ($S_1$ and
$S_2$ respectively, where each element segment is represented by
begin and end token positions) are generated by a particular
heuristic. In addition, empty segments are added to allow null values
(i.e., gaps) for alignment. As in other dynamic programming
algorithms, we divide the whole problem into subproblems: how well
the segments from the two sequences align depends on how well the two
segments match {\em and} how well the subsequences immediately before
each segment are aligned. Let $|T_1| = n_1$ and $|T_2| = n_2$. We
maintain a 
chart $C$ of size $(n_1+1) \cdot (n_2+1)$, where each
chart entry $C(i,j)$\footnote{We use $C(i,j)$ to represent both the
  chart entry and the alignment score for that entry.} stores the
score for the best alignment between the subsequences $T_1^{1 \dots
  i}$ and $T_2^{1 \dots j}$, as well as the last aligned segment for
each subsequence. The algorithm runs two outer loops ranging from the
smallest subproblems to the final whole problem. At each subproblem
$(i,j)$, we enumerate all pairs of the candidate segments that end
with tokens $T_1^i$ and $T_2^j$, respectively. Note that $S_l^i$, $l \in
\{1,2\}$, is defined as the set of candidate segments from $S_l$ that end
at $T_l^i$. For each pair, a segment matching score is computed as
follows:
\begin{equation}
\label{eq:pen}
score(s_1,s_2) =
\begin{cases}
 \lambda_{h} & \text{if both $s_1$ and $s_2$ are generated} \\
             &  \text{     by the same heuristic $h$;} \\
 \lambda_{gap} & \text{if $s_1$ or $s_2$ is an empty segment;} \\
 0 & \text{otherwise.}
\end{cases}
\end{equation}
The sum of this segment matching score and the best alignment score of the immediately preceding subproblem is used to update the chart entry:

\begin{equation}
\label{eq:update}
C(i,j) \leftarrow \max(C(i,j),  score(s_1, s_2) + C(i-|s_1|, j-|s_2|))
\end{equation}

In the end, we can extract the aligned segments by tracing back from the chart entry $C(n_1, n_2)$.

\begin{algorithm}[ht]
\SetAlgoLined
\KwIn{Two sequences of tokens $T_1$ and $T_2$ of sizes $n_1$ and $n_2$,
 and two sets of candidate segments $S_1$ and $S_2$, respectively.}
\KwOut{The best alignment of segments in $T_1$ and $T_2$.}
%$<m_1^1, \cdots, m_1^M>$ and $<m_2^1, \cdots, m_2^M>$.}
Initialization: A chart $C$ of size $(n_1+1)\cdot (n_2+1)$. \\
\lFor{$i \leftarrow 0$ \KwTo $n_1$}{$C(i, 0) = i \times \lambda_{gap}$}\;
\lFor{$j \leftarrow 1$ \KwTo $n_2$}{$C(0, j) = j \times \lambda_{gap}$}\;
\For{$i \leftarrow 1$ \KwTo $n_1$, $j \leftarrow 1$ \KwTo $n_2$}{
 \For{$s_1 \in S_1^i$, $s_2 \in S_2^j$} {
  Update the chart at $C(i,j)$ according to Eq. \ref{eq:update} \;
  % newScore = penalty($s_1$, $s_2$) \\ \hspace{20mm} + $C(i-s_1.len, j-s_2.len)$ \;
  % \If{newScore $> C(i,j)$}{
  %  $C(i,j)$= (newScore, $s_1$, $s_2$) \; 
  % }
  }
}
\caption{\label{algo:psa}Pairwise Segment Alignment}
\end{algorithm}
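To make the chart updates concrete, the following Python sketch implements the dynamic program of Algorithm~\ref{algo:psa}. The heuristic weights $\lambda_h$ and gap penalty $\lambda_{gap}$ below are illustrative values only (the method leaves them as tuning parameters), and the traceback pointers needed to recover the alignment are omitted for brevity:

```python
# Illustrative weights: lambda_h per heuristic (Eq. 1) and a gap penalty.
# The actual values are tuning parameters, not fixed by the method.
LAMBDA_H = {"punct": 2.0, "lcs": 1.5, "wiki": 2.5}
LAMBDA_GAP = -0.5

def seg_score(h1, h2):
    """Eq. (1), non-gap cases: reward two segments produced by the
    same heuristic h; unrelated segments score 0."""
    return LAMBDA_H[h1] if h1 == h2 else 0.0

def pairwise_align(n1, n2, ends1, ends2):
    """Chart-based DP over candidate segments. ends_l maps a token
    position i to the candidate segments ending there, each encoded as
    (start_position - 1, heuristic); empty segments (gaps) are handled
    as explicit moves. Returns the filled chart C."""
    C = [[float("-inf")] * (n2 + 1) for _ in range(n1 + 1)]
    for i in range(n1 + 1):           # base cases: all-gap prefixes
        C[i][0] = i * LAMBDA_GAP
    for j in range(1, n2 + 1):
        C[0][j] = j * LAMBDA_GAP
    for i in range(1, n1 + 1):
        for j in range(1, n2 + 1):
            best = C[i][j]
            for p1, h1 in ends1.get(i, []):
                # align a segment ending at i with one ending at j (Eq. 2)
                for p2, h2 in ends2.get(j, []):
                    best = max(best, seg_score(h1, h2) + C[p1][p2])
                # or align the segment ending at i against a gap
                best = max(best, LAMBDA_GAP + C[p1][j])
            for p2, h2 in ends2.get(j, []):
                best = max(best, LAMBDA_GAP + C[i][p2])
            C[i][j] = best
    return C
```

Tracing back from {\tt C[n1][n2]}, as in the text, recovers the best-scoring sequence of aligned segments.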

In principle, we can  use dynamic programming in a similar fashion for
multiple ($n>2$) sequences to compute an optimal 
sum-of-pairs score, where the optimal alignment will have the best score summing over all pairs of pairwise 
alignment scores. Unfortunately, these computations are exponential in the number of sequences. Previous 
literature has proven that finding the optimal MSA when the number of sequences is variable is
NP-complete~\cite{wang1994complexity}. Therefore, we approximate the solution by iterative pairwise alignment 
(similar to~\cite{barzilay2003learning}). In particular, we maintain a 
profile of current pairwise alignment which can also be viewed as a sequence. Each element of this
pseudo-sequence is, instead of a single token, a distribution of different segments in this alignment slot. 
When aligning with another original sequence, the algorithm remains the same except that the {\tt score} 
function is overloaded for what we call a {\bf profile slot}. Specifically, a profile slot $ps$ is a set of 
segment-probability pairs, $\{(s_i, p_i)\}$, and its alignment score with a segment $s_j$ is defined as a 
weighted sum of the scores between each segment in the slot and the segment $s_j$:

\begin{equation}
\label{eq:pen-profile}
score(ps,s_j) = \sum_{(s_i, p_i) \in ps} p_i \cdot score(s_i, s_j)
\end{equation}
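For instance, with the segment score of Eq.~(1) reduced to a single heuristic weight, the overloaded slot score can be computed as in this sketch (the weight value is illustrative):

```python
def profile_score(slot, seg_heuristic, lambda_h=2.0):
    """Eq. (3): a profile slot is a list of (heuristic, probability)
    pairs; each member contributes lambda_h when it shares the incoming
    segment's heuristic, 0 otherwise, weighted by its probability.
    (lambda_h = 2.0 is an illustrative value, not a fixed constant.)"""
    return sum(p * (lambda_h if h == seg_heuristic else 0.0)
               for h, p in slot)
```

A slot in which most of the probability mass comes from the same heuristic as the new segment thus scores close to the full pairwise reward.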

Finally, we can recover the aligned segments from the profile by reading each slot. We further post-process 
the results by filtering out useless segments that do not provide distinguishing information for the 
stitched tables. Specifically, a slot will be removed if it meets two conditions: 1) all the segments of this 
slot have the same value; and 2) the value cannot serve as a hidden attribute label. The first condition rules out slots such as preposition words or website names that are not specific to the tables. However, a constant value can sometimes serve as a hidden attribute label; e.g., a slot with the constant value ``Area:'' would be an appropriate attribute label for the slot next to it. To prevent a reasonable attribute label from being removed, we further check whether the value is present in a pre-existing attribute label database in which string values are ordered by the number of their appearances in the table headers from a corpus of millions of WebTables (\cite{cafarella2008uncovering}\footnote{We re-build the attribute database from our own Web table corpus and only consider the top 5K frequent attribute names.}).


\subsection{Heuristics for Candidate Segments}
\label{sec:segment}

We now discuss how to generate candidate segments via several heuristics. We adopt three diverse heuristics 
that treat the text in different ways ranging from purely syntactic to semantic interpretations. 

\smallskip
\noindent
{\bf Punctuation/Tag Separators:} When pieces of text are not organized into a 
grammatical sentence on Web pages, they are either separated by obvious punctuation marks, e.g., commas or vertical bars, or 
highlighted by different HTML styles, e.g., font colors or font sizes. In the first case, we use the segments of 
tokens between two punctuation marks. For style highlights, the HTML tags are sufficient for the segmentation.
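The punctuation case reduces to splitting on a separator set, as in the following sketch (the particular separator characters are an illustrative choice; tag-based splitting would walk HTML element boundaries analogously):

```python
import re

# Illustrative separator set: comma, vertical bar, semicolon, colon, bullet.
SEPARATORS = r"[,|;:\u2022]"

def punct_segments(text):
    """Split context text on separator punctuation and return the
    non-empty, whitespace-trimmed segments as candidates."""
    return [seg.strip() for seg in re.split(SEPARATORS, text) if seg.strip()]
```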

\smallskip
\noindent
{\bf Longest Common Subsequences (LCS):} Assuming some contextual texts are automatically generated from 
templates, another heuristic first detects the common segments in the context and uses them to separate 
the larger sequence. The remaining segments, which have different values across sequences, are extracted as the 
hidden attribute segments. This problem has long been studied as finding the Longest Common Subsequence, where 
each common subsequence acts as a separator in our context. We start by comparing a pair of token sequences, 
which can be solved efficiently by dynamic programming~\cite{bergroth2000survey}. The method then 
proceeds by iteratively and greedily comparing against the next sequence for the LCS. Note that an LCS problem can be 
seen as a degenerate MSA problem with tokens as the aligning elements and a scoring function that is 
binary on string matches ($1$ for a match).
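The pairwise step can be sketched with Python's standard library, which approximates the LCS via matching blocks (an exact token-level DP would give the same result here); the stretches between matched blocks are the candidate hidden-attribute segments:

```python
from difflib import SequenceMatcher

def lcs_segments(tokens1, tokens2):
    """Treat the common subsequence of two token sequences as template
    text; return, for each sequence, the token runs that fall between
    the matched blocks. (difflib's matcher approximates the LCS.)"""
    sm = SequenceMatcher(a=tokens1, b=tokens2, autojunk=False)
    segs1, segs2, i_prev, j_prev = [], [], 0, 0
    for i, j, size in sm.get_matching_blocks():
        if i > i_prev:
            segs1.append(tokens1[i_prev:i])   # non-template run in sequence 1
        if j > j_prev:
            segs2.append(tokens2[j_prev:j])   # non-template run in sequence 2
        i_prev, j_prev = i + size, j + size
    return segs1, segs2
```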

%The complexity is proportional to the number of the sequences and the square of the length of the longest 
%sequence. \footnote{http://link.springer.com/chapter/10.1007\%2F3-540-56024-6\_18}

%\begin{algorithm}[]
% \SetAlgoLined
% \KwIn{Two sequences of tokens, $T1, T2$}
% \KwOut{Subsequences of tokens that are the same }
% initialization\;
% \caption{\label{algo:lcs}Pairwise LCS; not finished}
%\end{algorithm}

\smallskip
\noindent
{\bf Wikification:} Wikification~\cite{milne2008learning,ratinov2011local} is a technique for linking words or 
phrases in any text to the corresponding Wikipedia articles. If a segment can be wikified, it is likely to 
be meaningful and useful for understanding the table. We applied a homegrown wikification tool to all 
the contextual text and extracted the segments identified as Wikipedia entities as candidate segments for 
alignment.

