

\subsection{Example runs}
\begin{figure*}
  \centering
  \subfloat[Original state]{\label{fig:predict1}\includegraphics[scale=0.4]{figures/predict1}}                
  \hspace{2mm}
  \subfloat[First prediction]{\label{fig:predict2}\includegraphics[scale=0.4]{figures/predict2}}
  \hspace{2mm}
  \subfloat[Second prediction]{\label{fig:predict3}\includegraphics[scale=0.4]{figures/predict3}} \\
  \subfloat[Third prediction]{\label{fig:predict4}\includegraphics[scale=0.4]{figures/predict4}}
  \hspace{2mm}
  \subfloat[Fourth prediction]{\label{fig:predict5}\includegraphics[scale=0.4]{figures/predict5}}
  \caption{The evolution of a predicted graph over four consecutive predictions. The dashed lines show the unknown true graph. The blue nodes and edges correspond to the initial input graph; green represents a prediction that exists in the true graph, whereas red represents a predicted node or edge absent from the true graph.}
  \label{fig:predictions}
\end{figure*}

To illustrate the method, five different states of a prediction sequence are shown in figure \ref{fig:predictions}. The complete unknown graph $G$ is drawn with black dashed lines, and the initial graph is shown in blue. A predicted edit operation is shown in green if it exists in $G$ and in red otherwise.

In figure \ref{fig:predict1}, the partial graph consists only of the vertex ``F LAV''. The prediction algorithm is then applied to produce
figure \ref{fig:predict2}: the most likely edit operation is to add a corridor and connect it to ``F LAV''. Next, figure \ref{fig:predict3} shows the result of executing the prediction algorithm on the previous graph, which consists of ``F LAV'' and ``CORR''. Given that ``F LAV'' and ``CORR'' have been observed, the algorithm suggests that it is plausible for a male lavatory to be connected to the corridor as well.
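The prediction loop illustrated above can be outlined as follows. This is a hypothetical, simplified sketch: \texttt{edit\_counts} stands in for the statistics mined from the training dataset, and the real method's scoring of edit operations is more involved than picking the highest raw count.

```python
# Hypothetical, simplified sketch of the iterative prediction loop:
# `edit_counts` stands in for statistics mined from the training set,
# and scoring is reduced to picking the highest raw count.
def most_likely_edit(partial, edit_counts):
    """Return the best 'add vertex B and connect it to A' operation."""
    candidates = {}
    for a in partial["vertices"]:
        for (src, dst), count in edit_counts.items():
            if src == a and dst not in partial["vertices"]:
                candidates[(a, dst)] = candidates.get((a, dst), 0) + count
    return max(candidates, key=candidates.get)

def apply_edit(partial, edit):
    a, b = edit
    partial["vertices"].add(b)
    partial["edges"].add((a, b))
    return partial

# Toy statistics: corridors co-occur with lavatories far more often
# than other candidates do.
counts = {("F LAV", "CORR"): 9, ("CORR", "M LAV"): 7, ("CORR", "JAN CL"): 2}
g = {"vertices": {"F LAV"}, "edges": set()}
for _ in range(2):  # two consecutive predictions
    g = apply_edit(g, most_likely_edit(g, counts))
# g now also contains "CORR" and "M LAV", mirroring the sequence above.
```

With these toy counts, the first iteration attaches ``CORR'' to ``F LAV'' and the second attaches ``M LAV'' to ``CORR'', matching the sequence in the figure.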


%\begin{figure*}[hbpt]
%  \begin{center}
%  \includegraphics[scale=0.7]{figures/MathematicaPlots/PredictionPlot}
%  \end{center}
%  \caption{Comparsion between two prediction algorithms.}
%  \label{fig:comparsion}
% \end{figure*}

%\missingfigure{The new naive vs. method prediction plot}

\begin{figure*}[hbpt]
  \begin{center}
  \includegraphics[scale=0.7]{figures/MathematicaPlots/PredictionPlot2NBFvsFullBF}
  \end{center}
  \caption{Comparison between the two methods over 50,000 partial input graphs. The blue dashed line and the solid red line correspond to the methods explained in section~\ref{sec:method1} and section~\ref{method2}, respectively.}
  \label{fig:comparsionFullBrute}
 \end{figure*}


\begin{figure*}
  \centering
  \subfloat[The partial graph]{\label{fig:EditOpGraph1}\includegraphics[scale=0.85]{figures/EditOpGraph}}                
  \hspace{2mm}
  \subfloat[Probability distribution]{\label{fig:EditDist}\includegraphics[scale=0.6]{figures/MathematicaPlots/EditOpDist}}
  \caption{The discrete probability distribution for the edit operations of a partial graph.}
  \label{fig:distribution}
\end{figure*}

As another example, the input graph in figure~\ref{fig:EditOpGraph1} results in the discrete probability distribution shown in figure~\ref{fig:EditDist}. Since this partial graph consists of only
two vertices, the only edit operations considered are those that add a new vertex. On the horizontal axis, the different edit operations are shown as $A \rightarrow B$, where $A$ is an existing vertex
of the partial graph and $B$ is the vertex to be added and connected to $A$. Note that edit operations with a probability below 0.02 are not shown. In this case, $A$ can only take the values ``JAN CL'' or ``M LAV''. As expected, the corridor vertex has, by a large margin, the highest probability of being connected to another vertex.
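The distribution in figure~\ref{fig:EditDist} can be reproduced in outline as follows. This is a sketch under assumed names (\texttt{edit\_op\_distribution}, toy counts), not the paper's implementation: raw counts of the candidate edit operations are normalized into a discrete distribution, and operations below the 0.02 cutoff are dropped, as in the plot.

```python
# Sketch with assumed names: normalize raw counts of candidate edit
# operations into a discrete probability distribution, then drop
# operations below the 0.02 cutoff used in the figure.
def edit_op_distribution(counts, cutoff=0.02):
    total = sum(counts.values())
    dist = {op: c / total for op, c in counts.items()}
    return {op: p for op, p in dist.items() if p >= cutoff}

# Toy counts for A -> B operations on a two-vertex partial graph.
counts = {("JAN CL", "CORR"): 40, ("M LAV", "CORR"): 50,
          ("M LAV", "F LAV"): 9, ("JAN CL", "STOR"): 1}
dist = edit_op_distribution(counts)
# ("JAN CL", "STOR") falls below the cutoff and is omitted.
```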
\subsection{Quantitative evaluation}
We have compared the results of the two methods. To measure the performance of the algorithms for varying graph sizes, we randomly selected 2000 partial graphs from the dataset for each graph size between one and 25; in total, 50,000 different partial graphs were processed. The selection process works as follows. First, we pick a random graph from the dataset $D$. Then, for a given graph size $s \in \{1, \dots, 25\}$, we pick at random $s$ connected vertices, which form an input graph. This process is repeated until 2000 partial graphs have been selected. Finally, the graphs from which the partial graphs were picked are excluded from the training dataset (multiple partial graphs may come from the same graph).
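The sampling step can be sketched as follows, under assumptions not fixed by the text: a hypothetical helper, an adjacency-set graph representation, and one possible way of picking $s$ connected vertices, namely growing the set by repeatedly adding a random neighbour.

```python
import random

# Sketch of the partial-graph sampling step (hypothetical helper):
# grow a connected vertex set of size s by repeatedly adding a random
# neighbour of the vertices chosen so far.
def sample_connected_subgraph(adj, s, rng):
    start = rng.choice(sorted(adj))
    chosen = {start}
    while len(chosen) < s:
        frontier = {v for u in chosen for v in adj[u]} - chosen
        if not frontier:
            break  # the graph has fewer than s reachable vertices
        chosen.add(rng.choice(sorted(frontier)))
    return chosen

rng = random.Random(0)
adj = {"F LAV": {"CORR"}, "CORR": {"F LAV", "M LAV", "JAN CL"},
       "M LAV": {"CORR"}, "JAN CL": {"CORR"}}
part = sample_connected_subgraph(adj, 3, rng)  # a connected 3-vertex set
```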

We counted the number of correct predictions for each algorithm on the test set. The number of correctly predicted graphs is divided by the total number of partial graphs to obtain the percentage of correct predictions.
The result of this test is shown in figure \ref{fig:comparsionFullBrute}: the main algorithm is shown in red, while the naive algorithm is shown in dashed blue.
For smaller graph sizes, the performance of the two algorithms is almost equivalent.
However, for larger graphs, the performance of the naive algorithm decreases dramatically compared to the main algorithm. This shows the advantage gained by extracting frequently occurring functional parts.
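The per-size accuracy underlying this comparison amounts to the following computation (a sketch with assumed names; \texttt{results} would hold one entry per test partial graph):

```python
# Sketch with assumed names: fraction of correctly predicted partial
# graphs, grouped by input graph size.
def accuracy_by_size(results):
    """results: iterable of (graph_size, correct) pairs."""
    totals, correct = {}, {}
    for size, ok in results:
        totals[size] = totals.get(size, 0) + 1
        correct[size] = correct.get(size, 0) + bool(ok)
    return {s: correct[s] / totals[s] for s in totals}

# Toy results: 2 of 3 correct at size 1, 1 of 2 correct at size 2.
results = [(1, True), (1, True), (1, False), (2, True), (2, False)]
acc = accuracy_by_size(results)
```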
%\begin{figure}[hbpt]
%  \begin{center}
%  \includegraphics[width=0.95\linewidth]{figs/FourRoomExpectedCosts}
%  \end{center}
%  \caption{We show the simulated expected costs for the example of
%    four rooms using the three policies vs. $n$, the depth of the policy's
%    search over actions.}
%  \label{fig:sim}
% \end{figure}


