
\section{Experiments}
\label{sec:experiments}

In this section we present our experimental results. All the experiments
share the same task: given \textbf{two houses} (a source house and a
target house), find the mapping between the sensors in the first and
those in the second. Once this mapping is found, we compare it against
a hand-made mapping, used as ground truth.

The \textbf{precision} measure is then computed as the number of correct
matches over the total number of sensors in the target house.
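As a minimal illustration of this measure, a sketch in Python (the sensor names here are hypothetical, not taken from the dataset):

```python
def precision(mapping, ground_truth, n_target_sensors):
    # Fraction of target-house sensors whose automatic match agrees
    # with the hand-made ground-truth mapping.
    correct = sum(1 for src, tgt in mapping.items()
                  if ground_truth.get(src) == tgt)
    return correct / n_target_sensors

automatic = {"s1": "t1", "s2": "t3", "s3": "t2"}
truth = {"s1": "t1", "s2": "t2", "s3": "t2"}
print(precision(automatic, truth, 4))  # 2 correct out of 4 -> 0.5
```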

We used Kasteren's variable mapping dataset \cite{tim2008} to carry out
our experiments.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% DARIO

\subsection{Direct Sensor Mapping}

\subsubsection{Sensor profile}

In our first experiment we consider \emph{House C} as the source
domain and \emph{House B} as the target domain, using only the
information given by the sensor profile, as described in
section~\ref{sub:comparing-sensors}. Each $a\in S$ is thus mapped to the $\alpha\in T$ for which
the KL divergence between the sensor profiles is smallest.

As a result, we obtained a \textbf{precision} of 0.18 (4 sensors matched
out of 22).
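The nearest-profile matching used here can be sketched as follows, assuming each sensor profile is a discrete distribution over the same set of bins (all names and values below are illustrative):

```python
import math

def kl(p, q, eps=1e-12):
    # KL divergence between two discrete distributions,
    # smoothed with eps to avoid log(0).
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def map_sensors(source_profiles, target_profiles):
    # Map each source sensor a in S to the target sensor alpha in T
    # whose profile minimizes the KL divergence.
    return {a: min(target_profiles,
                   key=lambda t: kl(source_profiles[a], target_profiles[t]))
            for a in source_profiles}

src = {"a1": [0.7, 0.3], "a2": [0.1, 0.9]}
tgt = {"t1": [0.6, 0.4], "t2": [0.2, 0.8]}
print(map_sensors(src, tgt))  # {'a1': 't1', 'a2': 't2'}
```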

\subsubsection{Relational profile}
\label{sec:exprelational}

The next step is to integrate the information provided by the relational
profile. We did this as described in section~\ref{sub:comparing-relational}: we simply sum the
relational and sensor profile distances, and take the minimum over
target sensors for each source sensor.
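This combination can be sketched as follows, with illustrative distance values standing in for the actual profile distances:

```python
def map_combined(sensor_dist, relational_dist, targets):
    # For each source sensor, sum the sensor-profile and relational-profile
    # distances and pick the target sensor with the smallest total.
    return {a: min(targets,
                   key=lambda t: sensor_dist[a][t] + relational_dist[a][t])
            for a in sensor_dist}

# Toy distances (hypothetical): the relational term flips a2's best match.
sensor_dist = {"a1": {"t1": 0.1, "t2": 0.4}, "a2": {"t1": 0.2, "t2": 0.3}}
relational_dist = {"a1": {"t1": 0.2, "t2": 0.5}, "a2": {"t1": 0.6, "t2": 0.1}}
print(map_combined(sensor_dist, relational_dist, ["t1", "t2"]))
# {'a1': 't1', 'a2': 't2'}
```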

There is a slight improvement in the \textbf{precision} measure, which
rises to 0.27 (6 sensors matched out of 22).

It is worth pointing out that the increment in precision is not simply
due to the addition of two more correct matches: two previously correct
mappings turned into wrong ones, while \textbf{four} mappings that were
wrong in the previous experiment became correct. This encourages us to
think that a more \emph{intelligent} use of the available information
may improve the results even further.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% DARIO


\subsection{Indirect mapping: Merged sensors}

We continued the experiments by implementing the automatic metafeature
detection described in section~\ref{sub:metafeatures-construction}. We grouped the
sensors of the two houses into metafeatures, and then mapped those
metafeatures the same way we did with the sensors, making use of both
the statistical and relational profiles.

In this case we obtained a \textbf{precision} of 0.31 (4 groups matched
out of 13).
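One way to sketch this group-level matching is to give each metafeature a single profile and then reuse the sensor-matching step. Averaging the member profiles bin by bin is an assumption made here for illustration, as are the names:

```python
def merge_profiles(profiles, groups):
    # Build one profile per metafeature by averaging the profiles
    # of its member sensors, bin by bin.
    merged = {}
    for name, members in groups.items():
        bins = zip(*(profiles[s] for s in members))
        merged[name] = [sum(b) / len(members) for b in bins]
    return merged

profiles = {"s1": [0.8, 0.2], "s2": [0.6, 0.4], "s3": [0.1, 0.9]}
groups = {"doors": ["s1", "s2"], "motion": ["s3"]}
merged = merge_profiles(profiles, groups)
# merged["doors"] is approximately [0.7, 0.3]; merged["motion"] is [0.1, 0.9]
```

The merged profiles can then be fed to the same distance-based matching used for individual sensors.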


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% BENJAMIN

\subsection{Indirect mapping: Individual Profiles}
\label{sec:experiments_individual}

In this experiment we evaluate the matching of meta-features represented by
separate statistical sensor profiles, using the algorithm described in
section~\ref{sec:individual_profiles}. We use only one mixture component for
each sensor model, which seemed to give better results in this case.

We also evaluate the one-to-one heuristic described in section~\ref{sec:onetoone_heuristic}
under two conditions: once without a one-to-one
mapping between meta-features in the data set, and once with such a mapping.

In order to focus on the mapping algorithm and the heuristic itself, we used as
meta-features the ones predefined in the data labels instead of the clusters
computed by our algorithm.

In the experiments we used the following data set configurations
(source $\rightarrow$ target):

\begin{itemize}
\itemsep0em
\item houseA $\rightarrow$ houseB
\item houseB $\rightarrow$ houseA
\item houseA $\cup$ houseB $\rightarrow$ houseC
\item houseB $\cup$ houseC $\rightarrow$ houseA
\end{itemize}

When taking the union of the sensor firings of two houses, we keep the sensors
belonging to shared meta-features separate and simply regard them as additional
sensors of the new ``generalized'' house.
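The construction of such a generalized house can be sketched as follows (house and sensor names are hypothetical); the point is that sensors keep their identity even when their meta-features are shared:

```python
def union_house(house_x, house_y, prefix_x, prefix_y):
    # Merge the sensor firings of two houses into one "generalized" house.
    # Sensors of shared meta-features are NOT merged: each sensor keeps
    # its identity, prefixed by the house it came from.
    merged = {}
    for prefix, house in ((prefix_x, house_x), (prefix_y, house_y)):
        for sensor, firings in house.items():
            merged[prefix + ":" + sensor] = firings
    return merged

house_b = {"pir_kitchen": [1, 0, 1]}
house_c = {"pir_kitchen": [0, 1], "mat_bed": [1, 1]}
generalized = union_house(house_b, house_c, "B", "C")
print(sorted(generalized))  # ['B:pir_kitchen', 'C:mat_bed', 'C:pir_kitchen']
```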

We report in table~\ref{tab:mfmapindividual} the results of all eight experiments.

\begin{table}[h]
\centering

\begin{tabular}{| c | c | c | c |} \hline

source & target & normal & heuristic \\ \hline \hline

%   houseB -> houseA (normal)
%    ok: 6/9 (67%)
%   houseB -> houseA (one-to-one heuristic)
%    ok: 8/9 (89%)

houseA & houseB & 0.67 & 0.89 \\ \hline

%   houseA -> houseB (normal)
%    ok: 3/7 (43%)
%   houseA -> houseB (one-to-one heuristic)
%    ok: 3/7 (43%)

houseB & houseA & 0.43 & 0.43 \\ \hline

%   houseC -> houseAB (normal)
%    ok: 5/10 (50%)
%   houseC -> houseAB (one-to-one heuristic)
%    ok: 5/10 (50%)

houseAB & houseC & 0.50 & 0.50 \\ \hline

%   houseA -> houseBC (normal)
%    ok: 5/7 (71%)
%   houseA -> houseBC (one-to-one heuristic)
%    ok: 5/7 (71%)

houseBC & houseA & 0.71 & 0.71 \\ \hline

\end{tabular}

\caption{Experimental results for meta-feature mapping with individual
profiles.}

\label{tab:mfmapindividual}

\end{table}

The first thing that can be observed from the results is that the one-to-one
heuristic helps in only one case, namely when House A is used as the training
set and House B as the test set. One reason for this might be that the
predefined meta-features of the datasets are not fully shared among all houses.
On the other hand, in the one case where it does give an improvement it
eliminates two classification errors, which suggests that it should be
investigated further.

The second important observation is based on the fact that the layouts and
inhabitant behavior of House A and House B are quite similar, while House C
differs in both regards. The best results are obtained when training on
heterogeneous data such as the union of House B and House C (last row of
table~\ref{tab:mfmapindividual}), compared to when only House B is used for
training (second row).

Overall, the results are not as high as one would expect. Although meta-feature
mapping based on individual profiles does, in a way, consider the context of a
sensor, in these experiments it does so only through statistical sensor
profiles. This indicates that a stronger grasp of the relations between sensors
might be needed to boost performance.

