
\section{Direct Sensor Mapping}
\label{sec:directmapping}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% DARIO

In the previous section we saw how we model sensor behaviour in
our domain: we do so with the \textbf{sensor profile}, representing
the \textquotedblleft{}shape\textquotedblright{} of the activations
of a given sensor during the day, and the \textbf{relational profile},
representing the relation between two sensors in the same domain.

In this section we discuss how to make use of this information,
that is, how we actually \textbf{compare the sensors} across the
domains to retrieve the final mapping.


\subsection{Comparing the sensor profiles}
\label{sub:comparing-sensors}

The first and most important measure when comparing two sensors is
the similarity of their \textbf{sensor profiles}.

We compute this similarity as the \textbf{Kullback-Leibler divergence}
between the two distributions.



It is worth mentioning that no way is known to compute the KL
divergence between GMMs \textbf{analytically} \cite{hershey2007}; we
therefore make use of the jMEF framework\footnote{\url{http://www.lix.polytechnique.fr/~nielsen/MEF/}}, which provides
a Java implementation of the \textbf{Monte Carlo} approximation of
this measure.
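The idea behind the Monte Carlo approximation can be illustrated with a minimal one-dimensional sketch in Python (this is not the jMEF implementation; function names and the toy mixtures are purely illustrative): sample from the first mixture and average the log-density ratio.

```python
import numpy as np

def gmm_logpdf(x, weights, means, stds):
    """Log-density of a one-dimensional Gaussian mixture at points x."""
    x = np.asarray(x, dtype=float)[:, None]
    comp = (-0.5 * ((x - means) / stds) ** 2
            - np.log(stds) - 0.5 * np.log(2 * np.pi))
    return np.log(np.sum(weights * np.exp(comp), axis=1))

def gmm_sample(rng, n, weights, means, stds):
    """Draw n samples from a one-dimensional Gaussian mixture."""
    idx = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(means[idx], stds[idx])

def kl_monte_carlo(p, q, n=50_000, seed=0):
    """Estimate KL(p || q) = E_p[log p(x) - log q(x)] with samples from p."""
    rng = np.random.default_rng(seed)
    xs = gmm_sample(rng, n, *p)
    return float(np.mean(gmm_logpdf(xs, *p) - gmm_logpdf(xs, *q)))

# Two toy "sensor profiles": mixtures over the hour of the day
p = (np.array([0.6, 0.4]), np.array([8.0, 19.0]), np.array([1.0, 2.0]))
q = (np.array([0.5, 0.5]), np.array([9.0, 18.0]), np.array([1.5, 2.0]))
print(kl_monte_carlo(p, q))
```

Note that the estimate improves with the number of samples, and that the divergence of a profile against itself is exactly zero.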

Once we have a model for each sensor $a\in S$ and $\alpha\in T$
in the source and target domain respectively, and a way to compute
the similarity between two models, determining the most likely
correspondence between the profiles is an $n^{2}$ problem,
where $n$ is the number of sensors in one domain (assuming, for simplicity,
that the two domains have the same number of sensors): we simply
compare each sensor in $S$ with all the sensors in $T$,
keeping for each $a\in S$ the match with the lowest divergence.
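In code, this exhaustive comparison amounts to the following sketch (the divergence function is assumed given; here it is simulated by a toy lookup table):

```python
def best_matches(source, target, kl):
    """For each sensor in the source domain, keep the target sensor
    whose profile has the lowest KL divergence (n^2 comparisons).
    Note: two source sensors may end up mapped to the same target."""
    mapping = {}
    for a in source:
        mapping[a] = min(target, key=lambda t: kl(a, t))
    return mapping

# Toy example: divergences given as a lookup table
div = {("a", "x"): 0.3, ("a", "y"): 1.2,
       ("b", "x"): 0.9, ("b", "y"): 0.4}
print(best_matches(["a", "b"], ["x", "y"], lambda s, t: div[(s, t)]))
# -> {'a': 'x', 'b': 'y'}
```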

Note that, as described so far, two sensors $a,b\in S$ may be mapped to the
same $\alpha\in T$. However, in \ref{sub:n-to-n-associations} we pointed out
the reasons why it seems reasonable for the mappings to be carried
out 1:1. Experiments have been run forcing this condition to hold (see \ref{sec:onetoone_heuristic} and \ref{sec:experiments_individual}).


\subsection{Comparing the relational profile}
\label{sub:comparing-relational}

In our implementation we have not worked out a definitive
way of \textbf{integrating} the information given by the relational
profile with that from the sensor profile: what we present here
should be considered a provisional solution, which leaves room
for further improvement.

While the sensor-to-sensor comparison was an $n^{2}$ problem, here
we have an $n^{4}$ problem: we have to compare each possible pair
of sensors $a,b\in S$ with each possible pair $\alpha,\beta\in T$. We
do this in the \textbf{same fashion} as for the single profiles:
computing the KL divergence between $r(a,b)$ and $r(\alpha,\beta)$
for each possible value of $a,b,\alpha$ and $\beta$. Note that in
this case we do not need any approximation, since the
relational profiles are simple Normal distributions, for which the
KL divergence can be computed analytically.
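For two univariate Normal distributions, this closed form is the standard one:
\[
\mathrm{KL}\bigl(\mathcal{N}(\mu_{1},\sigma_{1}^{2})\,\|\,\mathcal{N}(\mu_{2},\sigma_{2}^{2})\bigr)=\ln\frac{\sigma_{2}}{\sigma_{1}}+\frac{\sigma_{1}^{2}+(\mu_{1}-\mu_{2})^{2}}{2\sigma_{2}^{2}}-\frac{1}{2}.
\]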

Once this measure has been computed for all the sensors, we obtain
a \textbf{4-dimensional space} in which each point $(a,b,\alpha,\beta)$
holds a numerical value telling us how similar the relation
between $a$ and $b$ is to the relation between $\alpha$ and $\beta$.
At this point, to make the reasoning easier, we swap the axes of the
space, so that each point is given by $(a,\alpha,b,\beta)$, representing
the likelihood of $a$ being mapped to $\alpha$ when $b$ is mapped to
$\beta$; that is, $p(a\rightarrow\alpha\wedge b\rightarrow\beta)$.

The \textbf{final score} for a sensor, in the current model, is obtained
by adding the sensor profile distances to the corresponding rows of
the relational distance space we have just computed, and taking,
for each sensor in the first domain, the minimum in this final matrix.
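One possible reading of this combination step can be sketched with numpy arrays; the array names and the exact aggregation are illustrative assumptions, not the definitive implementation:

```python
import numpy as np

n = 3  # sensors per domain (assumed equal, as above)
rng = np.random.default_rng(0)

# D[a, alpha]: divergence between the sensor profiles of a and alpha
D = rng.random((n, n))
# R[a, alpha, b, beta]: divergence between r(a, b) and r(alpha, beta),
# with the axes already swapped into the (a, alpha, b, beta) order
R = rng.random((n, n, n, n))

# Add each pair's profile distance to its rows of relational distances,
# then score (a, alpha) by the minimum over the supporting pairs (b, beta)
combined = D[:, :, None, None] + R
score = combined.min(axis=(2, 3))

# For each sensor in the first domain, keep the target with minimum score
mapping = score.argmin(axis=1)
print(mapping)
```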

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% BENJAMIN

\subsection{One-to-one Heuristic}
\label{sec:onetoone_heuristic}

Matching sensors by always choosing the candidate with the smallest divergence
can lead to sub-optimal or wrong matches when the training data is not
sufficiently generic; in other words, when overfitting occurs.

Consider the following example, which is directly drawn from our use cases.
We want to match the sensor ``bedroom-door'' in House B (the target domain) with
an appropriate sensor in House A (the source domain). By looking only at the
statistical profile, the best candidates are ``microwave'' with a KL-divergence
of $0.29$ and ``hall-bedroom-door'' with a KL-divergence of $0.91$. The correct
match has a higher divergence score than the wrong one, so the
smallest-divergence strategy makes the wrong choice.

The reason for this mismatch is overfitting, although not in the original task
of classifying human activities, but in classifying sensors in the target domain
in terms of sensors in the source domain. This is understandable in this setting,
because different people behave differently in the household, and thus sensors
that should correspond may have divergent profiles.

We propose the following heuristic when matching sensors in order to overcome
this problem. It is based on the assumption that the ideal mapping between the
two domains is one-to-one, hence the name.

\begin{enumerate}
\itemsep0em
\item Compute the best candidates for all sensors in the target domain.
\item Identify among all established matches the one with the lowest divergence
      score and remove the matched candidate from the pool of candidates.
\item Compute a new candidate for all remaining matches involving the candidate
      removed in the previous step.
\item Repeat the previous two steps until all sensors in the target domain are
      mapped to sensors in the source domain.
\end{enumerate}
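The four steps above boil down to repeatedly fixing the globally lowest-divergence pair among the still-unmatched sensors, which can be sketched as follows (the ``kitchen-door'' sensor and the last two scores are made up for illustration; the first two are the ones from the example above):

```python
def one_to_one(targets, sources, div):
    """Greedy one-to-one matching: repeatedly fix the pair with the
    globally lowest divergence, then remove both sensors from play."""
    mapping = {}
    free_t, free_s = set(targets), set(sources)
    while free_t and free_s:
        t, s = min(((t, s) for t in free_t for s in free_s),
                   key=lambda pair: div[pair])
        mapping[t] = s
        free_t.remove(t)
        free_s.remove(s)
    return mapping

# Toy scores mirroring the example in the text; "kitchen-door" and its
# two scores are hypothetical
div = {("bedroom-door", "microwave"): 0.29,
       ("bedroom-door", "hall-bedroom-door"): 0.91,
       ("kitchen-door", "microwave"): 0.05,
       ("kitchen-door", "hall-bedroom-door"): 1.50}
print(one_to_one(["bedroom-door", "kitchen-door"],
                 ["microwave", "hall-bedroom-door"], div))
# -> {'kitchen-door': 'microwave', 'bedroom-door': 'hall-bedroom-door'}
```

Here the very low ``kitchen-door''/``microwave'' score claims the ``microwave'' candidate first, so ``bedroom-door'' is left with its correct match despite its higher raw divergence.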

The example in figure~\ref{fig:heuristic} shows how this heuristic works. First
the match $S_2 \rightarrow S_A$ is identified; the candidate $S_A$ is then
removed from the candidate set, and the correct match $S_1 \rightarrow S_B$ is
established.

The main intuition and assumption behind this heuristic is that at any point of
the mapping process, the match with the lowest score among all has the highest
probability of being the correct one.

From figure~\ref{fig:heuristic} it can also be seen that the problem at hand
cannot be regarded as a straightforward minimization problem, because the
mapping minimizing the total divergence is the wrong one, caused by
overfitting. This underlines the importance of a context for individual
features, which is discussed in section~\ref{sec:indirectmapping}.

\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{heuristic.jpg}
\caption{Divergence scores between four hypothetical sensors. The circled scores
indicate the expected mapping.}
\label{fig:heuristic}
\end{figure}

We used this heuristic for the experiment described in section
\ref{sec:experiments_individual}.

