
\section{Indirect Sensor Mapping}
\label{sec:indirectmapping}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% KOEN

For many domains, the existence of a direct variable mapping to transfer
knowledge is unrealistic. A direct mapping can fail for many reasons; one
obvious reason is that the number of variables may differ across domains. The
impossibility of directly mapping variables is resolved by the use of
meta-features.

Meta-features are sets of variables, and mapping is performed at the level of
these meta-features. Constructing meta-features is domain specific and, as we
will see, far from trivial.

Meta-features have been used in our domain before, but they were constructed
and mapped by hand, and those experiments focused on transfer learning only (no
variable mapping). We have been looking for ways to construct these
meta-features automatically; grouping sensors into meta-features has never been
done automatically in this domain.

\subsection{Meta-feature Construction}
\label{sub:metafeatures-construction}

Meta-features in our domain are sets (groups) of sensors. 

In order to group sensors, one should start by deciding what kind of groups we
want to obtain, in other words, what similarity measure is appropriate for
clustering sensors into meta-features. The whole point of variable mapping in
this domain is to monitor human activities.

The meta-features should therefore make it possible to recognize activities.
For example, if we grouped all sensors with an even id number together, it
would probably be very hard to recognize activity patterns (assuming that
sensor ids are random). In previous experiments sensors were grouped together
manually, according to their involvement in a particular activity.

If the grouping of sensors is activity based, it can be seen as a first step in
recognizing activities. Creating meta-features therefore has some overlap with
the actual activity recognition that will be performed at a later stage. This
overlap could lead to overfitting, but as long as the meta-features are
consistently constructed by the same algorithm in both the source and the
target domain, we do not expect problems.

We want to create meta-features such as kitchen, toilet use, and bedroom.
A first intuition could be to compare statistical profiles (section \prettyref{sub:Sensor-Profile}) within
one domain and compute their similarity with a measure such as the KL divergence.
But is `statistical similarity' a property of all members of a meta-feature? It
is not. Take, for example, the kitchen meta-feature: both the microwave sensor
and the fridge sensor should belong to it. If we look at their statistical
profiles, we notice that the fridge sensor fires very often for a short time,
while the microwave sensor may fire once every three days for a longer
duration. Judged by the statistical profile alone, the fridge and the toilet
have more in common than the fridge and the microwave. We conclude that the
statistical profile is not a good indicator for meta-feature construction.
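The argument above can be made concrete with a small sketch. The
firing-duration profiles below are hypothetical numbers chosen for
illustration, not measured data:

```python
import math

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical firing-duration profiles: probabilities over
# (short, medium, long) activation durations. Illustrative numbers only.
fridge    = [0.85, 0.10, 0.05]   # fires often, briefly
toilet    = [0.80, 0.15, 0.05]   # also mostly short activations
microwave = [0.05, 0.25, 0.70]   # rare, long activations

# The fridge profile is far closer to the toilet than to the microwave,
# even though fridge and microwave belong to the same kitchen meta-feature.
assert kl(fridge, toilet) < kl(fridge, microwave)
```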

We found that the physical location of sensors in a house is a strong
indication that a sensor pair belongs to the same meta-feature. Unfortunately,
this information is not (directly) recorded in the data, so we devised ways to
estimate the physical distance between sensors.

\begin{description}
 \item[Assumption] Sensors that are physically close together sometimes fire
shortly after each other. For sensors located far from each other, the time
between activations never falls below a certain threshold.
\end{description}

A potential problem with this assumption is noise. Another problem when linking
domains is that houses differ in size and that some people move faster than
others (students vs. the elderly). We overcome these problems by introducing two
parameters, $\alpha$ and $\beta$, whose values depend on the noise, the house,
and the inhabitant. With more training data, we expect that these parameters
could be learned automatically; for this experiment, however, we set them
manually and experimented with different values.
\begin{description}
 \item[Rule] If at least $\alpha$ times the `in between time' of two sensors is
less than $\beta$, the sensors are grouped together.
\end{description}
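The rule above can be sketched as follows. The event representation, the
nearest-activation check, and the union-find grouping are illustrative
assumptions on our part, not the exact implementation used in the experiments:

```python
from itertools import combinations

def group_sensors(events, alpha=2, beta=4.0):
    """Group sensors whose activations often occur close together in time.

    events: dict mapping sensor id -> sorted list of activation times.
    Two sensors are linked if at least `alpha` times an activation of one
    sensor lies within `beta` time units of an activation of the other.
    The final groups are the connected components of these links.
    (Illustrative sketch, not the authors' implementation.)
    """
    links = []
    for a, b in combinations(events, 2):
        close = sum(1 for t in events[a]
                    if any(abs(t - u) < beta for u in events[b]))
        if close >= alpha:
            links.append((a, b))

    # Union-find to merge linked sensors into connected components.
    parent = {s: s for s in events}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in links:
        parent[find(a)] = find(b)

    groups = {}
    for s in events:
        groups.setdefault(find(s), set()).add(s)
    return list(groups.values())
```

Sensors that end up alone in their component correspond to the singleton
meta-features mentioned below.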

Once we have obtained a meta-feature label for every sensor, there are multiple
options for using these labels for automated mapping across domains. We discuss
two approaches: merged profiles (the classical approach) and individual
profiles. We also experimented with automatically mapping manually constructed
meta-features across houses (see section \ref{sec:exprelational}).

The results of the automatic meta-feature algorithm in House B with $\alpha=2$
and $\beta=4$ are listed below. Note that some sensors were not grouped; they
are regarded as singleton meta-features.
\begin{itemize}
\item {\bf microwave}, {\bf frontdoor}, {\bf dishwasher}, {\bf pans-cupboard}, {\bf washingmachine}
\item {\bf metafeature1}  \{ {\it hall-toilet-door, hall-bathroom-door, toiletflush, hall-bedroom-door} \}
\item {\bf metafeature2}  \{ {\it cups-cupboard, fridge, plates-cupboard, freezer, groceries-cupboard} \}
\end{itemize}


\subsection{Meta-feature Matching} 

\subsubsection*{Merged Profiles}

For every meta-feature (set of sensors), the information of all individual
members (sensors) is merged. The set `fires' if any of its members fires; the
set is therefore the union of its members, which gave the best performance
according to \cite{union}.

Implementing meta-features this way is how it is usually done in the field of
transfer learning. If the meta-features are not constructed in a meaningful
manner, we are likely to lose information that is vital for activity
recognition later on.

After construction and merging, meta-features are intrinsically similar to
`normal' sensors: they record when they start firing and when they stop, so the
system cannot distinguish a meta-feature from a single sensor. This allows us
to use the same techniques as in the case of direct sensor mappings
(statistical/relational profiles) to map meta-features of different houses.
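Merging the members of a meta-feature amounts to taking the union of their
firing intervals. The interval representation below is an assumption for
illustration:

```python
def merge_profiles(member_intervals):
    """Merge the firing intervals of a meta-feature's member sensors.

    member_intervals: list of (start, end) tuples from all member sensors.
    Returns the union as a sorted list of non-overlapping intervals, so the
    meta-feature 'fires' whenever any member fires. (Illustrative sketch.)
    """
    merged = []
    for start, end in sorted(member_intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps or touches the previous interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

The resulting interval list has the same form as a single sensor's firing
record, which is why the downstream mapping techniques apply unchanged.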


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% BENJAMIN

\subsubsection*{Individual Profiles}
\label{sec:individual_profiles}

The alternative to representing a meta-feature by a single distribution over
the input space is to leave the original sensor models (statistical profiles)
intact.

\begin{figure}[t]
\small
\begin{tabbing}
fo \= fo \= fo \= fo \= \kill
\textbf{function} clusterDivergence (clusterSmall, clusterBig)\\
  1\> avgMinDivergence := 0 \\
  2\> {\bf for $s_S$} in clusterSmall {\bf do} \\
  3\> \> minDiv := $\infty$ \\
  4\> \> {\bf for $s_B$} in clusterBig {\bf do} \\
  5\> \> \> curMinDiv := KL($s_S,s_B$) \\
  6\> \> \> {\bf if} curMinDiv $<$ minDiv {\bf then} \\
  7\> \> \> \> minDiv := curMinDiv \\
  8\> \> \> {\bf endif} \\
  9\> \> {\bf endfor} \\
  10\> \> avgMinDivergence := \\
    \> \> \> avgMinDivergence + minDiv \\
  11\> {\bf endfor} \\
  12\> avgMinDivergence := \\
   \> \> avgMinDivergence / length(clusterSmall) \\
  13\> {\bf return} avgMinDivergence
\end{tabbing}
\caption{Given two sets of sensor profiles, this algorithm computes the average
minimum divergence from the smaller cluster towards the bigger one.}
\label{fig:algClusterDivergence}
\end{figure}

At this point, meta-features can no longer be compared by a straightforward
application of the KL-divergence measure, and a method to compare sets of
distributions needs to be devised. An obvious candidate would be a method
similar to techniques for comparing Gaussian mixture models (e.g.
\cite{hershey2007}). Because of limited time, we devised the following ad-hoc
method, also described in figure~\ref{fig:algClusterDivergence}.

Given two clusters of sensors, each represented by its statistical profiles, we
first compute the mapping from the smaller cluster onto the bigger one that
minimizes the divergence between pairs. This mapping is not necessarily
injective: one sensor in the larger cluster can correspond to multiple sensors
in the smaller one. In a second step, we take the average divergence score of
the identified mappings and regard it as the divergence between the two
clusters.
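The procedure of figure~\ref{fig:algClusterDivergence} can be sketched
compactly as follows, assuming sensor profiles are represented as discrete
distributions (the smoothing constant is an assumption to avoid division by
zero):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL divergence between two discrete distributions given as probability lists."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def cluster_divergence(cluster_small, cluster_big):
    """Average minimum divergence from the smaller cluster to the bigger one.

    Each cluster is a list of sensor profiles (discrete distributions).
    For every profile in the smaller cluster, find its closest match in the
    bigger cluster; return the average of these minimum divergences.
    The mapping is not necessarily injective: several small-cluster sensors
    may match the same big-cluster sensor.
    """
    total = 0.0
    for s in cluster_small:
        total += min(kl_divergence(s, b) for b in cluster_big)
    return total / len(cluster_small)
```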

The implicit assumption in the algorithm is that the larger cluster is used as
the classifier: it may contain multiple models for one sensor type, e.g.
multiple ``fridge'' models, or a larger number of sensor types than is usually
found in a household, e.g. ``microwave'' and ``stove'', where only the better
equipped kitchens have both items. In practice this means that for each
meta-feature of interest one collects a number of sensor models, possibly
coming from different domains (here: houses).

An experimental evaluation of this heuristic is given in section
\ref{sec:experiments_individual}.

