\section{Sensor Profiling}

\label{sec:sensorprofiling}

As we said before, the problem we are trying to solve is to map a
set of sensors in the \textbf{source} domain to another set of sensors
in the \textbf{target} domain.


\subsection{N-to-N associations}
\label{sub:n-to-n-associations}

Before delving into the way we actually associate sensors with one
another, it is worth pointing out several reasons why we \textbf{cannot
assume} the final mapping to be 1:1, that is, why we cannot associate each
sensor in the source domain with one and only one sensor in the target domain:
\begin{itemize}
\item The \textbf{number of sensors} in the target domain may differ from
that of the source; this is the case when, for example,
the target house has a garage while the source one does not.
\item As we saw in the previous section, multiple sensors in the source
domain may be \textbf{clustered} into one meta-feature, which may or
may not exist in the target domain; this is often the case, for example,
when sensors are placed on both the bathroom door and the sink\textquoteright{}s
water tap. In this case we likely want those two sensors
simply to identify the bathroom event. However, this event may correspond
to a single sensor in the target domain, or there may be a direct
mapping with just one of the sensors that make up the meta-feature.
\item The \textbf{opposite} holds as well: a single sensor in the source
domain may be mapped to multiple sensors in the target domain.
\end{itemize}
It is clear at this point that what we want to obtain in the end is
a \textbf{N-to-N} mapping from the source to the target domain.

However, a solution to the \textbf{one-to-one} mapping problem would
be acceptable as well: the more general problem can be solved by grouping
the sensors before the mapping. This is the idea behind our work:
we measure the overall likelihood for all the possible combinations
of sensors in the two domains using a 1:1 similarity function
(which measures the similarity, across the domains, of groups of
one or more sensors each), and keep the combination which proves
most effective. For this reason, later on we will only consider the
problem of mapping a sensor to another one, where by \textquoteleft{}sensor\textquoteright{}
we mean either an actual sensor or a combination of them.

Since this general idea is often not feasible, due to the high number
of possible groupings, in \ref{sub:metafeatures-construction} we will discuss an algorithm for the automatic grouping of the sensors. Even though we have not implemented this idea, a similar algorithm could be used to generate a set of most likely groupings, in order to reduce the number of possible combinations and thus
improve the efficiency of the system.
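As an illustration, the brute-force search over combinations could be sketched as follows. Here \texttt{similarity} stands for the 1:1 similarity function described later, sensors are assumed to have already been grouped, and both the function names and the interface are illustrative rather than our actual implementation:

```python
from itertools import permutations

def best_mapping(source_groups, target_groups, similarity):
    """Brute-force search: try every bijection between source and
    target sensor groups, score it with the 1:1 similarity function,
    and keep the assignment with the highest overall likelihood."""
    best_score, best_assignment = float("-inf"), None
    # Only bijections are tried here, so both sides must contain the
    # same number of groups; groupings of different sizes would be
    # handled by the grouping step beforehand.
    if len(source_groups) != len(target_groups):
        return best_score, best_assignment
    for perm in permutations(target_groups):
        score = sum(similarity(s, t) for s, t in zip(source_groups, perm))
        if score > best_score:
            best_score = score
            best_assignment = list(zip(source_groups, perm))
    return best_score, best_assignment
```

The number of permutations grows factorially with the number of groups, which is exactly why the pruning of unlikely groupings mentioned above would be needed in practice.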


\subsection{Modeling the sensor behaviour}

Here comes the core part of the system: given two domains
$S$ and $T$, the source and the target houses respectively, and
two sensors $a\in S$ and $\alpha\in T$, we want to compute the similarity
between $a$ and $\alpha$, that is, $p(a\rightarrow\alpha)$.
We will do this considering two factors:
\begin{enumerate}
\item The \textbf{profile} of each sensor. By profile we mean the behaviour
of the sensor during the day (how many times does it activate per day?
at what time? for how long?). A basic idea of how a sensor is likely
to be mapped can be given by the similarity between the profiles
of $a$ and $\alpha$. This aspect will be discussed in \ref{sub:Sensor-Profile}.
\item The \textbf{temporal relation} between sensors within a domain.
It is easier to introduce this concept by means of an example: say
that the sensor on the bathroom door almost always activates
one hour after the sensor on the kitchen door. This is information we can use to correlate
both the bathroom door and the kitchen door between the source and target domain:
if we have found a mapping between the bathroom doors across domains, we
can use the relational profile to aid in mapping the kitchen doors across domains.
\end{enumerate}
We will discuss in \prettyref{sub:Sensor-Profile} the way we model
the sensors profiles; in \prettyref{sub:Relational-Profile} we will
discuss the temporal relation between pairs of sensors within a domain.

The actual way we use to compare the sensor profiles will be described
later on, in section \ref{sec:directmapping}.


\subsection{Sensor Profile\label{sub:Sensor-Profile}}

As we have already mentioned, the goal of the sensor profile definition
is to \textbf{model the behaviour} of a single sensor across the day,
in a way that can be compared with other profiles.

A starting-time/duration plot
seemed particularly representative of the \textquoteleft{}shape\textquoteright{}
of the sensors\textquoteright{} activations during the day. Furthermore,
we decided to extract some other \textbf{features} from the sensor
data\footnote{In the case of sensor groups, the sensor data is just
the union of the data from the sensors in the group.}; a classifier
can be trained and used to associate the sensors. The following are
all the features we take into account.
\begin{itemize}
\item Activation time
\item Duration of the event
\item Number of activations per day
\item Weekday/workday
\end{itemize}
We decided to use a \textbf{Gaussian Mixture Model} (GMM) to model
those features, rather than a simple Normal distribution, because
the data of each sensor may not be uniform across the day; for instance,
it would not be very effective to consider only the average activation
time for a kitchen sensor, because it is likely to have two peaks
of activity, one at lunch and one at dinner. The GMM, on the contrary,
allows us to model situations like this with a higher degree
of precision. \prettyref{fig:fridgeA} shows the data points of the
fridge sensor in the activation/duration space, fit with a Mixture
of Gaussians.
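As a minimal sketch of what fitting such a mixture involves, the following fits a two-component GMM to a single feature (activation time) with plain EM. The real system models several features jointly, and a library implementation would normally be used; the deterministic quartile-based initialization is our own simplification here:

```python
import numpy as np

def fit_gmm_1d(x, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture to activation times
    with a few EM iterations. A minimal sketch: real sensor profiles
    would also include duration and the other listed features."""
    x = np.asarray(x, dtype=float)
    # Initialise the two means at the data quartiles (deterministic,
    # which sidesteps the random-initialisation issue discussed below).
    mu = np.percentile(x, [25, 75])
    sigma = np.full(2, x.std() + 1e-6)
    pi = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point.
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return pi, mu, sigma
```

On data with activation peaks at lunch and dinner, such a fit recovers one component per peak, which is precisely the behaviour a single Normal distribution cannot capture.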

\begin{figure}
\begin{centering}
\includegraphics[width=8cm]{dario_fridgeA}
\par\end{centering}

\caption{Activation/duration plot for the \textbf{fridge} sensor in house A,
fit with a two-component Mixture of Gaussians. Two clusters of activation
are evident, at lunch time and at dinner time.\label{fig:fridgeA}}


\end{figure}


The way these profiles are \textbf{compared} across the domains is
described in \ref{sub:comparing-sensors}.

Here we have to point out an issue which emerged during our work;
it regards the EM algorithm, which often proved to produce
inaccurate results, and in general very different fittings depending
on the initialization, which is random. \prettyref{fig:fridgeA_wrongfit}
shows the same sensor as \prettyref{fig:fridgeA}, fit in a different,
and less precise, way.

\begin{figure}
\begin{centering}
\includegraphics[width=8cm]{dario_fridgeA_wrongfit}
\par\end{centering}

\caption{Incorrect MOG fitting for the fridge in house A.\label{fig:fridgeA_wrongfit}}
\end{figure}


Several \textbf{causes} may contribute to this phenomenon.
First of all, many sensors have a small number of data points, which
may not be enough for a precise fitting. Furthermore, the initialization
of the MOG is done randomly: it might be useful to optimize the initialization
for the space we are working with. Last, there might be problems with
the EM implementation we worked with, which was coded by Michael Chen.\footnote{For a follow-up project, a simple discretization could be more effective than EM, and perhaps Levenshtein edit distance could be used to compare temporal patterns (suggestions by M.W. van Someren).}
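A standard mitigation for this sensitivity to initialization, which we did not implement but which a follow-up could try, is to run EM several times from different random starting points and keep the fit with the highest log-likelihood. The \texttt{fit\_once} interface below is hypothetical:

```python
import random

def fit_with_restarts(fit_once, data, n_restarts=10, seed=0):
    """Mitigate EM's sensitivity to its random initialisation by
    running the fit several times and keeping the solution with the
    highest log-likelihood. `fit_once` is assumed to take
    (data, rng) and return (model, log_likelihood); this interface
    is illustrative, not the one of the implementation we used."""
    rng = random.Random(seed)
    best_model, best_ll = None, float("-inf")
    for _ in range(n_restarts):
        model, ll = fit_once(data, rng)
        if ll > best_ll:
            best_model, best_ll = model, ll
    return best_model, best_ll
```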

\subsection{Temporal relations\label{sub:Relational-Profile}}

Working out the relations is a bit more difficult, since we want
to compare \textbf{four sensors} at a time: we are no longer spotting
direct similarities between sensors in different domains, but
rather looking for relations between sensors in the same domain
which are similar across the domains. Thus, in this scenario we have
two sensors $a,b\in S$ from the source domain, and two sensors $\alpha,\beta\in T$
from the target domain. Given a function $r(s_{1},s_{2})$, which
models the relation between the sensors $s_{1}$ and $s_{2}$ as a
statistical distribution, we want to measure how similar $r(a,b)$
is to $r(\alpha,\beta)$. The more similar they are, the more likely
it is for $a$ to be mapped to $\alpha$, and $b$ to $\beta$, at
the same time.

We decided to model the $r$ function between two sensors $x,y\in D$
(any pair of sensors belonging to the same domain) with a Normal
distribution, whose parameters are found by maximum likelihood over
the set of activation distances between the $x$ and $y$ data points
in the training set.
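A minimal sketch of this estimation follows. Note that the text above does not fix exactly how activations of $x$ and $y$ are paired; taking, for each activation of $x$, the time until the nearest following activation of $y$ is one plausible choice and is an assumption of this sketch:

```python
import statistics

def relational_profile(activations_x, activations_y):
    """Model the temporal relation r(x, y) between two sensors as a
    Normal distribution over activation-time distances. For every
    activation time of x, take the delay until the nearest following
    activation of y; the Normal's maximum-likelihood parameters are
    then simply the sample mean and (population) variance."""
    distances = []
    for tx in activations_x:
        later = [ty - tx for ty in activations_y if ty >= tx]
        if later:
            distances.append(min(later))
    mu = statistics.mean(distances)
    var = statistics.pvariance(distances, mu)
    return mu, var
```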

\begin{figure}
\begin{centering}
\includegraphics[width=8cm]{dario_relational}
\par\end{centering}

\caption{Plot of the distances in activation time between the toilet door and
the toilet flush in the same house. Notice the peak shortly after
zero, meaning that in most cases the toilet flush activates
shortly after the toilet door.\label{fig:relational-profile}}


\end{figure}


\prettyref{fig:relational-profile} shows an example of such data
points, with their relative curve, highlighting the relation between
the toilet door and the toilet flush in house A. Each \textbf{data
point} represents the time interval between an activation of the toilet
door and an activation of the toilet flush. As we can see (and expect),
most of the time the toilet flush activates shortly
after the door: this is represented by the high number of data points
in the zone right after zero.

The \emph{relational profile} of those two sensors will thus be given
by the mean and variance of the blue data points. In the figure this
is represented by the \textbf{green curve}, which is the Normal distribution
with that mean and variance.
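The comparison between relational profiles actually used by the system is described in \ref{sec:directmapping}; purely as an illustration, one standard way to compare two such Normal distributions is the Bhattacharyya distance, which is zero for identical distributions and grows as the means or variances diverge:

```python
import math

def bhattacharyya_distance(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two univariate Normal
    distributions N(mu1, var1) and N(mu2, var2). Shown only as an
    example of a Gaussian-to-Gaussian similarity measure; it is not
    necessarily the measure used by our system."""
    term_mean = 0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
    term_var = 0.5 * math.log((var1 + var2) / (2.0 * math.sqrt(var1 * var2)))
    return term_mean + term_var
```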


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% KOEN

\subsection{Day Profile}
\label{sub:Day-Profile}

In what unit of time can we expect to find many recurring patterns in a human life?
An obvious answer to this question is 24 hours: one day.

Recurring patterns might also be found on the scale of weekdays,
Saturdays, Sundays, seasons, decades, et cetera. Our data, however, consisted of less than
30 days per house, so the only timeframe in which we could reasonably recognize patterns
was the daily one (or parts of days).

We expected that modeling an average day would reveal mappings across houses.

\begin{description}
 \item[Assumption]  The 'average day' of inhabitants of different households is similar.
\end{description}

We expected to find a profile such as: bedroom door, toilet door, toilet flush,
shower, fridge, front door, et cetera. If we obtain a similar average day in different houses, this could
assist in mapping the sensors.

Our approach was to list the first activity of each day and take the most
frequently occurring one. Then we went over all days again and listed the most
frequent activity performed right after this first activity. We repeated this
until all days ended, resulting in an 'average day'.
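The procedure above can be sketched as follows, assuming the activities have already been extracted per day as ordered lists; how ties are broken was not specified in our original procedure, so the tie-breaking here is arbitrary:

```python
from collections import Counter

def average_day(days, max_len=10):
    """Greedy 'average day' construction: start with the most common
    first activity across all days, then repeatedly append the
    activity that most often follows the previously chosen one,
    stopping when no follower is ever observed (or at max_len, since
    a frequent pair of activities could otherwise loop forever)."""
    profile = [Counter(day[0] for day in days if day).most_common(1)[0][0]]
    for _ in range(max_len - 1):
        followers = Counter(
            day[i + 1]
            for day in days
            for i in range(len(day) - 1)
            if day[i] == profile[-1]
        )
        if not followers:
            break
        profile.append(followers.most_common(1)[0][0])
    return profile
```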

\begin{description}
 \item[House A] 1-bedroomdoor  2-bathroomdoor 3-toiletflush 4-bathroomdoor
			5-toiletflush 6-bathroomdoor 7-toiletflush 8-bathroomdoor 9-toiletdoor 10-frontdoor
 \item[House C] 1-couch 2-toiletdoor 3-toiletflush 4-couch 5-frontdoor 6-couch 7- fridge 8-couch
\end{description}

In conclusion, this is not the result we hoped for: for these houses the average
day is quite different. The main reason for this is probably that the number of
analyzed days is too small; just 30 entries is a very small number in machine
learning. But the results could perhaps already contribute to other approaches as
a bias, see the following section.

We expect that with more data and an enhanced algorithm (for example allowing
branching during the day and assigning probabilities to activities) this
technique could be more fruitful.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% DARIO

\subsection{Combining Profiles}

Sections \ref{sub:Sensor-Profile}, \ref{sub:Relational-Profile}
and \ref{sub:Day-Profile} showed three different ways to get information
out of the sensor data set. In this work we did not have the time to work out an
effective way of combining these different kinds of information into a unique profile.
What we do instead is compute the \emph{scores} for sensor matching with
the different criteria and then integrate this information. The
way we do this is explained in \ref{sec:directmapping}.
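Purely as an illustration of what score-level integration can look like (the integration we actually use is described in \ref{sec:directmapping}), one simple option is a weighted sum of the per-criterion score matrices after min--max normalisation; the equal default weights are an arbitrary choice of this sketch:

```python
def combine_scores(score_matrices, weights=None):
    """Combine several score matrices (one per criterion: sensor
    profile, relational profile, day profile) into a single matrix.
    Each matrix is min-max normalised to [0, 1], then the matrices
    are summed with the given weights. Illustrative only."""
    n = len(score_matrices)
    weights = weights or [1.0 / n] * n
    combined = None
    for w, m in zip(weights, score_matrices):
        flat = [v for row in m for v in row]
        lo, hi = min(flat), max(flat)
        span = (hi - lo) or 1.0  # avoid division by zero
        norm = [[w * (v - lo) / span for v in row] for row in m]
        combined = norm if combined is None else [
            [c + x for c, x in zip(crow, xrow)]
            for crow, xrow in zip(combined, norm)
        ]
    return combined
```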

