% THIS IS SIGPROC-SP.TEX - VERSION 3.1
% WORKS WITH V3.2SP OF ACM_PROC_ARTICLE-SP.CLS
% APRIL 2009
%
% It is an example file showing how to use the 'acm_proc_article-sp.cls' V3.2SP
% LaTeX2e document class file for Conference Proceedings submissions.
% ----------------------------------------------------------------------------------------------------------------
% This .tex file (and associated .cls V3.2SP) *DOES NOT* produce:
%       1) The Permission Statement
%       2) The Conference (location) Info information
%       3) The Copyright Line with ACM data
%       4) Page numbering
% ---------------------------------------------------------------------------------------------------------------
% It is an example which *does* use the .bib file (from which the .bbl file
% is produced).
% REMEMBER HOWEVER: After having produced the .bbl file,
% and prior to final submission,
% you need to 'insert'  your .bbl file into your source .tex file so as to provide
% ONE 'self-contained' source file.
%
% Questions regarding SIGS should be sent to
% Adrienne Griscti ---> griscti@acm.org
%
% Questions/suggestions regarding the guidelines, .tex and .cls files, etc. to
% Gerald Murray ---> murray@hq.acm.org
%
% For tracking purposes - this is V3.1SP - APRIL 2009

\documentclass{acm_proc_article-sp}
\usepackage{listings}

% Vielen Dank P. Kozeny !!!

 \makeatletter
 \let\@copyrightspace\relax
 \makeatother

\begin{document}

\lstset{ 
	language={[Sharp]C}, 	        % the language of the code (C#)
	breaklines=true,
	basicstyle=\fontsize{6}{6}\selectfont,
	numbersep=5pt,                    	% how far the line-numbers are from the code
	showstringspaces=false,       	% do not underline spaces within strings
	frame=single,                          	% adds a frame around the code
	tabsize=2,                             	% sets default tabsize to 2 spaces
}



\title{Distant Speech Recognition in Smart Homes Initiated by Hand Clapping within Noisy Environments}
%Format\titlenote{(Does NOT produce the permission block, copyright information
%nor page numbering). For use with ACM\_PROC\_ARTICLE-SP.CLS. Supported by
%ACM.}}
%\subtitle{[Extended Abstract]
%\titlenote{A full version of this paper is available as
%\textit{Author's Guide to Preparing ACM SIG Proceedings Using
%\LaTeX$2_\epsilon$\ and BibTeX} at
%\texttt{www.acm.org/eaddress.htm}}}
%
% You need the command \numberofauthors to handle the 'placement
% and alignment' of the authors beneath the title.
%
% For aesthetic reasons, we recommend 'three authors at a time'
% i.e. three 'name/affiliation blocks' be placed beneath the title.
%
% NOTE: You are NOT restricted in how many 'rows' of
% "name/affiliations" may appear. We just ask that you restrict
% the number of 'columns' to three.
%
% Because of the available 'opening page real-estate'
% we ask you to refrain from putting more than six authors
% (two rows with three columns) beneath the article title.
% More than six makes the first-page appear very cluttered indeed.
%
% Use the \alignauthor commands to handle the names
% and affiliations for an 'aesthetic maximum' of six authors.
% Add names, affiliations, addresses for
% the seventh etc. author(s) as the argument for the
% \additionalauthors command.
% These 'additional authors' will be output/set for you
% without further effort on your part as the last section in
% the body of your article BEFORE References or any Appendices.

\numberofauthors{2} %  in this paper, there are a *total*
% of TWO authors, both of whom appear on the first page.
%
\author{
% You can go ahead and credit any number of authors here,
% e.g. one 'row of three' or two rows (consisting of one row of three
% and a second row of one, two or three).
%
% The command \alignauthor (no curly braces needed) should
% precede each author name, affiliation/snail-mail address and
% e-mail address. Additionally, tag each line of
% affiliation/address with \affaddr, and tag the
% e-mail address with \email.
%
% 1st. author
\alignauthor
Florian Bacher\\
       \affaddr{Alpen-Adria-Universit\"at Klagenfurt}\\
       \email{fbacher@edu.uni-klu.ac.at}
% 2nd. author
\alignauthor
Christophe Sourisse\\
       \affaddr{Alpen-Adria-Universit\"at Klagenfurt}\\
       \email{csouriss@edu.uni-klu.ac.at}
}
% There's nothing stopping you putting the seventh, eighth, etc.
% author on the opening page (as the 'third row') but we ask,
% for aesthetic reasons that you place these 'additional authors'
% in the \additional authors block, viz.
%\additionalauthors{Additional authors: John Smith (The Th{\o}rv{\"a}ld Group,
%email: {\texttt{jsmith@affiliation.org}}) and Julius P.~Kumquat
%(The Kumquat Consortium, email: {\texttt{jpkumquat@consortium.net}}).}
\date{15 December 2011}
% Just remember to make sure that the TOTAL number of authors
% is the number that will appear on the first page PLUS the
% number that will appear in the \additionalauthors section.

\maketitle
\begin{abstract}
Owing especially to the potential improvements it can bring to the lives of elderly or disabled people, the smart home domain
has become a major field of research in information
and communication technologies. One way to interact
with a smart home is to control it by voice commands. A major challenge in this
approach, however, is deciding which noises to ignore and which to react to.
Our goal was therefore twofold: to carry out the future work suggested by Fleury et al. \cite{distant-sr}, including designing a methodology to evaluate the possibility of recognizing voice commands initiated by hand claps, and to capture a set of specific commands, as performed by 20 speakers, in a noisy setting. The results of this experiment show that our implementation works
well in relatively quiet daily situations, but as soon as the background noise level gets too high,
accurate detection of commands can no longer be guaranteed.
\end{abstract}

% A category with the (minimum) three required fields
%\category{H.4}{Interactive Systems}{Miscellaneous}
%A category including the fourth, optional field follows...
%\category{D.2.8}{Software Engineering}{Metrics}[complexity measures,
%performance measures]

%\terms{Theory}

\keywords{smart home, keyword detection, clap recognition, ASR, distant speech} % NOT required for Proceedings

\section{Introduction}
The work of the French team of the ``Sweet Home Project'' \cite{distant-sr} constitutes the basis of our paper. This team developed a piece of software responding to
the same needs as ours: ``designing a new smart home system based on audio technology''. The difference lies in the fact that they did not take into account the
possibility of disturbing additive noise while running their experiments. We intended to
evaluate whether such software can also work in a hostile environment, i.e. under specifically noisy conditions. For this reason we paid special attention to their work, and in particular to the study referenced in \cite{distant-sr}.\\
References \cite{spdet,sprecex} describe in more detail the experiments conducted by the same group of French scientists to test a piece of software in charge of identifying distress situations in a smart flat ``by analyzing the
sounds and also by recognizing a distress sentence''. These data were useful to us, especially
regarding the way they designed and conducted their experiments.\\
Finally, a paper by Rouillard et al. \cite{rouillard:shcom} contains some crucial information which helped us design the
piece of software in charge of recognizing the claps and the predefined word to be detected.
The work of Moncrief et al. \cite{anxsh} also helped us choose the appropriate background noises for our experiment.\\
In the next section, we describe the setup and methodology of our experiment.
Section 3 contains information about the technology used and the implementation of the software. In Section 4, we discuss the
experimental results and, in Section 5, we reflect on the outcomes of the experiment.


\section{Experiment description}

\subsection{Experimental Settings}
The experiment was performed in a 3 m $\times$ 3 m office provided by the Alpen-Adria-Universit\"at Klagenfurt. Figure \ref{room-layout} shows the layout of the furniture.\\
\vspace{-0.4cm}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.6]{Room-layout.png}
\caption{Layout of the furniture.}
\label{room-layout}
\end{figure}

\vspace{-0.4cm}
The sounds were picked up by two microphones hidden in this office. They were placed on either side of the room's medial axis, about 0.5 m away from each wall. The number of microphones and their placement are directly inspired by \cite{vacher:atiwr} and related experiments: ``The closest projects seem to have focused mainly on one-room microphone array''. Since our aim is to test our system's accuracy in a noisy environment, we decided to adopt such a configuration, which limits the number of microphones to two, as described by the ``Sweet-Home Project'' team in the schema of their experiment. The technical features of these microphones are described in Section 3.\\
We tried as much as possible to limit external sources of background noise, so that the only noises interfering with our experiment would be precisely the ones we chose to introduce. However, our office was not fully isolated: on one side there was a secretaries' office; on the other, a classroom with computers. Noises of chairs moving or snatches of conversation could sometimes be heard, but at too low a sound level to interfere with our recognition. Moreover, such noises are very similar to those we wanted to use as background noises for the experiment. We recorded some samples of these external noises, to be able to identify them in the audio files we recorded (see Section 2.2), and performed some pilot testing to make sure that our system was working as expected.

\subsection{Experimental protocol}
Twenty people (10 men and 10 women) participated in a two-phase experiment to:
\vspace{-0.4cm}
\begin{enumerate}
\item test the ability of our system to recognize some double hand-claps and pronounced words;
\item create a database of recordings of uttered words corresponding to a set of commands of a smart-home system.
\end{enumerate}
\vspace{-0.2cm}
The average age of the participants is $25.5 \pm 11$ years: all participants are between 19 and 28 years old, except one person who is 72. However, the data we obtained for this person are quite similar to what we got for the others. Without this person, the standard deviation of the age would be $3.5$ years. Moreover, our participants come from $9$ different countries, mostly in Europe, and speak between 3 and 4 different languages (native language included). This last point is quite interesting for us from the perspective of creating a database of recordings, because it increases the diversity of the learning set we intend to construct.\\
The first phase of the experiment (Phase 1) aims at recognizing a word uttered by our participants as a command to be executed by the smart home system, within different use cases. Participants therefore had
to perform a double hand-clap to attract the attention of the system. When the system identifies a double clap, it switches into a listening mode and stays in this mode for a few seconds, until it identifies a sound as a word of the database or until the timeout is reached. In both cases, the system then goes back to a waiting mode, until it identifies another double clap. To keep things simple, we wanted our system to recognize only the word ``Jeeves'', which should be considered the label of the unique command it can execute.\\
Phase 1 consisted of following four small scenarios of activities within the office where the experiment was run, while both of us were present in the office to supervise the recognition system and the running of the experiment alike.
Twice during each of these scenarios, participants had to make the system recognize the command ``Jeeves'', i.e. perform a double hand-clap followed by the utterance of this word. The participants did not know in advance when they had to perform a double hand-clap; we agreed with them on a visual signal that we gave when we wanted them to do so. Moreover, we asked them to perform an audible double clap, as if they wanted to attract the attention of an assistant, or in our case of a computer, while clearly specifying that they did not have to orient their claps toward the computer or any other specific place in the office.\\
Here is a description of each of our four scenarios:
\vspace{-0.4cm}
\begin{enumerate}
\item Initially, the system is running and the participant waits outside the room. Whenever he wants, the participant enters, closes the door, and then walks around the office for a few seconds. The double hand-clap and word utterance are performed just after entering the office, and again after having walked around for a few seconds.
\item The scenario begins inside the office: the participant has to walk around for a few seconds. Then, he has to open a drawer, take something (like a bottle of water) out of it, close the drawer and put the object on the desk. The double hand-clap and word utterance are performed just before opening the drawer, and once the object has been placed on the desk.
\item At the beginning of the scenario, the participant walks around the room for a few seconds, as in the previous scenario. Then, he has to move a chair in front of the desk and sit down. The double hand-clap and word utterance are performed before taking the chair, and once the participant is seated.
\item The participant is initially sitting in front of the desk, and an extract of a French radio show is played for 3 minutes. This extract includes some hand clapping and people laughing. The double hand-clap and word utterance are performed after 1 minute and after 2 minutes.
\end{enumerate}
\vspace{-0.2cm}
Note that the instants at which the command had to be performed in the fourth scenario were chosen only because nobody was applauding in the recording at those moments. This way, all participants perform the experiment under the same conditions. Moreover, we avoid situations in which we cannot tell whether the double clap performed by the participant or the applause from the recording was identified by the system as a double clap.\\
For each participant, we noted on a separate sheet of paper which of their double-clap and word utterance attempts were recognized by the software, and which were not. We also counted the number of times the software identified a noise that it should not have identified, and took notes about participants who did not perform the double clap or word utterance correctly (e.g. when they clapped too quietly, or did not utter ``Jeeves'' distinctly).\\
The second phase of the experiment (Phase 2) was designed to record a list of words, uttered by our participants, in order to create a database corpus. This database will be used in future experiments and is intended to train a neural network for speech recognition in smart homes. The recording was performed in two steps: we asked participants first to utter each word of the list once, and then to utter each word of the list 10 times.\\
The list was composed of the following 15 words, which could be used to control a smart home environment: ``Jeeves'', ``On'', ``Off'', ``Open'', ``Close'', ``More'', ``Less'', ``Next'', ``Previous'', ``Select'', ``Deselect'', ``Set'', ``Reset'', ``Connect'', ``Disconnect''. A reference recording of all these words was offered to each participant, in case of pronunciation difficulties.\\
The whole procedure took between 30 and 40 minutes per participant, depending on how quickly they understood how to perform it, each phase taking roughly 15--20 minutes. Throughout the experiment, we told people to act in a natural way, as we wanted, like the French team of the ``Sweet-Home Project'', to test a ``natural man-machine interaction'' \cite{vacher:atiwr}. In particular, for Phase 2, we asked people to utter each word naturally, without trying to say it with a perfect British or American accent, for example; our reference recording was provided only for cases where people really did not know how to pronounce a given word. Moreover, still for Phase 2, we told our participants when to utter each word by pointing a finger at them. This way, we were sure that people took the time to pronounce each word clearly, without rushing through the list to finish it.

\section{Implementation}
In this section, we describe the libraries and technologies we used to
implement the double clap/speech recognition software.
For the experiment we implemented software which listens in real time for sonic events
occurring in the room during each of the previously described scenarios, and reacts if it detects one of the predefined commands. In our case
these commands were a double clap and the subsequent utterance of the word ``Jeeves''.
We chose this combination of commands because we tried to make the way of
communicating with our software similar to the way one would call a butler. The double clap therefore has the purpose
of getting the ``attention'' of the software, and by subsequently calling the name ``Jeeves'' the user signals the intention to
give a command to the software.\\
In order to capture all sonic events in the room where the experiment was
conducted, we used two dynamic microphones (Sennheiser BF812, Sennheiser
e8155) with cardioid polar patterns\footnote{There is no technical reason why we chose two different microphones; these were simply
the only ones we were able to obtain for the experiment.}.
These were connected to the PC via a
``Line6 UX1'' audio interface. As the microphones are originally
intended for live recordings with high background
noise levels, it was problematic to get sufficiently loud volume levels from
everywhere in the room. For this reason the microphones' signals had to be
preprocessed before they went into the speech
detection software. To preprocess the signals we used the software ``Pod
Farm v2.5'', whose microphone preamps were used to raise the
signals to sufficient levels. After being amplified, the signals also went
through a high-pass filter with a cut-off frequency of 300 Hz in order to
reduce mains hum and other unwanted low-frequency background noise.\\
The recognition software was implemented in C\# with the \texttt{System.Speech.Recognition}
library, as proposed by Rouillard et al. \cite{rouillard:shcom}. This library provides direct access
to Windows' speech recognition engine. As we did not have much time to implement the software, this library
was a good choice because of its ease of use. Listing~\ref{lst:asr} shows how the library can be used to recognize predefined
words.

\begin{lstlisting}[label=lst:asr,caption={Recognizing a predefined word with \texttt{System.Speech.Recognition}.}]
using System.Speech.Recognition;

// Create an engine for US English and restrict its
// grammar to the single keyword "Jeeves".
var speechRecognizer = new SpeechRecognitionEngine(
	new System.Globalization.CultureInfo("en-US"));
speechRecognizer.LoadGrammar(
	new Grammar(new GrammarBuilder("Jeeves")));
speechRecognizer.SpeechRecognized +=
	new EventHandler<SpeechRecognizedEventArgs>(
		recognizer_SpeechRecognized);
speechRecognizer.SetInputToDefaultAudioDevice();
speechRecognizer.RecognizeAsync(RecognizeMode.Multiple);

// Called whenever the engine matches a word of the grammar.
static void recognizer_SpeechRecognized(object sender,
	SpeechRecognizedEventArgs e)
{
	Console.WriteLine(e.Result.Text + ": Yes, master?");
}
\end{lstlisting}

In order to detect double claps, the software periodically calculates the
average noise level, using the \texttt{AudioLevelUpdated} event of the created
\texttt{SpeechRecognitionEngine} object. Every time the signal level exceeds a
threshold defined relative to the average noise level, the software
detects a peak. If there are exactly two noise peaks within a certain time frame
(0.4--1 s), the software classifies them as a double clap.
Every time such a double clap has been detected, the actual speech recognition
engine is activated for 3 seconds. If the word ``Jeeves'' is uttered within this period,
the engine recognizes it and gives appropriate feedback.
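The peak-and-timing logic described above can be sketched as follows. This is a simplified Python sketch, not the actual C\# implementation; the frame rate and the threshold factor are illustrative assumptions.

\begin{lstlisting}[language=Python,caption={Sketch of the double-clap detection logic.}]
FRAME_RATE = 100        # audio level updates per second (assumption)
THRESHOLD_FACTOR = 3.0  # a peak exceeds 3x the running average (assumption)

def detect_double_claps(levels, frame_rate=FRAME_RATE):
    """Yield the frame indices at which a double clap is detected.

    A peak is a frame whose level exceeds THRESHOLD_FACTOR times the
    running average of all previous frames; two peaks separated by
    0.4-1 s are classified as a double clap."""
    min_gap = int(0.4 * frame_rate)
    max_gap = int(1.0 * frame_rate)
    running_sum, last_peak = 0.0, None
    for i, level in enumerate(levels):
        average = running_sum / i if i else level
        running_sum += level
        if i and level > THRESHOLD_FACTOR * average:
            if last_peak is not None and min_gap <= i - last_peak <= max_gap:
                yield i           # second peak in the window: double clap
                last_peak = None  # reset after a detection
            else:
                last_peak = i     # remember this peak as a candidate
\end{lstlisting}

Note that this sketch omits the ``exactly two peaks'' check of the real implementation; a third peak inside the window simply starts a new candidate.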


\section{Results}
For each part of the command recognition, i.e. the double-clap recognition and the word utterance recognition, we classified the input data into 4 categories:
\vspace{-0.4cm}
\begin{itemize}
\item True positive: there was an attempt from the participant (to make respectively a double-clap or a ``Jeeves'' utterance), well performed, and it was recognized;
\item True negative: there was an attempt from the participant, not well performed, and it was not recognized;
\item False positive: there was not any attempt from the participant, but the system recognized something;
\item False negative: there was an attempt from the participant, well performed, but it was not recognized.
\end{itemize}
\vspace{-0.2cm}
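From these four counts, the rate of correct classifications used below is simply the fraction of true positives and true negatives among all events. The following trivial sketch makes the metric explicit; the counts in the example are hypothetical, not our actual tallies.

\begin{lstlisting}[language=Python,caption={Correct-classification rate from the four categories.}]
def correct_classification_rate(tp, tn, fp, fn):
    """Fraction of events handled correctly: recognized when they
    should be (TP) or rejected when they should be (TN)."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical counts, for illustration only:
# 100 events, 70 handled correctly -> a rate of 0.7.
rate = correct_classification_rate(tp=50, tn=20, fp=10, fn=20)
\end{lstlisting}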
We imposed performance constraints on both the double claps and the word utterances made by our participants to decide whether they could be classified as true positives (resp. false negatives if the software did not recognize them), or downgraded to true negatives (resp. false positives). Double claps had to be audible, with a short silence clearly audible between the two claps. Word utterances had to be audible too, with the word ``Jeeves'' easily recognizable.\\
This rating is due to the fact that some participants executed very quiet and timid hand-claps, so that the software simply did not recognize them as hand claps. Others did not wait long enough between the two hand-claps, so that they were not recognized as a double hand-clap. On the speech recognition side, some people had a particular way of pronouncing the word ``Jeeves'': some said it like ``Cheese'', or could not make the first letter ``J'' heard, which gave something like ``Yeeves''. Figure \ref{global-results} gives statistics about the global results of our experiment according to these criteria.\\
\vspace{-0.3cm}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.6]{new-results1.png}
\caption{Global results of the Clap and Speech Recognition experiment.}
\label{global-results}
\end{figure}

\vspace{-0.4cm}
We can notice that we obtained a good rate of correct classifications, i.e. of hand claps and uttered words which were recognized when they should be, or not recognized when they should not be (in other words, of true positives and true negatives): $70\%$ for clap recognition and $86\%$ for speech recognition. At the same time, the rate of false positives is very low in both cases: only $10\%$ and $7\%$ respectively for clap and speech recognition.\\
However, we also observed a relatively high rate of false negatives, especially for clap recognition ($20\%$). This can be directly linked to the fact that our software requires a threshold to perform the double-clap recognition. Depending on the way people clap, e.g. with flat hands or with cupped hands, the sound curve displayed on the computer, and hence the threshold to be used, will not be the same.\\
If we then analyze our experiment scenario by scenario, as proposed in Figure \ref{detailed-results}, a clear difference can be noticed between the first three scenarios and the last one, especially in the case of speech recognition: $90\%$ of correct classifications on average for word utterance recognition in the first three scenarios (resp. $75.0\%$ for double-clap recognition), but only $73\%$ in the fourth scenario (resp. $58.3\%$). This can be explained by the difference in background noises: in the first three scenarios, the background noises are of the same kind (footfalls, opening/closing doors, moving a chair, etc.), whereas the last scenario involved laughter and applause from a radio show, which were much more likely to generate false positives, especially for clap recognition.\\
\vspace{-0.3cm}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.5]{new-results2-2D.png}
\caption{Detailed results for each scenario.}
\label{detailed-results}
\end{figure}
\vspace{-0.3cm}

\section{Conclusion}
This paper presents a new idea for initiating speech recognition in human-computer interaction, namely the use of double claps to attract the attention of a smart home, as well as a study of the influence a noisy environment can have on such a recognition method. Clap and speech recognition could thus both be key technologies in a smart home environment, enabling human-machine interaction between users and their home, but they are useless if they cannot cope with the background noise configuration of each home, including daily-life noises like footfalls, door slams, TV or radio shows, etc.\\
The experiments we conducted gave us crucial information about the feasibility of integrating such technologies into a smart home environment. First, the speech recognition was quite successful, with a low rate of false positives, except against background noise such as a TV or radio show (which would, in any case, be one of the most extreme use cases for such a technology). Second, the results of the double hand-clap recognition are encouraging but not yet satisfying: a somewhat higher rate of false positives ($10\%$) than for speech recognition ($7\%$), but above all $20\%$ of false negatives.\\
Nevertheless, both problems (false positives and false negatives) seem surmountable. In fact, the rate of false negatives is not the most important issue to solve: if someone occasionally has to double-clap twice (or respectively utter a command twice) to be heard by the system, that is not a real problem. A higher rate of false positive detections would be more problematic. If a command is executed when the user did not intend it, the consequences can be serious, especially in the case of distress situation detection, e.g. if firemen are called out when there is no emergency: such exceptions should happen only very rarely.\\
The next step of this experimental process could be to perform this protocol in a real smart-home context, as the French team of the ``Sweet-Home Project'' did. In a whole flat, in a multi-room context, and probably with a more mature double-clap detection technology, our results could be quite different.

\section{Acknowledgements}
We would like to thank John NA Brown in particular for the help and advice he gave throughout this work, as well as the Alpen-Adria-Universit\"at Klagenfurt, which provided a room for our experiment.


%
% The following two commands are all you need in the
% initial runs of your .tex file to
% produce the bibliography for the citations in your paper.
\bibliographystyle{abbrv}
\bibliography{sigproc}  % sigproc.bib is the name of the Bibliography in this case
% You must have a proper ".bib" file
%  and remember to run:
% latex bibtex latex latex
% to resolve all references
%
% ACM needs 'a single self-contained file'!
%
%APPENDICES are optional
%\balancecolumns
%\appendix
%Appendix A

%\balancecolumns
% That's all folks!
\end{document}
