%%% verslag.tex ---
%% Author: xtroce@museum
%% Version: $Id: mitschrift03.tex,v 0.0 2012/02/13 12:22:49  Exp$

\documentclass[12pt,a4paper]{article}
%\special{papersize=210mm,297mm}

%% use a better geometry for A4 paper
\usepackage[a4paper,top=3cm,bottom=3cm,left=2cm,right=2cm]{geometry}

%% package for importing other latex files to this one
\usepackage{import}

%% package to use graphics in latex
\usepackage[dvipdf]{graphicx}

%%package for drawing IPA sound font
\usepackage{tipa}

%%package for multiple figures in one figure
\usepackage{subfigure}

%% package for multiple rows in a table
\usepackage{multirow}

%%forcing figures on specific locations
%% to do this use [H] in the position specifier of the figure
%\usepackage{float}
%\restylefloat{figure}


%% package to include matlab code into latex
%% to include use \lstinputlisting{MATLABFILE}
%% to include a whole file
%\usepackage{mcode}

%% Package to colorize table output
%\usepackage{colortbl}

%% To write Umlaute in latex without encoding
%\usepackage[ansinew]{inputenc}

%% german language package
%\usepackage{ngerman}

%\usepackage[debugshow,final]{graphics}
%\usepackage{setspace}

%% for new packages use the ~/texmf/tex folder
%% run texhash afterwards



\title{Comparison of Robust and Linear Algorithms for Formant Detection
  in PRAAT}
\author{Sebastian Dr\"oppelmann\\ Osewa Redencio Jozefzoon}
\date{\today}

%\doublespacing
\bibliographystyle{plain}%Choose a bibliographic style

\begin{document}

%%Titlepage
\begin{titlepage}
\maketitle
\begin{center}
% \subimport{/home/xtroce/.emacs/}
%\includegraphics[scale=0.2]{logo_uva.ps}
\end{center}
\thispagestyle{empty}
\end{titlepage}

%%TOC
\tableofcontents
\newpage

%%%%##########################################################################

\section{Abstract}
Linear prediction is one of the most commonly used tools in speech
analysis, as well as in other fields where signal analysis is
important. In this paper we compare two different methods of linear
prediction with each other to identify their respective strengths and
weaknesses.\\
In the paper \emph{On Robust Linear Prediction of Speech}, Chin-Hui
Lee claims that the Robust Linear Prediction (RLP) algorithm achieves
more accurate results than conventional Linear Prediction (LP)
procedures. He states that this accuracy is achieved by the manner in
which RLP, in contrast to conventional LP, weights the prediction
residuals according to a maximum error value. RLP provides a less
biased estimate of the prediction coefficients, with less variance.

This paper will examine this claim by contrasting the performance of
LP and RLP. We first test the performance of both algorithms on
synthesized vowels, and then also measure their performance on real
speech. The tool for this research is the sound analysis program
PRAAT, in which LP and RLP are used to analyze both synthetically
generated and real voices.

\subsection{The Methods}
The conventional linear prediction model is used to \emph{guess} a
linear function for given measurement samples taken over a certain
period of time. Each sample is predicted as the sum of the previous
signal values, each weighted by a predictor coefficient. It is then
possible to calculate an error value between the real signal value
and the predicted value. Minimizing this error value gives the
best-fitting one-dimensional function for the samples.\\
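In formulas, this is the standard linear prediction setup: each
sample is predicted as a weighted sum of the $p$ previous samples,
\[
\hat{s}(n) = \sum_{k=1}^{p} a_k\, s(n-k),
\qquad
e(n) = s(n) - \hat{s}(n),
\]
and the predictor coefficients $a_k$ are chosen to minimize the total
squared error $\sum_n e(n)^2$.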

With the robust version of linear prediction the idea is to weight
the outlying values differently instead of simply taking the squared
values. This yields a better representation for non-Gaussian signals
with sharp outliers. The weighting itself is done by down-weighting
residuals that exceed a maximum error value, so that outliers
contribute less to the estimate.
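One common way to make such a weighting concrete (a generic robust
M-estimation formulation, not taken verbatim from the source) is to
replace the squared error by a function that grows more slowly for
large residuals,
\[
\min_{a_1,\dots,a_p} \sum_n \rho\bigl(e(n)\bigr),
\qquad
\rho(e) =
\begin{cases}
e^2, & |e| \le c,\\
c\,(2\,|e| - c), & |e| > c,
\end{cases}
\]
where the Huber-style threshold $c$ plays the role of the maximum
error value: residuals beyond it are down-weighted instead of
squared.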

\section{Synthesized Vowels}
The idea was to compare the two algorithms in a situation where the
original formants of the vowels are known. This makes it possible to
compare the results of the algorithms with the original input, and
thereby measure how much the formants returned by the algorithms
differ from the original ones. Doing that for multiple generated
vowels makes it possible to compare the general performance of the
two algorithms.\\
To make this precise, we first generated artificial vowels for which
the frequencies of the formants are known. The generation was based
on the Pols \& Van Nierop (1973) table, in which the first three
formants of 50 Dutch males and 25 Dutch females were measured. This
table, provided by PRAAT, was used to generate a standard male and
female voice by calculating the mean of each formant for each sex.
The input values for the generation can be seen in Table
\ref{tab:inputgeneration}. For the 4th and 5th formant we used fixed
values of 3500 and 4500 Hz. These vowels were then analyzed with both
algorithms.

%% import of the input table
\subimport{scripts/tables/}{pvn.tex}

To generate more data and to simulate different voices we varied the
pitch frequency, the bandwidth and the maximum frequency of the
created vowels. The pitch is the base frequency of a voice, which
depends on the vocal folds of the speaker. For men it usually lies
around 100 Hz, for women around 200 Hz and for children around
300 Hz. We used these three values to vary the pitch.\\
The bandwidth determines the width of a formant peak. It usually lies
around 10--50 Hz for the first three formants and is usually larger
for the 4th and 5th formant. For our generation the bandwidth was
varied from 8\% to 21\% of the pitch frequency for the first three
formants; the 4th and 5th formant used a fixed bandwidth of 10\% of
their value.\\
Finally, the maximum frequency defines the upper boundary within
which the algorithms search for formants. This is important, as the
algorithms place their formant estimates between 0 Hz and this upper
boundary. Men's voices usually have a maximum frequency around
4500 Hz and female voices around 5500 Hz, so we varied this value
from 4500 to 5500 Hz in steps of 500 Hz.\\
Furthermore, we used two types of generation for the vowels: the
pulse-train method and the phonation method, the latter of which
should generate more realistic vowels. We then fixed some of the
varying values to see the performance of the algorithms as a function
of the remaining ones.\\
For the analysis, one varying value was examined as a function of the
other, fixed values. This way we were able to reduce the number of
variations and get a better overview without losing information,
while still seeing the performance under different generation
configurations.\\
For the measurements with varying maximum frequency and varying
source frequency we used a bandwidth of 11\% of the pitch. To measure
the effect of the bandwidth we used a pitch frequency of 100 Hz for
males and 200 Hz for females, and we set the maximum frequency to
4500 Hz for males and 5500 Hz for females. The number of formants the
algorithm searched for was fixed to five, since this was the number
of formants used for the generation. All these measurements were done
with both creation methods. At the end we compared the formants
returned by the algorithms with the ones used as input.\\
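As a compact illustration of the variations described above, the
following Python sketch enumerates the parameter grid (the variable
names are our own; this is not the actual generation script used in
the experiments):

```python
from itertools import product

# Hypothetical reconstruction of the variation grid described in the
# text: three pitch values, three maximum frequencies, and bandwidths
# from 8% to 21% of the pitch for the first three formants.
pitches = [100, 200, 300]        # Hz: male, female, child
max_freqs = [4500, 5000, 5500]   # Hz: varied in steps of 500 Hz
bandwidth_pcts = range(8, 22)    # 8% .. 21% of the pitch

configs = [
    {"pitch": p, "max_freq": mf, "bandwidth_hz": p * bw / 100.0}
    for p, mf, bw in product(pitches, max_freqs, bandwidth_pcts)
]
print(len(configs))
```

Each configuration would then be synthesized with both the
pulse-train and the phonation method and analyzed with both
algorithms.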

For the generated vowels the overall performance of the robust
algorithm was better than the performance of the Burg
algorithm. Especially the results using the phonation method show
that the performance of the robust algorithm is, indeed, more
robust. The results for varying pitch frequencies, for both sexes and
both generation methods, can be seen in the example Figures
\ref{subfig:syn_ex1} and \ref{subfig:syn_ex2}. As you can see in the
images, the robust algorithm, drawn in green, is much closer to the
originally generated vowels than the Burg algorithm measurements,
drawn in red. Figure \ref{subfig:syn_ex2} also shows that with the
phonation method the Burg algorithm produces F2 values that are far
lower than they are supposed to be. This is also true for the results
with varying maximum frequencies shown in Figures
\ref{subfig:syn_ex3} and \ref{subfig:syn_ex4}.\\
The low F2 values returned for the phonation method result from the
fact that the Burg algorithm sometimes measures the important
formants inaccurately. Instead of the formant values searched for,
the algorithm returns a formant with a very high bandwidth that lies
in between two wanted formants. In other words, the algorithm takes
an unimportant, very wide-peaked formant and returns it in place of
the much narrower-peaked formant that should have been found. This
error shifts the subsequent measurements, which then results in
values for certain formants that are too low. This could be corrected
by an error evaluation depending on the bandwidth and formant value,
deleting the formants with a high error and shifting the later
formants to their designated places. We did not implement such an
error correction, since this would go beyond the scope of this
paper.\\
The  full  set   of  resulting  figures  can  be   found  in  Appendix
\ref{sec:App_fig}.

%% example pictures
\subimport{scripts/pics/}{examplepics.tex}

\section{Real Speech}
After analyzing the algorithms in the domain of synthesized vowels we
were interested in how well they perform on real speech. As it is
impossible to know the exact formant frequencies of spoken language,
we used recordings of spoken vowel-consonant-vowel pairs that were
recorded with two different microphones at the same time. One of the
microphones was a high-quality one, the other a low-quality table
microphone. These recordings were taken from the IFA spoken language
corpus \cite{ifacorpus}. We analyzed both recordings with both
algorithms to see how big the difference between the two recordings
was for each algorithm. Ideally there should be no difference in the
formant values, since both recordings capture the same speech and
differ only in quality. Through this comparison we hoped to gain some
insight into the performance of the two algorithms.\\

For the comparison of the two algorithms we used an error value that
expresses the difference between the recordings of the two
microphones. We used a logarithmic error function to compensate for
the larger absolute deviations of the higher formants. At the end we
multiplied all results by 10 to scale them up and make them more
readable. These error functions can be seen in Figure
\ref{fig:errorval}. For the comparison we used the single-formant
function for F1 and F2 separately, the combined function for F1 and
F2, and the combined function for F1, F2 and F3.\\
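The error functions can be written as a single routine; the Python
sketch below (function and variable names are our own, and we assume
the two argument lists hold the formants of the low-quality and
high-quality recording respectively) computes the scaled logarithmic
distance over the first $n$ formants:

```python
import math

def formant_distance(f_low, f_high, n_formants):
    """Scaled logarithmic distance between the formant measurements
    of the low-quality and high-quality recordings.

    f_low, f_high: formant frequencies in Hz (F1, F2, ...).
    The result is multiplied by 10, as described in the text.
    """
    total = sum(
        (math.log10(f_low[i]) - math.log10(f_high[i])) ** 2
        for i in range(n_formants)
    )
    return 10.0 * math.sqrt(total)
```

With `n_formants` set to 1, 2 or 3 this reproduces the three variants
shown in Figure \ref{fig:errorval}.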

\begin{figure}
\centering
\subfigure[All Values] {
$\sqrt{(\log_{10}(F1_{fm}) - \log_{10}(F1_{hm}))^2 + (\log_{10}(F2_{fm}) - \log_{10}(F2_{hm}))^2 + (\log_{10}(F3_{fm}) - \log_{10}(F3_{hm}))^2}$
\label{subfig:threeerror}
}
\subfigure[First two formants] {
$\sqrt{(\log_{10}(F1_{fm}) - \log_{10}(F1_{hm}))^2 + (\log_{10}(F2_{fm}) -  \log_{10}(F2_{hm}))^2}$
\label{subfig:twoerror}
}
\subfigure[One Formant] {
$|\log_{10}(F1_{fm}) - \log_{10}(F1_{hm})|$
\label{subfig:oneerror}
}
\caption{Error Functions}
\label{fig:errorval}
\end{figure}
When comparing the distance between the two recordings, the results
in Table \ref{subfig:rv_fem} showed that the robust algorithm is
slightly worse for the 20-, 28- and 60-year-old women, but has better
results for the 40-year-old woman. The distance is also larger for
almost all measurements in F1, F2, F1 and F2 combined, and the first
three formants. On some vowels the robust algorithm performs better,
but its overall performance is slightly worse when considering the
distance between the two measurements.\\
For males the difference is much smaller: the robust algorithm gives
better results about half the time, whereas the Burg algorithm
outperforms the robust method the other half
(Table \ref{subfig:rv_male}). On the first formant the robust
algorithm is always as good as or better than Burg for the
\textbf{a}, whereas the \textbf{u} is often closer with the Burg
algorithm. The error span of the robust algorithm is almost always
worse on F2 for the male voices.\\

%% realvoice measurement error values
\subimport{scripts/tables/}{endresults_realvoice.tex}

To visualize the performance of the two algorithms we use the image
provided in Figure \ref{fig:rv_lines}. In this figure the two
recordings are connected by a line to make the difference visible.
The robust algorithm is shown in green and the Burg algorithm in red,
just as in the previous figures. The average of the two microphones
is plotted as letters in the image, where the Burg algorithm has a
bigger letter than the robust algorithm. It is possible to see that,
even with the slight disadvantage of a greater distance between the
vowels, both methods perform quite well at finding the frequencies
the vowels are supposed to have.\\

%% image of the results with line connections
\subimport{scripts/pics/}{rv_lines.tex}

When comparing how often each algorithm produced the smaller distance
relative to the total number of measurements, the Burg algorithm has
the smaller distance 54\% of the time for female voices, whereas for
male voices it has the smaller distance only 34\% of the time. The
robust algorithm accordingly achieved 46\% for female and 66\% for
male voices. More specifically, for the female voices the Burg
algorithm appears to predict the vowels \textbf{a} and \textbf{o}
more accurately, i.e.\ with a smaller difference between the
predictions for the two microphones, while the robust algorithm is
more accurate for the \textbf{i} and \textbf{u}. For the male voices
the robust algorithm appears to predict the vowels \textbf{a} and
\textbf{i} more accurately, and it is as effective as the Burg
algorithm for the \textbf{u}. The results of these measurements can
be seen in Table \ref{tab:rv_statistics}.\\
The average difference for each method is calculated by summing the
biggest and smallest difference between the measurements and dividing
by two. The significance measure is calculated by determining the
probability of this average difference occurring randomly.
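That calculation amounts to the midrange of the per-vowel
differences; a minimal sketch (our own naming):

```python
def average_difference(diffs):
    # Midrange as described in the text: the sum of the biggest and
    # smallest difference between the measurements, divided by two.
    return (max(diffs) + min(diffs)) / 2.0
```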

%% realvoice measurement statistic values
\subimport{scripts/tables/}{statistics_realvoice.tex}

\appendix
\section{Appendix}
\label{sec:App_fig}
\subimport{scripts/pics/}{pics.tex}


%%%%##########################################################################
\bibliography{verslag.bib}
\end{document}
