\documentclass[12pt,a4paper,twoside,openright]{report}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{color}
\usepackage{listings}
\raggedbottom                           % allow short pages rather than stretched spacing
\clubpenalty1000%
\widowpenalty1000%
\brokenpenalty10000\relax
\lstset{ %
language=Java,                % choose the language of the code
basicstyle=\footnotesize,       % the size of the fonts that are used for the code
numbers=left,                   % where to put the line-numbers
numberstyle=\footnotesize,      % the size of the fonts that are used for the line-numbers
stepnumber=1,                   % the step between two line-numbers. If it is 1 each line will be numbered
numbersep=5pt,                  % how far the line-numbers are from the code
backgroundcolor=\color{white},  % choose the background color. You must add \usepackage{color}
showspaces=false,               % show spaces adding particular underscores
showstringspaces=false,         % underline spaces within strings
showtabs=false,                 % show tabs within strings adding particular underscores
frame=single,           % adds a frame around the code
tabsize=2,          % sets default tabsize to 2 spaces
captionpos=b,           % sets the caption-position to bottom
breaklines=true,        % sets automatic line breaking
breakatwhitespace=false,    % sets if automatic breaks should only happen at whitespace
escapeinside={\%*}{*)}          % if you want to add a comment within your code
}
\renewcommand{\lstlistingname}{Code Sample}
\newcommand{\degrees}{\ensuremath{^\circ}}

\parindent 0pt
\parskip 6pt

\begin{document}

\thispagestyle{empty}

\rightline{\large\emph{Matthew Thomson}}
\medskip
\rightline{\large\emph{Gonville and Caius College}}
\medskip
\rightline{\large\emph{mt521}}

\vfil

\centerline{\large Computer Science Part II Dissertation}
\vspace{0.4in}
\centerline{\Large\bf Dolphin: Networking using sound}
\vspace{0.3in}
\centerline{\large May 2013}

\vfil

{\bf Project Originator:} Matthew Thomson

\vspace{0.1in}

{\bf Project Supervisor:} Oliver R. A. Chick

\vspace{0.5in}

{\bf Director of Studies:}  Graham Titmus and Peter Robinson

\vspace{0.5in}

{\bf Overseers:} Peter Robinson and Robert Watson

\vfil
\newpage
\cleardoublepage

\setcounter{page}{1}
\pagenumbering{roman}

{\huge\bf Proforma}
\bigskip

{\large
\begin{tabular}{ll}
Name:               & {Matthew Thomson}                       \\
College:            & {Gonville and Caius College}            \\
Project Title:      & {Dolphin: Networking using sound}       \\
Examination:        & {Computer Science Tripos Part II Dissertation, May
2013} \\
Word Count:         & {Some}     \\
Project Originator: & {Matthew Thomson}             \\
Supervisor:         & {Oliver R. A. Chick}          \\ 
\end{tabular}
}
\medskip

{\Large\bf Original aims of the project}
\medskip

To implement a new data-link layer for Android that uses sound as the
transmission medium. It should encode and decode data reliably,
functioning on a mobile device without access to the Internet. It should be
possible to use this system without prior knowledge of how it works, by simply
sending data to it and receiving data back again.
\medskip

{\Large\bf Work completed}
\medskip

The success criteria of the project have been met and all the core features
implemented successfully. Evaluating the system with various examples of test
data shows it is capable of working with 100\% accuracy. I have also implemented
an extension which allows full duplex communication between the devices without
extending the time required to send a transmission.
\medskip

{\Large\bf Special difficulties}
\medskip

None

\newpage

{\Large\bf Declaration}
\medskip

I, Matthew Thomson of Gonville and Caius College, being a candidate for Part II
of the Computer Science Tripos, hereby declare that this dissertation and the
work described in it are my own work, unaided except as may be specified below,
and that the dissertation does not contain material that has already been used
to any substantial extent for a comparable purpose.

\bigskip
\leftline{Signed}

\bigskip
\leftline{Date}

\cleardoublepage

\tableofcontents

\cleardoublepage
\setcounter{page}{1}
\pagenumbering{arabic}

\chapter{Introduction}

In this project I create Dolphin, an OSI layer 2 for Android that allows data to
be transferred using audible sound. All
five of the original success criteria have been achieved, as well as one of the
two extensions. The reliable data transfer rate achieved was just over 15
bytes per second, so Dolphin is best suited to situations where the amount of
information to send is small (such as a \emph{vCard}), an Internet connection is
unavailable and techniques such as Bluetooth pairing would be too slow.

\section{Motivation}

As smartphone ownership grows, the need to transfer information between devices
becomes more pressing. Example methods of data transfer
include: multimedia messaging; infrared; Bluetooth; WiFi; and 3G. A more recent
method of transferring small text messages, such as Internet links, is the
\emph{Quick Response} (QR) code, which is scanned and decoded using the phone's
camera and a dedicated application. QR codes have become increasingly
popular, both in phone-to-phone data transfer and in advertising, with 10\% of all
US magazine advertising containing a QR code in 2012~\cite{QRPopularity}.

QR codes have several weaknesses. The reader must be close enough to the code
to scan it accurately, so for long distances the code must be printed very
large, and the camera must be held steady enough to capture the image, which
is not always possible in a crowd or a moving vehicle. They can also only
contain a small amount of information: around 4000 alphanumeric characters, or
fewer than 2000 kanji symbols~\cite{QRSize}.
This rules out transferring larger files, and most QR programs that
offer large file transfer instead upload the file to a webserver and encode the
link in the QR code. Dolphin can be used to create QR code-like sequences
which use audible rather than visual stimuli to encode the data, and it imposes
no limit on the maximum file size. It is also not necessary to hold the phone
still, so it is more suitable in situations where camera-shake is expected.

Using sound as the transmission medium means larger files can be encoded
using longer sounds, which is more practical than printing ever-larger QR
codes. The amount of information a QR code can contain is limited because the
entire code must fit in the camera's frame at once while the dots that
comprise it remain large enough to be detected by the camera; smaller dots are
also more difficult to scan than larger dots with a moving camera. The largest
QR code developed to date contains 177 rows and columns and can theoretically
hold 4296 alphanumeric characters.
With Dolphin, there is no need to hold the receiving device steady, and noise
cancellation techniques can be applied to the system in the same way image
stabilisation can for QR codes. Furthermore, if the receiving device is further
from the transmitter, the volume simply needs to be increased rather than
a larger QR code printed. As a real-world example, the link to a band's latest
album could be encoded and played through the sound system at a concert, which
must already be loud enough to reach the back of the crowd. By contrast,
assuming a smartphone camera has an effective viewing range of 1:10 (so a 2cm
QR code can be read up to a distance of 20cm) and a concert venue 500 metres
deep, a QR code readable from the back would need to be 50m across, which would
be difficult to erect.

Dolphin is implemented at the data-link layer so that higher layers can build
on its functionality. An application-layer sound transfer system would be
harder for other applications to integrate, and therefore less useful. This way
Android app programmers can use the data-link layer to implement file transfer
systems, QR code style sounds, discreet text communication systems, and more.

\section{Android and Java}

I use Android for this project, rather than one of the other
major mobile OS providers, because Android is open source and has an expressive
API. This means it is easier to access the underlying hardware on the
phone such as the microphone and speakers, and app programmers have more freedom
to use this layer in their projects. Android also uses Java, which has a rich
set of libraries.

\section{Encoding and decoding}

Dolphin encodes any sequence of bytes, regardless of what they represent, in
order to ensure that any computer file can be successfully transmitted. Each of
the 256 possible bit patterns that make up a byte is assigned a unique
frequency, so every file can be portrayed as a series of frequencies. Different
frequencies sound different when they are played, so the encoding process is
reversible by analysing the unique frequency of each sound. To determine the
frequency of a sound I use a \emph{Fast Fourier Transform} to sort the recorded
data into an array of amplitudes at each frequency; the frequency with the
largest amplitude is taken as the one that was sent.
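A minimal sketch of this byte-to-frequency mapping follows. The 330Hz base and
30Hz spacing are the values reported later in this dissertation; the class and
method names are invented for illustration:

\begin{lstlisting}
// Illustrative sketch of the byte-to-frequency mapping described above.
// BASE_FREQ and FREQ_STEP follow the 330 Hz lowest frequency and 30 Hz
// spacing used elsewhere in this dissertation; the names are invented.
public class ByteFrequencyMap {
    static final double BASE_FREQ = 330.0; // lowest frequency used (Hz)
    static final double FREQ_STEP = 30.0;  // spacing between adjacent codes (Hz)

    // Map an unsigned byte value (0-255) to its unique frequency.
    static double toFrequency(int byteValue) {
        return BASE_FREQ + byteValue * FREQ_STEP;
    }

    // Invert the mapping: round the measured peak to the nearest code.
    static int toByte(double frequency) {
        return (int) Math.round((frequency - BASE_FREQ) / FREQ_STEP);
    }

    public static void main(String[] args) {
        int original = 0x41; // 'A' = 65
        double f = toFrequency(original);
        // A peak measured up to 15 Hz off still rounds to the correct byte.
        System.out.println(f + " Hz decodes to " + toByte(f + 12.0));
    }
}
\end{lstlisting}

Because adjacent codes are 30Hz apart, any measured peak within 15Hz of the
true frequency still rounds back to the correct byte.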

\section{Results}

The evaluation (\S4) reveals that Dolphin is capable of 100\%
accuracy in data transfer, using a sample rate of 32kHz and encoding each byte of information in
a 64ms burst of sound. Frequencies representing bytes are spaced 30Hz apart so
they can be decoded more reliably. These settings give a transfer rate of about
1kB/min; since the test with 32ms bursts failed to decode only 1 byte in 500,
the transfer rate could be doubled while the accuracy of Dolphin would fall
only to 99.8\%.

\section{Project summary}

In Chapter 2 I discuss the background knowledge required for implementing this
project, including sound theory, how to decode signals and related existing
work. I also outline my requirements analysis, describe a
testing strategy and detail the software development methodology I use, including how I have incorporated
version control into my work. I also briefly outline the Android application life cycle, as
Dolphin runs on Android and all testing takes place using an Android app.

In Chapter 3 I describe specifically how I encode data as sound and then decode
it. This includes the subtle differences between Java and Android
programming in this field and the complications that need to be addressed when
programming a mobile app for Android. I also describe the existing libraries I
make use of to complete the work and present some results of the ongoing testing
that took place during the software development.

In Chapter 4 I show how the original criteria were achieved using the test plan
in Chapter 2. I present a more detailed account of the tests that produced the
results described at the end of Chapter 3 and use the results to carry out a
significant improvement to the implementation. I then present a comparison
between the original design and the new version, showing a twenty-fold
reduction in error rate.\footnotemark

\footnotetext{4\% errors improved to 0.2\% errors in testing.}

\chapter{Preparation}

Before creating the new layers I chose a professional software development
methodology, considered which courses in the Computer Science Tripos would be
useful, and read the relevant literature. In this chapter, I describe these
considerations, along with some of the fundamental concepts of the project,
such as digitally representing sound and encoding information in a sound. I
present a number of different ways the information encoding could be achieved
(\emph{amplitude}, \emph{phase} and \emph{frequency} shifting) with their
advantages and disadvantages. My review concludes that frequency is
the simplest variable with which to encode data.

\section{Prerequisite Knowledge}

A number of courses offered at Cambridge contain useful information for
completing this project.

\begin{description}
\item{{\bf Digital Signal Processing} contains information on Fast Fourier
Transforms that is critical for decoding the sounds back into bit patterns.}
\item{{\bf Computer Networking} explains the OSI layer model and how the layers
interact with each other, which tells me I need to offer a single socket
through which another program's network layer can pass data into the data-link
layer.}
\item{{\bf Mobile and Sensor Systems} contains theory on wireless data
transfer.}
\item{{\bf Programming in Java} and {\bf Further Java} taught how to create Java
programs and also contained theory on design patterns.}
\item{{\bf A Group Project} gave me experience in using software lifecycle
methodologies, programming a larger project in Java and using a version control
system to track the progress of implementation changes, for backups and
synchronisation of work from multiple locations.}
\end{description}

\section{Development life cycle}

The software development model I use for this project is the
\emph{waterfall} model, because the requirements can be well-defined at the
beginning of the project. That said, I borrow elements of the \emph{spiral}
model for test purposes by attempting different coding schemes, though these
are usually variants of the same code: for example, a frequency-shifting
scheme using 4 frequencies for 2-bit blocks and one using 256 frequencies for
8-bit blocks both use the same method for converting bits to sounds, just with
different lengths.

\section{Testing strategy}

To test this project I transfer and receive a series of files under different
conditions, comparing the quality of the signal received and the time taken
to transmit for each of them. The main tests include:

{\bf Unit Tests}
\begin{itemize}
  \item{Encoding: Converting data to frequencies and analysing numerical
  output for errors in frequency mapping}
  \item{Encoding: Saving predetermined frequencies as sound files and comparing
  to stock frequencies for accuracy}
  \item{Decoding: Test Fourier Transform with known frequencies (no microphone
  involvement)}
  \item{File operators: Check microphone code captures audio correctly and
  consistently by testing various lengths and sample rates}
\end{itemize}

{\bf System Tests}
\begin{itemize}
  \item{Converting the data to sound and back to data on one device (no
  microphone/speaker involvement)}
  \item{Send data between phones at all sample rates between a derived
  minimum and maximum to determine optimal sample rate with the least
  processing}
  \item{Sending data between phones at various sound segment lengths to
  determine the optimal sound length with minimum data loss}
  \item{Repeat all these tests for encoded sequence lengths of 1, 2, 4 and 8
  bits to determine which is the most efficient}
\end{itemize}

{\bf Integration Tests}
\begin{itemize}
  \item{Socket Interface: write an app to send a bit stream to the layers,
  decode result}
  \item{Socket Interface: write an app to receive decoded output from layers,
  test with known data.}
\end{itemize}

Further tests, added after the results of these tests were analysed, are
explained in Chapter 4.

\section{Version control}

Version control is important for managing changes and tracking overall project
progress, automating regular backups and rolling back to a functional state in
the event of a mistake or dead-end. I use \emph{Subversion} (SVN) as it is compatible
with Windows, has free hosting already in place through \emph{Google Code}, and
has plugins available for \emph{Eclipse}, which I am using for the Java
programming; Java is also the programming language I have most experience in.
The central repository is at a remote location for backup
redundancy.\footnotemark{} To use rollback effectively I regularly commit small,
atomic changes, making it simple to find and revert code that has not worked as
expected.

\footnotetext{\texttt{https://mt521-cam-ac-uk.googlecode.com/svn}}

\section{Requirements analysis}

The end-users of this project will be developers of Android apps who wish
to send data via sound. To make this simple they will require a well-defined
interface to the layers, so I will create a socket that receives bytes,
converts them to sound and sends the result directly to the device's audio
output. To that end, the requirements are as follows:

\begin{description}
  \item{{\bf R1.} The layers will convert bits to sound and sound to bits.}
  \item{{\bf R2.} The layers should act like a black box with a single input and
  output.}
  \item{{\bf R3.} The layers should connect directly to the physical layer
  devices (microphone and speaker).}
\end{description}
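The black-box requirement (R2) can be sketched as a small interface. The names
below are invented for illustration, and the in-memory loopback merely stands
in for the real encode-play-record-decode path:

\begin{lstlisting}
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of the black-box interface implied by R1-R3: callers push bytes
// in and pull decoded bytes out with no knowledge of the audio machinery.
// All names here are invented; LoopbackSocket is a stand-in for the real
// encode-play-record-decode path.
public class DataLinkSketch {
    interface DataLinkSocket {
        void send(byte[] data);   // would encode to sound and play it (R1, R3)
        byte[] receive();         // would record and decode back to bytes (R1, R3)
    }

    // In-memory stand-in: what a caller sees is identical either way (R2).
    static class LoopbackSocket implements DataLinkSocket {
        private final Queue<byte[]> channel = new ArrayDeque<>();
        public void send(byte[] data) { channel.add(data.clone()); }
        public byte[] receive() { return channel.remove(); }
    }

    public static void main(String[] args) {
        DataLinkSocket socket = new LoopbackSocket();
        socket.send("hello".getBytes());
        System.out.println(new String(socket.receive()));
    }
}
\end{lstlisting}

A caller written against \texttt{DataLinkSocket} is unchanged whether the bytes
travel through memory or through the speaker and microphone, which is exactly
the property R2 demands.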

\section{Sound as frequencies}
\label{sec:compressive}

To use any transmission medium you need to be able to vary some property of the
signal to convey different information. In copper cable this can mean
alternating between putting charge on the line and leaving it uncharged to
represent binary data; an extension of this is to vary the amount of charge at
each pulse so that more information is carried per clock cycle. Sound waves can
be treated in the same way, by varying one of the characteristics that makes a
sound unique.

In general, a pure tone of a given frequency \emph{f} can be represented
as a sine wave, as a function of time \emph{t}, using the formula:
\begin{equation}
\mathrm{tone}(t) = \sin(2\pi f t)
\end{equation}
Figure~\ref{fig:three_tones} shows how the tone can be changed by increasing
the frequency.
\begin{figure}[t]
\includegraphics[width=\textwidth]{combined.png}
\caption{Three frequencies representing A, C\# and E. Increasing the frequency
in the sine argument causes different tones to be created, which is
one way of representing different information in a sound.}
\label{fig:three_tones}
\end{figure}
Furthermore, it is possible to combine multiple tones and create \emph{chords},
which is done by adding the sine values together. For example, the three
frequencies shown in Figure~\ref{fig:three_tones} can be combined to create the
sine wave in Figure~\ref{fig:chord}.
\begin{figure}[t]
\includegraphics[width=\textwidth]{440-550-660.png}
\caption{A chord from combined frequencies 440Hz, 550Hz and 660Hz. Doing this
means three times as much information can be represented in the same timespan,
offering a different way to represent data using frequencies.}
\label{fig:chord}
\end{figure}
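The tones and chords above can be produced directly from the sine formula. The
sketch below generates the discrete samples that a sound API would play; the
8kHz sample rate and buffer length are arbitrary illustrative choices:

\begin{lstlisting}
// Generate discrete samples of a pure tone, and of a chord by summing
// sines pointwise. The 8 kHz sample rate is an illustrative choice.
public class ToneGenerator {
    static double[] tone(double freq, int sampleRate, int numSamples) {
        double[] samples = new double[numSamples];
        for (int n = 0; n < numSamples; n++) {
            samples[n] = Math.sin(2 * Math.PI * freq * n / sampleRate);
        }
        return samples;
    }

    // A chord is the pointwise sum of its component tones.
    static double[] chord(double[] freqs, int sampleRate, int numSamples) {
        double[] samples = new double[numSamples];
        for (double f : freqs) {
            double[] t = tone(f, sampleRate, numSamples);
            for (int n = 0; n < numSamples; n++) samples[n] += t[n];
        }
        return samples;
    }

    public static void main(String[] args) {
        // The three frequencies from the figures: 440, 550 and 660 Hz.
        double[] c = chord(new double[] {440, 550, 660}, 8000, 512);
        System.out.println("first sample = " + c[0]); // sin(0) terms sum to 0
    }
}
\end{lstlisting}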

Sound as a sine wave is a continuous function, so
it results in an infinite series of numbers to represent one sound. To represent
this in memory, discrete samples are taken at a given \emph{sample rate}. The
\emph{Nyquist-Shannon Theorem} states that to ensure no data is lost the sample
rate needs to be at least twice the maximum frequency~\cite{Nyquist}. This
is because if you sample a sine wave at $x=0,1,2,3,\ldots$ then there are
infinitely many frequencies and linear combinations of frequencies which could
produce the same values. An example of this aliasing is shown in
Figure~\ref{fig:aliasing}.
\begin{figure}[t]
\includegraphics[width=\textwidth]{aliasing.png}
\caption{Two different waves with the same sampled values, because the sample
rate was not high enough. If there were eight samples in this range there would
be no ambiguity.}
\label{fig:aliasing}
\end{figure}
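The aliasing effect can be checked numerically: at sample rate $f_s$, tones of
frequency $f$ and $f + f_s$ produce identical samples, because
$\sin(2\pi(f+f_s)n/f_s) = \sin(2\pi f n/f_s + 2\pi n)$. A minimal check with
illustrative values:

\begin{lstlisting}
// Demonstrate aliasing: a 3 Hz tone and an 11 Hz tone sampled at 8 Hz are
// indistinguishable, since 11 = 3 + 8 and the sine argument gains a full
// multiple of 2*pi at every sample. Values are illustrative.
public class AliasingDemo {
    public static void main(String[] args) {
        int sampleRate = 8;
        for (int n = 0; n < 8; n++) {
            double low  = Math.sin(2 * Math.PI * 3  * n / (double) sampleRate);
            double high = Math.sin(2 * Math.PI * 11 * n / (double) sampleRate);
            // Identical up to floating-point rounding.
            System.out.println("n=" + n + "  " + low + "  " + high);
        }
    }
}
\end{lstlisting}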

The \emph{Nyquist-Shannon Theorem} assumes sampling at regular time intervals.
Recent work considers time intervals generated at random, called
\emph{Compressive sensing}~\cite{CompSensing1}. This assumes that
frequencies in use are sparse so the chances of getting aliasing effects are
reduced, which means you can sample at lower rates.

The range of frequencies available as sound waves is the range a mobile
phone speaker is capable of producing, which varies between
handsets. All phones can play tones at frequencies humans can
hear, usually 20\textendash20,000 Hz. Human speech typically
occupies 300Hz to 3400Hz~\cite{Conversation}, but the nature of speech
means any one frequency is produced only very briefly, and
the frequencies from a conversation are distributed over this whole range.
Interference from nearby speech will therefore not impact the performance of
Dolphin extensively.

The range of frequencies used in Dolphin will directly
affect the maximum transmission rate of the medium, based on the
\emph{Shannon-Hartley Theorem}:
\begin{equation}
C = B \lg(1+S/N)
\end{equation}
which states that the channel capacity $C$ is the product of the
\emph{bandwidth} $B$ and the logarithm of one plus the \emph{signal-to-noise}
ratio $S/N$. In other words, the more frequencies available, the higher the
channel capacity, provided the signal-to-noise logarithm does not fall faster
than the bandwidth grows~\cite{Shannon}.
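As a worked example with illustrative numbers (not measurements from Dolphin):
a channel with a bandwidth of 3100Hz and a signal-to-noise ratio of 1000 has a
capacity of roughly 31kbit/s:

\begin{lstlisting}
// Worked Shannon-Hartley example. The 3100 Hz bandwidth and S/N of 1000
// (30 dB) are illustrative values, not measurements from Dolphin.
public class ChannelCapacity {
    static double capacity(double bandwidthHz, double snr) {
        // C = B lg(1 + S/N), with lg computed as a base-2 logarithm.
        return bandwidthHz * (Math.log(1 + snr) / Math.log(2));
    }

    public static void main(String[] args) {
        // Roughly 31 kbit/s for these values.
        System.out.println("C = " + capacity(3100, 1000) + " bit/s");
    }
}
\end{lstlisting}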

\section{Coding schemes}

Writing the sound as a function of time rather than of frequency alone gives
the general sine wave with three variables:
\begin{equation}
f(t) = A\sin(\omega t + \phi)
\end{equation}
The three elements of this wave which can be modified to represent
information are the \emph{phase} ($\phi$), the \emph{frequency} ($\omega$) and
the \emph{amplitude} ($A$).

\subsection{Phase}
Altering the first of these, the phase, is known as \emph{phase shift keying}
(PSK). The most common form of PSK is \emph{Binary PSK}, in
which binary information is sent by giving the sine argument a 180\degrees{}
phase shift to represent binary 0, or no phase shift to represent binary
1~\cite{phaseshift}.
There are two different ways to measure this phase difference:
\begin{description}
\item{{\bf Comparing the incoming signal to a predetermined tone}, observing
which parts of the new tone are phase shifted. The downside of this approach is
needing to store a baseline tone for comparison at the receiving end, which
must either be arranged beforehand or computed as needed, requiring extra
processing time per connection.}
\item{{\bf Comparing the incoming tone to itself}. This works
by comparing the phase of one time interval to the phase of the interval
preceding it, so instead of the comparison at time $t=5$ being against $t=5$ in
a stock sine wave, it is against $t=4$ in the data just received. For example,
four 0s would be represented by a tone shifting 180\degrees{} four times, and
four 1s by no shift, i.e.\ a continuous tone. Figure~\ref{fig:phase_shift}
demonstrates this. Creating the tone for each binary element depends on the
preceding binary element, but because the changes in phase are measured at the
receiving end, a dropped bit will not affect the decoding of subsequent bits.}
\end{description}
\begin{figure}[t]
\includegraphics[width=\textwidth]{phaseshift.png}
\caption{An example of phase shift with respect to itself. The initialisation
wave gives the first baseline comparison. The second timeslot is 180\degrees
phase shifted compared to this so it represents binary 0. The green wave then
becomes the baseline. The red wave is not phase shifted compared to this so it
is a binary 1. Blue is phase shifted compared to red so it is a binary 0.}
\label{fig:phase_shift}
\end{figure}
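The self-referential scheme can be sketched as follows, representing each
timeslot by its absolute phase (0 or $\pi$); the class and method names are
invented for illustration:

\begin{lstlisting}
// Differential BPSK sketch: a 0 flips the phase relative to the previous
// timeslot, a 1 keeps it. Decoding only compares adjacent timeslots, so a
// single bad slot does not derail later bits. Names are illustrative.
public class DifferentialBpsk {
    // Encode bits as absolute phases (0 or PI), starting from phase 0.
    static double[] encode(int[] bits) {
        double[] phases = new double[bits.length];
        double current = 0;
        for (int i = 0; i < bits.length; i++) {
            if (bits[i] == 0) current = (current == 0) ? Math.PI : 0; // flip on 0
            phases[i] = current;
        }
        return phases;
    }

    // Decode by comparing each slot's phase with the one before it.
    static int[] decode(double[] phases) {
        int[] bits = new int[phases.length];
        double previous = 0;
        for (int i = 0; i < phases.length; i++) {
            bits[i] = (phases[i] == previous) ? 1 : 0;
            previous = phases[i];
        }
        return bits;
    }

    public static void main(String[] args) {
        int[] bits = {0, 0, 1, 0, 1, 1};
        // Round-trips to the original bits.
        System.out.println(java.util.Arrays.toString(decode(encode(bits))));
    }
}
\end{lstlisting}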

Quadrature PSK is a similar scheme which uses four phases, 90\degrees{} apart,
to convey two bits at once~\cite{PhaseShift}. This doubles the transmission rate at
the same bandwidth, but reduces fault tolerance as with smaller phase changes it becomes
more likely the change in phase measured was a result of the \emph{Doppler
Effect} or external interference~\cite{Doppler}.

\subsection{Frequency}
In frequency shift keying (FSK) two frequencies represent binary 0 and 1, and
a tone is transmitted comprising those two frequencies in
predetermined timeslots~\cite{PhaseShift}. The scheme lends itself to transmission
rate optimisation, such as letting longer sequences of bits be represented by
distinct frequencies. A byte can take 256 distinct values, so a linear mapping
of values to frequencies allows 256 frequencies to represent all the bit
patterns of a byte. Alternatively, a smaller sequence of bits could be used,
such as two-bit blocks, which would require only four frequencies to represent
all possible data combinations, though four times as many sound bursts would
need to be sent per file. The more frequencies that are used, the higher the
sample rate needs to be, and the more processing needs to be done per second to
determine the received frequency.
On a mobile device the processing power may be limited, so overloading the
receiving end may cause data to be dropped.
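The block-size trade-off can be sketched as follows: splitting a byte into
$8/k$ blocks of $k$ bits needs $2^k$ frequencies but $8/k$ sound bursts per
byte. The code below performs only the bit splitting; the frequencies
themselves are not modelled:

\begin{lstlisting}
// Sketch of the block-size trade-off in frequency shift keying: an 8-bit
// block needs 2^8 frequencies but one burst per byte, while a 2-bit block
// needs only four frequencies but four bursts per byte.
public class FskBlocks {
    // Split one byte into (8 / bitsPerBlock) symbols, most significant first.
    static int[] toSymbols(int byteValue, int bitsPerBlock) {
        int blocks = 8 / bitsPerBlock;
        int mask = (1 << bitsPerBlock) - 1;
        int[] symbols = new int[blocks];
        for (int i = 0; i < blocks; i++) {
            int shift = 8 - bitsPerBlock * (i + 1);
            symbols[i] = (byteValue >> shift) & mask;
        }
        return symbols;
    }

    public static void main(String[] args) {
        // 0b10110100 in 2-bit blocks gives symbols 2, 3, 1, 0: four bursts,
        // each drawn from an alphabet of only four frequencies.
        for (int s : toSymbols(0b10110100, 2)) System.out.print(s + " ");
    }
}
\end{lstlisting}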

\subsection{Amplitude}
In amplitude shift keying each wave amplitude represents a sequence of bits, in
the same way that each frequency could represent a bit sequence in FSK. The
problem with using this for sound is that moving the microphone during
transmission will change the amplitude received. Even if the microphone were
stationary, if it is further away than expected the amplitudes may be different
to what is on record. This can be fixed by listening for a baseline
comparison tone at the start of the transmission and looking at the change in
subsequent amplitudes, rather than the amplitude value, but it is still a less
reliable system of transmission as any movement after that point will cause
data errors. Amplitude shifting is the least applicable to audio transmissions
and is usually used with light which does not deteriorate as much with distance.

\subsection{Conclusion}
As the \emph{Doppler Effect} will likely have only a minimal impact on the
received frequencies, amplitude shift keying is the scheme most susceptible to
movement of the transmission device, since moving the speaker produces a
distinct change in amplitude~\cite{Doppler}. I have decided to use frequency,
as it offers a higher data rate than phase shifting, which becomes increasingly
susceptible to errors beyond \emph{Quadrature PSK} (four phases). Even
restricting the scheme to one in every thousand audible frequencies leaves
twenty frequencies to use, five times more than \emph{Q-PSK}.

\section{Existing schemes}

In the 1970s and 1980s audio cassette drives were used to store
information, such as computer programs for home computers. One standard
for cassette tapes was the \emph{Kansas City Standard} which used frequency
shifting at 300 baud~\cite{KansasCityStandard}. A binary 0 was represented by
four cycles of a 1200Hz sine wave, and a binary 1 was represented by eight cycles of a 2400Hz wave. Data
was sent in eleven-bit frames consisting of a start bit (0), eight bits of data
and two stop bits (11). It therefore had a transfer rate of 27 bytes
per second~\cite{KansasCityPerformance}. A higher baud version was developed,
capable of 1200 baud~\cite{KansasCityPerformance}. This was achieved by
shortening the time needed for each binary element. A 0 is now one cycle of 1200Hz and a 1 is two cycles of
2400Hz, and the stop bit is now a single 1. This scheme was capable of 120 bytes
per second and the data was stored in 256 byte blocks, which were numbered so it
was possible to rewind the tape to a specific location in the event of a read
error.
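The quoted rates follow directly from the framing: at 300 baud an eleven-bit
frame (one start bit, eight data bits, two stop bits) carries one byte, and at
1200 baud a ten-bit frame does the same:

\begin{lstlisting}
// Kansas City Standard throughput as framing arithmetic: each frame
// carries exactly one byte of data, so bytes/s = baud / bits-per-frame.
public class KansasCityRate {
    static double bytesPerSecond(double baud, double bitsPerFrame) {
        return baud / bitsPerFrame; // one data byte per frame
    }

    public static void main(String[] args) {
        // 300 baud, 11-bit frame (1 start + 8 data + 2 stop): ~27 bytes/s.
        System.out.println(bytesPerSecond(300, 11));
        // 1200 baud, 10-bit frame (1 start + 8 data + 1 stop): 120 bytes/s.
        System.out.println(bytesPerSecond(1200, 10));
    }
}
\end{lstlisting}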

Using audio to transmit data lends itself to radio use, and amateur
radio operators have used \emph{slow scan television} (SSTV) to send pictures
using sound. It is a frequency modulation system in which every colour
brightness gets its own frequency, and the red, green and blue components
are sent separately for each pixel using those frequencies. Each element takes
30ms to send and the signal ranges over 1500Hz to 2300Hz, giving a rate of
roughly 33 elements per second. The scheme also offers some error resilience:
the odd lines of the picture are sent first, then the even ones fill in the
gaps, so if a line is corrupted or missing the lines either side can
approximate what was supposed to be there.

Both these schemes use \emph{frequency shift keying}, which I also
use. The \emph{Kansas City Standard} technique of sending data in numbered
blocks offers the possibility of avoiding error correction by having the
receiving phone request one or more chunks to be resent at the end of the
transmission.

\section{Decoding signals}

So far, I have explained how data can be converted to sound, and I have decided
to use frequency as the encoding variable. I will now show how to decode the
data at the receiving end using \emph{digital signal processing} (DSP). There
are two ways to obtain the frequency of a portion of the sound.
\begin{itemize} 
  \item{Re-create the sine wave and count the number
of times the wave crosses the axis in a set time. Higher frequencies will cross
the axis more times per second than lower frequencies. The problem is if the
frequencies used are so low that the wave doesn't cross the axis in the time
sample. This is very unlikely to happen as the lowest frequency I use
is 330Hz, so even if the duration of each tone is only 10ms, every
frequency will have at least two solutions for $f(t)=0$.}
  \item{Utilise the periodic nature of the
sine waves and analyse the data using a \emph{Fourier Transform}:
\begin{equation}
F(\omega) = \int a(t)\,e^{-2\pi i\omega t}\,dt
\end{equation}
for frequency $\omega$ and amplitude $a(t)$ at time $t$.
The transform takes a set of complex numbers and returns another, equally sized, set of
complex numbers. If you set the real part of the input to the audio sample, with
0 for all imaginary parts, running the transform will return an array of complex
numbers, each of which represent a range of frequencies called a
\emph{frequency bin}. The range depends on the sample rate of the
input so it can be limited to 1Hz if necessary to assign a different bit
sequence to every frequency value. In that case, each small sample of the input
will only have large values in the bins corresponding to the frequencies present
and decoding the input becomes a simple matter of searching the array for the
most significant value or values. To speed up computation the size of the bin
doesn't need to be limited to one per frequency as to help with error correction
the assigned frequencies will be several hertz apart, so each bin could
represent 10\textendash20Hz and the bin containing the largest value will still
represent the correct frequency.

Figure~\ref{fig:example_fourier} shows an example of using a Fourier Transform
to retrieve an encoded frequency.
\begin{figure}[t]
\includegraphics[width=\textwidth]{fourier1.png}
\caption{An example pulse with frequency 3. It oscillates 3 times per second
and is also multiplied by an exponent so the graph converges to 0 in both
positive and negative. This does not alter the frequency.}
\label{fig:example_fourier}
\end{figure}
Consider the integrand $e^{-2\pi i\omega t}\,a(t)$, which \emph{Euler's formula}
splits into real and imaginary sinusoidal parts. If the frequency $\omega$ in
the exponential matches the frequency of the received signal $a(t)$ then the
two factors will be very closely related, as demonstrated in
Figure~\ref{fig:real_imaginary}.
\begin{figure}[t]
\includegraphics[width=\textwidth]{fourier2.png}
\caption{Real and imaginary parts with respect to frequencies 3 and 5 when
multiplied with the function previously shown. The
correct frequency (green) results in an entirely positive integrand, whereas the
incorrect frequency (red) oscillates between positive and negative so will
integrate to near 0.}
\label{fig:real_imaginary}
\end{figure}
When the real part of one is negative, the other will be negative and when the
real part of one is positive, the other will be positive. This means when the frequencies match, the real part
of this integrand will almost always be positive, so the integration will return
a positive value. If the frequencies do not match, there is no
positive/negative link between the elements of the product so the real part of
the integrand can oscillate and will have some negative values which when
integrated will cancel out the positive peaks and return a smaller value. This
is why the desired frequency is in the bin with the largest value. This is
shown in Figure~\ref{fig:freq_bins}.
\begin{figure}[t]
\includegraphics[width=\textwidth]{fourier3.png}
\caption{The \emph{frequency bins} returned after integrating the real parts of
the transform. The positive/negative oscillation means the integration returned
0 for the incorrect frequency, but a relatively high value for the correct
frequency. ``Negative frequencies'' mirror their positive counterparts as they
do not really exist. The height of the peak represents the amplitude of the
received frequency.}
\label{fig:freq_bins}
\end{figure}}
\end{itemize}

\section{Android}

I use Android for this project because it is a popular mobile platform, which
makes Dolphin usable by a large market~\cite{Android}. As Android is `app'
focused and an open source platform, many more people will be able to write
programs that use Dolphin. iOS, by contrast, is not open
source and has a restrictive API that prevents access to the phone's underlying
hardware. Furthermore, Android uses Java, a language I have experience
with.

\subsection{Application life cycle}

In Android each application acts as a separate user in a Linux-based multi-user
system.
Each application therefore has different security permissions to access
peripherals, such as the speakers and microphone. Every process also runs its
own \emph{Dalvik Virtual Machine} so one application does not impact the operation
of another under normal circumstances. Android will keep processes in memory for
as long as possible and typically only free them when memory is low. There are
four states an Android process can be in:

\begin{itemize}
  \item{{\bf Foreground:} This is the activity the user is currently
  interacting with, it will likely be responsible for the content being
  displayed on the screen at that moment. It will only be terminated if there
  is nothing else available to reuse.}
  \item{{\bf Visible:} This activity is in view but is not at the front.
  For example, if a dialog box has appeared to give the user information then
  that is the foreground activity, and the former foreground activity becomes
  a \emph{visible} activity.}
  \item{{\bf Background:} This activity is not visible or being currently used
  so can be terminated if required. If it is terminated and the user navigates
  back to it then the activity will simply be restarted instead of loaded from
  memory.}
  \item{{\bf Empty:} This process does not control any application components;
  an example is a class that was called from a foreground activity to perform a
  calculation. These are the first to be terminated for the sake of memory
  reallocation.}
\end{itemize}

These activities are primarily navigated between using \emph{intents}.
Intents are messages from the Android system that indicate something has
changed.
They can be generated explicitly by the user to start a new activity, or they can
serve as notifications of external activity, in which case an \emph{intent filter} in the manifest file determines
which new foreground activity is created.

Empty processes that offer no user interface must be declared as
\emph{services} if they are important to the running of another activity.
Services carry out long running activities such as music playing. In the Dolphin
system, both the Fourier Transform for decoding data and the playback of the
encoded sound are defined as services as they are long-running and
computationally intensive. Services, like activities, are launched using
intents.

Activities, intent filters and services all need to be declared in the manifest
file. The manifest is a summary of the requirements and permissions of an
application, for information and security reasons.
Permissions act as a security feature, alerting the user that the application is
capable of accessing some of the mobile device's functionality that might not be
expected, e.g. writing to the external storage or accessing the Internet. In
addition, the manifest also declares the hardware the application
requires, such as a camera or an RFID scanner, so devices without these
features are unable to install the app. Libraries additional to the standard
framework API are also declared here.

\subsection{Dolphin}

Dolphin is a module rather than a standalone product. It is meant to be used
inside an app that builds on it, such as an app that creates audio-based QR codes.
With that in mind, I will need to write an app that uses Dolphin for system and
integration tests.

A developer using Dolphin should not have to be familiar with its
inner workings. Therefore, interaction with the program takes the form of
black-box style input and output using sockets.
The file to be encoded is sent to a socket and retrieved by my classes. The
sound is then played by the speaker from the inside of the black box without any
further user interaction. When decoding, the socket input is the microphone, and
the decoded byte array should be returned to the user. Figure~\ref{fig:blackbox}
shows how an application created by the end user can utilise the socket
interface without needing to interact with the hardware at all.

\begin{figure}[t]
\includegraphics[width=\textwidth]{blackbox.png}
\caption{Other apps should be able to use my project like a black box and be
able to integrate this into other projects or applications like an extra
module.}
\label{fig:blackbox}
\end{figure}

\section{Summary}

This chapter has explored the literature that is relevant to Dolphin. I have
also described the professional software development style I employ in
creating the program. The next stage is using these theories to create the
program and perform tests to ensure it functions reliably.

\chapter{Implementation}

In this chapter I explain the two implementations of Dolphin. Dolphin is a Java
program that runs on Android, accessing a mobile device's microphone for data in
to be decoded and the speakers for the output of encoded data. It also has a
socket interface to receive the data to be encoded. Figure~\ref{fig:structure}
shows how Dolphin is structured. The two main components are encoding bits as
sounds and decoding sounds into bits. The encoding component uses data provided
by the user and accesses the speakers directly, whereas the decoding component
accesses the microphone directly and returns a sequence of bits to the user
based on what was recorded.

\begin{figure}[t]
\includegraphics[width=\textwidth]{structure.png}
\caption{A class diagram showing the structure of Dolphin.}
\label{fig:structure}
\end{figure}

\section{Socket interface}

The socket interface is defined entirely on the device to avoid using an
Internet connection, as specified in the project proposal. To do this, Dolphin
creates a \texttt{ServerSocket} that a user-generated \texttt{Socket} using
\emph{localhost} as the address can connect to. This \texttt{ServerSocket}
listens for connections, and returns another \texttt{Socket} to the user when a
request is made. In Linux, which Android uses, the bottom 1024 ports are
available only to the root user, so these are unavailable. I use port 3574 for
the \texttt{ServerSocket}.
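As a minimal sketch of this arrangement, the following sets up the \texttt{ServerSocket} and a localhost client. The echo behaviour is illustrative only; the real Dolphin returns encoded sound rather than the input it received.

```java
import java.io.*;
import java.net.*;
import java.util.Arrays;

class SocketLoopback {
    // Dolphin's ServerSocket listens on port 3574, above the root-only range.
    static final int PORT = 3574;

    // Send data to a localhost server and return whatever it writes back.
    static byte[] roundTrip(byte[] data) throws Exception {
        try (ServerSocket server = new ServerSocket(PORT)) {
            Thread echo = new Thread(() -> {
                try (Socket s = server.accept()) {
                    byte[] buf = new byte[1024];
                    int n = s.getInputStream().read(buf);
                    // Dolphin would decode/encode here instead of echoing.
                    s.getOutputStream().write(buf, 0, n);
                } catch (IOException ignored) {
                }
            });
            echo.start();
            try (Socket client = new Socket("localhost", PORT)) {
                client.getOutputStream().write(data);
                client.shutdownOutput();
                byte[] buf = new byte[1024];
                int n = client.getInputStream().read(buf);
                echo.join();
                return Arrays.copyOf(buf, n);
            }
        }
    }
}
```

The user-facing half of this sketch is exactly what an app built on Dolphin would write: open a \texttt{Socket} to \emph{localhost}:3574 and read or write bytes.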

Once the sockets have been established, the methods \texttt{getInputStream()} and
\texttt{getOutputStream()} allow data to be read from and written to the socket. The input
stream is then converted to a byte array by repeatedly calling the \texttt{read(byte[] b)} method, which returns the number of bytes read
into \texttt{b}, or $-1$ once the stream ends. On each
iteration of the loop the contents of \texttt{b} are written to a
\texttt{ByteArrayOutputStream}, which, once the data input stops, is flushed to a
byte array. Dolphin then decodes this byte array, and returns a byte array
representing a sound to the output stream.
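The draining loop can be sketched as follows; note that \texttt{read} returns $-1$ once the stream ends, and the 4096-byte buffer size is an arbitrary choice.

```java
import java.io.*;

class StreamToBytes {
    // Drain an InputStream into a byte array, as Dolphin does with the
    // socket's input stream before decoding.
    static byte[] toByteArray(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) { // -1 signals end of stream
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }
}
```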

A rooted Android device could use \texttt{LD\_PRELOAD} to put Dolphin between
an app and the libraries it uses. This means any app which used another method
to convert bytes, and therefore used a byte stream in a function call, could use
Dolphin without changing the implementation of their app at all. This is because
the function call would be intercepted and replaced with the functionality of
Dolphin without calling the original library. The rest of the calls to the
original library would proceed unimpeded. Similar functionality could be
achieved using \emph{iptables}. This would involve creating an \emph{iptables
rule}\footnotemark to intercept traffic originally going to another encoder and
redirecting it to Dolphin. Dolphin would then be able to return the result directly.

\footnotetext{for IP address \emph{eth}: ``iptables -t nat -A POSTROUTING -o eth
-j MASQUERADE''}

\section{Encoding}

As I discussed in Chapter 2, there are many different ways to encode the data
as sound waves. The coding scheme I use is \emph{Frequency Shift Keying (FSK)}.
This is because amplitude and phase techniques are best suited
to binary encoding, whereas FSK can easily accommodate $n$-bit encoding with
little impact on performance. This is because even after taking into
consideration a suitable tolerance, such as not using a frequency within 20Hz of
another being used, there are several thousand frequencies in the range of a
microphone's detection available for use.

I use the binary representation of data to perform encoding, without any heuristic
analysis. This is simpler than creating different cases for different types of
file. Sending text one letter at a time and images one pixel at a time
would require different implementations and an analysis of the file
type beforehand. I considered enhancing the encoding scheme using a
technique similar to \emph{delta frames} in video encoding, in which only the
changes from one frame to the next are sent~\cite{deltaframes}. This works for
video as sequential image frames are often very similar, so few changes are
necessary and the amount of data sent is reduced. In Dolphin it would
involve sending the differences between each sequential byte rather than
the bytes themselves. However, there is no guarantee that the bit-structure
of a generic file will yield bytes similar to their predecessors so its
performance would be varied. I therefore leave this to higher layers in the
stack, which can implement techniques like this by altering the data stream sent
into the socket.

\section{Libraries}

Using previously developed libraries in software is not only easier than
reimplementing features but is also good professional practice. This is because
it stops code duplication and reduces the risk of bugs as established
libraries will have been extensively tested.

\subsection{Input}

The bytes comprising the data to be encoded can be
accessed using a \texttt{FileInputStream} and an existing method in the
\emph{Apache Commons} library's \texttt{IOUtils} class called \texttt{toByteArray}, which is applied
directly to the \texttt{FileInputStream}. Once the bit structure of the file has
been accessed in a byte array the bytes are mapped to frequencies for playback as a
sound. As Java has no unsigned types any of the bytes in the array with a 1 in
the most significant bit will be a negative number. This means a na{\"i}ve
mapping of multiplying the bytes by a suitable factor will result in negative
frequencies, which will not work. Therefore, before mapping bytes to frequencies
the signed bytes need to be converted to unsigned, using a larger type such as a
short.
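The conversion can also be done with a bit mask rather than a full short. The sketch below uses the 300Hz floor and 30Hz spacing discussed later; the exact byte-to-frequency formula shown is illustrative, not necessarily Dolphin's actual assignment.

```java
class ByteMapping {
    static final int BASE_HZ = 300; // frequencies below 300Hz are ignored
    static final int STEP_HZ = 30;  // 30Hz spacing between encoded values

    // Masking with 0xFF widens the byte to an int without sign extension,
    // so the result runs 0..255 rather than -128..127.
    static int toUnsigned(byte b) {
        return b & 0xFF;
    }

    // Illustrative mapping from a byte value to a playable frequency.
    static int byteToFrequency(byte b) {
        return BASE_HZ + toUnsigned(b) * STEP_HZ;
    }
}
```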

\subsection{Representing sound}

In Java a sine wave can be represented by a byte array, with each array element
representing a sampled value of the function. I sample by using the Java
\emph{Math} library, shown in Code Sample~\ref{lst:sineWave}, when given
integers \texttt{freq}, \texttt{sampleRate} and \texttt{time}.

\begin{lstlisting}[caption={Encoding a sine wave in a Java byte array. The sine
value is multiplied by 127 to change the range of the result from the usual
(-1,1) to (-127,127), utilising a much wider range of
possible byte values.},label={lst:sineWave}]
public byte[] bufferSound(int freq, int time, int sampleRate) {
	byte[] sineWave = new byte[sampleRate*time];
	for (int i=0; i<sineWave.length; i++) {
		double angle = (i * freq * 2.0 * Math.PI) / sampleRate;
		sineWave[i] = (byte)(Math.sin(angle) * 127.0);
	}
	return sineWave;
}
\end{lstlisting}

This code is a method to convert an integer value to the
corresponding sound for that frequency. Each of these byte arrays can be stored
in an array of byte arrays and then output one after the other by iterating over
it. 

\subsection{Output}

To output the sound on a computer system (this technique
is not limited to phone-to-phone communication), a \texttt{SourceDataLine} can be obtained for the
computer speakers using the \texttt{AudioSystem} class.\footnotemark~A
\texttt{SourceDataLine} converts the byte data into the actual sound for
playback. The \texttt{SourceDataLine} write method can then be used to play the
byte array directly, as the processing on the array up to this point means it is in PCM format (for .wav
files), which is supported by the library. It is important to remember to call
the drain method before closing the \texttt{SourceDataLine}, in the same way you
would call flush for an output stream. Without this call the sound will not play for
the full duration of the array and data will be lost.

\footnotetext{Using the method AudioSystem.getSourceDataLine(AudioFormat),
where the AudioFormat has been initialised with the required settings for data
capture, e.g. use 8 bits per sample as we are dealing with bytes.}
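As a sketch, the whole playback path through the Java Sound API looks like this; the format settings follow the description above (8 bits per sample, mono, signed PCM), and error handling is omitted for brevity.

```java
import javax.sound.sampled.*;

class Playback {
    // 8 bits per sample, 1 channel, signed, big-endian: the settings
    // described above for byte-array playback.
    static AudioFormat buildFormat(float sampleRate) {
        return new AudioFormat(sampleRate, 8, 1, true, true);
    }

    // Play a PCM byte array through the default speakers.
    static void play(byte[] sound, float sampleRate)
            throws LineUnavailableException {
        AudioFormat format = buildFormat(sampleRate);
        SourceDataLine line = AudioSystem.getSourceDataLine(format);
        line.open(format);
        line.start();
        line.write(sound, 0, sound.length);
        line.drain(); // wait until the whole buffer has played
        line.close();
    }
}
```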

\subsection{Android}

Android does not support the \texttt{AudioSystem} library, but has similar
functionality in \texttt{AudioTrack}. Previously, the \texttt{AudioFormat} input
to the \texttt{SourceDataLine} was initialised with information on the sample
rate, bits per sample, etc.
Correspondingly, an \texttt{AudioTrack} is initialised with mostly the same
arguments using class-specific constants, and afterwards the code to play the
sound on a mobile device is almost identical but with different method names.
Conceptually, both start the object, write to the object, flush the data and
close the object again.

The way Android is written also means there are additional tasks to complete.
One such task is allowing the program to access the speaker separately in the
\emph{manifest} file, using the Android permission system. Permissions are a
security feature declared statically at compile time, designed to both alert
the user that an app has access to extra features on their device and to allow
the scheduler to more easily allocate shared resources. The relevant permission
in this case is \emph{android.permission.MODIFY\_AUDIO\_SETTINGS}.

\section{Decoding}

Decoding the sound means getting the frequency of that sound and converting the
number into a byte, based on the original encoding mapping. The frequency is the
number of cycles the sine wave representing the sound goes through per second.
As a sine wave crosses the $x$-axis twice per cycle, one way to decode
the sound would therefore be to count the number of times the wave crosses
the axis in a certain period of time. Dividing this value by 2 will reveal the
frequency. Another way to determine the frequency is to use a Fourier Transform.
This technique utilises the periodic nature of sine waves and finds the
frequency by brute force. It does this by multiplying the incoming sine wave
with unknown frequency by another sine wave with a known frequency, and
integrating the result.
If the two frequencies match then they will rise and fall at the same rate and positive values will always
multiply with positive values and negative values will always multiply with
negative values, resulting in a wave which is entirely in the positive domain.
Integrating this wave will return a large positive number. However, if the
frequencies do not match then the waves will rise and fall at different rates
and at some point positive values will be multiplied with negative values which
will return a wave with some peaks greater than 0 and some less than 0.
When this wave is integrated the negative peaks will cancel out the positive
peaks and the result will be much less than the integration on the correct
frequency. Once the transform has calculated these integrations for a series of
frequencies in the range of possible frequencies the original unknown frequency
will be the one represented by the largest value in the integrals.
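Both approaches can be sketched in a few lines of Java. The brute-force version below correlates against both a sine and a cosine reference so that the result does not depend on the phase of the incoming wave, a detail the intuition above glosses over; names and parameters are illustrative.

```java
class FrequencyDetect {
    // A sine wave crosses the axis twice per cycle, so
    // frequency = crossings / (2 * duration in seconds).
    static double zeroCrossingFrequency(byte[] samples, int sampleRate) {
        int crossings = 0;
        for (int i = 1; i < samples.length; i++) {
            if ((samples[i - 1] < 0) != (samples[i] < 0)) crossings++;
        }
        return crossings * sampleRate / (2.0 * samples.length);
    }

    // Multiply the signal by reference waves at each candidate frequency
    // and sum; the candidate with the strongest response wins. Summing
    // against both sine and cosine makes the test phase-independent.
    static int detectFrequency(byte[] signal, int sampleRate,
                               int minHz, int maxHz, int stepHz) {
        int bestFreq = minHz;
        double bestPower = -1;
        for (int f = minHz; f <= maxHz; f += stepHz) {
            double s = 0, c = 0;
            for (int i = 0; i < signal.length; i++) {
                double angle = 2 * Math.PI * f * i / sampleRate;
                s += signal[i] * Math.sin(angle);
                c += signal[i] * Math.cos(angle);
            }
            double power = s * s + c * c; // squared magnitude at f
            if (power > bestPower) { bestPower = power; bestFreq = f; }
        }
        return bestFreq;
    }
}
```

The brute-force loop is exactly the discrete analogue of the multiply-and-integrate argument above; the FFT computes the same quantities far more efficiently.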

I use a Fourier Transform to determine the frequency or frequencies
present, rather than counting the solutions to $f(x)=0$, because counting the
solutions will not allow multiple frequencies to be decoded at once, which
may be implemented to improve bandwidth, and because background noise can have a
very unpredictable effect on the shape of the incoming wave. There are libraries
that implement various types of transform, so there is no need to duplicate this
effort. For example, Princeton University has written a single-class version
which is used in the Data Analysis module of their Computer Science
course.\footnotemark~The main weakness of this implementation is that it
reallocates an array for every transform so is memory inefficient for
very large arrays. As the sample rates I use are all in the order of thousands,
each array created also contains several thousand entries and eventually there
is the risk of Java running low on memory. When Java detects the heap is very
large the garbage collector is called which slows the program down as the
garbage collector scans every object currently active in the heap looking for
those no longer referenced, which is likely to be millions of objects after so
much array allocation.

\emph{Apache} have written a Fast Fourier Transformer in
their \emph{Commons} library. This performs the transform in place: instead of
the transform using the array of values to create a new array
containing the transformed result, the input array itself is altered to
represent the new information. This makes it much more space efficient, as
half as many large arrays are created. As I already use the
\emph{Apache Commons} library for processing incoming files this also means I
can limit the number of additional libraries I need to include in the Dolphin
code by using the same one.

\footnotetext{\emph{http://introcs.cs.princeton.edu/java/97data/FFT.java.html}}

\subsection{Microphone input}

To access the microphone on a mobile device using Android the
necessary permissions must be added to the manifest.\footnotemark~This is a
static check that Android performs for security purposes, informing the user
that the program is capable of accessing this non-standard piece of hardware.
The code uses the class \texttt{AudioRecord} to gather the data from the
microphone and a \texttt{ByteArrayOutputStream} to store it until the
transmission ends. When initialising the \texttt{AudioRecord} object it is not
necessary to use the same arguments as those used when initialising
the \texttt{AudioTrack} object for playback. This means a higher sample rate
or a larger type can be used to record the input for more reliable decoding,
whereas a lower sample rate may suffice for encoding depending on the quality of
the device audio drivers. For example, when experimenting with sample rates I
discovered that a PC running Dolphin in Java will encode a sound using 8 bits
per sample and play it perfectly, but to get the same results on a mobile device
a 16-bit short is required, else the distortion in the sound makes it unusable.

\texttt{AudioRecord.read} returns an integer stating how many
bytes were read into an array which serves as a buffer. This method can
therefore be used inside a while loop to continue reading data from the microphone until the total number of bytes read
reaches a certain value, set by the user. For example, this will be useful in
audio QR codes as the maximum size could be determined in advance. Alternatively
the read number can go unused and the while-loop condition could be a flag that
is changed when the decoding portion of the code detects that the transmission
has stopped. As with the \texttt{AudioTrack} object it is important to call stop
and release once the microphone is no longer needed or the data will not
completely flush into the buffer and no other applications will be able to use
the microphone until Dolphin closes. Dolphin itself would also be unable to use
the microphone until the lock had been released as the previous process would
still be holding the lock. The \texttt{ByteArrayOutputStream} holding the
recorded data can then be converted to a byte array and returned for analysis in
a Fourier Transform.

\footnotetext{\textless uses-permission android:name="android.permission.RECORD\_AUDIO"
/\textgreater}

\subsection{Apache Fast Fourier Transform}

Using the existing \emph{Apache} library\footnotemark~is more
reliable than implementing one of my own because it has been tested
by a large number of people, instead of just being tested by me. It also comes
with various options for the different transforms. For example, there are
separate methods already written to either transform the data and return
a new array, or to perform the transformation in place to save memory,
which is useful considering the current implementation uses 16000
bytes per byte of data encoded.\footnotemark~It also distinguishes
between a forward or inverse transform, which for my implementation
is not important, but could be useful in implementing other functionality. It
also has its own API already written which will help future developers alter
this project if necessary.

\footnotetext{\emph{http://commons.apache.org/}. The FFT is stored
under \emph{org.apache.commons.math3.transform}}
\footnotetext{For test purposes. Experiments to determine optimum sample rates
can be found in Chapter 4.}

The FFT returns an array of type \texttt{Complex}, defined in the \emph{Apache
Commons} library. It is a datatype that stores two values, the real and imaginary
parts of the number, and the class has methods to retrieve these parts
separately.
Each 64ms of the sound is sent to the FFT individually to determine
which frequencies are present in that 64ms burst. Once the complex
array is returned, a new array of type \texttt{Double} is defined in which,
for each original element $x+iy$, the entry is the squared magnitude. This
preserves the ordering of amplitudes, so the strongest frequency can still be
identified:

\begin{equation}
\mathit{amplitude}^{2} = x^{2}+y^{2}
\end{equation}
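A sketch of this step, written against plain arrays (with the \emph{Apache Commons} \texttt{Complex} type, the two parts come from \texttt{getReal()} and \texttt{getImaginary()}); only the first half of the bins is searched because the second half mirrors it as negative frequencies.

```java
class BinSearch {
    // Squared magnitude x^2 + y^2 for each bin; the index of the largest
    // is the detected frequency bin. Only the first half of the array is
    // searched, as the second half mirrors it (negative frequencies).
    static int strongestBin(double[] real, double[] imag) {
        int best = 0;
        double bestPower = -1;
        for (int i = 0; i < real.length / 2; i++) {
            double power = real[i] * real[i] + imag[i] * imag[i];
            if (power > bestPower) { bestPower = power; best = i; }
        }
        return best;
    }
}
```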

The result from the Fourier Transform is the single array element with the
largest value, representing the frequency bin. This does not consider the whole width of the frequency
bin, so nearby frequencies can share the array element with the largest value
while their surrounding elements are distributed differently. I found that,
especially when using shorter lengths of time to encode the data, the decoded
data was frequently off by 1, for example ``d'' being decoded as ``e''. This was
not due to an insufficient spacing factor, as the same characters were
consistently substituted. During the evaluation I decoded a test file containing the
sentence ``The quick brown fox jumps over the lazy dog.'' ten times, and all the
results were of the form:

\emph{Vgg sskbk bsowo gow ksoos owgs sgg kb\{w bog}

Eight were identical to this and two had single character substitutions. The FFT
is returning the same array index for ``i'', ``j'' and ``k'', to name one of the
overlaps. I believe this is due to some internal rounding in the Apache FFT,
which is causing the similar frequencies to be mapped to the same array element.
I therefore altered both implementations of Dolphin so the result returned is
not based on one array index, but rather the ten array indexes surrounding the
largest amplitude. This is because when comparing characters ``i'' and ``k'',
ASCII 105 and 107 respectively, for ``i'' the lower five array elements
around the high-point contain larger values than the upper five, and vice versa
for ``k''. Using a spread of data from the FFT gives a more reliable
representation of what the frequency actually is.

Then, to determine the frequency, I multiply the median of the spread of data in
the FFT by the sample rate and divide by twice the total size of the array. As
demonstrated in Figure~\ref{fig:mirrored}, I divide by twice the array size because
the FFT returns both positive and negative frequencies in the range, which are
mirrored about zero, so -300Hz has the same amplitude as 300Hz. Rounding to the
nearest expected frequency (a multiple of 30Hz) will return the intended
frequency.
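This conversion and rounding step can be sketched as follows; the values in the comment are illustrative.

```java
class BinToFrequency {
    static int binToFrequency(int binIndex, int sampleRate, int arraySize) {
        // Divide by twice the array size because the transform covers both
        // positive and negative frequencies.
        double raw = (double) binIndex * sampleRate / (2.0 * arraySize);
        // Round to the nearest multiple of the 30Hz spacing.
        return (int) Math.round(raw / 30.0) * 30;
    }
}
```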

\begin{figure}[h]
\includegraphics[width=\textwidth]{mirrored.png}
\caption{The Fourier Transform returns both positive and negative frequencies,
with values mirrored about 0.}
\label{fig:mirrored}
\end{figure}

\section{Second implementation}
\label{sec:newimpl}

Figure~\ref{fig:oldimpl} shows how presently the decoding listens for data and
then considers the maximum length of an encoded tone, which may include
frequencies other than the one intended.

\begin{figure}[h]
\includegraphics[width=\textwidth]{oldimpl.png}
\caption{In the first implementation almost every segment of data analysed
contained a small amount of the next segment as well. When the intruding segment is
smaller than the first, its frequency will not be chosen over the dominant one, but
the desired frequency has a lower total in the FFT and background
noise may be relatively louder. In the worst case the segments each fill
exactly half of the recorded chunk, so both the chance of the tone being ignored as
too quiet and the odds of an incorrect decode increase.}
\label{fig:oldimpl}
\end{figure}

It also shows how in the worst case
the recording may have started exactly half way through a frequency pulse which
means every sound sent to the Fourier Transform has two equally strong
frequencies so the probability of getting one byte correct is 50\%. It is
likely that more than one byte will be sent in a transmission so the
probability of the entire file being decoded correctly becomes increasingly
small. Figure~\ref{fig:newimpl} shows the second implementation. Dolphin still
scans the incoming data as before until the first data frequency is detected. Then it
divides each data segment (currently 64ms) in two and measures which is
stronger.

\begin{figure}[h]
\includegraphics[width=\textwidth]{newimpl.png}
\caption{In the second implementation the first two consecutive segments of data
are considered, and as one necessarily must contain nothing but a single
frequency it will be stronger than the other and not be diluted by extraneous
frequencies. After that only the chunks containing one frequency are
considered. Now the ``worst case'' scenario is both contain nothing but the
desired frequency, in which case either the first segment is chosen or one is
minutely stronger than the other.}
\label{fig:newimpl}
\end{figure}

It also shows how one of these two segments is guaranteed to contain
nothing but the desired frequency, and in the rare event that the start of the
recording corresponds exactly with the start of the data sent, both will
contain only the correct frequency. The weaker amplitude is ignored as it
contains other data, and from then on only alternate bytes from the recording
are sent for decoding, meaning every decoded byte is 100\% accurate.

\section{Extensions}

I have implemented an extension to the core requirements in the project
proposal, allowing simultaneous two-way communication between the mobile
devices. This is useful not only for simultaneous transfer of files, but also as
a method of feedback sent from the receiving phone so concepts such as backoff
signals can be implemented.

To do this I created two threads in the program to run simultaneously, one for
sending data and one for receiving. Both phones now store data from the
microphone as well as the data they have sent out. When analysing the incoming
data the phone will compare what it receives to what it is sending. If they
exactly match then the phone knows that it is the only one sending data and can
ignore it, but still progress through the array in preparation for a collision.
Once a collision occurs and it detects two frequencies it can compare them both
to the array of data it is sending and store the other in an array for foreign
frequencies. In the event both send the same frequency at the same time it
should have a significantly higher amplitude, but can opt to save it in the
array regardless. Once the phone runs out of data it is sending to compare to,
or if it stops detecting collisions as the other phone has stopped transmitting,
the foreign frequencies array can be output to the user.
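The comparison step can be sketched as follows; the names and the segment representation are illustrative, not Dolphin's actual data structures, and for simplicity the sketch drops frequencies that match this phone's own transmission rather than checking their amplitude.

```java
import java.util.ArrayList;
import java.util.List;

class CollisionFilter {
    // sent[i] is the frequency this phone transmitted in segment i;
    // heard[i] holds the frequencies detected in segment i (one, or two
    // during a collision). Frequencies that do not match our own
    // transmission must have come from the other phone.
    static List<Integer> foreignFrequencies(int[] sent, int[][] heard) {
        List<Integer> foreign = new ArrayList<>();
        for (int i = 0; i < heard.length; i++) {
            for (int f : heard[i]) {
                if (i >= sent.length || f != sent[i]) foreign.add(f);
            }
        }
        return foreign;
    }
}
```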

\section{Summary}

All the required features of Dolphin described in the proposal, namely the
encoding, decoding and socket interface, have been implemented, along with one
extension beyond the original success criteria. The next stage is
evaluating the effectiveness of the implementation, which is described in the
next chapter.

\chapter{Evaluation}

This chapter describes in more detail the tests I undertook during
implementation of the project to determine the optimum sample rate, spacing
between used frequencies and length of tone. I then carry out the same tests
again using the new implementation to measure the performance differences. The
best sample rate is 32kHz, the length of each encoded tone is 64ms and the
spacing between frequencies is 30Hz.

\section{Experimental Setup}

My testing procedure involved sending the same test file between two phones
running an app I wrote to use Dolphin, attempting to maximise the percentage of
the original file correctly decoded by observing the bytes written at the
receiving end. The phones were always at a constant distance to control any
extraneous effects such as differing amplitude. Figure~\ref{fig:tests} shows a
photograph of the testing environment. This way, altering a variable was not
falsely assumed to succeed due to other factors. The test file was a text file
containing the sentence:

\begin{description}
\item{ \emph{The quick brown fox jumps over the lazy dog.}}
\end{description}

This is a suitable, small test file because one application for Dolphin is the
development of audio QR codes, which often encode Internet addresses. This sentence
contains an instance of every letter (with an upper-case example) as well as a
full stop, all of which frequently appear in web addresses. Using the same file
at the same distance and amplitude makes the test repeatable, and means the
results of two tests are directly comparable.

\begin{figure}[h]
\includegraphics[width=\textwidth]{testing.png}
\caption{A photo of the testing setup. The phones are always a constant
distance apart to minimize extraneous factors in the initial testing.}
\label{fig:tests}
\end{figure}

The variables that affect the percentage of bytes correctly decoded are:

\begin{enumerate}
  \item{{\bf Enlarging factor:} the spacing between each frequency to allow
  error correction. The most accurate decoding uses 30Hz spacing.}
  \item{{\bf Sample rate:} the number of samples per second taken. The most
  accurate decoding occurs at 32kHz.}
  \item{{\bf Tone length:} the time each encoded tone plays to represent one
  byte. The best length is 64ms.}
\end{enumerate}

I will demonstrate my tests of each one of these variables in turn, using the
features described in the remainder of this section.

\subsection{Android application}

To test my module I created an Android application to simply send and
receive text. Figure~\ref{fig:app} shows a
screenshot of this app and Figure~\ref{fig:blackbox} shows how my app interacts
with my project, taking the place of the top ``end user'' section.

\begin{figure}[t]
\includegraphics[width=0.5\textwidth]{app_screenshot.png}
\caption{A screenshot of the test app designed to use the Dolphin system, with
access to the phone microphone and speakers. This is the view after the phone
has completed the decoding of a file from the audio input.}
\label{fig:app}
\end{figure}

\begin{figure}[t]
\includegraphics[width=\textwidth]{blackbox.png}
\caption{Other apps should be able to use my project as a black box,
integrating it into other projects or applications as an extra
module.}
\label{fig:blackbox}
\end{figure}

\subsection{Graphing tools}

To analyse the data received I used a combination of a Java graphing class and
Matlab. The Graph class uses Java's existing \emph{Swing} \texttt{JPanel} to
display the graph on screen, overriding \texttt{JComponent.paintComponent(Graphics g)}.
It draws a series of lines using the method \texttt{g2.draw(new
Line2D.Double(x1, y1, x2, y2))}, with the endpoint \texttt{y2} of each line
being the next value in the given data (in this case the frequencies recorded).
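The drawing loop just described can be sketched in a few lines. The class below
is my own minimal reconstruction for illustration, not the actual Graph class;
in particular the vertical scaling is an assumption of the sketch.

\begin{lstlisting}
import javax.swing.JPanel;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.geom.Line2D;

/** Minimal line-graph panel in the style described: each data point
 *  is joined to the next with a Line2D inside paintComponent. */
public class GraphPanel extends JPanel {
    private final double[] data;  // e.g. the recorded frequencies

    public GraphPanel(double[] data) {
        this.data = data;
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D g2 = (Graphics2D) g;
        g2.setColor(getForeground());
        // Scale the values so the largest reaches the top of the panel.
        double max = 1.0;
        for (double d : data) max = Math.max(max, d);
        double xStep = (double) getWidth() / Math.max(1, data.length - 1);
        for (int i = 0; i + 1 < data.length; i++) {
            double y1 = getHeight() * (1 - data[i] / max);
            double y2 = getHeight() * (1 - data[i + 1] / max);
            g2.draw(new Line2D.Double(i * xStep, y1, (i + 1) * xStep, y2));
        }
    }
}
\end{lstlisting}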

The data given to Matlab for graphing was obtained by storing a record of 
\begin{enumerate} 
  \item{ the original file to be encoded;}
  \item{ its encoded frequency values;}
  \item{ the microphone input after transmission;}
  \item{ the file written based on the decoded information.}
\end{enumerate}
All of these take the form of byte arrays for numerical comparison. Comparing
(1) with (4) and (2) with (3) will give a measure of how accurate each
transmission is.
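The numerical comparison of (1) with (4), and of (2) with (3), reduces to
counting the positions at which two byte arrays agree. The method below is my
own sketch of such a comparison; the names are illustrative, not from the
Dolphin source.

\begin{lstlisting}
/** Compares two byte arrays position by position and reports the
 *  percentage of bytes that were decoded correctly. */
public class ByteAccuracy {

    /** Returns the percentage of positions at which the two arrays
     *  match. Positions missing from the shorter array count as
     *  errors, so dropped bytes also reduce the score. */
    public static double percentCorrect(byte[] original, byte[] decoded) {
        int length = Math.max(original.length, decoded.length);
        if (length == 0) return 100.0;
        int matches = 0;
        for (int i = 0; i < Math.min(original.length, decoded.length); i++) {
            if (original[i] == decoded[i]) matches++;
        }
        return 100.0 * matches / length;
    }

    public static void main(String[] args) {
        byte[] sent     = "The quick brown fox".getBytes();
        byte[] received = "The quick brawn fox".getBytes(); // one byte differs
        System.out.println(percentCorrect(sent, received));
    }
}
\end{lstlisting}

The same method applied to the stored frequency arrays measures how many
encoded frequencies survived transmission intact.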

\section{Frequency range}

Before looking at the spacing factor I determined where to start the encoded
frequencies from.
\begin{figure}[h]
\includegraphics[width=\textwidth]{lowfreq.png}
\caption{Background noise means low frequencies are almost always detected
alongside the desired frequency. To counter this, frequencies below 300Hz are
ignored.}
\label{fig:lowfreq}
\end{figure}
Background noise such as computer fans, wind and distant conversations causes a
rise in the amount of low-frequency energy detected. Figure~\ref{fig:lowfreq}
shows the microphone input from listening to a tone of 700Hz; the majority of
these background frequencies are below 300Hz. Therefore, to partly counter
background noise, I do not map byte values to any frequency below 300Hz. This
is achieved by shifting every frequency up by that
amount. In the implementation this means the portion of the output array from
the Fourier Transform that concerns frequencies below that level can be deleted
entirely, so when scanning the array for the largest amplitude detected, the
lower frequencies are not considered. Figure~\ref{fig:bkgnoise} shows the new
output from the FFT, which is much less ambiguous.
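In code, ignoring the bottom 300Hz amounts to skipping the corresponding FFT
output bins when searching for the peak. The following is a sketch under my own
naming, assuming a magnitude array in which bin $i$ covers the frequency
$i \cdot f_s / N$ for sample rate $f_s$ and transform size $N$.

\begin{lstlisting}
/** Finds the dominant frequency in an FFT magnitude array while
 *  ignoring everything below a cutoff, as Dolphin does with the
 *  bottom 300Hz to suppress low-frequency background noise. */
public class PeakFinder {

    /** Returns the centre frequency (Hz) of the strongest bin at or
     *  above minFreq. Bin i covers frequency i * sampleRate / fftSize. */
    public static double peakAbove(double[] magnitudes, int sampleRate,
                                   int fftSize, double minFreq) {
        double binWidth = (double) sampleRate / fftSize;
        int start = (int) Math.ceil(minFreq / binWidth);
        int best = start;
        for (int i = start; i < magnitudes.length; i++) {
            if (magnitudes[i] > magnitudes[best]) best = i;
        }
        return best * binWidth;
    }

    public static void main(String[] args) {
        // 32kHz sampling with a 2048-point transform: 15.625Hz bins.
        double[] mags = new double[1024];
        mags[6]  = 9.0;  // ~94Hz background hum, the loudest component
        mags[45] = 5.0;  // the transmitted tone, the bin nearest 700Hz
        System.out.println(peakAbove(mags, 32000, 2048, 300.0)); // prints 703.125
    }
}
\end{lstlisting}

In the example the low-frequency hum has the largest amplitude overall but is
discarded by the cutoff, so the bin nearest the 700Hz tone is reported instead.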

\begin{figure}[h]
\includegraphics[width=\textwidth]{bkgnoise.png}
\caption{Output of FFT from listening to a 700Hz tone. Ignoring the bottom 300Hz
means the frequency detected is much clearer.}
\label{fig:bkgnoise}
\end{figure}

\section{Enlarging factor}

The enlarging factor is how many unused frequencies should be left between the
useful, encoded frequencies. As frequencies very close together are
almost indistinguishable, mapping each byte value from 0 to 255 to the
frequency of the same number (offset by 300Hz) would cause a high rate of
errors. In a silent environment 400Hz would be indistinguishable from 401Hz or
402Hz, depending on the quality of the microphone; in a room with mild
background noise the error rate rises further. A multiplying factor is required
to space out the frequencies and make each one more easily
distinguishable from its neighbours. A Fourier Transform also does not return
the exact frequency which was transmitted as it returns \emph{frequency bins}
which contain several frequencies depending on the bin size.
Furthermore, background noise and differing amplitudes may alter the perceived
frequency so it is unrealistic to expect the transform to work with perfect
accuracy in all situations. I therefore round to the nearest expected frequency,
which allows a tolerance of half the spacing size either side of the expected frequencies.
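Concretely, the mapping and the rounding tolerance can be expressed as follows.
The class and method names are my own illustration rather than the actual
Dolphin code.

\begin{lstlisting}
/** Maps byte values to frequencies using a fixed offset and spacing,
 *  and decodes by rounding to the nearest expected frequency, giving
 *  a tolerance of half the spacing on either side. */
public class FrequencyMap {
    static final double OFFSET = 300.0;  // skip the noisy bottom 300Hz

    /** Frequency encoding byte value b (0-255) with the given spacing. */
    public static double encode(int b, double spacing) {
        return OFFSET + b * spacing;
    }

    /** Rounds a detected frequency to the nearest expected frequency
     *  and returns the corresponding byte value. */
    public static int decode(double detected, double spacing) {
        int b = (int) Math.round((detected - OFFSET) / spacing);
        return Math.max(0, Math.min(255, b));  // clamp to the byte range
    }

    public static void main(String[] args) {
        double spacing = 30.0;
        double f = encode(65, spacing);  // 'A' -> 300 + 65*30 = 2250Hz
        // A detection error under half the spacing still decodes correctly.
        System.out.println(decode(f + 14.0, spacing)); // prints 65
    }
}
\end{lstlisting}

Any detection error of less than half the spacing still rounds to the intended
byte, which is the tolerance referred to above.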

A further consideration when testing frequency gap sizes is the sample rate. The
larger the frequency spacing, the higher the largest frequency will be which,
due to the \emph{Nyquist-Shannon Sampling Theorem}, means the sample rate may
need to be increased. More samples are then recorded per second, which
increases the total memory requirement for Dolphin and, as discussed in
Chapter 3, more memory means the garbage collector is more likely to run, which
will slow down processing.

The size of the frequency spacing directly affects the required sample rate,
as by the \emph{Nyquist-Shannon Sampling Theorem} the sample rate must be at
least twice the maximum possible frequency. To use the \emph{Apache} Fast Fourier
Transform the sample rate must also be a power of 2 multiplied by 1000, as there
is a conversion from milliseconds to seconds between recording the audio and
sending it to the transform which needs to be corrected. The only such rates
below 44100Hz (the standard sample rate used for audio encoding) that are high
enough are 16000Hz and 32000Hz. As using less memory is preferable, a sample
rate of 16000Hz is better
than 32000Hz if the accuracy is not adversely affected. Therefore all
frequencies used must be less than 8000Hz and, allowing for the 300Hz offset, a
spacing which maps 255 frequencies into a space of 7700Hz is needed. The maximum
spacing possible, using the largest range of the frequency spectrum, is 30Hz,
which uses a range of 7650Hz, so this would be the best choice for maximum error
correction. I also considered that a 32000Hz sample rate may provide more
reliable results, in which case a larger frequency spacing is possible to use a
larger portion of the available spectrum.
Figure~\ref{fig:factor} shows the average difference between the actual
frequency and the decoded frequency for frequency spacings of 30\textendash61Hz
(61Hz being the maximum spacing that fits in a range of 15700Hz), using a text
file containing the sentence ``The quick brown fox jumps over the lazy dog.'' as
the encoded data.
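The arithmetic in this section can be checked mechanically: the highest encoded
frequency follows from the spacing, and Nyquist then dictates the minimum rate.
The sketch below, under my own naming, picks the smallest permissible rate of
the form $2^k \times 1000$.

\begin{lstlisting}
/** Chooses the smallest permissible sample rate for a given frequency
 *  spacing, following the Nyquist-Shannon requirement that the rate
 *  be at least twice the highest encoded frequency. */
public class RateChooser {
    // Rates of the form 2^k * 1000 below the standard 44100Hz.
    static final int[] RATES = {1000, 2000, 4000, 8000, 16000, 32000};

    /** Highest frequency used: 300Hz offset plus 255 steps of spacing. */
    public static double maxFrequency(double spacing) {
        return 300.0 + 255 * spacing;
    }

    /** Smallest candidate rate of at least twice the maximum
     *  frequency, or -1 if no candidate is large enough. */
    public static int minSampleRate(double spacing) {
        double required = 2 * maxFrequency(spacing);
        for (int rate : RATES) {
            if (rate >= required) return rate;
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(minSampleRate(30.0)); // prints 16000
        System.out.println(minSampleRate(61.0)); // prints 32000
    }
}
\end{lstlisting}

For 30Hz spacing this selects 16000Hz, and for 61Hz spacing it selects 32000Hz,
matching the reasoning above.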

\begin{figure}[h]
\includegraphics[width=\textwidth]{spacing.png}
\caption{The average, absolute difference in actual frequency and decoded
frequency when using frequency spacings 30\textendash60Hz compared to the
maximum tolerable error.}
\label{fig:factor}
\end{figure}

There is an increase in the distance from the intended frequency as the
spacing increases. As I explained, higher frequencies
are harder to distinguish from each other, so as the spacing increases and higher
frequencies are more regularly used, the accuracy decreases. However, as the
green line shows, all of these average results lie within the tolerance of half
the spacing value, so they would round to the correct value regardless. Since
the error grows at the same rate as the maximum tolerable error, each
spacing is equally valid. Therefore, as humans find lower frequencies more
comfortable to listen to~\cite{HighPitch}, I use 30Hz spacing.

\section{Sample rate}
The sample rate is the number of discrete samples recorded per second. The
higher the sample rate, the more data there is to analyse, which means more
memory and processing time are required per encoded byte. The previous results
indicated a spacing of 30Hz per byte would be sufficient, which means both 16kHz
and 32kHz are viable sample rates. Figure~\ref{fig:samplerates} shows the
results of my tests for both in the first implementation, with variable encoding
lengths.

\begin{figure}[h]
\includegraphics[width=\textwidth]{samplerate.png}
\caption{The percentage of bytes decoded incorrectly under different sample
rates, averaged over 10 tests for each tone length. The blue line, representing
16kHz, is inferior to the red 32kHz sample rate in all cases.}
\label{fig:samplerates}
\end{figure}

32000Hz is far superior at every measured tone length, so I use it for the
final implementation of Dolphin at the expense of extra memory. Given the length
of time each tone is likely to be played for, the discomfort caused by higher
frequencies will be limited, so it is worth investigating how accurate Dolphin
is when using the maximum possible range of frequencies under 16000Hz, by
comparing the accuracy of a transmission using 30Hz spacing to one using 61Hz
spacing (the highest frequency of which would be 15855Hz).
Figure~\ref{fig:spacingat32khz} shows this comparison.

\begin{figure}[h]
\includegraphics[width=\textwidth]{samplerate2.png}
\caption{The percentage of bytes decoded incorrectly at a 32kHz sample rate
using different frequency spacings, averaged over 10 tests for each tone length.
The red line is as before, using a 30Hz spacing. The green line shows the
results from a 61Hz spacing, which is worse than the 30Hz spacing at a 16kHz
sample rate.}
\label{fig:spacingat32khz}
\end{figure}

The green line representing 61Hz spacing in Figure~\ref{fig:spacingat32khz} is
worse for every measured tone length. In its best case it had 4 times as many
errors as 30Hz. In the average case for 16ms bursts Dolphin
either incorrectly decoded or omitted 36\% of the data. It is clear that the
sample rate should be 32000Hz with a spacing of 30Hz per encoded frequency.
The reason 61Hz performs worse is likely that its higher frequencies are harder
to distinguish, so in this case using the whole available frequency spectrum is
detrimental to error control.

For the second implementation I again tested tone lengths of 16, 32 and 64ms
(meaning the lengths actually decoded were 8, 16 and 32ms) to see if shorter
sequences were just as reliable. I also retested at 128ms to verify that its
unreliability was not a one-off occurrence. Figure~\ref{fig:newsample} shows the
comparison of the second system performance with the first at both
16000Hz and 32000Hz for all four tone lengths.

\begin{figure}[h]
\includegraphics[width=\textwidth]{newsample.png}
\caption{The red data, representing 32kHz sample rates, still outperforms the
blue 16kHz. In 16ms and 64ms for the new implementation the average number of
errors over 10 tests was 0, meaning for these tests there was 100\% data
retrieval.}
\label{fig:newsample}
\end{figure}

In all tone lengths the new implementation performs better with a sample rate
of 32000Hz: the 2\% of byte errors for 16ms at 16kHz is reduced to 0\% at
32kHz, at 32ms 8\% drops to less than 1\%, and at 64ms 4\% again drops to 0\% in
these tests. There is an improvement at 128ms but it is still less reliable
than 64ms. This implementation is also more reliable than the corresponding
values for the first implementation in all cases.

\section{Length of tone}
The length of the tone transmitted has an effect on the data transfer
rate of Dolphin. Shorter tones mean higher throughput, whereas longer tones may be
more reliable and accurate. As the tone lengths must still be powers of 2 due to
the Fourier Transform, I test 16, 32, 64 and 128ms. Figure~\ref{fig:samplerates}
showed how the length of the tone influences the accuracy of the received sound,
but the average case presented in that experiment was influenced by the
occasional poor performance of one transfer. A system which works perfectly 9
times out of 10 is preferable to one which drops 10\% of the data in every
transmission. Figure~\ref{fig:length} shows how accurate these four burst
lengths were over a series of 10 tests each, using the first implementation of
Dolphin and considering all the test data rather than an average.

\begin{figure}[h]
\includegraphics[width=\textwidth]{length.png}
\caption{Box plots showing the number of byte errors in the 40 tests of
Dolphin under different tone lengths. 16ms is unsuitable, and although 50\% of
the tests at 128ms were successful, 128ms has a larger range of errors, so
higher-level error control is better performed on 64ms.}
\label{fig:length}
\end{figure}

The Fourier Transform returns a sum of the amplitudes in each sample,
so if 1000Hz measures 10dB and everything else measures 1dB, after 500 samples
the frequency bin for 1000Hz will contain 5000dB and everything else will only
contain 500dB. In this way the stronger frequencies grow faster than the weaker
ones, as larger values are summed on each pass. This should mean that the longer
the analysis, the less ambiguous the results; however, 64ms performs more
reliably than 128ms, with a smaller spread of data over the same interquartile
range. This is likely due to the longer tone increasing the risk of detecting
spikes in background noise, so shorter tones will on average perform better.
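As a toy illustration of the accumulation described above (it follows the
simplified arithmetic of the 10dB example; real amplitudes do not literally sum
in decibels):

\begin{lstlisting}
/** Illustrates, following the simplified model in the text, how the
 *  bin for a persistent strong frequency accumulates amplitude
 *  faster than bins that only ever see background noise. */
public class BinGrowth {

    /** Sums per-sample amplitudes into frequency bins over n passes. */
    public static double[] accumulate(double[] perSampleAmplitude, int n) {
        double[] bins = new double[perSampleAmplitude.length];
        for (int pass = 0; pass < n; pass++) {
            for (int i = 0; i < bins.length; i++) {
                bins[i] += perSampleAmplitude[i];
            }
        }
        return bins;
    }

    public static void main(String[] args) {
        // Bin 0 is the 1000Hz tone at 10dB; the rest are 1dB noise.
        double[] amplitudes = {10.0, 1.0, 1.0, 1.0};
        double[] bins = accumulate(amplitudes, 500);
        System.out.println(bins[0]); // prints 5000.0, as in the example
        System.out.println(bins[1]); // prints 500.0
    }
}
\end{lstlisting}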

The second implementation showed a fifteen-fold reduction in error rate (3\%
down to 0.2\%) in Figure~\ref{fig:newsample}. Figure~\ref{fig:newboxplot} shows
the number of byte errors detected testing the second implementation, analogous to
Figure~\ref{fig:length}. This figure includes a bar chart for clarity, as so
much of the data in the box plot registers zero errors.

\begin{figure}[h]
\includegraphics[width=\textwidth]{newlength.png}
\caption{The results of the tests for the new implementation. Both graphs show
the range of errors across all 40 tests, as the box plots are non-existent for
3 of the variables.}
\label{fig:newboxplot}
\end{figure}

With three errors in two of the tests and at least one error in half of the
tests, 128ms is an unreliable tone length to use. Considering the error in one of the
32ms bursts as an outlier, 16, 32 and 64ms bursts resulted in 100\% decoding in
ten separate tests.

\section{Performance of Dolphin}
\label{sec:performance}

Having determined the optimum values for each of the significant variables, I
can now evaluate the overall performance of Dolphin in real-world scenarios. Here
I discuss using Dolphin with larger files and its ability to operate in
environments with background noise.

\subsection{Accuracy with larger files}
So far I have tested 40\textendash50 byte files, and have used the same file
when testing the effect of changing each variable, to minimise the number of
changes that could contribute to the results. Figure~\ref{fig:perfsmall} shows
how Dolphin, with the settings described up to this point, performs when
decoding ten different small files, verifying that it can decode files other
than the test file used so far.

\begin{figure}[h]
\includegraphics[width=\textwidth]{perf_small.png}
\caption{A bar chart showing the average number of byte errors detected from
decoding 10 different files twice. In only one of the cases did the file decode
correctly.}
\label{fig:perfsmall}
\end{figure}

Figure~\ref{fig:perflarge} shows the same process for 10 larger files,
each over 500 bytes instead of 50 bytes.

\begin{figure}[h]
\includegraphics[width=\textwidth]{perf_large.png}
\caption{A bar chart showing the average number of byte errors detected from
decoding ten different files twice. The error counts are slightly higher than
for the smaller files on average, likely because the longer the sound lasts,
the more likely it is that an element of background noise, such as a mobile
phone ringing, will change and interfere. Provided the background noise remains
relatively constant, the errors are within an acceptable margin.}
\label{fig:perflarge}
\end{figure}

These results show that, provided the level of background noise remains low,
the length of the file decoded does not have an impact on the reliability of
decoding. The same cannot be
said of QR codes, as larger files require smaller dots to represent the bits,
which become increasingly susceptible to camera shake.

\begin{figure}[h]
\includegraphics[width=\textwidth]{newperf_small.png}
\caption{The total number of errors detected in testing the same 10 small files
with the new implementation. There was only 1 error across all 30 cases. The
tests of the larger, 500 byte files resulted in no errors, so this error is
likely due to extraneous factors.}
\label{fig:newperf}
\end{figure}

Figure~\ref{fig:newperf} shows these tests for the new implementation.
As expected, file size still does not impact performance: the tests of the
larger files resulted in perfect decoding in all thirty test cases, so I have
omitted that graph. As the tests have taken place in the Computer Laboratory to
mimic real-world use, the single error in the small-file tests was likely due
to an increase in background noise after the background noise baseline was
established, corrupting the highest detected frequency.

\subsection{Noise}
Using the values for sample rate and encode length I have derived so far, I now
show that Dolphin copes with background noise. To test this I use a sample of
recorded conversation freely available on the Internet,\footnotemark played
while Dolphin is decoding on a nearby computer. I use the same sample of
conversation to get more reliably comparable results in repeated tests.

\footnotetext{\emph{http://www.soundjay.com/crowd-talking-1.html}}

\begin{figure}[h]
\includegraphics[width=\textwidth]{noise.png}
\caption{A reasonable level of background conversation causes a small rise in
most frequencies from 0\textendash2000Hz but this is not a significant factor
in what is detected. Due to the multiple different frequencies used in
speech the total noise is distributed, meaning the largest single detected
frequency is still the constant tone of the Dolphin transmission. Note the
lower 300Hz are still ignored.}
\label{fig:backgroundnoise}
\end{figure}

Figure~\ref{fig:backgroundnoise} shows how speech causes small increases in
frequencies between 0 and 2000Hz, but because speech is spread over multiple
frequencies the total amplitude detected in any one frequency is minimal,
meaning the overall impact is negligible. In the event of louder conversations
causing a more significant influence on all the frequencies in the range of
speech, Dolphin can still work by increasing the volume of the transmission.
Dolphin cannot be expected to work flawlessly in excessive noise in the same way
QR-codes are not expected to work in total darkness.

\chapter{Conclusions}

I have developed a system under Android that transfers data between two mobile devices using sound as the carrier
signal. This comprises two main project components, the encoder and the decoder.
In the encoding stage the sounds are frequency modulated to represent different
bytes, incorporating error control and maximising the potential
reliability. In the decoding stage a Fast Fourier Transform is performed on the
received sound to retrieve the frequencies and decode them back into bytes. This
is where the error correction prepared for in the encoding stage takes place, by
altering the perceived frequencies to match the expected frequencies which can
be mathematically mapped to bytes. Both stages involve accessing the hardware on the mobile
device, namely the speakers and the microphone. Dolphin is written in such a way
that it can be inserted into other applications as a module and be used without
extensive knowledge of the inner workings.

\section{Initial success criteria}

All of the initial success criteria, as described in my proposal, have been met.
An additional data link layer has been implemented for Android. As demonstrated in
over 250 data transfers in Chapter 4, I have converted a stream of
bits into a logical sound representation, and ensured that no two bit streams
have identical sound patterns unless the streams are themselves identical. I
have decoded the sound back into a stream of bits on a separate
device. I have carried out an evaluation of the transfer rates and reliability,
and created a second implementation based on the results. This second
implementation works with more than 99\% accuracy using three different settings
for the length of the tones, an improvement over the 75\% to 94\% success of the
first implementation. Furthermore, the 99\% success of the second implementation
means that only one byte in dozens of transmissions is dropped, whereas the 94\%
accuracy of the first implementation was due to dropping an average of 6\% of
the bytes in every transmission (shown by the error bars in
Figure~\ref{fig:spacingat32khz}), which is unusable in a practical system.

In addition, I have implemented an extension to the original project:
full-duplex communication between devices is now available. I have shown in
Section~\ref{sec:performance} that Dolphin was successful in all thirty of the
test cases using the new implementation, so 100\% of the bytes sent in
those tests were successfully decoded.

\section{Results}

64ms bursts of sound representing bytes turned out to be more
reliable than 128ms bursts, despite 128ms offering more data to verify the byte.
Whilst 64ms offered 100\% data retrieval in the series of tests, 128ms was only
just above 97\%, because half of the tests using 128ms bursts contained an
error. This would mean higher-level error control would have to step in. The
bandwidth at 64ms is 133bits/s, although I also demonstrated that the 32ms
version, which operates at 267bits/s, has a successful transfer rate of 99\%.
These speeds are fast enough to send URLs or vCards between mobile devices but
not necessarily for larger files such as videos or high resolution photos,
although I have demonstrated that Dolphin can be used for large files if
necessary.

If starting this project again with the benefit of hindsight, I would have
explored further the idea of using non-constant frequency spacing, as I
demonstrated that lower frequencies could be more easily distinguished than
higher frequencies. I also would have considered encoding a clock into the
transmission so the problem of decoding from a random start point in the first
implementation would not have been an issue. In the second implementation, even
though it works with near-perfect accuracy, half of the data transmitted is
ignored, which is an inefficient use of bandwidth.

\section{Future work}

There are a few aspects of Dolphin which could lead to future development. One
would be to investigate the use of \emph{Compressive Sensing} as I outlined in
Section~\ref{sec:compressive} to reduce the number of samples required per tone,
by taking a much smaller random sample instead of a regular interval sample.
This would mean the \emph{Nyquist-Shannon Sampling Theorem} would not apply and
the memory requirement can be lowered. This technique has been shown to be
successful in fields such as rapid MRI generation~\cite{CompressiveMRI}, an area
in which accuracy of data is critical, so it could be applied to data transfer
systems.

Another area to improve could be the data transfer rate. Using 32ms bursts the
data transfer rate of Dolphin is currently 267bits/s. One reason for
this is that only one tone can be played at once, meaning only one byte can be
transferred at a time. As outlined in Section~\ref{sec:compressive}, it is
possible to combine sine waves by adding them together and then the Fourier
Transform would detect two equally strong frequencies which represent the two
bytes sent. This would double the data transfer rate. Determining
in which order the bytes were intended is a matter of encoding. One possible
method would be to encode the first byte as before, and then add the numeric
value of the second to the first and encode that. Therefore, the smaller
frequency would always represent the first byte, and the second can be retrieved
by decoding the frequency and subtracting the value of the first byte.
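A sketch of how this encoding could look follows. It is my own illustration,
not part of Dolphin, and it assumes the frequency range is extended to cover
values up to 510, since the sum of two bytes can exceed 255.

\begin{lstlisting}
/** Sketch of the proposed two-bytes-per-tone scheme: the first byte
 *  is encoded as usual and the second as the sum of the two, so the
 *  lower frequency always identifies the first byte. */
public class DualTone {
    static final double OFFSET = 300.0;
    static final double SPACING = 30.0;

    /** Values range over 0-510 here, as two bytes can sum past 255. */
    static double toFrequency(int value) {
        return OFFSET + value * SPACING;
    }

    static int fromFrequency(double f) {
        return (int) Math.round((f - OFFSET) / SPACING);
    }

    /** Returns the two frequencies to play simultaneously. */
    public static double[] encodePair(int b1, int b2) {
        return new double[] { toFrequency(b1), toFrequency(b1 + b2) };
    }

    /** Recovers the byte pair: the smaller frequency is the first
     *  byte, the second is the larger value minus the first. */
    public static int[] decodePair(double fA, double fB) {
        double low = Math.min(fA, fB), high = Math.max(fA, fB);
        int b1 = fromFrequency(low);
        int b2 = fromFrequency(high) - b1;
        return new int[] { b1, b2 };
    }

    public static void main(String[] args) {
        double[] tones = encodePair(72, 105); // 'H' and 'i'
        int[] decoded = decodePair(tones[0], tones[1]);
        System.out.println(decoded[0] + " " + decoded[1]); // prints "72 105"
    }
}
\end{lstlisting}

A second byte of zero would make the two frequencies coincide, one of several
edge cases a full design would need to resolve.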

This method would also make multiplexing transfers possible, sending multiple
files at once by playing them simultaneously, which was one of the possible
extensions in my original proposal.

\begin{thebibliography}{100}

\bibitem{QRPopularity} Rouillard, J.; ``Contextual QR Codes,'' Computing in the Global Information Technology, 2008. ICCGI '08. The Third International Multi-Conference on, pp. 50-55, July 27 2008-Aug. 1 2008. doi: 10.1109/ICCGI.2008.25

\bibitem{QRSize} Yue Liu; Ju Yang; Mingjun Liu; ``Recognition of QR Code with
mobile phones,'' Control and Decision Conference, 2008. CCDC 2008. Chinese, pp. 203-206, 2-4 July 2008. doi: 10.1109/CCDC.2008.4597299

\bibitem{CompSensing1} Shihao Ji; Ya Xue; Carin, L.; ``Bayesian Compressive
Sensing,'' Signal Processing, IEEE Transactions on, vol. 56, no. 6, pp. 2346-2356,
June 2008. doi: 10.1109/TSP.2007.914345

\bibitem{CompSensing2} Thomas Blumensath, Mike E. Davies; ``Iterative hard
thresholding for compressed sensing,'' Applied and Computational Harmonic Analysis, vol. 27,
issue 3, November 2009, pp. 265-274, ISSN 1063-5203. doi: 10.1016/j.acha.2009.04.002

\bibitem{Nyquist} Nyquist, H.; ``Certain Topics in Telegraph Transmission Theory,'' American Institute of Electrical Engineers, Transactions of the, vol. 47, no. 2, pp. 617-644, April 1928. doi: 10.1109/T-AIEE.1928.5055024

\bibitem{Shannon} C. E. Shannon; ``A mathematical theory of communication,'' ACM SIGMOBILE Mobile Computing and Communications Review, vol. 5, no. 1, January 2001. doi: 10.1145/584091.584093

\bibitem{phaseshift} Pasupathy, S.; ``Minimum shift keying: A spectrally efficient modulation,'' Communications Magazine, IEEE, vol. 17, no. 4, pp. 14-22, July 1979. doi: 10.1109/MCOM.1979.1089999

\bibitem{Doppler} John M. Chowning; ``The Simulation of Moving Sound Sources,'' Computer Music Journal, vol. 1, no. 3 (June 1977), pp. 48-52

\bibitem{KansasCityStandard} M. Peschke, V. Peschke; ``BYTE's Audio Cassette Standards Symposium,'' BYTE, February 1976, pp. 72-73

\bibitem{KansasCityPerformance} P.J. Robertson, B. Campbell; ``Interface unit for audiocassette and RS232-standard serial port,'' Journal of Microcomputer Applications, vol. 8, issue 3, July 1985, pp. 279-284, ISSN 0745-7138. doi: 10.1016/0745-7138(85)90007-7

\bibitem{Android} Butler, M.; ``Android: Changing the Mobile Landscape,'' Pervasive Computing, IEEE, vol. 10, no. 1, pp. 4-7, Jan.-March 2011. doi: 10.1109/MPRV.2011.1

\bibitem{Conversation} Jie Yang, Simon Sidhom, Gayathri Chandrasekaran, Tam Vu, Hongbo Liu, Nicolae Cecan, Yingying Chen, Marco Gruteser, Richard P. Martin; ``Detecting driver phone use leveraging car speakers,'' Proceedings of the 17th annual international conference on Mobile computing and networking, September 19-23, 2011, Las Vegas, Nevada, USA. doi: 10.1145/2030613.2030625

\bibitem{deltaframes} Mogul, J., et al.; ``Delta encoding in HTTP,'' Work in Progress, 2002

\bibitem{CompressiveMRI} Lustig, M., Donoho, D. and Pauly, J. M.; ``Sparse MRI: The application of compressed sensing for rapid MR imaging,'' Magnetic Resonance in Medicine, vol. 58, pp. 1182-1195, 2007. doi: 10.1002/mrm.21391

\bibitem{HighPitch} Skinner, Margaret Walker; ``Speech intelligibility in noise-induced hearing loss: Effects of high-frequency compensation,'' 1976, p. 74

\end{thebibliography}

\end{document}
