\documentclass[12pt,a4paper]{report}
\usepackage{graphicx}
\usepackage{amsmath}
 
\parindent 0pt
\parskip 6pt

\begin{document}

\chapter{Preparation}

\graphicspath{{C:/Users/Matt/My Documents/!University Work/CompSci/workspace/PartII/gr/}}

Before creating the new layers I considered which software engineering
approach would be best for this particular project, how much information from
courses in the Computer Science Tripos would be useful, and how much I needed to
research further. In this chapter I describe these considerations, along
with some of the fundamental concepts of the project such as digitally
representing sound and how to encode information in a sound, presenting a
number of different ways the information encoding could be achieved
(\emph{amplitude}, \emph{phase} and \emph{frequency} shifting) with their
advantages and disadvantages. My research concludes that using frequency is
the simplest method of data encoding.

\section{Prerequisite Knowledge}

A number of courses offered at Cambridge contain useful information for
completing this project.

\begin{description}
\item{{\bf Digital Signal Processing} contains information on Fast Fourier
Transforms that is critical for decoding the sounds back into bit patterns.}
\item{{\bf Computer Networking} explains the OSI layer model and how the layers
interact with each other, which tells me I need to offer a single socket for
data coming into the data-link layer from the network layer, written for another
program.}
\item{{\bf Mobile and Sensor Systems} contains theory on wireless data
transfer which will also prove useful.}
\item{{\bf Programming in Java} and {\bf Further Java} taught me how to create
Java programs and also contained theory on design patterns, which is applicable
to larger projects.}
\item{{\bf A Group Project} gave me experience in using software lifecycle
methodologies, programming a larger project in Java and using a version control
system for backups and synchronisation of work from multiple locations.}
\end{description}

\section{Development life cycle}

The software development model I will use for this project is the
\emph{waterfall} model because the requirements can be well-defined at the
beginning of the project. It is also likely that, with the Java libraries
available, the number of different ways of completing the internal structure of
the project will be limited, so there would be little to gain from creating
multiple prototypes which would be functionally identical. That said, I
will try a few different coding schemes for test purposes, but they
will likely be variants of the same code: for example, in a frequency shifting
coding scheme, using 4 frequencies for 2-bit blocks instead of 256 frequencies
for 8-bit blocks uses the same method for converting bits to sounds,
just with different block lengths.

\section{Testing strategy}

To test this project I will have a series of files to transmit and receive under
different conditions, and will compare the quality of the received signal and the
time to transmit for each of them. The main tests will include:

{\bf Unit Tests}
\begin{itemize}
  \item{Encoding: Converting data to frequencies and analysing numerical
  output for errors in frequency mapping}
  \item{Encoding: Saving predetermined frequencies as sound files and comparing
  to stock frequencies for accuracy}
  \item{Decoding: Test Fourier Transform with known frequencies (no microphone
  involvement)}
  \item{File operators: Check microphone code captures audio correctly and
  consistently by testing various lengths and sample rates}
\end{itemize}

{\bf System Tests}
\begin{itemize}
  \item{Converting the data to sound and back to data on one device (no
  microphone/speaker involvement)}
  \item{Sending data between phones at all sample rates between a derived
  minimum and maximum to determine the optimal sample rate with the least
  processing}
  \item{Sending data between phones at various sound segment lengths to
  determine the optimal sound length with minimum data loss}
  \item{Repeating all these tests for encoded sequence lengths of 1, 2, 4 and 8
  bits to determine which is the most efficient}
\end{itemize}

{\bf Integration Tests}
\begin{itemize}
  \item{Socket Interface: write an app to send a bit stream to the layers,
  decode result}
  \item{Socket Interface: write an app to receive decoded output from layers,
  test with known data}
\end{itemize}

Further tests may prove necessary after the results of these tests are analysed,
which will be explained in the Evaluation section.

\section{Version control}

Version control is important for managing changes and tracking overall project
progress, automating regular backups and rolling back to a functional state in
the event of a mistake or dead-end. I use \emph{Subversion} (SVN) as it is compatible
with Windows, has free support already in place through \emph{Google Code}, and
there are plugins available for \emph{Eclipse} which I am using for the Java
programming. It is also the system I have most experience with, as I used it for
the Part IB group project. The central repository is at
\texttt{https://mt521-cam-ac-uk.googlecode.com/svn}, a remote
location providing backup redundancy. To use rollback effectively I commit each
atomic change separately, making it simple to find and revert code which has
not worked as expected.

\section{Requirements analysis}

The end-user of this project will be developers of Android apps which send data
via sound. To make this as simple as possible, they will require a single point
of access to the layers without worrying about the internal structure, so I will
create a socket to receive bytes, which are then converted to sound and sent
directly to the device's audio output. To that end, the requirements are as
follows:

\begin{description}
  \item{{\bf R1.} The layers will convert bits to sound and sound to bits.}
  \item{{\bf R2.} The layers should act like a black box with a single input and
  output.}
  \item{{\bf R3.} The layers should connect directly to the physical layer
  devices (microphone and speaker).}
\end{description}

\section{Sound as frequencies}

To use any transmission medium you need to be able to alter the signal sent
to convey different information. In a wire this can be done by alternating between
putting charge on the line and leaving it uncharged to represent binary data. An
extension of this is varying the amount of charge at each pulse to
represent more information per clock cycle. Sound waves can be treated in the
same way by varying one of the characteristics that makes a sound unique.

In general, a pure tone of a given frequency \emph{f} can be represented
as a sine wave using the formula:
\begin{equation}
\mathrm{tone}(t) = \sin(2\pi f t)
\end{equation}
Figure~\ref{fig:three_tones} shows how the tone can be changed by increasing
the frequency.
\begin{figure}[t]
\includegraphics[width=\textwidth]{combined.png}
\caption{Three frequencies representing A, C\# and E. Increasing the frequency
in the sine argument causes different tones to be created, which is
one way of representing different information in a sound.}
\label{fig:three_tones}
\end{figure}
Furthermore, it is possible to combine multiple tones and create \emph{chords},
which is done by adding the sine values together. For example, the three
frequencies shown in Figure~\ref{fig:three_tones} can be combined to create the
sine wave in Figure~\ref{fig:chord}.
\begin{figure}[t]
\includegraphics[width=\textwidth]{440-550-660.png}
\caption{A chord from combined frequencies 440Hz, 550Hz and 660Hz. Doing this
means three times as much information can be represented in the same timespan,
offering a different way to represent data using frequencies.}
\label{fig:chord}
\end{figure}
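As an illustrative sketch of how such tones and chords might be generated digitally (the class name, sample rate and normalisation choice are my own for illustration, not the project's actual code):

```java
// Sketch: sampled sine tones and a chord built by summing them.
public class ToneSketch {
    static final int SAMPLE_RATE = 44100; // samples per second (illustrative)

    // One second of a pure tone at the given frequency.
    static double[] tone(double freqHz) {
        double[] samples = new double[SAMPLE_RATE];
        for (int n = 0; n < samples.length; n++) {
            // Discrete form of tone(t) = sin(2*pi*f*t), with t = n / SAMPLE_RATE.
            samples[n] = Math.sin(2 * Math.PI * freqHz * n / SAMPLE_RATE);
        }
        return samples;
    }

    // A chord is the sum of its component tones, scaled to stay within [-1, 1].
    static double[] chord(double... freqs) {
        double[] sum = new double[SAMPLE_RATE];
        for (double f : freqs) {
            double[] t = tone(f);
            for (int n = 0; n < sum.length; n++) {
                sum[n] += t[n] / freqs.length;
            }
        }
        return sum;
    }
}
```

Summing without the division by the number of tones would risk clipping, since three sine waves can momentarily add up to an amplitude of 3.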
Sound as a sine wave is a continuous function: it can be measured at
every instant in time, with a potentially different value at each instant, so an
infinite series of numbers would be needed to represent one sound exactly. No matter
how big the computer memory is you cannot store an infinite amount of data, therefore when
tones are represented electronically you need to take discrete samples at a
given \emph{sample rate}. The \emph{Nyquist-Shannon Theorem}
states that to ensure no data is lost the sample rate needs to be at least twice the
maximum frequency present~\cite{Nyquist}. Otherwise, if a sine wave is sampled at
$t=0,1,2,3,\ldots$ there are infinitely many frequencies and linear combinations of
frequencies which could produce the same sampled values. An example of this aliasing is
shown in Figure~\ref{fig:aliasing}.
\begin{figure}[t]
\includegraphics[width=\textwidth]{aliasing.png}
\caption{Two different waves with the same sampled values, because the sample
rate was not high enough. If there were eight samples in this range there would
be no ambiguity.}
\label{fig:aliasing}
\end{figure}
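The aliasing effect is easy to reproduce numerically. In this sketch (frequencies and sample rate chosen purely for illustration), a 1Hz and a 9Hz tone produce identical samples at 8 samples per second, because $\sin(2\pi \cdot 9n/8) = \sin(2\pi n + 2\pi n/8) = \sin(2\pi \cdot n/8)$ for every integer $n$:

```java
// Sketch: 1 Hz and 9 Hz are indistinguishable at 8 samples per second,
// because 8 is below the Nyquist rate (18) for a 9 Hz tone.
public class AliasDemo {
    static double sample(double freqHz, int n, int sampleRate) {
        return Math.sin(2 * Math.PI * freqHz * n / sampleRate);
    }

    public static void main(String[] args) {
        int fs = 8;
        for (int n = 0; n < 16; n++) {
            double a = sample(1, n, fs);
            double b = sample(9, n, fs);
            // Every sampled value agrees to floating-point precision.
            assert Math.abs(a - b) < 1e-9 : "mismatch at sample " + n;
        }
        System.out.println("1 Hz and 9 Hz alias at 8 samples/s");
    }
}
```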

The \emph{Nyquist-Shannon Theorem} assumes sampling at regular time intervals.
Recent work considers time intervals generated at random, called
\emph{Compressive sensing}~\cite{CompSensing1}. This assumes that
frequencies in use are sparse so the chances of getting aliasing effects are
reduced, which means you can afford to sample at lower rates.

The range of frequencies available as sound waves is the range a
mobile phone speaker is capable of producing, which varies between
handsets. All phones can play tones at frequencies humans can hear,
which is usually 20--20,000Hz. Human speech typically
occupies 300Hz to 3400Hz~\cite{Conversation}, so it would be best to
shift the frequencies used for this project above that range to limit
interference.
The range of frequencies used will directly affect the maximum transmission rate
of the medium, according to the \emph{Shannon-Hartley Theorem}:
\begin{equation}
C = B \log_2(1+S/N)
\end{equation}
which states that the channel capacity $C$ is the product of the bandwidth $B$
and the logarithm of one plus the signal-to-noise ratio $S/N$. In other words,
the more frequencies available the higher the channel capacity, provided the
logarithmic signal-to-noise term does not fall as fast as the bandwidth
grows~\cite{Shannon}.
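As a worked example of the theorem (the bandwidth and signal-to-noise figures here are illustrative, not measurements from this project): a 3100Hz band with a 30dB signal-to-noise ratio gives a capacity of roughly 31 kbit/s.

```java
// Sketch: channel capacity from the Shannon-Hartley theorem,
// C = B * log2(1 + S/N), with S/N as a linear ratio.
public class CapacitySketch {
    static double capacityBits(double bandwidthHz, double snrLinear) {
        return bandwidthHz * Math.log(1 + snrLinear) / Math.log(2);
    }

    public static void main(String[] args) {
        double snr = Math.pow(10, 30.0 / 10); // 30 dB -> 1000 as a linear ratio
        double c = capacityBits(3100, snr);
        System.out.printf("Capacity ~ %.0f bit/s%n", c); // roughly 31 kbit/s
    }
}
```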

\section{Coding schemes}

Rewriting the formula for a tone as a function of time gives a
general sine wave with three parameters:
\begin{equation}
f(t) = A\sin(\omega t + \phi)
\end{equation}
The three elements of this wave which can be modified to represent
information are the \emph{phase} ($\phi$), the \emph{frequency} ($\omega$) and
the \emph{amplitude} ($A$).

\subsection{Phase}
Altering the first of these, the phase, is known as \emph{phase shift keying}
(often called \emph{PSK}). The most common form of PSK is \emph{Binary PSK}, in
which binary information is sent by giving the sine argument a 180 degree phase
shift to represent binary 0, or no phase shift to represent binary
1~\cite{phaseshift}.
There are two different ways to measure this phase difference:
\begin{description}
\item{Comparing the incoming signal to a predetermined tone and observing
which parts of the new tone are phase shifted. The downside of this approach
is needing to store a baseline tone for comparison at the receiving end,
which must either be arranged beforehand or computed as needed, requiring
extra processing time per connection.}
\item{Comparing the incoming tone to itself. This works
by comparing the phase of one time interval to the phase of the interval
preceding it, so instead of the signal at time $t=5$ being compared to $t=5$ in a stock
sine wave, it is compared to $t=4$ in the data just received. For example, four 0s would be
represented by a tone shifting 180 degrees four times, and four 1s by no
shift, i.e.\ a continuous tone. Figure~\ref{fig:phase_shift} demonstrates this.
Creating the tone for each binary element depends on the preceding binary
element, but because the changes in phase are measured at the
receiving end a dropped bit won't affect the decoding of subsequent bits.}
\end{description}
\begin{figure}[t]
\includegraphics[width=\textwidth]{phaseshift.png}
\caption{An example of phase shift with respect to itself. The initialisation
wave gives the first baseline comparison. The second timeslot is 180 degrees
phase shifted compared to this so it represents binary 0. The green wave then
becomes the baseline. The red wave is not phase shifted compared to this so it
is a binary 1. Blue is phase shifted compared to red so it is a binary 0.}
\label{fig:phase_shift}
\end{figure}
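The differential scheme can be sketched in a few lines (the class and method names are my own; a real implementation would work on sampled waveforms rather than abstract phase values):

```java
// Sketch of differential BPSK: a binary 0 flips the phase by 180 degrees
// relative to the previous timeslot; a binary 1 keeps it unchanged.
public class DiffPskSketch {
    // Absolute phase (radians) of each timeslot, starting from phase 0.
    static double[] phases(int[] bits) {
        double[] phase = new double[bits.length];
        double current = 0;
        for (int i = 0; i < bits.length; i++) {
            if (bits[i] == 0) current += Math.PI; // 180-degree shift
            phase[i] = current % (2 * Math.PI);
        }
        return phase;
    }

    // Decoding compares each timeslot to its predecessor, so the receiver
    // never needs a stored baseline tone.
    static int[] decode(double[] phase) {
        int[] bits = new int[phase.length];
        double previous = 0;
        for (int i = 0; i < phase.length; i++) {
            double delta = Math.abs(phase[i] - previous) % (2 * Math.PI);
            bits[i] = (Math.abs(delta - Math.PI) < 1e-9) ? 0 : 1;
            previous = phase[i];
        }
        return bits;
    }
}
```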

Quadrature PSK is a similar scheme which uses four phases, 90 degrees apart, to
convey two bits at once~\cite{PhaseShift}. This doubles the transmission rate at
the same bandwidth, but reduces fault tolerance as with smaller phase changes it becomes
more likely the change in phase measured was a result of the \emph{Doppler
Effect} or external interference~\cite{Doppler}.

\subsection{Frequency}
In frequency shift keying two frequencies represent binary 0 and 1, and
a tone comprised of those two frequencies in predetermined timeslots is
transmitted~\cite{PhaseShift}. The scheme lends itself to transmission
rate optimisation, such as letting sequences of bits be represented by different frequencies. A
byte can take 256 distinct values, so a linear mapping of
values to frequencies allows 256 frequencies to represent all the bit patterns
of a byte. Alternatively, a smaller sequence of bits could be used, such as
2-bit blocks, which would require only 4 frequencies to represent all possible
data combinations, though 4 times as many sound bursts would need to be
sent per file. The more frequencies that are used, the higher the sample rate
will need to be and the more processing must be done per second to
determine what the received frequency is, as described in the decoding section.
On a mobile device the processing power may be limited, so a huge number of
calculations per second could overload the receiving end and data will be
dropped.
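A linear block-to-frequency mapping of this kind could be sketched as follows (the base frequency of 4000Hz and the 50Hz spacing are illustrative assumptions, not the project's chosen parameters):

```java
// Sketch: linear mapping from k-bit block values to frequencies.
public class FskMap {
    static final double BASE_HZ = 4000;  // assumed: above the speech band
    static final double STEP_HZ = 50;    // assumed spacing between symbols

    // Frequency for a block value in [0, 2^bitsPerBlock).
    static double frequencyFor(int value) {
        return BASE_HZ + value * STEP_HZ;
    }

    // Split a byte into blocks of the given size (2 bits -> 4 symbols,
    // 8 bits -> 256 symbols), most significant block first.
    static int[] blocks(int b, int bitsPerBlock) {
        int count = 8 / bitsPerBlock;
        int mask = (1 << bitsPerBlock) - 1;
        int[] out = new int[count];
        for (int i = 0; i < count; i++) {
            out[i] = (b >> (8 - (i + 1) * bitsPerBlock)) & mask;
        }
        return out;
    }
}
```

With 2-bit blocks, the byte \texttt{11011000} becomes four symbols (3, 1, 2, 0), so four sound bursts are sent where a 256-frequency scheme would send one.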

\subsection{Amplitude}
In amplitude shift keying each wave amplitude represents a sequence of bits, in
the same way that each frequency could represent a bit sequence in FSK. The
problem with using this for sound is that the receiver can be tricked into
thinking it is listening to a different amplitude when the microphone moves. Even if
the microphone were stationary, if it is further away than expected the
amplitudes may differ from what is on record. This can be mitigated by looking
at the change in amplitudes rather than the absolute values, but it remains a
less reliable system of transmission. Amplitude shifting is the least applicable
to audio transmission and is usually used with light, which does not deteriorate
as much with distance.

\subsection{Conclusion}
While the \emph{Doppler Effect} will likely have a minimal impact on the
received frequencies, amplitude shift keying is the scheme most susceptible to
movement of the transmission device, as there will be a distinct difference in
amplitude if the speaker moves~\cite{Doppler}. A closer microphone will give the
impression of a louder signal, and even if the receiver listens for a baseline
comparison at the start of the transmission, movement after that will cause data
errors. This makes it unsuitable for this project. I have decided to use
frequency shift keying as it offers a higher data rate than phase shift keying,
which becomes increasingly susceptible to errors beyond \emph{Quadrature PSK}
(4 phases). For example, even restricting the frequencies used to one in every
thousand audible frequencies still leaves 20 frequencies to use, five times more
than \emph{Q-PSK}.

\section{Existing schemes}

In the 1970s and 1980s audio cassette drives were used to store
information, such as computer programs for home computers. One standard
for cassette tapes was the \emph{Kansas City Standard} which used frequency
shifting at 300 baud. A binary 0 was represented by four cycles of a 1200Hz
sine wave, and a binary 1 was represented by eight cycles of a 2400Hz wave. Data
was sent in eleven-bit frames consisting of a start bit (0), eight bits of data
and two stop bits (11). It therefore had a transfer rate of 27 bytes
per second. (CITE) A higher baud version was developed, capable of 1200 baud.
(CITE) This was achieved by shortening the time needed for each binary element:
a 0 became one cycle of 1200Hz, a 1 became two cycles of 2400Hz, and the stop bit
was reduced to a single 1. This scheme was capable of 120 bytes per second and the data was
stored in 256 byte blocks, which were numbered so it was possible to rewind the
tape to a specific location in the event of a read error.
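The transfer rates quoted above follow directly from the frame sizes, as this small check shows:

```java
// Sketch: data bytes per second = baud rate / bits per frame,
// since each frame carries exactly one data byte.
public class KcsRates {
    static int bytesPerSecond(int baud, int bitsPerFrame) {
        return baud / bitsPerFrame;
    }

    public static void main(String[] args) {
        // 300 baud, 11-bit frame: start bit + 8 data bits + 2 stop bits.
        System.out.println(bytesPerSecond(300, 11));  // 27
        // 1200 baud, 10-bit frame: start bit + 8 data bits + 1 stop bit.
        System.out.println(bytesPerSecond(1200, 10)); // 120
    }
}
```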

Using audio to transmit data lends itself to radio use, and amateur
radio operators have used \emph{slow scan television} (SSTV) for over 50
years to send pictures using sound. It is a frequency modulation system in which every
colour brightness gets its own frequency; the red, green and blue components
are then sent separately for each pixel using those frequencies. Each element takes
30ms to send and the frequencies range over 1500Hz to 2300Hz. The scheme also offers some
error resilience by sending the odd lines of the picture first, then the even ones to
fill in the gaps: if a line is corrupted or missing, the ones either side can
approximate what was supposed to be there.

Both these schemes use \emph{frequency shift keying}, which I have decided to
use, so they give inspiration for how I can proceed with my project. The
\emph{Kansas City Standard} technique of sending data in cycles offers the
possibility to avoid error correction by having the receiving phone request one
or more chunks to be resent at the end of the transmission. Error correction may
not always be possible or accurate so resending a small part of the transmission
in this way is preferable.

\section{Decoding signals}

So far, I have explained how data can be converted to sound and I have decided
to use the frequency as the encoding variable. I will now show how to decode the
data at the receiving end using \emph{digital signal processing} (DSP). There
are two ways to obtain the frequency of a portion of the sound. The
first, and simplest, is to re-create the sine wave and count the number
of times the wave crosses the axis in a set time. Higher frequencies cross
the axis more times per second than lower frequencies. A problem arises if the
frequencies used are so low that the wave doesn't cross the axis within the time
sample. This is very unlikely to happen here: the lowest frequency I plan to use
would be 20Hz, so even if the duration of each tone is only 100ms, every
frequency will have at least two solutions to $f(t)=0$.
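A sketch of this zero-crossing estimator (names and the 8kHz test rate are my own choices for illustration):

```java
// Sketch of the zero-crossing method: count sign changes in a sampled
// tone; each full cycle crosses the axis twice, so
// frequency ~ crossings / (2 * duration).
public class ZeroCrossing {
    static double estimateHz(double[] samples, int sampleRate) {
        int crossings = 0;
        for (int n = 1; n < samples.length; n++) {
            if ((samples[n - 1] < 0) != (samples[n] < 0)) crossings++;
        }
        double seconds = (double) samples.length / sampleRate;
        return crossings / (2.0 * seconds);
    }

    public static void main(String[] args) {
        int fs = 8000;
        double[] tone = new double[fs]; // one second of a 440 Hz tone
        for (int n = 0; n < fs; n++) {
            tone[n] = Math.sin(2 * Math.PI * 440 * n / fs);
        }
        System.out.println(estimateHz(tone, fs)); // close to 440
    }
}
```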

The second, more involved, method would be to utilise the periodic nature of the
sine waves and analyse the data using a \emph{Fourier Transform}:
\begin{equation}
F(\omega) = \int a(t)\,e^{-2\pi i\omega t}\,dt
\end{equation}
for frequency $\omega$ (in Hz) and signal amplitude $a(t)$ at time $t$.
The transform takes a set of complex numbers and returns another, equally sized, set of
complex numbers. If you set the real part of the input to the audio sample, with
0 for all imaginary parts, running the transform will return an array of complex
numbers, each of which represent a range of frequencies called a
\emph{frequency bin}. The size of the range depends on the sample rate of the
input so it can be limited to 1Hz if necessary to assign a different bit
sequence to every frequency value. In that case, each small sample of the input
will only have large values in the bins corresponding to the frequencies present
and decoding the input becomes a simple matter of searching the array for the
most significant value or values. To speed up computation, the bins do not
need to be limited to one per frequency: to help with error correction
the assigned frequencies will be several hertz apart, so each bin could
represent 10--20Hz and the bin containing the largest value would still identify
the correct frequency.
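The bin-searching procedure above can be sketched with a direct DFT (a real implementation would use an FFT library; the class and method names here are illustrative):

```java
// Sketch: a direct DFT over a short real-valued sample, then finding
// the frequency bin with the largest magnitude.
public class DftPeak {
    // Magnitude of each frequency bin, positive frequencies only.
    static double[] magnitudes(double[] x) {
        int n = x.length;
        double[] mag = new double[n / 2];
        for (int k = 0; k < mag.length; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                // e^{-2*pi*i*k*t/n} split into real and imaginary parts.
                double angle = 2 * Math.PI * k * t / n;
                re += x[t] * Math.cos(angle);
                im -= x[t] * Math.sin(angle);
            }
            mag[k] = Math.hypot(re, im);
        }
        return mag;
    }

    // Decoding: the bin with the largest value identifies the frequency.
    static int peakBin(double[] mag) {
        int best = 0;
        for (int k = 1; k < mag.length; k++) {
            if (mag[k] > mag[best]) best = k;
        }
        return best;
    }
}
```

Feeding in 64 samples containing exactly 5 cycles of a sine wave, for instance, yields the peak in bin 5.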

Figure~\ref{fig:example_fourier} shows an example of using a Fourier Transform
to retrieve an encoded frequency.
\begin{figure}[t]
\includegraphics[width=\textwidth]{fourier1.png}
\caption{An example pulse with frequency 3. It is also multiplied by an
exponent so the graph converges to 0 in both positive and negative, this does
not alter the frequency.}
\label{fig:example_fourier}
\end{figure}
Consider the integrand, rewritten using \emph{Euler's formula} as $e^{-2\pi i\omega
t}f(t)$. If the frequency $\omega$ in the first factor matches the frequency of
the received signal $f(t)$ then the two will be very closely related, as
demonstrated in Figure~\ref{fig:real_imaginary}.
\begin{figure}[t]
\includegraphics[width=\textwidth]{fourier2.png}
\caption{Real and imaginary parts with respect to frequencies 3 and 5 when
multiplied with the function previously shown. The
correct frequency (green) results in an entirely positive integrand, whereas the
incorrect frequency (red) oscillates between positive and negative so will
integrate to near 0.}
\label{fig:real_imaginary}
\end{figure}
When the real part of one is negative, the other will be negative and when the
real part of one is positive, the other will be positive. This means when the frequencies match, the real part
of this integrand will almost always be positive, so the integration will return
a positive value. If the frequencies do not match, there is no
positive/negative link between the elements of the product so the real part of
the integrand can oscillate and will have some negative values which when
integrated will cancel out the positive peaks and return a smaller value. This
is why the desired frequency is in the bin with the largest value. This is
shown in Figure~\ref{fig:freq_bins}.
\begin{figure}[t]
\includegraphics[width=\textwidth]{fourier3.png}
\caption{The \emph{frequency bins} returned after integrating the real parts of
the transform. The positive/negative oscillation means the integration returned
0 for the incorrect frequency, but a relatively high value for the correct
frequency. ``Negative frequencies'' mirror their positive counterparts as they
do not really exist. The height of the peak represents the amplitude of the
received frequency.}
\label{fig:freq_bins}
\end{figure}

\section{Android}

I am using Android for this project as it is a popular mobile platform, which
will make the result usable by a large market~\cite{Android}. As Android is
`app' focused and open source, many more people will be able to write
programs which use my new OSI layers. Apple's iOS for the iPhone is not open
source, so accessing the phone's underlying hardware is more difficult.
Furthermore, Android uses Java, which I have experience in.

\subsection{Application life cycle}

In Android each application acts as a user in a Linux-based multi-user system.
Each application therefore has different security capabilities and access to
peripherals such as the speakers and microphone. Every process also runs its own
\emph{Virtual Machine} so one application does not impact the operation of
another under normal circumstances. Android will keep processes (and the
\emph{activities} they host) in memory as long as possible, typically only
killing them when memory is low and the space needs to be reused. There are four
states an Android process can be in, which are, in decreasing order of
importance:

\begin{itemize}
  \item{{\bf Foreground:} This is the activity the user is currently
  interacting with; it will likely be responsible for the content being
  displayed on the screen at that moment. It will only be terminated as a
  last resort, when no other process can be reclaimed.}
  \item{{\bf Visible:} This activity is in view but is not at the forefront.
  For example, if a dialog box has appeared to give the user information then
  that is the foreground activity, and the former foreground activity becomes
  a \emph{visible} activity.}
  \item{{\bf Background:} This activity is not visible or being currently used
  so can be terminated if required. If it is terminated and the user navigates
  back to it then the activity will simply be restarted instead of loaded from
  memory.}
  \item{{\bf Empty:} This process does not control any application components,
  e.g. a class which was called from a foreground activity to perform a
  calculation. These are the first to be terminated for the sake of memory
  reallocation.}
\end{itemize}

These activities are primarily navigated between using \emph{intents}.
Intents are messages to the Android system which indicate something has changed.
They can be generated explicitly to start a new activity, or they can
serve as notifications of external events such as a button press or an RFID card
being scanned; an \emph{intent filter} declared in the manifest file determines
which new foreground activity is created.

Components which offer no user interface must be explicitly declared as
\emph{services} if they are important to the running of another activity.
Services carry out long-running work such as playing music or complex
calculations. In the Dolphin system, both the Fourier Transform for decoding
data and the playback of the encoded sound would be defined as services as they
are either long-running or computationally complex. Services, like activities,
are launched using intents.

Activities, intent filters and services all need to be declared in the manifest
\texttt{xml} file. The manifest is a brief summary of the requirements and
capabilities of an application, for information and security reasons. The
manifest lists the permissions required for the application to run.
Permissions act as a security feature, alerting the user that the application is
capable of accessing some of the mobile device's functionality that might not be
expected, e.g. writing to the external storage or accessing the internet. In
addition to these, the manifest also declares the hardware the application
requires, such as the camera or an RFID scanner, so devices without these
features are unable to install the app. Additional libraries beyond the standard
framework API are also declared here.

\subsection{Dolphin}

Dolphin is a library-like module intended to be dropped into other software
rather than a standalone product. It is meant to be used inside an app that
builds on this feature, such as an app that creates audio-based QR codes.
With that in mind, I will need to write an app that uses Dolphin for the
system and integration tests.

A developer using Dolphin should not have to be familiar with its
inner workings. Therefore, interaction with the program takes the form of a
black-box style input and output using sockets.
The file to be encoded is sent to a socket and retrieved by my classes. The
sound is then played by the speaker from the inside of the black box without any
further user interaction. When decoding, the socket input is the microphone, and
the decoded byte array should be returned to the user. Figure~\ref{fig:blackbox}
shows how an application created by the end user can utilise the socket
interface without needing to interact with the hardware at all.
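A minimal loopback sketch of this socket interface (the class name, port choice and payload are illustrative; Dolphin's real interface may differ): one side plays the role of the end-user app writing bytes, the other plays the role of the layers reading them.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Arrays;

public class SocketSketch {
    // Send a payload through a local socket pair and read it back,
    // standing in for the app-to-layers handover.
    static byte[] roundTrip(byte[] payload) throws IOException {
        try (ServerSocket server = new ServerSocket(0)) { // any free port
            try (Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
                client.getOutputStream().write(payload);
                client.shutdownOutput(); // signals end-of-stream to the reader
                try (Socket layers = server.accept()) {
                    return layers.getInputStream().readAllBytes();
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = {1, 2, 3, 4};
        System.out.println(Arrays.equals(data, roundTrip(data))); // true
    }
}
```

The caller never touches the speaker or microphone; in the real system the bytes read by the "layers" side would be encoded and played, rather than returned.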

\begin{figure}[t]
\includegraphics[width=\textwidth]{blackbox.png}
\caption{Other apps should be able to use my project like a black box and be
able to integrate this into other projects or applications like an extra
module.}
\label{fig:blackbox}
\end{figure}

\bibliographystyle{plain}
\bibliography{citations}

\end{document}