\documentclass[12pt,a4paper]{report}
\usepackage{graphicx}
\usepackage{amsmath}
 
\parindent 0pt
\parskip 6pt

\begin{document}

\chapter{Preparation}

\graphicspath{{C:/Users/Matt/My Documents/!University
Work/CompSci/workspace/PartII/gr/}}

In this chapter I describe some of the fundamental concepts behind the project
and present a number of different ways the same result could be achieved, along
with their advantages and disadvantages.

\section{Java and Android}

I use Android for this project as it is a popular mobile platform, which will
make the result usable by a large market [find a reference about android market
share]. As Android is ``app'' focused and open source, many more people will be
able to write programs which use my new OSI layers. Apple's iOS for the iPhone
is not open source, so accessing the phone's underlying hardware is more
difficult. Furthermore, Android uses Java, in which I have a lot of experience.

\section{Sound as frequencies}

To use any transmission medium you need to be able to alter the signal sent so
that it conveys different information. In wires this can be done by alternating
between putting charge on the line and no charge, representing binary data. An
extension of this is to vary the amount of charge at each pulse to represent
more information per clock cycle. Sound waves can be treated in the same way,
by varying one of the characteristics that makes a sound unique.

In general, a pure tone of a given frequency \emph{f} can be represented
as a sine wave using the formula:
\begin{equation}
\mathrm{tone}(t) = \sin(2\pi f t)
\end{equation}
Figure~\ref{fig:three_tones} shows how the tone can be changed by increasing
the frequency.
\begin{figure}[h]
\includegraphics[width=\textwidth]{combined.png}
\caption{Three frequencies representing A, C\# and E. Increasing the frequency
in the sine argument causes different tones to be created, which is
one way of representing different information in a sound.}
\label{fig:three_tones}
\end{figure}
Furthermore, it is possible to combine multiple tones and create \emph{chords},
which is done by adding the sine values together. For example, the three
frequencies shown in Figure~\ref{fig:three_tones} can be combined to create the
sine wave in Figure~\ref{fig:chord}.
\begin{figure}[h]
\includegraphics[width=\textwidth]{440-550-660.png}
\caption{A chord from combined frequencies 440Hz, 550Hz and 660Hz. Doing this
means you can represent three times as much information in the same timespan,
offering a different way to represent data using frequencies.}
\label{fig:chord}
\end{figure}
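The sampled-tone and chord constructions above can be sketched in Java, the project's implementation language. This is only an illustration: the class name, the sample rate and the one-second duration are assumptions, not parameters fixed by the project.

```java
/* Sketch of sampling pure tones and summing them into a chord.
 * Class name, sample rate and one-second duration are illustrative. */
public class ToneMixer {
    static final int SAMPLE_RATE = 44100; // samples per second (assumed)

    /** One second of a pure tone: sin(2*pi*f*t) sampled at times t = n / SAMPLE_RATE. */
    static double[] tone(double frequency) {
        double[] samples = new double[SAMPLE_RATE];
        for (int n = 0; n < samples.length; n++) {
            double t = (double) n / SAMPLE_RATE;
            samples[n] = Math.sin(2 * Math.PI * frequency * t);
        }
        return samples;
    }

    /** A chord is the pointwise sum of its component tones. */
    static double[] chord(double... frequencies) {
        double[] sum = new double[SAMPLE_RATE];
        for (double f : frequencies) {
            double[] t = tone(f);
            for (int n = 0; n < sum.length; n++) {
                sum[n] += t[n];
            }
        }
        return sum;
    }
}
```

Note that summing tones can push sample values outside $[-1,1]$, so a real encoder would rescale the sum before playback.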
Sound as a sine wave is a continuous function: it can be measured at every
instant in time, each instant giving a possibly different value, so an infinite
series of numbers would be needed to represent one sound. No matter how big a
computer's memory is it cannot store an infinite amount of data, therefore
when tones are represented electronically discrete samples must be taken at a
given \emph{sample rate}. The \emph{Nyquist--Shannon sampling theorem} states
that to ensure no data is lost the sample rate must be more than twice the
maximum frequency in use. This is because if you sample a sine wave at
$t=0,1,2,3,\ldots$ then there are infinitely many frequencies, and linear
combinations of frequencies, which could produce the same sampled values. An
example of this aliasing is shown in Figure~\ref{fig:aliasing}.
\begin{figure}[h]
\includegraphics[width=\textwidth]{aliasing.png}
\caption{Two different waves with the same sampled values, because the sample
rate was not high enough. If there were eight samples in this range there would
be no ambiguity.}
\label{fig:aliasing}
\end{figure}

The \emph{Nyquist--Shannon theorem} assumes sampling at regular time intervals.
\emph{Compressive sensing} \cite{CompSensing1} is an alternative approach which
samples at time intervals generated by a linear operator, or sometimes at
random intervals. The theory assumes that the frequencies in use are sparse, so
the chance of aliasing is reduced and you can afford to sample at lower rates.
To improve accuracy with this method you can also determine error
bars \cite{CompSensing2} to decide more precisely which frequency has been
detected.

The range of frequencies we can represent as sound waves is the range a
mobile phone speaker is capable of producing, which varies between handsets.
All phones are capable of playing tones at frequencies humans can hear,
typically 20-20,000 Hz. Some phones will be able to use frequencies much
higher than this, which will be useful if tests reveal that human speech
interferes with the transmission. The range of frequencies used will directly
affect the maximum transmission rate of the medium, based on the
\emph{Shannon--Hartley theorem}:
\begin{equation}
C = B \log_2(1+S/N)
\end{equation}
This states that the channel capacity is the product of the bandwidth and the
logarithm of one plus the signal-to-noise ratio. In other words, the more
frequencies available the higher the channel capacity, provided widening the
band does not reduce the signal-to-noise ratio by enough to offset the gain.
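As an illustration with purely hypothetical numbers (not measurements from this project): a band of 20,000 Hz with a signal-to-noise ratio of 1000 (about 30 dB) would give a capacity of
\begin{equation}
C = 20000 \times \log_2(1+1000) \approx 20000 \times 9.97 \approx 199\,\mathrm{kbit/s}.
\end{equation}
In practice an acoustic channel will achieve far less than this bound, but the scaling with bandwidth is what matters for choosing a frequency range.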

\section{Coding schemes}

Writing the sound as a function of time instead of frequency gives a more
general sine wave with three parameters:
\begin{equation}
f(t) = A\sin(\omega t + \phi)
\end{equation}
The three elements of this wave which can be modified to represent
information are the \emph{phase} ($\phi$), the \emph{angular frequency}
($\omega = 2\pi f$) and the \emph{amplitude} ($A$).

\subsection{Phase}
Altering the first of these, the phase, is known as
\emph{phase shift keying} (often called \emph{PSK}). The most common form of
PSK is \emph{binary PSK}, in which binary information is sent by giving the
sine argument a 180 degree phase shift to represent binary 0, or no phase
shift to represent binary 1. There are two different ways to measure this
phase difference:
\begin{itemize}
\item comparing the incoming signal to a predetermined reference tone and
observing which parts of the new tone are phase shifted; or
\item comparing the incoming tone to itself.
\end{itemize}
Comparing the tone to itself works by comparing the phase of one time
interval to the phase of the interval preceding it, so instead of the
comparison at time $t=5$ being against $t=5$ in a stock sine wave, it is
against $t=4$ in the data just received. For example, four 0s would be
represented by a tone shifting 180 degrees four times, and four 1s by no
shift, i.e.\ a continuous tone. Figure~\ref{fig:phase_shift} demonstrates
this. Creating the tone for each binary element depends on the preceding
binary element, but because the changes in phase are measured at the
receiving end a dropped bit won't affect the decoding of subsequent bits. The
downside of the first approach is the need to store a baseline tone for
comparison at the receiving end. This must either be arranged beforehand
or computed as needed, which requires extra processing time per connection.
\begin{figure}[h]
\includegraphics[width=\textwidth]{phaseshift.png}
\caption{An example of phase shift with respect to itself. The initialisation
wave gives the first baseline comparison. The second timeslot is 180 degrees
phase shifted compared to this so it represents binary 0. The green wave then
becomes the baseline. The red wave is not phase shifted compared to this so it
is a binary 1. Blue is phase shifted compared to red so it is a binary 0.}
\label{fig:phase_shift}
\end{figure}
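The self-referential binary PSK encoding can be sketched in Java as follows. Everything here is an illustrative assumption: the carrier frequency, the sample rate and the bit duration are not fixed by the project.

```java
/* Sketch of self-referential binary PSK: a 0 flips the carrier phase by
 * 180 degrees relative to the previous timeslot, a 1 keeps it.
 * Carrier frequency, sample rate and bit duration are illustrative. */
public class DiffBpsk {
    static final int SAMPLE_RATE = 8000;    // samples per second (assumed)
    static final int SAMPLES_PER_BIT = 800; // 100ms per bit (assumed)
    static final double CARRIER_HZ = 440;   // carrier tone (assumed)

    static double[] encode(int[] bits) {
        double[] out = new double[bits.length * SAMPLES_PER_BIT];
        double phase = 0; // running phase offset carried between timeslots
        for (int i = 0; i < bits.length; i++) {
            if (bits[i] == 0) {
                phase += Math.PI; // 180 degree shift marks a binary 0
            }
            for (int n = 0; n < SAMPLES_PER_BIT; n++) {
                double t = (double) (i * SAMPLES_PER_BIT + n) / SAMPLE_RATE;
                out[i * SAMPLES_PER_BIT + n] =
                        Math.sin(2 * Math.PI * CARRIER_HZ * t + phase);
            }
        }
        return out;
    }
}
```

Because the phase offset accumulates, each timeslot is compared only to its predecessor, matching the second measurement approach described above.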

Quadrature PSK is a similar scheme which uses four phases, 90 degrees apart, to
convey two bits at once. This doubles the transmission rate in the same
bandwidth, but reduces fault tolerance: with smaller phase changes it becomes
more likely that a measured change in phase was the result of the \emph{Doppler
effect} or of external interference.

\subsection{Frequency}
In frequency shift keying two frequencies represent binary 0 and 1, and a tone
at one of those frequencies is transmitted in each predetermined timeslot.
This lends itself to transmission rate optimisation, such as letting
sequences of bits be represented by different frequencies, though the more
frequencies are used the higher the sample rate needs to be, and the more
processing must be done per second to determine the received frequency, as
described in the decoding section. On a mobile device the processing power may
be limited, so a huge number of calculations per second could overload the
receiving end and cause data to be dropped.
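As a sketch of the bit-sequence idea, the following Java fragment maps each 2-bit block to its own tone. The frequency table and symbol duration are assumptions chosen for illustration, not values from the project.

```java
/* Sketch of frequency shift keying with 2-bit blocks: each block value is
 * assigned its own frequency. The table and timings are illustrative. */
public class FskEncoder {
    static final int SAMPLE_RATE = 8000;       // samples per second (assumed)
    static final int SAMPLES_PER_SYMBOL = 400; // 50ms per block (assumed)
    static final double[] FREQS = {600, 800, 1000, 1200}; // one tone per 2-bit value

    /** Encode one 2-bit block (a value in 0..3) as a tone burst. */
    static double[] symbol(int twoBits) {
        double f = FREQS[twoBits];
        double[] out = new double[SAMPLES_PER_SYMBOL];
        for (int n = 0; n < out.length; n++) {
            out[n] = Math.sin(2 * Math.PI * f * n / SAMPLE_RATE);
        }
        return out;
    }
}
```

Doubling the block length to 4 bits would need 16 entries in the table, which is where the trade-off against decoding effort appears.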

\subsection{Amplitude}
In amplitude shift keying each wave amplitude represents a sequence of bits, in
the same way that each frequency can represent a bit sequence in FSK. The
problem with using this for sound is that the receiver can be tricked into
thinking it is listening to a different amplitude simply by moving the
microphone. Even with a stationary microphone, if it is further away than
expected the amplitudes may differ from what is on record. This can be
mitigated by looking at the change in amplitude rather than the absolute
value, but it remains a less reliable system of transmission. Amplitude
shifting is the least applicable to audio transmissions; it is usually used
with light, which does not deteriorate as much with distance.

\subsection{Conclusion}
As the \emph{Doppler effect} will likely have a minimal impact on the
received frequencies, amplitude shift keying is the scheme most susceptible to
movement of the transmission device, as there will be a distinct difference in
amplitude if the speaker moves. A closer microphone will give the impression
of a louder signal, and even if the receiver listens for a baseline comparison
at the start of the transmission, movement after that will cause data errors.
This makes it unsuitable for this project. I have decided to use frequency, as
it offers a higher data rate than phase shift keying, which becomes
increasingly susceptible to errors beyond \emph{quadrature PSK} (4 phases).
For example, even restricting the frequencies used to one in every thousand
audible frequencies still leaves 20 frequencies to use, five times more than
\emph{Q-PSK}.

\section{Existing schemes}

In the 1970s and 1980s audio cassette drives were used to store
information, such as computer programs for home computers. One standard
used was the \emph{Kansas City Standard}, which used frequency shifting at
300 baud. A binary 0 was represented by four cycles of a 1200Hz
sine wave, and a binary 1 by eight cycles of a 2400Hz wave. Data
was sent in eleven-bit frames consisting of a start bit (0), eight bits of
data and two stop bits (11), giving a transfer rate of about 27 bytes
per second. (CITE) A higher baud version was later developed, capable of 1200
baud. (CITE) This was achieved by shortening the time needed for each binary
element: a 0 became one cycle of 1200Hz, a 1 two cycles of 2400Hz, and the
stop bit a single 1. This scheme was capable of 120 bytes per second, and the
data was stored in 256-byte blocks, which were numbered so it was possible to
rewind the tape to a specific location in the event of a read error.
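The eleven-bit framing can be sketched in Java. The least-significant-bit-first ordering below follows the usual serial convention and should be treated as an assumption here rather than a detail confirmed by the text.

```java
/* Sketch of the eleven-bit Kansas City Standard frame: one start bit (0),
 * eight data bits and two stop bits (1). Least-significant-bit-first
 * ordering is the usual serial convention, assumed here. */
public class KcsFrame {
    static int[] frame(int dataByte) {
        int[] bits = new int[11];
        bits[0] = 0; // start bit
        for (int i = 0; i < 8; i++) {
            bits[1 + i] = (dataByte >> i) & 1; // data bits, LSB first
        }
        bits[9] = 1;  // first stop bit
        bits[10] = 1; // second stop bit
        return bits;
    }
}
```

At 300 baud each frame of 11 bits takes 11/300 of a second, which is where the figure of roughly 27 bytes per second comes from.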

Using audio to transmit data lends itself to radio use, and amateur
radio operators have used \emph{slow-scan television} (SSTV) for over 50
years to send pictures using sound. It is a frequency modulation system in
which every colour brightness gets its own frequency, and the red, green and
blue components are then sent separately for each pixel using those
frequencies. Each element takes 30ms to send and the frequencies range over
1500Hz to 2300Hz. It also includes some error resilience: the odd lines of the
picture are sent first, then the even ones fill in the gaps, so if a line is
corrupted or missing the lines either side can approximate what was supposed
to be there.

Both these schemes use \emph{frequency shift keying}, which I have decided to
use, so they give inspiration for how I can proceed with my project. The
\emph{Kansas City Standard} technique of sending data in cycles offers the
possibility to avoid error correction by having the receiving phone request one
or more chunks to be resent at the end of the transmission. Error correction may
not always be possible or accurate so resending a small part of the transmission
in this way is preferable.

\section{Decoding signals}

So far I have explained how data can be converted to sound, and I have decided
to use frequency as the encoding variable. I will now show how to decode the
data at the receiving end using \emph{digital signal processing} (DSP). There
are two ways to obtain the frequency of a portion of the sound. The
first, and simplest, is to reconstruct the sine wave and count the number
of times it crosses the axis in a set time: higher frequencies cross
the axis more times per second than lower frequencies. The problem arises if
the frequencies used are so low that the wave doesn't cross the axis within
the time sample. This is very unlikely to happen here, as the lowest frequency
I plan to use is 20Hz, so even if the duration of each tone is only 100ms,
every frequency will have at least two solutions of $f(t)=0$.
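A minimal Java sketch of this zero-crossing method follows; it assumes a reasonably clean single tone in the window, and the class and method names are illustrative.

```java
/* Sketch of the zero-crossing method: count sign changes over a window of
 * samples; a sine wave crosses the axis twice per cycle, so
 * frequency is roughly crossings / 2 / window length in seconds. */
public class ZeroCrossing {
    static double estimateFrequency(double[] samples, int sampleRate) {
        int crossings = 0;
        for (int n = 1; n < samples.length; n++) {
            if ((samples[n - 1] < 0) != (samples[n] < 0)) {
                crossings++;
            }
        }
        double windowSeconds = (double) samples.length / sampleRate;
        return crossings / 2.0 / windowSeconds;
    }
}
```

The estimate degrades quickly once noise adds spurious sign changes, which motivates the Fourier approach below it is contrasted with in the text.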

The second, more involved, method would be to utilise the periodic nature of the
sine waves and analyse the data using a \emph{Fourier Transform}:
\begin{equation}
F(\omega) = \int f(t)\,e^{-2\pi i\omega t}\,dt
\end{equation}
for frequency $\omega$ and signal value $f(t)$ at time $t$.
The discrete transform takes a set of complex numbers and returns another,
equally sized, set of complex numbers. If you set the real parts of the input
to the audio samples, with 0 for all imaginary parts, running the transform
returns an array of complex numbers, each of which represents a range of
frequencies called a \emph{frequency bin}. The size of the range depends on
the sample rate and length of the input, so it can be narrowed to 1Hz if
necessary, allowing a different bit sequence to be assigned to every frequency
value. In that case, each small sample of the input will only have large
values in the bins corresponding to the frequencies present, and decoding the
input becomes a simple matter of searching the array for the most significant
value or values. To speed up computation the bins do not need to be limited to
one per frequency: to help with error correction the assigned frequencies will
be several hertz apart, so each bin could represent 10--20Hz and the bin
containing the largest value will still identify the correct frequency.
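The ``largest bin wins'' decoding step can be sketched with a direct discrete Fourier transform in Java. A real implementation would use a Fast Fourier Transform; this $O(n^2)$ version is only meant to illustrate the search.

```java
/* Sketch of picking the dominant frequency bin with a direct DFT.
 * A real implementation would use an FFT; this O(n^2) version only
 * illustrates why the dominant frequency lands in the largest bin. */
public class BinDecoder {
    /** Magnitude of DFT bin k for a real-valued input. */
    static double binMagnitude(double[] samples, int k) {
        double re = 0;
        double im = 0;
        for (int n = 0; n < samples.length; n++) {
            double angle = 2 * Math.PI * k * n / samples.length;
            re += samples[n] * Math.cos(angle); // real part of e^{-i*angle}
            im -= samples[n] * Math.sin(angle); // imaginary part of e^{-i*angle}
        }
        return Math.hypot(re, im);
    }

    /** Search bins 1..maxBin for the most significant value. */
    static int largestBin(double[] samples, int maxBin) {
        int best = 1;
        for (int k = 2; k <= maxBin; k++) {
            if (binMagnitude(samples, k) > binMagnitude(samples, best)) {
                best = k;
            }
        }
        return best;
    }
}
```

For a real-valued input the spectrum is symmetric, so only the bins up to half the sample length carry independent information.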

Figure~\ref{fig:example_fourier} shows an example of using a Fourier Transform
to retrieve an encoded frequency.
\begin{figure}
\includegraphics[width=\textwidth]{fourier1.png}
\caption{An example pulse with frequency 3}
\label{fig:example_fourier}
\end{figure}
Consider the integrand, modified using \emph{Euler's formula}, $e^{-2\pi i\omega
t}f(t)$. If the frequency $\omega$ in the first factor of the product matches
the frequency of the received signal $f(t)$ then the two will be very closely
related, as demonstrated in Figure~\ref{fig:real_imaginary}.
\begin{figure}
\includegraphics[width=\textwidth]{fourier2.png}
\caption{Real and imaginary parts with respect to frequencies 3 and 5}
\label{fig:real_imaginary}
\end{figure}
When the real part of one is negative, the other will be negative and when the
real part of one is positive, the other will be positive. This means when the frequencies match, the real part
of this integrand will almost always be positive, so the integration will return
a positive value. If the frequencies do not match, there is no
positive/negative link between the elements of the product so the real part of
the integrand can oscillate and will have some negative values which when
integrated will cancel out the positive peaks and return a smaller value. This
is why the desired frequency is in the bin with the largest value. This is shown
in Figure~\ref{fig:freq_bins}.
\begin{figure}
\includegraphics[width=\textwidth]{fourier3.png}
\caption{The frequency bins}
\label{fig:freq_bins}
\end{figure}

\section{University courses}

A number of courses offered at Cambridge contain useful information for
completing this project. The \emph{Digital Signal Processing} course covers
Fast Fourier Transforms, which are critical for decoding the sounds
back into bit patterns. The \emph{Computer Networking} course explains the OSI
layer model and how the layers interact with each other, which tells me I need
to offer a single socket for data coming into the data-link layer from the
network layer, written for another program to use. The \emph{Mobile and Sensor
Systems} course contains theory on wireless data transfer which will also
prove useful. In addition to these, I have undertaken several smaller Java
projects which give me a background in the language.

\section{Development life cycle}

The software development model I will use for this project is the
\emph{waterfall} model because the time constraints of this project do not allow
for multiple prototypes. It is also likely that, with the Java libraries
available, the number of different ways of completing the internal structure of
the project will be limited, so there would be little to gain from creating
multiple prototypes as they could just be functionally identical. That said, I
will try a few different coding schemes for test purposes, but they will
likely be variants of the same code; e.g.\ using 4 frequencies for 2-bit
blocks instead of 256 frequencies for 8-bit blocks will use the same method
for converting bits to sounds, just with different lengths.

\section{Testing strategy}

To test this project I will have a series of files to transmit and receive
under different conditions, and will compare the quality of the received
signal and the time to transmit for each of them. The main tests will include:

\begin{itemize}
\item Converting the data to sound and back to data on one device (no
microphone/speaker involvement)
\item Sending data between phones at all sample rates between a derived minimum
and maximum to determine optimal sample rate with the least processing
\item Sending data between phones at various sound segment lengths to determine
the optimal sound length with minimum data loss
\item Repeating all these tests for encoded sequence lengths of 1, 2, 4 and 8
bits to determine which is the most efficient
\end{itemize}

Further tests may prove necessary after the results of these tests are analysed,
which will be explained in the Evaluation section.

\section{Version control}

In a large project version control is important for tracking overall progress,
automating regular backups and rolling back to a functional state in the event
of a mistake or dead-end. I will use Subversion (also known as \emph{SVN}) as it
is compatible with the Windows operating system, it has free support already in
place through \emph{Google Code}, and there are plugins available for the
\emph{Eclipse IDE} which I am using for the Java programming, so it is easy to
maintain regular backups. It is also the system I have the most experience
with, as I used it for the Part IB group project.

\section{Requirements analysis}

The main requirements of the project as outlined in the proposal are

\bibliography{citations}

\end{document}