\documentclass[12pt,a4paper]{report}
\usepackage{graphicx}
\usepackage{amsmath}
 
\parindent 0pt
\parskip 6pt

\begin{document}

\chapter{Preparation}

\graphicspath{{C:/Users/Matt/My Documents/!University Work/CompSci/workspace/PartII/gr/}}

In this chapter I describe some of the fundamental concepts of the project
and present a number of different ways the same result could be achieved,
along with their advantages and disadvantages.

\section{Java and Android}

I use Android for this project because it is a popular mobile platform, which
will make the result usable by a large market [find a reference about Android
market share]. Android is also open source, so many more people will be able
to write programs which use my new OSI layers. Another reason is that Android
uses Java, in which I have a lot of experience.

\section{Sound as frequencies}

A monotone sound of a given frequency $f$ can be represented as a sine
wave of time $t$ using the formula:
\begin{equation}
\mathrm{tone}(t) = \sin(2\pi f t)
\end{equation}
Examples of this formula are shown in Figure~\ref{fig:three_tones}.
\begin{figure}[h]
\includegraphics[width=\textwidth]{combined.png}
\caption{Three frequencies representing A, C\# and E}
\label{fig:three_tones}
\end{figure}
It is possible to combine multiple tones to create chords, simply by adding
the sine values together. For example, the three frequencies shown in
Figure~\ref{fig:three_tones} can be combined to create the sine wave in
Figure~\ref{fig:chord}.
\begin{figure}[h]
\includegraphics[width=\textwidth]{440-550-660.png}
\caption{A chord from combined frequencies}
\label{fig:chord}
\end{figure}
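To make this concrete, the summing can be sketched in Java, the language of the project. This is a minimal illustration rather than project code; the 44.1 kHz sample rate and the 440/550/660 Hz frequencies are my assumed values:

```java
// Illustrative sketch: a chord is built by summing the sine samples of
// its component tones. Sample rate and frequencies are assumed values.
public class ChordDemo {
    static final int SAMPLE_RATE = 44100; // samples per second (assumed)

    // One sample of a pure tone of the given frequency at sample index n.
    static double tone(double frequency, int n) {
        return Math.sin(2 * Math.PI * frequency * n / SAMPLE_RATE);
    }

    // A chord is simply the sum of the individual tone samples.
    static double[] chord(double[] frequencies, int numSamples) {
        double[] samples = new double[numSamples];
        for (int n = 0; n < numSamples; n++) {
            for (double f : frequencies) {
                samples[n] += tone(f, n);
            }
        }
        return samples;
    }

    public static void main(String[] args) {
        // The A, C# and E of the figures: 440 Hz, 550 Hz and 660 Hz.
        double[] samples = chord(new double[]{440, 550, 660}, SAMPLE_RATE / 10);
        System.out.println("first sample = " + samples[0]);
    }
}
```

Note that the summed values can exceed $\pm 1$, so in practice the buffer would be rescaled before playback.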
This is a continuous function. When tones are represented electronically,
discrete samples must be taken at a specified \emph{sample rate}. This is
because the continuous function, which could be measured at infinitely many
points, has to be represented in a finite number of registers on the computer.
The \emph{Nyquist-Shannon Theorem} states that to ensure no data is lost the
sample rate needs to be at least twice the maximum frequency being created.
Otherwise the samples are ambiguous: if a sine wave is sampled at $x=0,1,2,3$
then there are infinitely many frequencies and linear combinations of
frequencies which could produce the same values. An example of this aliasing
is shown in Figure~\ref{fig:aliasing}.
\begin{figure}[h]
\includegraphics[width=\textwidth]{aliasing.png}
\caption{Two different waves with the same sampled values}
\label{fig:aliasing}
\end{figure}
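The ambiguity can be checked numerically. In this toy Java sketch (the 8 Hz sample rate is chosen purely for illustration), a wave of frequency $f$ and one of frequency $f$ plus the sample rate produce identical samples:

```java
// Illustrative sketch of aliasing: at sample rate r, frequencies f and
// f + r are indistinguishable, since sin(2*pi*(f+r)*n/r) differs from
// sin(2*pi*f*n/r) by a whole number of cycles at every sample index n.
public class AliasDemo {
    // One sample of frequency f taken at sample index n with the given rate.
    static double sample(double f, int n, int rate) {
        return Math.sin(2 * Math.PI * f * n / rate);
    }

    public static void main(String[] args) {
        int rate = 8;            // toy sample rate, assumed for illustration
        double f1 = 1.0;         // a 1 Hz wave
        double f2 = f1 + rate;   // a 9 Hz wave aliases to 1 Hz at this rate
        for (int n = 0; n < rate; n++) {
            // the two waves are indistinguishable at the sampled points
            System.out.printf("n=%d  %.6f  %.6f%n", n,
                    sample(f1, n, rate), sample(f2, n, rate));
        }
    }
}
```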

The range of frequencies we can produce as sound waves is limited by what a
mobile phone speaker is capable of producing, which varies depending on the
handset. All phones are capable of playing tones at frequencies humans can
hear, which is usually 20 Hz to 20,000 Hz. The range of frequencies used will
directly affect the maximum transmission rate of the medium, based on the
\emph{Shannon-Hartley Theorem}:
\begin{equation}
C = B \log_2(1+S/N)
\end{equation}
which states that the channel capacity $C$ is the bandwidth $B$ multiplied by
the logarithm of one plus the signal-to-noise ratio $S/N$.
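As an illustrative calculation (the bandwidth and signal-to-noise figures below are assumptions, not measurements of any real handset), the formula can be evaluated directly:

```java
// Illustrative sketch of the Shannon-Hartley theorem. The bandwidth and
// SNR values in main are assumed, not measured.
public class CapacityDemo {
    // C = B * log2(1 + S/N), in bits per second.
    static double capacity(double bandwidthHz, double snr) {
        return bandwidthHz * (Math.log(1 + snr) / Math.log(2));
    }

    public static void main(String[] args) {
        // Assumed figures: the full audible band with a 20 dB (100x) SNR.
        double c = capacity(20000 - 20, 100);
        System.out.printf("capacity = %.0f bits per second%n", c);
    }
}
```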

\section{Coding Schemes}

Consider a general sine wave as a function of time represented by
\begin{equation}
f(t) = A\sin(\omega t + \phi)
\end{equation}
There are three elements of this wave which can be modified to represent
information: the \emph{phase} ($\phi$), \emph{frequency} ($\omega$) and
\emph{amplitude} ($A$).

The most common form of \emph{phase shift keying} (PSK) is \emph{Binary PSK},
in which binary information is sent by giving the sine argument a 180 degree
phase shift to represent binary 0, or no phase shift to represent binary 1.
There are two different ways to measure this phase difference: comparing the
incoming signal to a predetermined tone and observing which parts of the new
tone are phase shifted; or comparing the incoming tone to itself. Comparing
the tone to itself works by comparing the phase of one time interval to the
phase of the interval preceding it, so instead of the signal at time $t=5$
being compared to $t=5$ in a stock sine wave, it is compared to $t=4$ in the
data just received. For example, four 0s would be represented by a tone
shifting 180 degrees four times, and four 1s by an unshifted, continuous
tone. Figure~\ref{fig:phase_shift} demonstrates this.
\begin{figure}[h]
\includegraphics[width=\textwidth]{phaseshift.png}
\caption{An example of phase shift with respect to itself}
\label{fig:phase_shift}
\end{figure}
Creating the tone for each binary element will depend on the preceding binary
element, but because it is the changes in phase that are measured at the
receiving end, a dropped bit won't affect the decoding of subsequent bits.
The downside of the first approach is the need to store a baseline tone for
comparison at the receiving end. This takes up memory and will either need to
be arranged beforehand or computed as needed, which requires extra processing
time.
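The self-referential scheme can be sketched in Java as follows. This is a simplification I have written for illustration: it works on clean per-symbol phase values, whereas a real decoder would have to estimate phase from noisy audio samples, and the decision threshold is my own choice:

```java
// Illustrative sketch of differential BPSK: a 0 flips the phase by 180
// degrees relative to the previous symbol, a 1 leaves it unchanged.
public class DifferentialBpsk {
    // Encode bits as one phase offset per symbol.
    static double[] phases(int[] bits) {
        double[] phase = new double[bits.length];
        double current = 0.0; // reference phase before the first symbol
        for (int i = 0; i < bits.length; i++) {
            if (bits[i] == 0) {
                current = (current + Math.PI) % (2 * Math.PI);
            }
            phase[i] = current;
        }
        return phase;
    }

    // Decode by comparing each symbol's phase to the one before it,
    // rather than to a stored baseline tone.
    static int[] decode(double[] phase) {
        int[] bits = new int[phase.length];
        double previous = 0.0;
        for (int i = 0; i < phase.length; i++) {
            double diff = Math.abs(phase[i] - previous);
            // a difference near 180 degrees marks a 0 (threshold assumed)
            bits[i] = (diff > Math.PI / 2 && diff < 3 * Math.PI / 2) ? 0 : 1;
            previous = phase[i];
        }
        return bits;
    }
}
```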

Quadrature PSK is a similar scheme which uses four phases, 90 degrees apart, to
convey two bits at once. This doubles the transmission rate at the same
bandwidth, but reduces fault tolerance as with smaller phase changes it becomes
more likely the change in phase you measure was a result of the \emph{Doppler
Effect} or external interference.

Frequency shifting is the simplest to visualise. Analogous to BPSK, you could
let two frequencies stand for binary 0 and 1, and transmit one of those two
frequencies in each predetermined timeslot. The scheme lends itself to
transmission rate optimisation, such as letting sequences of bits be
represented by different frequencies, though the more frequencies you use the
higher the sample rate will need to be, and therefore the more memory will be
required at both the encoding and decoding ends, which on a mobile device may
not always be available.
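For example, mapping two-bit sequences to four frequencies can be sketched as below; the 1000--1600 Hz frequency table is hypothetical, chosen here only to illustrate the lookup:

```java
// Illustrative sketch of multi-frequency FSK: each two-bit pair selects
// one tone per timeslot. The frequency table is a hypothetical choice.
public class FskDemo {
    static final double[] FREQS = {1000, 1200, 1400, 1600}; // Hz, assumed

    // Map a bit string (length a multiple of 2) to one frequency per slot.
    static double[] encode(String bits) {
        double[] tones = new double[bits.length() / 2];
        for (int i = 0; i < tones.length; i++) {
            int pair = Integer.parseInt(bits.substring(2 * i, 2 * i + 2), 2);
            tones[i] = FREQS[pair];
        }
        return tones;
    }
}
```

With four frequencies each timeslot carries two bits instead of one, which is the transmission rate optimisation described above.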

In amplitude shift keying each wave amplitude represents a sequence of bits,
in the same way that each frequency could represent a bit sequence in FSK.
The problem with using this for sound is that the receiver can be tricked
into thinking it is listening to a different amplitude by moving the
microphone. Even if the microphone were stationary, if it is further away
than expected the amplitudes may differ from what is on record. This can be
mitigated by looking at the change in amplitudes rather than the amplitude
values, but it is still a less reliable system of transmission. Amplitude
shifting is the least applicable to audio transmission and is usually used
with light, as light does not deteriorate as much with distance.

\section{Existing Schemes}

In the 1970s and 1980s audio cassette drives were used to store information,
such as computer programs for home computers. One standard used was the
Kansas City Standard, which used frequency shifting at 300 baud. A binary 0
was represented by four cycles of a 1200 Hz sine wave, and a binary 1 by
eight cycles of a 2400 Hz wave. Data was sent in eleven-bit frames consisting
of a start bit (0), eight bits of data and two stop bits (11). It therefore
had a transfer rate of just over 27 bytes per second. (CITE) A higher baud
version was developed, capable of 1200 baud. (CITE) This was achieved by
shortening the time needed for each binary element: a 0 became one cycle of
1200 Hz, a 1 two cycles of 2400 Hz, and the stop bit a single 1. This scheme
was capable of 120 bytes per second, and the data was stored in 256 byte
blocks, which were numbered so it was possible to rewind the tape to a
specific location in the event of a read error.

Using audio to transmit data lends itself to radio use, and amateur radio
operators have used \emph{slow scan television} (SSTV) for over 50 years to
send pictures using sound. It is a frequency modulation system in which every
colour brightness gets its own frequency, and the red, green and blue
components are then sent separately for each pixel using those frequencies.
Each bit takes 30 ms to send and ranges over 1500 Hz to 2300 Hz. It also
provides a degree of error tolerance by sending the odd lines of the picture
first, then the even ones to fill in the gaps. If a line is corrupted or
missing, the ones on either side can approximate what was supposed to be
there.

\section{Digital Signal Processing}

The more difficult aspect of dealing with sound is decoding the data at the
receiving end using \emph{digital signal processing} (DSP). Amplitude
analysis is the most straightforward, but it will be affected by changing
distances between the microphone and speaker, since the amplitude falls off
with distance, which is to be expected on a mobile phone. Frequency is
therefore a better choice, as the amplitude has no effect on it at all. There
are two obvious ways to obtain the frequency of a portion of the sound. The
first, and simplest, would be to re-create the sine wave and count the number
of times the wave crosses the axis in a set time. Higher frequencies will
cross the axis more times per second than lower frequencies. The problem
arises if the frequencies used are so low that the wave doesn't cross the
axis in the time sample. This is very unlikely to happen, as the lowest
frequency I plan to use would be 20 Hz, or 20 oscillations per second, so
even if the duration of each tone is only 100 ms, every frequency will have
at least two solutions for $f(t)=0$.
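The counting approach can be sketched as follows; this is an illustration rather than the project implementation, and it assumes a clean single-frequency input:

```java
// Illustrative sketch of zero-crossing frequency estimation: a sine wave
// crosses the axis twice per cycle, so frequency is approximately
// crossings / (2 * duration). Assumes a clean single-frequency input.
public class ZeroCrossing {
    static double estimate(double[] samples, int sampleRate) {
        int crossings = 0;
        for (int n = 1; n < samples.length; n++) {
            // a sign change between neighbouring samples is one crossing
            if ((samples[n - 1] >= 0) != (samples[n] >= 0)) crossings++;
        }
        double duration = (double) samples.length / sampleRate;
        return crossings / (2 * duration);
    }

    public static void main(String[] args) {
        int rate = 44100;
        double[] wave = new double[rate]; // one second of a 440 Hz tone
        for (int n = 0; n < wave.length; n++) {
            wave[n] = Math.sin(2 * Math.PI * 440 * n / rate);
        }
        System.out.println("estimated frequency = " + estimate(wave, rate));
    }
}
```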

The second, more involved, method would be to utilise the periodic nature of
the sine waves and analyse the data using a \emph{Fourier Transform}:
\begin{equation}
F(\omega) = \int f(t)e^{-2\pi i\omega t}\,dt
\end{equation}
for frequency $\omega$ and signal amplitude $f(t)$ at time $t$.
The transform takes a set of complex numbers and returns another, equally
sized, set of complex numbers. If you set the real parts of the input to the
audio samples, with 0 for all imaginary parts, running the transform will
return an array of complex numbers, each of which represents a range of
frequencies called a \emph{frequency bin}. The size of the range depends on
the sample rate and the length of the input, so it can be limited to 1 Hz if
necessary, allowing a different bit sequence to be assigned to every
frequency value. In that case, each small sample of the input will only have
large values in the bins corresponding to the frequencies present, and
decoding the input becomes a simple matter of searching the array for the
most significant value or values. To speed up computation the bins do not
need to be limited to one per frequency: since, to help with error
correction, the assigned frequencies will be several hertz apart, each bin
could represent 10-20 Hz and the bin containing the largest value will still
identify the frequency you are looking to detect.
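The bin search can be sketched with a naive discrete Fourier transform. A real implementation would use a fast Fourier transform; this $O(N^2)$ version, which I include only for illustration, makes the mechanics explicit:

```java
// Illustrative sketch: naive DFT magnitudes over a real-valued input
// (imaginary parts zero), followed by a search for the peak bin.
// A practical implementation would use an FFT instead.
public class DftPeak {
    // Magnitude of frequency bin k for the given samples.
    static double magnitude(double[] samples, int k) {
        double re = 0, im = 0;
        int N = samples.length;
        for (int n = 0; n < N; n++) {
            double angle = 2 * Math.PI * k * n / N;
            re += samples[n] * Math.cos(angle);
            im -= samples[n] * Math.sin(angle);
        }
        return Math.sqrt(re * re + im * im);
    }

    // The bin with the largest magnitude marks the dominant frequency;
    // bin k corresponds to frequency k * sampleRate / N.
    static int peakBin(double[] samples) {
        int best = 0;
        for (int k = 1; k <= samples.length / 2; k++) {
            if (magnitude(samples, k) > magnitude(samples, best)) best = k;
        }
        return best;
    }
}
```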

To understand how this works, take the following example:
\begin{figure}
\includegraphics[width=\textwidth]{fourier1.png}
\caption{An example pulse with frequency 3}
\label{fig:example_fourier}
\end{figure}
Consider the integrand, modified using \emph{Euler's formula}:
$e^{-2\pi i\omega t}f(t)$. If the frequency $\omega$ in the first part of the
product matches the frequency of the received signal $f(t)$ then they will be
very closely related, as demonstrated in Figure~\ref{fig:real_imaginary}.
\begin{figure}
\includegraphics[width=\textwidth]{fourier2.png}
\caption{Real and imaginary parts with respect to frequencies 3 and 5}
\label{fig:real_imaginary}
\end{figure}
When the real part of one is negative, the other will be negative, and when
the real part of one is positive, the other will be positive. This means that
when the frequencies match, the real part of the integrand will almost always
be positive, so the integration will return a large positive value. If the
frequencies do not match, there is no positive/negative link between the
elements of the product, so the real part of the integrand oscillates and has
negative values which, when integrated, cancel out the positive peaks and
return a smaller value. This is why the desired frequency is in the bin with
the largest value, as shown in Figure~\ref{fig:freq_bins}.
\begin{figure}
\includegraphics[width=\textwidth]{fourier3.png}
\caption{The frequency bins}
\label{fig:freq_bins}
\end{figure}

\end{document}