\documentclass[12pt,a4paper]{report}
\usepackage{graphicx}
\usepackage{amsmath}
 
\parindent 0pt
\parskip 6pt

\begin{document}

\section{Preparation}

\subsection{Java and Android}

I will be using the Android environment for this project because of its global
presence, which will make the result usable by a large market [find a reference about
android market share]. Android applications are written in Java, an object-oriented
programming language popular on mobile devices in part due to the \emph{Java
Virtual Machine}. Java is compiled to platform-independent bytecode, so the same
compiled code and the same standard class libraries run on every device that
provides a compatible runtime, regardless of the underlying hardware.

\subsection{Sound as frequencies}

A tone of a given frequency $f$\,Hz can be represented as a sine wave in time $t$ using
the formula:
\begin{equation}
\mathrm{tone}(t) = \sin(2\pi f t)
\end{equation}
Examples of this formula are shown in Figure 1.
(figure1)
It is also possible to combine multiple tones and create chords, which is done
by simply adding the sine values together. For example, the three
frequencies shown in Figure 1 can be combined to create the sine wave in Figure
2.
(figure2)
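Generating such samples is straightforward in Java. The following is a minimal sketch (the class and method names are illustrative, not from any library), in which a chord is simply the pointwise sum of its component sine waves:

```java
// Minimal sketch: sampling pure tones and a chord (their pointwise sum).
// ToneSynth, tone and chord are illustrative names, not a real API.
public class ToneSynth {
    // One second's worth of samples of a pure tone at freqHz.
    public static double[] tone(double freqHz, int sampleRate) {
        double[] samples = new double[sampleRate];
        for (int i = 0; i < samples.length; i++) {
            double t = (double) i / sampleRate;            // time in seconds
            samples[i] = Math.sin(2 * Math.PI * freqHz * t);
        }
        return samples;
    }

    // A chord is the sample-by-sample sum of its component tones.
    public static double[] chord(double[] freqsHz, int sampleRate) {
        double[] sum = new double[sampleRate];
        for (double f : freqsHz) {
            double[] component = tone(f, sampleRate);
            for (int i = 0; i < sum.length; i++) sum[i] += component[i];
        }
        return sum;
    }
}
```

Note that the summed wave can exceed the amplitude of any single tone, so before playback the result would need to be scaled into the output range.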
This is clearly a continuous function. When tones are represented electronically,
discrete samples must be taken, so a \emph{sample rate} is specified. To
ensure no information is lost, the sample rate needs to be at least twice the maximum
frequency being created. For example, if the highest frequency used is 4000\,Hz, the
sample rate should be at least 8000\,Hz. This is because if a sine wave is simply
sampled at $t=0,1,2,3,$ etc., there are infinitely many frequencies, and linear
combinations of frequencies, that could produce exactly the same sample values. An
example of this is shown in Figure 3 (``Aliasing'').
(figure3)
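This effect is easy to reproduce numerically. In the sketch below (illustrative names, not a library API), a 9000\,Hz tone sampled at 8000\,Hz yields exactly the same samples as a 1000\,Hz tone, because $\sin(2\pi \cdot 9000 \cdot n/8000) = \sin(2\pi n + 2\pi \cdot 1000 \cdot n/8000)$:

```java
// Aliasing demonstration: a tone above half the sample rate is
// indistinguishable, sample for sample, from a lower-frequency alias.
public class AliasDemo {
    // Take `count` samples of a sine wave at the given sample rate.
    public static double[] sample(double freqHz, int sampleRate, int count) {
        double[] out = new double[count];
        for (int n = 0; n < count; n++) {
            out[n] = Math.sin(2 * Math.PI * freqHz * n / sampleRate);
        }
        return out;
    }
}
```

Comparing `sample(9000, 8000, 64)` with `sample(1000, 8000, 64)` element by element shows they agree to within floating-point error.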

The range of frequencies that can be represented as sound waves is limited to the
frequencies a mobile phone speaker is capable of producing, which varies between
handsets. All phones can play tones across the range of human hearing, usually
quoted as 20 to 20,000\,Hz. The range of frequencies used directly affects the
maximum transmission rate of the medium, as given by the Shannon--Hartley
theorem:
\begin{equation}
C = B \log_2(1+S/N)
\end{equation}
The channel capacity $C$ is the bandwidth $B$ multiplied by the base-2 logarithm of
one plus the ratio of average signal power to average noise power (the
signal-to-noise ratio). Therefore, the larger the range of frequencies used, the
higher the bandwidth and the better the achievable transmission rate.
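As a worked illustration with assumed numbers (a 4000\,Hz band and a signal-to-noise ratio of 30\,dB, i.e.\ $S/N = 1000$):
\begin{equation}
C = 4000 \log_2(1 + 1000) \approx 4000 \times 9.97 \approx 39{,}900 \text{ bits per second}
\end{equation}
so even a modest audio band can in principle support a respectable data rate, although real-world noise and speaker limitations will reduce this considerably.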

\subsection{Coding Schemes}

Consider the equation
\begin{equation}
f(t) = A\sin(\omega t + \phi)
\end{equation}
There are three elements of a sine wave that can be modified to represent
information: the \emph{phase} ($\phi$), the \emph{frequency} ($\omega$) and the
\emph{amplitude} ($A$).

The most common form of phase shift keying is Binary PSK (BPSK), in which binary
information is sent by giving the sine wave a 180 degree phase shift to
represent binary 0, or no phase shift to represent binary 1. There are
two different ways to measure this phase difference. The receiver can either
compare the incoming signal to a predetermined reference tone and observe which
parts of the new tone are phase shifted, or compare the incoming tone to itself.
When comparing the tone to itself, four 0s are represented by a tone shifting
180 degrees four times in succession, and four 1s by a continuous, unshifted tone.
The tone generated for each binary element then depends on the preceding binary
element, but because it is the \emph{changes} in phase that are measured at the
receiving end, a dropped bit will not affect the decoding of subsequent bits. The
downside of the reference-tone approach is the need to store a baseline tone for
comparison.
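The self-referencing (differential) scheme just described can be sketched at the symbol level as follows. For simplicity the sketch tracks only the absolute phase of each symbol (0 or $\pi$) rather than generating audio; all names are illustrative:

```java
// Differential BPSK sketch: a 0 flips the phase by 180 degrees relative to
// the previous symbol, a 1 keeps it unchanged, so the receiver only needs
// to detect *changes* in phase, not absolute phase.
public class DiffBpsk {
    // Returns the absolute phase (0 or PI) of each transmitted symbol.
    public static double[] encode(int[] bits) {
        double[] phases = new double[bits.length];
        double phase = 0.0;                      // starting reference phase
        for (int i = 0; i < bits.length; i++) {
            if (bits[i] == 0) phase = (phase == 0.0) ? Math.PI : 0.0;
            phases[i] = phase;
        }
        return phases;
    }

    // Decoding compares each symbol's phase with the one before it.
    public static int[] decode(double[] phases) {
        int[] bits = new int[phases.length];
        double prev = 0.0;
        for (int i = 0; i < phases.length; i++) {
            bits[i] = (phases[i] == prev) ? 1 : 0;
            prev = phases[i];
        }
        return bits;
    }
}
```

Encoding a bit sequence and decoding it again recovers the original, and because only phase differences matter, decoding resynchronises after a corrupted symbol.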

Quadrature PSK is a similar scheme which uses four phases, 90 degrees apart, to
convey two bits at once. This doubles the transmission rate at the same
bandwidth, but reduces fault tolerance as with smaller phase changes it is
harder to tell what a partial signal was supposed to be.

Frequency shift keying (FSK) is the simplest to visualise. Analogous to BPSK, two
frequencies can stand for binary 0 and 1, and a tone at the appropriate frequency
is transmitted in each predetermined timeslot. The scheme then lends itself to
transmission rate optimisation, such as letting whole sequences of bits be
represented by different frequencies, though the more frequencies used, the higher
the sample rate will need to be, and therefore the more memory will be required at
both the encoding and decoding ends.
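A minimal sketch of the binary case, assuming two illustrative frequencies and a fixed number of samples per bit (none of these values are prescribed by any standard):

```java
// Binary FSK sketch: each bit occupies a fixed timeslot filled with a tone
// at one of two frequencies. F0, F1 and the slot length are assumptions
// chosen for illustration.
public class FskEncode {
    static final double F0 = 1200.0;   // Hz, tone for binary 0
    static final double F1 = 2400.0;   // Hz, tone for binary 1

    public static double[] encode(int[] bits, int sampleRate, int samplesPerBit) {
        double[] out = new double[bits.length * samplesPerBit];
        for (int b = 0; b < bits.length; b++) {
            double f = (bits[b] == 0) ? F0 : F1;
            for (int n = 0; n < samplesPerBit; n++) {
                out[b * samplesPerBit + n] =
                        Math.sin(2 * Math.PI * f * n / sampleRate);
            }
        }
        return out;
    }
}
```

Extending this to more than two frequencies, with each frequency standing for a multi-bit sequence, changes only the lookup from bit to frequency.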

Amplitude shift keying (ASK) is the least applicable to audio transmission and is
usually used with light, which does not deteriorate as much with distance. In ASK
each wave amplitude represents a sequence of bits, in the same way that each
frequency can represent a bit sequence in FSK. The problem with using amplitude
for sound is that the receiver can be tricked into hearing a different amplitude
simply by moving the microphone, and even a stationary microphone that is further
from the speaker than expected will record different amplitudes from those on
record. This can be mitigated by looking at \emph{changes} in amplitude rather than
absolute values, but it remains a less reliable system of transmission.

\subsection{Existing Schemes}

Audio cassette drives were once used to store information, such as programs for
early home computers, and they used audio to do so. One such standard was the
Kansas City Standard, which used frequency shift keying at approximately 300 baud.
A binary 0 was represented by four cycles of a 1200\,Hz sine wave, and a binary 1
by eight cycles of a 2400\,Hz wave. Data was sent in eleven-bit frames consisting
of a start bit (0), eight bits of data and two stop bits (11), giving a transfer
rate of just over 27 bytes per second. A higher-baud version was later developed,
capable of 1200 baud, achieved by shortening the time allotted to each binary
element: a 0 became one cycle of 1200\,Hz, a 1 became two cycles of 2400\,Hz, and
the stop bit was reduced to a single 1. This scheme was capable of 120 bytes per
second, and the data was stored in 256-byte numbered blocks so that, in the event
of a read error, the tape could be rewound to a specific location.
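The quoted transfer rate follows directly from the framing: each eleven-bit frame carries one byte of data, so at 300 bits per second
\begin{equation}
\frac{300}{11} \approx 27.3 \text{ frames, and therefore bytes, per second.}
\end{equation}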

Using audio to transmit data naturally lends itself to radio, and amateur radio
operators have been using a system called SSTV (slow-scan television) for over 50
years to send pictures using sound. It, too, is a frequency modulation system:
each level of colour brightness gets its own frequency, and the red, green and
blue components of each pixel are sent separately using those frequencies. Each
element takes around 30\,ms to send, using frequencies in the range 1500\,Hz to
2300\,Hz. The scheme also offered some resilience to errors by sending the odd
lines of the picture first and then the even ones to fill in the gaps: if a line
was corrupted or missing, the lines either side could be used to approximate what
should have been there.

\subsection{Digital Signal Processing}

The more difficult aspect of dealing with sound is decoding the data at the
receiving end using digital signal processing (DSP). The method used will depend
on which type of coding I implement. Amplitude analysis is possibly the simplest,
but it will be affected by changing distances between the microphone and speaker,
which is to be expected on a mobile phone. Frequency is therefore a better
choice, as the amplitude has no effect on it at all. There are two obvious ways
to obtain the frequency of a portion of the sound. The first, and simplest,
would be to re-create the sine wave and count the number of times it crosses the
axis in a set time: higher frequencies cross the axis more times per second than
lower ones. The only potential problem is if the frequencies used are so low that
the wave does not cross the axis within the time sample. This is very unlikely to
happen, as the lowest frequency I plan to use is 20\,Hz, or 20 oscillations per
second, so even if the duration of each tone is only 100\,ms, every tone will
complete at least two full cycles and therefore cross the axis at least four times.
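The zero-crossing approach can be sketched in a few lines of Java (illustrative names; this assumes the window contains a single dominant tone):

```java
// Zero-crossing frequency estimate: count sign changes over a window.
// Each full cycle of a sine wave crosses the axis twice, so
// frequency ~= crossings / (2 * windowSeconds).
public class ZeroCross {
    public static double estimateHz(double[] samples, int sampleRate) {
        int crossings = 0;
        for (int i = 1; i < samples.length; i++) {
            // A crossing is a sign change between adjacent samples.
            if ((samples[i - 1] < 0) != (samples[i] < 0)) crossings++;
        }
        double seconds = (double) samples.length / sampleRate;
        return crossings / (2.0 * seconds);
    }
}
```

For a clean 440\,Hz tone sampled at 8000\,Hz over one second this estimates the frequency to within a fraction of a hertz; noise near the axis would add spurious crossings, which is the method's main weakness.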

The second, more involved, method would be to utilise the periodic nature of the
sine waves and analyse the data using a Fourier transform:
\begin{equation}
F(\omega) = \int a(t)\cos(\omega t)\,dt
\end{equation}
for frequency $\omega$ and amplitude $a(t)$ at time $t$.
The transform takes a set of complex numbers and returns another, equally sized,
set of complex numbers. If the real parts of the input are set to the audio
samples, with 0 for all imaginary parts, running the transform returns an array of
complex numbers, each of which represents a range of frequencies called a
\emph{frequency bin}. The width of each bin is the sample rate divided by the
number of samples, so with a long enough window it can be made as narrow as 1\,Hz.
In that case, each small sample of the input will only have large values in the
bins corresponding to the frequencies actually present, and decoding the input
becomes a simple matter of searching the array for the most significant value or
values.
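The bin search can be sketched with a naive discrete Fourier transform. This is $O(N^2)$ and for illustration only; a real decoder would use a Fast Fourier Transform. With $N$ samples, bin $k$ corresponds to $k$ cycles per window, so a one-second window gives 1\,Hz bins:

```java
// Naive DFT magnitude search: returns the bin with the largest energy,
// checking bins up to the Nyquist limit (half the sample count).
public class DftPeak {
    public static int peakBin(double[] samples) {
        int n = samples.length;
        int best = 0;
        double bestMag = -1;
        for (int k = 0; k < n / 2; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double angle = 2 * Math.PI * k * t / n;
                re += samples[t] * Math.cos(angle);   // real part
                im -= samples[t] * Math.sin(angle);   // imaginary part
            }
            double mag = re * re + im * im;           // squared magnitude
            if (mag > bestMag) { bestMag = mag; best = k; }
        }
        return best;
    }
}
```

Feeding in 1000 samples of a tone completing 50 cycles in the window returns bin 50, as expected.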

More precisely, the cosine recovers the real parts and the sine the imaginary
parts: since $e^{2\pi i x} = \cos(2\pi x) + i\sin(2\pi x)$, the full transform
integrates the real and imaginary parts of $e^{-2\pi i f t}a(t)$. When the
integral is taken over a frequency that is not present in the signal, the
integrand oscillates and cancels itself out, giving approximately zero; when the
frequency is present, the contributions reinforce, producing peaks in the graph
at the frequencies actually present.

\end{document}