\section{Digital Signal Processing}
This section of the design chapter presents the theory needed to build the application described in the initial concept section. It covers the most fundamental parts of the theory, as well as the theory behind audio effects and filters, Fourier transformations and more.

\subsection{Analog and Digital Signals}
In short, sound can be described as rapid changes in air pressure. These pressure waves can have different levels of intensity (this is experienced as volume) and can vibrate back and forth at different rates (this is heard as a pitch or frequency). Human ears can interpret frequencies ranging from about 20 Hz to 20,000 Hz \cite{petter}.

A signal in the physical world is any quantity that varies over time or space and conveys information from a source to a receiver. Examples of signals that vary over time are sound, water ripples and light; signals that vary over space include images and text.

The term \textit{signal processing} refers to the science of analyzing signals. Depending on the context and field of study, signal processing can be described in different ways. For the purpose of this report, it is divided into two categories: analog signal processing and digital signal processing \cite{understandDSP}.

\textit{Analog signal processing} deals with waveforms that are continuous. A continuous signal can take on infinitely many possible values over any interval. Analog signals are the kind of signals found in the real world.

\textit{Digital signal processing}, on the other hand, works with discrete signals. Here the values are not continuous, but a finite sequence of numbers that a computer can use. Unlike the continuous analog signal, the discrete digital signal holds only finitely many values, meaning that converting an analog signal into a digital one cannot produce an exact representation. How precise the representation is depends on the sample rate, which the report covers in more detail in a later section. Typical fields that work with digital signals are speech and audio processing, as well as radar and sonar processing. The main focus of this project is digital signals.

For a computer to be able to store a signal, the signal must be finite and thereby discrete, since a computer cannot store an infinite amount of numbers. This is why analog continuous signals are converted to discrete signals when they are recorded, which solves the problem of having to store infinitely many numbers, see figure \ref{fig:ADC}.

This process is called \textit{sampling}. When sampling, the analog signal is converted into a digital signal. The digital signal is finite and therefore not as precise as the real continuous analog signal. The sampling rate determines the quality of the conversion: at regular intervals a converter measures the amplitude of the input signal and stores the measurement as binary information. The rate at which it does this is the sample rate, measured in hertz (Hz). The higher the sample rate, the more precise the conversion of the input signal, at the price of more storage usage and more processing power (see figure \ref{fig:ADC}) \cite{understandDSP}.

Sampling is important because it makes it possible to filter audio digitally, effectively replacing many heavy analog filters. This report focuses exclusively on digital signal processing and filtering.
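As a minimal sketch of sampling in code (the tone frequency, sample rates and duration below are arbitrary illustrative choices, not values used elsewhere in this project):

```python
import math

def sample_sine(freq_hz, sample_rate_hz, duration_s, amplitude=1.0):
    """Sample a continuous sine wave at discrete time steps.

    Returns a finite list of values: the discrete signal a computer
    can actually store.
    """
    n_samples = int(sample_rate_hz * duration_s)
    return [amplitude * math.sin(2 * math.pi * freq_hz * n / sample_rate_hz)
            for n in range(n_samples)]

# A 440 Hz tone sampled for 10 ms at two different rates:
coarse = sample_sine(440, 8_000, 0.01)   # 80 samples
fine = sample_sine(440, 44_100, 0.01)    # 441 samples

# The higher sample rate stores more values per second, giving a more
# precise representation at the cost of more storage.
print(len(coarse), len(fine))  # 80 441
```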


\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{images/TheoryDesign/signaltype}
\caption{Graph showing difference between a continuous and discrete signal \cite{adc}.}
\label{fig:ADC}
\end{figure}

\subsection{Sound Waves}
A wave is a disturbance in a medium, be it ocean waves, sound waves, light waves, seismic waves, etc. What characterizes a sound wave is that it is a longitudinal wave, meaning that the disturbance is a change in pressure along the direction of travel. If the change is periodic and the pressure variation reaches the eardrum, the ear can detect it. This is why we can hear sound in air and water, but not in space, where there is no medium to disturb. A sound wave can be illustrated using a sinusoidal wave (a sine or cosine wave).

The formula for a sinusoidal wave is as follows:

\begin{align}
y(t) = A\sin(2\pi f t + \varphi) = A\sin(\omega t + \varphi)
\end{align}


where $A$ is the amplitude, $f$ is the frequency, $\omega = 2\pi f$ is the angular frequency and $\varphi$ is the phase in radians.
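The formula translates directly into code; a minimal sketch evaluating $y(t)$ for illustrative values of $A$, $f$ and $\varphi$:

```python
import math

def sinusoid(t, amplitude, freq_hz, phase_rad=0.0):
    """Evaluate y(t) = A*sin(2*pi*f*t + phi) at time t (in seconds)."""
    return amplitude * math.sin(2 * math.pi * freq_hz * t + phase_rad)

# With A = 1, f = 1 Hz and phi = 0, the wave starts at zero and
# reaches its peak a quarter of a period later, at t = 0.25 s.
print(sinusoid(0.0, 1.0, 1.0))   # 0.0
print(sinusoid(0.25, 1.0, 1.0))  # 1.0
```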

The following subsections describe some of the fundamental concepts used when working with wave signals.

\subsection{Amplitude}
The \textit{amplitude} describes the amount of change in a signal over a single period, for example in volume, air pressure, strength or energy. There are several different definitions of the term, but in this report the peak amplitude is used unless otherwise noted. As the name suggests, it is measured at the peaks of the signal (the red dots in figure \ref{fig:sinusoid}).

The peak amplitude is a number that shows how much the signal deviates from its resting value (zero on the Y axis). In figure \ref{fig:sinusoid} this is read off the Y axis, where the amplitude is measured to be 1.

\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{images/TheoryDesign/sinusoid.png}
\caption{Graph showing amplitude and period. Red dots are peaks, the green dot is a trough \cite{basicTime}.}
\label{fig:sinusoid}
\end{figure}

\subsection{Period}
A signal that repeats a pattern is defined as being periodic. The \textit{period} is the repetition interval: if the signal is shifted by the period, the result is unchanged. Mathematically, this can be written as $x(t + T_0) = x(t)$ for all $t$.

The period of a wave can also be seen as the time it takes for a particle in the medium to make one complete vibrational cycle \cite{waveProp}. In other words, it is how long the signal takes to run through one full cycle before repeating itself. An easy way to read off the period is to measure from one amplitude peak to the next, as illustrated by the two red dots in figure \ref{fig:sinusoid}. In that figure one full cycle spans $2\pi$ on the horizontal axis, corresponding to an angular frequency of $\omega = 1$ rad/s.

The period is often denoted $T$ and can be found from the frequency:

$T = 1/f$, or, when working with the angular frequency $\omega$ in radians per second, $T = 2\pi/\omega$ ($2\pi$ radians is the same as 360 degrees: one full cycle).

For example: the period of a 5 Hz signal is 0.2 seconds, since 1/5 = 0.2.
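The relation is trivial to compute; a minimal sketch (the 440 Hz tone is just an illustrative value):

```python
def period_from_frequency(freq_hz):
    """T = 1/f: the duration in seconds of one full cycle."""
    return 1.0 / freq_hz

print(period_from_frequency(5))    # 0.2, matching the example above
print(period_from_frequency(440))  # ~0.00227 s for a 440 Hz tone
```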

\subsection{Frequency}
While the period is the time it takes to complete one cycle, \textit{frequency} is how often something happens. Period is a time quantity, whereas frequency is a rate quantity \cite{waveProp}.

The frequency describes how many times a wave repeats per unit of time (typically per second). It is measured in hertz (Hz) and can be seen as a way to describe how fast a wave's cycles are.

Looking at figure \ref{fig:frequency}, one can see that in one second the signal repeats itself four times, i.e. the frequency is 4 Hz.

The period in figure \ref{fig:frequency} is 0.25 seconds. The frequency can be found as $f = 1/T$, which here gives $f = 1/0.25 = 4$ Hz.

\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{images/TheoryDesign/frequency.png}
\caption{A simple sine wave with a frequency of 4 Hz \cite{basicTime}.}
\label{fig:frequency}
\end{figure}


\subsection{Wavelength}
The wavelength describes how far the wave has travelled after a single period (cycle) and is denoted by the Greek letter lambda, $\lambda$ (see figure \ref{fig:wavelength}). It is the distance between two ``hills'' in the wave, also known as crests. The wavelength is closely related to the frequency: the higher the frequency, i.e. the faster the wave repeats itself, the shorter the wavelength becomes, and vice versa.

For instance, the wavelength can be used to calculate the velocity of the wave. Since velocity is distance divided by time, it can be written as the wavelength divided by the period, or simply:

\begin{align}
v = \frac{\lambda}{T} = \lambda \cdot \frac{1}{T} = \lambda f
\end{align}
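As a quick numeric illustration of $v = \lambda f$, rearranged to $\lambda = v/f$, the sketch below uses the approximate speed of sound in air at room temperature (about 343 m/s, an assumed value for this example):

```python
SPEED_OF_SOUND_AIR = 343.0  # m/s, approximate value at about 20 degrees C

def wavelength(freq_hz, velocity=SPEED_OF_SOUND_AIR):
    """lambda = v / f: the distance travelled during one period."""
    return velocity / freq_hz

# Higher frequency means shorter wavelength, and vice versa:
print(round(wavelength(20), 2))     # 17.15 m, low end of human hearing
print(round(wavelength(20_000), 3)) # 0.017 m, high end
```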

\begin{figure}[htbp]
\centering
\includegraphics[width=1\textwidth]{images/TheoryDesign/wavelength}
\caption{Graph showing wavelength \cite{wavelength}.}
\label{fig:wavelength}
\end{figure}

\subsection{Phase - Starting Angle}
The \textit{phase} of a wave has two related meanings. One is tied to the wave's cycle and describes how far into the cycle the wave currently is, relative to the cycle's origin. The second, also called the wave offset, is the initial angle of the wave at its origin. It is important to note that the phase is only well defined when working with sinusoidal functions.

In the equation for a sinusoidal function, $\varphi$ describes the phase, i.e. the angle at which the function currently is:

$x(t) = A\cos(2\pi ft + \varphi)$

To change the phase at which the function starts, one simply adds the desired offset to, or subtracts it from, $\varphi$, so that the function starts at that phase. For example, to offset the wave by 90 degrees, we subtract half a $\pi$ from $\varphi$ like so:

$x(t) = A\cos(2\pi ft + \varphi - \frac{\pi}{2})$

This generates a sinusoidal wave that is shifted a quarter of a cycle relative to the original, as seen in figure \ref{fig:phase-offset}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{images/TheoryDesign/phaseoffset.png}
\caption{Visualisation of two sinusoidal waves, one starting at -90 degrees \cite{phaseoffset}.}
\label{fig:phase-offset}
\end{figure}
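The quarter-cycle offset can be checked numerically: $\cos(\theta - \pi/2)$ coincides with $\sin(\theta)$ for every $\theta$. A minimal sketch:

```python
import math

def shifted_cos(t, freq_hz, phase_rad):
    """x(t) = cos(2*pi*f*t + phi)."""
    return math.cos(2 * math.pi * freq_hz * t + phase_rad)

# A cosine offset by -90 degrees (-pi/2 radians) lands on the sine:
for t in [0.0, 0.1, 0.37, 0.5]:
    offset_wave = shifted_cos(t, 1.0, -math.pi / 2)
    reference = math.sin(2 * math.pi * 1.0 * t)
    assert abs(offset_wave - reference) < 1e-12
print("cos(theta - pi/2) matches sin(theta) at all tested points")
```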

\subsection{Adding Sinusoids Together}
In principle, any signal can be constructed by combining multiple sinusoids.

When adding two sinusoids together, the result depends on the amplitude, frequency and phase of the signals. The effect is easiest to see/hear if only one of the three varies between the two sinusoids \cite{addingFreq}.

If one adds two sinusoids that have the same frequency and phase, a third sinusoid is produced that has that same frequency: its amplitude is the sum of the originals, while the frequency and phase are unchanged. Figure \ref{fig:addingTwoSinusoids} illustrates this. This means that if, for instance, two tuning forks tuned to the same frequency are struck with different forces, the result we hear is the sum of the two and is in theory heard as the sound of just one tuning fork \cite{DSPPrimer}.

\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{images/TheoryDesign/addingTwoSinusoids}
\caption{Adding two sinusoids that have the same frequency. The sum is a third sinusoid of that same frequency \cite{DSPPrimer}.}
\label{fig:addingTwoSinusoids}
\end{figure}
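The figure's result can also be verified numerically: sampling two in-phase sinusoids of the same frequency and summing them point by point yields a sinusoid of that same frequency whose amplitude is the sum of the two. A minimal sketch with arbitrary illustrative values:

```python
import math

N = 1000           # samples over one second (illustrative)
F = 3.0            # shared frequency in Hz
A1, A2 = 1.0, 0.5  # two different amplitudes, same phase

summed, predicted = [], []
for n in range(N):
    t = n / N
    # Point-by-point sum of the two sinusoids:
    summed.append(A1 * math.sin(2 * math.pi * F * t)
                  + A2 * math.sin(2 * math.pi * F * t))
    # Predicted result: one sinusoid, same frequency, amplitude A1 + A2.
    predicted.append((A1 + A2) * math.sin(2 * math.pi * F * t))

assert all(abs(a - b) < 1e-12 for a, b in zip(summed, predicted))
print("the sum is a sinusoid with amplitude", A1 + A2)
```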

However, if the sinusoids have different frequencies, the new signal is no longer a sinusoid, because it does not repeat the simple pattern of the originals. Instead, one can observe a ``ripple effect'', as shown in figure \ref{fig:rippleFreq}: the two components alternately reinforce and oppose each other, so the envelope of the summed signal slowly swells and fades at a rate set by the difference between the two frequencies \cite{addingFreq}.

\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{images/TheoryDesign/rippleFreq}
\caption{Adding two sinusoids that do not have the same frequency produces a ``ripple effect'' \cite{addingFreq}.}
\label{fig:rippleFreq}
\end{figure}
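The ripple can be reproduced numerically. In the sketch below (the 10 Hz and 12 Hz components and the sample rate are arbitrary illustrative choices), the identity $\sin a + \sin b = 2\sin\frac{a+b}{2}\cos\frac{a-b}{2}$ explains the shape: a fast oscillation at the average frequency inside a slowly varying envelope.

```python
import math

F1, F2 = 10.0, 12.0  # two close but different frequencies (Hz)
RATE = 1000          # samples per second

# One second of the two sinusoids summed point by point:
signal = [math.sin(2 * math.pi * F1 * n / RATE)
          + math.sin(2 * math.pi * F2 * n / RATE)
          for n in range(RATE)]

# The fast oscillation runs at the 11 Hz average; the envelope swells
# and fades, producing audible beats at the 2 Hz difference frequency.
peak = max(abs(s) for s in signal)
print(1.9 < peak < 2.0)  # True: peaks approach the sum of the unit amplitudes
```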

If one adds two sinusoids with different phases, the result can be rather special. If the signals are ``in phase'', their peaks and troughs (high and low points) coincide and produce the same result as previously described. But if the two signals are ``out of phase'', i.e. their phases differ, their peaks and troughs oppose each other and partially, or at an offset of 180 degrees completely, cancel each other out. The phase of the resulting sinusoid is determined by the phases and amplitudes of the constituents \cite{addingFreq}.
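Complete cancellation is easy to verify in code: a sinusoid added to a copy of itself offset by 180 degrees ($\pi$ radians) sums to zero at every point. A minimal sketch:

```python
import math

RATE = 100  # samples per second (illustrative)

# A 5 Hz sinusoid plus the same sinusoid shifted by pi radians:
cancelled = [math.sin(2 * math.pi * 5 * n / RATE)
             + math.sin(2 * math.pi * 5 * n / RATE + math.pi)
             for n in range(RATE)]

# Every peak of one wave meets a trough of the other, so the sum is
# (numerically) zero at every sample.
assert all(abs(s) < 1e-12 for s in cancelled)
print("a sinusoid plus its 180-degree offset copy cancels out")
```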

Instead of adding only two sinusoids, one can add many to produce complex waveforms. Generally, when combining many different frequencies, the repetition rate of the result is set by the lowest frequency component, also known as the \textit{fundamental frequency} of that signal \cite{addingFreq}.