%!TEX root = ./HDG_report.tex
\section{Method}\label{method}

In this section we will describe how we programmed the demo and implemented the granular synthesis. We also describe which parameters of granular synthesis we chose to use, and our reasons for doing so. Finally, we include a brief description of the perceptual qualities of each chosen synthesis parameter. 

\subsection{Source sound analysis}\label{soundanalysis}

The first step in our system is determining which parts of the source file should be used as the actual grains in the synthesis. There are many strategies that could be used here, but for our purposes, we decided to use the part of the audio file with the least amount of spectral change. We chose to go with this approach because we wanted to guarantee a steady grain from any possible input sound. Also, by repeating one single grain, we are assured that the final sound's temporal structure is controlled uniquely by our sound descriptor. 

A useful technique here, also employed by Picard et al.\cite{picard2009retargetting}, is to use the sound's \emph{spectral flux}. The spectral flux is computed by taking the frequency spectrum over the entire sound using an FFT with a window size of 1024, and calculating the Euclidean distance between the normalized energy values of consecutive spectra at each step. The resulting number is a measure of how much the spectrum is changing over time, where a lower value indicates a small spectral change. The spectral flux does not take amplitude differences into account, as it only computes the distance between two normalized spectra.
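As an illustration, the analysis step could be sketched as follows in Python with NumPy. This is a hypothetical reconstruction, not our actual implementation: the non-overlapping windows and the \texttt{spectral\_flux} name are assumptions, and only the window size of 1024 and the normalized Euclidean distance come from the description above.

```python
import numpy as np

def spectral_flux(signal, window_size=1024):
    """Illustrative sketch: spectral flux over a mono signal.

    Each 1024-sample window is transformed with an FFT, the magnitude
    spectrum is normalized (so amplitude differences are ignored), and
    the flux is the Euclidean distance between consecutive normalized
    spectra. The grain is then taken where the flux is lowest.
    """
    n_windows = len(signal) // window_size
    flux = []
    prev = None
    for i in range(n_windows):
        frame = signal[i * window_size:(i + 1) * window_size]
        spectrum = np.abs(np.fft.rfft(frame))
        norm = np.linalg.norm(spectrum)
        if norm > 0:
            spectrum = spectrum / norm  # normalize: ignore amplitude
        if prev is not None:
            flux.append(np.linalg.norm(spectrum - prev))
        prev = spectrum
    return np.array(flux)
```

The most stable segment of the source sound would then correspond to the window pair with the smallest flux value, e.g.\ via \texttt{np.argmin}.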

% TODO: The check that makes sure that silence is not considered
% The result is a vector of values that denote a combined delta value of both amplitude spectral information.

\subsection{The Granular Synthesis Parameters}\label{granularparameters}

Because the implementation of an all-purpose granular synthesizer is a large and advanced programming task, we chose to use \emph{Partikkel} \cite{partikkel_website}, an opcode for the open source audio framework \emph{Csound}\cite{csound_website}, developed by {\O}yvind Brandtsegg et al.\ \cite{brantsegg2002particle}. In the words of the developers, ``partikkel was conceived after reading Curtis Roads' book \emph{Microsound}, and the goal was to create an opcode that was capable of all time-domain varieties of granular synthesis described in this book''. While exploring the different possibilities of granular synthesis, our point of departure was also the book \emph{Microsound}\cite{roads2004microsound}, making Partikkel a natural choice of synthesis tool for this experiment. We ended up choosing only four of the many techniques described by Roads. Our main reasons for choosing these parameters are that they are applicable to any source waveform\footnote{Some granular techniques, like formant synthesis, require that the source waveforms meet certain requirements.} and that they have a clearly perceivable effect on the result. The examples referred to in this section can be found in the examples folder on the project CD.

\subsubsection{Grain duration and grain rate}

An important parameter in granular synthesis is the grain rate, defined as the number of grains per second.
In our implementation, the length of each grain is relative to the grain rate -- higher grain rates result in shorter grain durations. The maximum grain rate (and thus, the minimum grain duration) is computed from the length of the source sound segment found in the source sound analysis previously described. Because we want our output sound to be somewhat continuous, we configured the duration of each grain to fill two grain rate periods. The amplitude of each grain is scaled by a Hamming window function, resulting in a crossfade at the points where the grains overlap. Figure \ref{grainfigure} illustrates this relationship.

\begin{figure}[htp]
	\centerline{\includegraphics[width=0.9\columnwidth]{images/grainfigure.png}}
	\caption{Grain duration in relation to grain rate}
	\label{grainfigure}
\end{figure}
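The overlap scheme in figure \ref{grainfigure} could be sketched as follows. This is an illustrative reconstruction in Python, not our actual Csound implementation; the function name and the looping of the source grain are assumptions, while the two-period grain duration and the Hamming window come from the description above.

```python
import numpy as np

def granulate(grain, grain_rate, out_duration, sr=44100):
    """Illustrative sketch: grains spaced one grain-rate period apart,
    each lasting two periods and shaped by a Hamming window, so
    consecutive grains crossfade at their overlap points."""
    period = int(sr / grain_rate)   # samples per grain-rate period
    grain_len = 2 * period          # each grain fills two periods
    window = np.hamming(grain_len)
    # loop/truncate the source grain to the required length, then window it
    src = np.resize(grain, grain_len) * window
    out = np.zeros(int(out_duration * sr) + grain_len)
    for start in range(0, int(out_duration * sr), period):
        out[start:start + grain_len] += src
    return out[:int(out_duration * sr)]
```

With this 50\% overlap, the Hamming windows sum to a nearly constant value, which is what produces the crossfade at the overlap points.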

It should be noted here that this crossfade does not guarantee that playing the grains in succession will result in a perceived smooth, continuous sound at lower grain rates. Usually, the result is more of a wobbly sound, where an attentive listener would clearly be able to discern that the sound is composed by repeating a single segment. This is because the periods inside the source sound are rarely compatible with the period produced by the grain duration, something that is easily heard. Roads proposes a method he dubs \emph{particle cloning}, an approach that involves manually shaping the start and end points of the source grain using an editor. As no algorithmic method is proposed, this could be a subject for closer research in the future.

Changing the grain rate during playback has large consequences for the sound, and is probably one of the most expressive parameters in granular synthesis. At lower grain rates, the source sound is clearly audible, but this changes as the grain rate increases. At around 18 grains per second, the grain rate itself will be perceived as a clear pitch, and as the rate increases, so will this pitch. As Farnell \cite{farnell2009designing} writes, ``As a very tight drum roll speeds up, it ceases to be a sequence of discrete beats at about 18Hz and becomes a pitch''. At this point, we move into the domain of \emph{microsound}, where parameters of the synthesis begin to have very different effects. In the folder of sound examples on the accompanying CD, the file \emph{grainrate.wav} demonstrates how it can sound when a grain rate increase is followed by a grain rate decrease. Figure \ref{grainratespectro} shows the spectrogram for this file. Notice the rapid pitch and spectral change that follows from increasing the grain rate.

\begin{figure}[htp]
	\centerline{\includegraphics[width=0.9\columnwidth]{images/grainrate_spectro.png}}
	\caption{Spectrogram for \emph{grainrate.wav}}
	\label{grainratespectro}
\end{figure}

\subsubsection{Source sound playback speed}\label{sspbs}

In our implementation, the playback speed ranges from the original pitch of the source sound down to three times slower. We chose to omit faster playback speeds, as this would result in playback outside of the boundaries set by our analysis step. A fix could be to let the grain duration depend on the playback speed, but this would not work in all cases, as a grain's duration cannot be altered after instantiation. So in the likely case where the playback speed is altered throughout the duration of a grain, the grain's duration would not change to accommodate this.

Changes in the playback speed are clearly perceived when the grain rate is lower than 18 Hz. Once the grain rate exceeds this threshold, the fundamental pitch will be determined by the grain frequency, while the source sound playback speed will affect how the harmonics of the fundamental are emphasized. In the examples folder, consider the files \textit{pitch\_low\_grainrate.wav} and \textit{pitch\_high\_grainrate.wav}. In both files, the playback speed of the source sound is gradually increased throughout the file's duration.

\begin{figure}[htp]
	\centerline{\includegraphics[width=0.9\columnwidth]{images/pitch_spectrograms.png}}
	\caption{Spectrograms for \emph{pitch\_low\_grainrate.wav} (top) and \emph{pitch\_high\_grainrate.wav} (bottom)}
	\label{pitchspectro}
\end{figure}

\subsubsection{Pulsaret width modulation}

Roads uses the term \emph{pulsaret} when discussing the insertion of silence within a grain rate period. The pulsaret width refers to the relation between silence and sound within a grain. At lower grain rates, increasing the silence portion results in an increased perception of stuttering. At grain rates above 18 Hz, changing the pulsaret width instead alters the spectrum of the resulting sound, as shown in figure \ref{pwmspectro}. In the examples folder, the files \emph{pwm\_low\_grainrate.wav} and \emph{pwm\_high\_grainrate.wav} illustrate one case of how this parameter influences the synthesized sound. Both files start with no inserted silence, and end with a large portion of inserted silence.

\begin{figure}[htp]
	\centerline{\includegraphics[width=0.9\columnwidth]{images/pwm_spectrograms.png}}
	\caption{Spectrograms for \emph{pwm\_low\_grainrate.wav} (top) and \emph{pwm\_high\_grainrate.wav} (bottom)}
	\label{pwmspectro}
\end{figure}
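One way to realize the pulsaret width could be sketched as follows. This is a hypothetical illustration, not our Partikkel-based implementation: it assumes the width is expressed as the kept fraction of the grain, with the remainder replaced by silence, and the function name is invented.

```python
import numpy as np

def apply_pulsaret_width(grain, width):
    """Illustrative sketch: keep only the first `width` fraction of the
    grain and replace the rest with silence. A width of 1.0 plays the
    full grain; smaller values insert a growing silent portion."""
    out = np.zeros_like(grain)
    keep = max(1, int(len(grain) * width))
    out[:keep] = grain[:keep]
    return out
```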

\subsubsection{Glisson}
A grain is called a \emph{glisson} when its pitch changes rapidly throughout its duration. In combination with a low grain rate, applying a strong glisson envelope to the grains will increasingly obfuscate the source sound into a somewhat noisier component. We only pitch the source sound downwards, for the same reasons as described in \ref{sspbs}.
At grain rates above 18 Hz, the resulting sound will fill a larger frequency range while still retaining the fundamental pitch provided by the grain rate. 

\begin{figure}[htp]
	\centerline{\includegraphics[width=0.9\columnwidth]{images/glisson_spectrograms.png}}
	\caption{Spectrograms for \emph{glisson\_low\_grainrate.wav} (top) and \emph{glisson\_high\_grainrate.wav} (bottom)}
	\label{glissonspectro}
\end{figure}

\subsection{Generating sound descriptors}

The synthesis parameters presented in the previous section make up the four dimensions of a sound descriptor. We focused on generating sounds rather than generating weapons. There are many strategies that could be employed for generating envelopes for these descriptors. In our case, we wanted to generate weapon sounds that would fit well with an old-school space shooter. To achieve this, we examined some common characteristics of abstract weapon sounds in games. Figures \ref{chargedshot}, \ref{iceshot} and \ref{stormtornado} show the spectrograms for three different weapon sounds from the classic 1994 SNES game \emph{Mega Man X}. Although very different in spectral content, these sounds have very similar envelopes, where most of the spectral change is concentrated in a short start segment, followed by a longer and more steady decay segment. The referenced sounds can be found inside the examples folder.

\begin{figure}[htp]
	\centerline{\includegraphics[width=0.9\columnwidth]{images/charged_shot_spectro.png}}
	\caption{The spectrogram for \emph{charged\_shot.wav}}
	\label{chargedshot}
\end{figure}
\begin{figure}[htp]
	\centerline{\includegraphics[width=0.9\columnwidth]{images/ice_gun_spectro.png}}
	\caption{The spectrogram of \emph{ice\_gun.wav}}
	\label{iceshot}
\end{figure}
\begin{figure}[htp]
	\centerline{\includegraphics[width=0.9\columnwidth]{images/storm_tornado_spectro.png}}
	\caption{The spectrogram of \emph{storm\_tornado.wav}}
	\label{stormtornado}
\end{figure}

For our envelope generation, we used a slight variation of midpoint displacement to accommodate this specific characteristic. In addition to displacing the y-value of the point, we also displace the x-value within the boundaries of the two neighboring points. The initial values are determined from a predefined set of templates, shown in figure \ref{envelopearchetypes}. Together with the template, we also pass a \emph{tendency value} that defines the most probable direction of the x-displacement. A value of 0.5, for example, gives an equal probability of displacement in either direction, while a value of 0.9 greatly favors displacements in the positive x-direction. Figure \ref{displacementexample} illustrates how the midpoint displacement algorithm could alter the \emph{climb} template. The charged shot sound shown in figure \ref{chargedshot}, for example, could be generated from the \emph{mountain} template and a low tendency value. One should note, however, that this algorithm is used in a very simple manner, as each of the four envelopes is generated independently and they do not inform each other in any way. 

\begin{figure}[htp]
	\centerline{\includegraphics[width=0.9\columnwidth]{images/envelope_archetypes.png}}
	\caption{The available envelope templates}
	\label{envelopearchetypes}
\end{figure}

\begin{figure}[htp]
	\centerline{\includegraphics[width=0.9\columnwidth]{images/climb_envelope_displacement.png}}
	\caption{An example of envelope generation with both x and y displacement}
	\label{displacementexample}
\end{figure}
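The midpoint displacement variant described above could be sketched as follows. This is an illustrative Python reconstruction, not our actual code: the function name, the recursion depth parameter, the \texttt{y\_scale} constant and the 0.9 safety factor on the x-displacement are all assumptions; only the x-displacement within the neighboring points and the tendency value come from the description above.

```python
import random

def midpoint_displace(points, depth, tendency=0.5, y_scale=0.5, rng=None):
    """Illustrative sketch: midpoint displacement where the midpoint's
    x-value is also displaced, staying within the bounds of its two
    neighbours. `tendency` is the probability of displacing x in the
    positive direction; `points` is a list of (x, y) template points."""
    rng = rng or random.Random()
    for _ in range(depth):
        new_points = [points[0]]
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            mx = (x0 + x1) / 2.0
            my = (y0 + y1) / 2.0 + rng.uniform(-1, 1) * y_scale * (x1 - x0)
            # displace x within the neighbours, biased by the tendency value
            if rng.random() < tendency:
                mx += rng.uniform(0, (x1 - mx) * 0.9)  # positive direction
            else:
                mx -= rng.uniform(0, (mx - x0) * 0.9)  # negative direction
            new_points.extend([(mx, my), (x1, y1)])
        points = new_points
    return points
```

Starting from a two-point \emph{climb}-like template, each pass doubles the number of segments while the x-values remain strictly ordered, so the result is still a valid envelope.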

\subsection{Mapping synthesis parameters to weapon behavior}


The generated sound descriptors serve two functions, as they dictate both the sound and the behavior of the weapon. As our main focus has been on generating descriptors for the most weapon-like sounds, a challenge arises when these values have to be reassigned to weapon behavior. As described in \ref{gamedesign}, our goal was to make the weapon's characteristics discernible by listening to the sound. This turned out to be a very difficult task, due to some of the points raised in section \ref{granularparameters}, where it is shown that each parameter has different effects depending on the value of the grain rate.


% TODO: How many dimensions can you actually clearly hear in a sound simultaneously? 1 to 1 mapping between parameters, no intelligent cross-parameter mapping. 

\subsubsection{The elements}

A very important attribute of a weapon in our design is its element. Since this is a discrete value that can be either fire, ice or lightning, we decided to map this attribute to the source sound, as this is the only truly discrete parameter in our synthesis. This also allows us to do a quick and informal test of how much control a sound designer would have over a system like this by only providing the source sound. Being able to provide sound as an input for creating new sounds is an interesting idea in itself. With the intention of making the elements sound very different, we decided to let each source sound focus on a different part of the spectrum. The fire sound is a dark and distorted sound, while the ice sound is based on high notes played with a Rhodes sound. The lightning sound is something in between -- a 5th chord played with a distinct and timbre-rich pad sound. The source sounds can be found in the sound examples folder. It should be noted that it took several iterations to come up with these sounds, as it was sometimes hard to predict exactly what the result would be. 

\subsubsection{Shot duration}

The duration of each shot is simply determined by the length of the sound. When the sound is finished playing, the particles are ``released'' and will keep their last values until they either hit the opponent or fly outside the boundaries of the screen. 

\subsubsection{Particle size}

We decided to map the size of the particle to the playback speed of the source sound. Every particle in the shot changes size simultaneously as the sound plays. This was chosen because it gave a stronger sensation of the shot being shaped as a whole in relation to the sound. The lower the playback speed of the source sound, the bigger the particle gets. We did this because we feel that larger objects tend to be associated with deeper-pitched sounds, e.g.\ trucks making deeper sounds than cars.

\subsubsection{Particle trajectory}

The particle's trajectory is calculated by taking the delta value of the grain rate envelope and applying rotation according to this value. The grain rate was chosen to mirror the trajectory because of its huge significance in the synthesis. The weapon trajectory was judged to be such an easily perceivable attribute that its mapped parameter would have to be an equally expressive one.
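The mapping from grain rate delta to rotation could be sketched as follows. This is an illustrative reconstruction; the function name and the \texttt{sensitivity} scaling constant are assumptions, while driving the rotation from the envelope's delta value comes from the description above.

```python
def trajectory_rotations(grain_rate_envelope, sensitivity=0.05):
    """Illustrative sketch: the particle's heading is rotated by the
    delta of the grain-rate envelope at each step, so a rising grain
    rate curves the shot one way and a falling rate the other."""
    angle = 0.0
    angles = [angle]
    for prev, cur in zip(grain_rate_envelope, grain_rate_envelope[1:]):
        angle += sensitivity * (cur - prev)  # rotate by the envelope delta
        angles.append(angle)
    return angles
```

A flat envelope thus yields a straight trajectory, while a steep envelope segment produces a sharp curve.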

\subsubsection{Initial spread}

The initial spread of a weapon is how much the rotation is offset at the instantiation of each particle. This parameter is controlled by the amount of glisson applied to the synthesis. The idea here is that a sound with a wider spectrum should indicate a wider spread. 

\subsubsection{Particle speed}

The particle speed is how fast a particle moves. The speed is also indicated by the brightness of the particle. This parameter is controlled by the sound's pulsaret width -- the more silence is inserted between each grain, the faster the particle moves. As with particle size, every particle in a shot has its speed altered simultaneously according to the sound.

\subsection{Classifying our method}

When looking at our approach in relation to other forms of Procedural Content Generation, it is useful to utilize the taxonomy proposed by Togelius et al.\cite{togelius2011searchbased}. This taxonomy asks five questions in relation to what the algorithm does and how it performs. 

\subsubsection{Online or offline?}

In our game, the generation is very much done online. Our simple sound descriptor generation is very fast, and since our source sound files are relatively small, the analysis step is also fast. However, for longer sounds this step could become a potential bottleneck, as the number of computed FFT windows is proportional to the file length. But as long as shorter files are used, this should not pose a problem. Playback of generated sounds is also immediate. However, this technique does hold potential as a useful offline tool for shaping sound effects. Instead of using predefined envelope templates, for instance, a user could input the initial values and let the midpoint displacement algorithm run. 

\subsubsection{Necessary or optional content?}

As we briefly touch upon in the game description~\ref{gamedesign}, our game tries to place the audio as necessary content in a game context by making it the only information initially available to the player. With the audio muted, our game would not make much sense. This is an interesting step that breaks with the current conventions in game audio.

\subsubsection{Random seeds or parameter vectors?}

There are several parameter vectors in our algorithm. Providing templates for the envelope generation, for example, allows control over which structures the algorithm will prefer. However, we find that the most promising parameter is the actual provided source sound. Being able to provide audio as an input to an algorithm that generates more audio could, for example, prove to be an interesting tool for game audio designers.

\subsubsection{Stochastic or deterministic generation?}

Within the realm of weapon sounds, the results are largely stochastic. Unless the random seed is also provided, feeding the algorithm the same set of parameters will yield different results each time.

\subsubsection{Constructive versus generate-and-test?}

Our case involves both approaches. The enemy is assigned a random weapon; no test or internal evaluation is done in relation to this. In the player's case, however, the player arguably ``tests'' each weapon before choosing the preferred one.