\documentclass[12pt,a4paper]{report}
\usepackage[T1]{fontenc}
\usepackage[latin1]{inputenc}
\usepackage[english]{babel}
\usepackage[normalem]{ulem}
\usepackage{verbatim}
\usepackage{color}
\usepackage[rgb,table]{xcolor}
\usepackage{fancyhdr}
\pagestyle{fancy}
\usepackage{multirow}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{epstopdf}
\usepackage[pdftex,pdfborder={0 0 0}]{hyperref}
\setcounter{secnumdepth}{3}
\setcounter{tocdepth}{3}
\usepackage{textcomp}
\usepackage{boxedminipage}
\usepackage{graphicx}
\usepackage{listings}
\usepackage[version=latest]{pgf}
\usepackage{xkeyval,calc,listings,tikz}

\usetikzlibrary{%
  arrows,%
  calc,%
  shapes.geometric,%
  shapes.symbols,%
  shapes.arrows,%
  shapes.multipart,%
  backgrounds,%
  chains,%
  topaths,%
  positioning,%
  scopes,%
  decorations.fractals,%
  decorations.shapes,%
  decorations.text,%
  decorations.pathmorphing,%
  decorations.pathreplacing,%
  decorations.footprints,%
  decorations.markings,%
  shadows}

\usepackage{xxcolor}
\usepackage{pifont}
\usepackage{makeidx}

\makeatletter

\DeclareRobustCommand{\greektext}{%
  \fontencoding{LGR}\selectfont\def\encodingdefault{LGR}}
\DeclareRobustCommand{\textgreek}[1]{\leavevmode{\greektext #1}}
\DeclareFontEncoding{LGR}{}{}

\definecolor{gris1}{rgb}{0.60,0.60,0.60}
\definecolor{gris2}{rgb}{0.69,0.69,0.69}
\definecolor{gris3}{rgb}{0.75,0.75,0.75}
\definecolor{gris4}{rgb}{0.86,0.86,0.86}
\definecolor{listinggray}{gray}{0.9}
\definecolor{lbcolor}{rgb}{0.9,0.9,0.9}

\providecommand{\tabularnewline}{\\}
\usepackage{picins}

\title{Multimedia Toolbox Project}
\author{Iqbal ALMOU, Cyril CADORET, Khadija IMADOUEDDINE,\\ Jean-Fran\c{c}ois LASCOUTX, No\'{e} LAVALL\'{E}E, Yvan PATURANGE}

\pagestyle{fancy}
\renewcommand{\footrulewidth}{1pt}

\fancyhf{}
\fancyhf[LO]{S8 II}
\fancyhf[RO]{\includegraphics[scale=.25]{esilv.png}}

\fancyfoot[C]{\thepage}
\fancyfoot[LO]{Multimedia Toolbox}
\fancyfoot[RO]{Project Report}

\begin{document}

\input{front_page.tex}
\tableofcontents \clearpage
\listoffigures \clearpage

\part{Abstract}

\begin{large}
\indent We propose to create software that shall enable sounds and graphics manipulation.\\

The application running on the user's computer would be able to read, write and display graphically the PCM sound data or image.\\

In this report, we describe step by step the realization of this project, we explain the basic sound technology used in computers, its raw encoding format (we used PCM format in our application), its quantization techniques and sampling effects and finally its compression technique (lossless and lossy).\\

We also explain the same for basic image technology used in computers, compression techniques and the JPEG compression format.\\
\end{large}

\part{Introduction}

\indent Nowadays, sound and graphics exist everywhere and interest more and more developers and users.\\

Generally, multimedia involves any combination of two or more of the following elements: text, image, sound, speech, video, and computer programs. These mediums are digitally controlled by computers. In order to get an idea across, one can use multimedia to convey their message. Multimedia enhances the information for better communication and understanding. It is used in contrast to media which only use traditional forms of printed or hand-produced material. \\

It is usually recorded and played, displayed or accessed by information content processing devices, such as computerized and electronic devices, but can also be part of a live performance. Multimedia also describes electronic media devices used to store and experience multimedia content. It is similar to traditional mixed media in fine art, but with a broader scope. The term "rich media" is synonymous for interactive multimedia.\\

We were asked for an application that handles sounds and images, having the ability to:
\begin{itemize}
\item 	Read and write raw PCM sound,
\item	Display it graphically with the possibility to select channels,
\item	Manipulate sound channels by mixing or concatenating many sounds,
\item	Filter sounds,
\item	Do FFT computation and signal processing,
\item	Do lossless and lossy compression,
\item	Read and write raw image,
\item 	Display it graphically,
\item	Modify it by cropping it for example,
\item	Compute its DCT and display it.
\end{itemize}


Briefly, we wanted to develop a complete multimedia toolbox with all the options listed above.

\part{Sound}
\chapter{General sound theory}

\section{Sound definition}
\indent Sound is a traveling wave which is an oscillation of pressure transmitted through a solid, liquid, or gas,
composed of frequencies within the range of hearing and of a level sufficiently strong to be heard, or the
sensation stimulated in organs of hearing by such vibrations.\\

For humans, hearing is normally limited to frequencies between about 12 Hz and 20 KHz. The upper limit
generally decreases with age. Other species have a different range of hearing.

\subsection{Physics of sound}

\indent The mechanical vibrations that can be interpreted as sound are able to travel through all forms of matter: gases,
liquids, solids, and plasmas. The matter that supports the sound is called the medium. Sound cannot travel through vacuum.\\

Sound is transmitted through gases, plasma, and liquids as longitudinal waves, also called compression waves.
Through solids, however, it can be transmitted as both longitudinal and transverse waves. Longitudinal sound waves
are waves of alternating pressure deviations from the equilibrium pressure, causing local regions of compression and
rarefaction, while transverse waves (in solids) are waves of alternating shear stress at right angle to the direction of propagation.\\

Matter in the medium is periodically displaced by a sound wave, and thus oscillates. The energy carried by the sound wave
converts back and forth between the potential energy of the extra compression (in case of longitudinal waves) or lateral
displacement strain (in case of transverse waves) of the matter and the kinetic energy of the oscillations of the medium.

\subsection{Sound wave properties and characteristics}

\indent Sound waves are characterized by the generic properties of waves, which are frequency, wavelength, period, amplitude, intensity, and direction.\\

Sound characteristics can depend on the type of sound waves (longitudinal versus transverse) as well as on the physical properties of the transmission medium.\\

Whenever the pitch of the sound wave is affected by some kind of change, the distance between the sound wave maxima also changes,
resulting in a change of frequency. When the loudness of a sound wave changes, so does the amount of compression in air of the wave that is travelling through it, which in turn can be defined as amplitude.


\section{The WAV format}

\indent WAV (or WAVE), is short for WAVeform audio format, also known as Audio for Windows, a Microsoft and IBM audio file format
standard for storing an audio bitstream on PCs. It is an application of the RIFF bitstream format method for storing data
in "chunks". It is the main format used on Windows systems for raw and typically uncompressed audio. The usual bitstream
encoding is the Pulse Code Modulation (PCM) format.\\

The Wave file format is Windows' native file format for storing digital audio data. It has become one of the most widely
supported digital audio file formats on the PC due to the popularity of Windows and the huge number of programs written
for the platform. Almost every modern program that can open and/or save digital audio supports this file format, making
it both extremely useful and a virtual requirement for software developers to understand.

\subsection{Description}

\indent WAVs are compatible with Linux, Windows and Macintosh operating systems. The format takes into account some differences
of the x86 CPU such as little-endian byte order. The RIFF format acts as a "wrapper" for various audio compression codecs.\\

Though a WAV file can hold compressed audio, the most common WAV format contains uncompressed audio in the linear pulse code
modulation (LPCM) format. The standard audio file format for CDs, for example, is LPCM-encoded, containing two channels of 44,100
samples per second, 16 bits per sample. Since LPCM uses an uncompressed storage method, which keeps all the samples of an audio
track, professional users or audio experts may use the WAV format for maximum audio quality. WAV audio can also be edited and
manipulated with relative ease using software. The WAV format supports compressed audio, using, on Windows, the Audio Compression
Manager. Any ACM codec can be used to compress a WAV file. The UI for Audio Compression Manager is accessible by default through Sound Recorder.\\

Beginning with Windows 2000, a WAVE\_FORMAT\_EXTENSIBLE header was defined which specifies multiple audio channel data along with speaker positions,
eliminates ambiguity regarding sample types and container sizes in the standard WAV format and supports defining custom extensions to the format chunk.

\subsection{Limitations}
\indent The WAV format is limited to files that are less than 4 GB in size, because of its use of a 32-bit unsigned integer to record
the file size header (some programs limit the file size to 2-4 GB). Although this is equivalent to about 6.6 hours of CD-quality audio (44.1 KHz, 16-bit stereo).

\subsection{Data Formats}
\indent Since the Wave file format is native to Windows and therefor x86 processors, all data values are stored in Little-Endian (least significant byte first) order.

\subsubsection{Strings}
\indent Wave files may contain strings of text for specifying cue point labels, notes, etc. Strings are stored in a format where the
first byte specifies the number of following ASCII text bytes in the string. The following bytes are of course the ASCII character
bytes that make up the text string.

\subsubsection{Wave File Chunks}
\indent Wave files use the standard RIFF structure which groups the files contents (sample format, digital audio samples, etc.) into
separate chunks, each containing it's own header and data bytes. The chunk header specifies the type and size of the chunk data
bytes. This organization method allows programs that do not use or recognize particular types of chunks to easily skip over them
and continue processing following known chunks. Certain types of chunks may contain sub-chunks.\\

One tricky thing about RIFF file chunks is that they must be word aligned. This means that their total size must be a multiple
of 2 bytes (ie. 2, 4, 6, 8, and so on). If a chunk contains an odd number of data bytes, causing it not to be word aligned, an
extra padding byte with a value of zero must follow the last data byte. This extra padding byte is not counted in the chunk size,
therefor a program must always word align a chunk headers size value in order to calculate the offset of the following chunk.

\subsubsection{Wave File Header - RIFF Type Chunk}
\indent Wave file headers follow the standard RIFF file format structure. The first 8 bytes in the file is a standard RIFF chunk header
which has a chunk ID of "RIFF" and a chunk size equal to the file size minus the 8 bytes used by the header. The first 4 data
bytes in the "RIFF" chunk determines the type of resource found in the RIFF chunk. Wave files always use "WAVE". After the RIFF
type comes all of the Wave file chunks that define the audio waveform.

\subsubsection{Wave File Chunks}
There are quite a few types of chunks defined for Wave files. Many Wave files contain only two of them, specifically the Format
Chunk and the Data Chunk. These are the two chunks needed to describe the format of the digital audio samples and the samples
themselves. Although it is not required by the official Wave file specification, it is good practice to place the Format Chunk
before the Data Chunk. Many programs expect the chunks to be stored in this order and it is more sensible when streaming digital
audio from a slow, linear source such as the Internet. If the format were to come after the data, all of the data and then the
format would have to be streamed before playback could start correctly.\\

All RIFF Chunks and therefore Wave Chunks are stored in the following format. Notice that even the above mentioned RIFF Type
Chunk conforms to this format.

\begin{center}
\begin{tabular}{|c|c|c|c|}
  \hline
  \rowcolor{gris1} \textbf{Offset} & \textbf{Size} & \textbf{Description} & \textbf{Value} \\
    \hline
  \rowcolor{gris2}  0$\times$00    & 4             & Chunk ID             & "RIFF" \\
    \hline
  \rowcolor{gris2}  0$\times$04    & 4             & Chunk Data Size      & (file size) - 8 \\
    \hline
  \rowcolor{gris3}  0$\times$08    & 4             & RIFF Type            & "WAVE" \\
    \hline
  \rowcolor{gris3}  0$\times$10    & \multicolumn{3}{> {\columncolor{gris3}}c|}{Wave chunks} \\
  \hline
\end{tabular}
\end{center}

\subsubsection{Format Chunk - "fmt"}
\indent The format chunk contains information about how the waveform data is stored and should be played back including the type of
compression used, number of channels, sample rate, bits per sample and other attributes.\\

\begin{center}
\begin{tabular}{|c|c|c|c|}
  \hline
  \rowcolor{gris1} \textbf{Offset} & \textbf{Size} & \textbf{Description} & \textbf{Value} \\
  \hline
  \rowcolor{gris2}  0$\times$00    & 4             & Chunk ID             & "fmt" \\
  \hline
  \rowcolor{gris2}  0$\times$04    & 4             & Chunk Data Size      & 16 \\
  \hline
  \rowcolor{gris3}  0$\times$08    & 2             & Compression code     & 1 - 65,535 \\
  \hline
  \rowcolor{gris3}  0$\times$0a    & 2             & Number of channels   & 1 - 65,535 \\
  \hline
  \rowcolor{gris3}  0$\times$0c    & 4             & Sample rate          & 1 - 0$\times$FFFFFFFF \\
  \hline
  \rowcolor{gris3}  0$\times$10    & 4             & Average bytes per second & 1 - 0$\times$FFFFFFFF \\
  \hline
  \rowcolor{gris3}  0$\times$14    & 2             & Block align          & 1 - 65,535 \\
  \hline
  \rowcolor{gris3}  0$\times$16    & 2             & Significant bits per sample & 2 - 65,535 \\
  \hline
  \rowcolor{gris3}  0$\times$18    & 2             & Extra format bytes   & 0 - 65,535 \\
  \hline
\end{tabular}
\end{center}

\subsubsection{Chunk ID and Data Size}
\indent The chunk ID is always "fmt" and the size is the size of the standard wave format data (16 bytes) plus the size of any extra
format bytes needed for the specific Wave format, if it does not contain uncompressed PCM data. Note the chunk ID string ends
with the space character (0$\times$20).

\subsubsection{Compression Code}
\indent The first word of format data specifies the type of compression used on the Wave data included in the Wave chunk found in this
"RIFF" chunk. The following is a list of the common compression codes used today.

\begin{center}
\begin{tabular}{|c|c|}
  \hline
  \rowcolor{gris3}\textbf{Code}      & \textbf{Description} \\
  \hline
  \rowcolor{gris4}0 (0$\times$0000)  & Unknown \\
  \hline
  \rowcolor{gris4}1 (0$\times$0001)  & PCM/uncompressed \\
  \hline
  \rowcolor{gris4}2 (0$\times$0002)  & Microsoft ADPCM \\
  \hline
  \rowcolor{gris4}6 (0$\times$0006)  & ITU G.711 a-law \\
  \hline
  \rowcolor{gris4}17 (0$\times$0011) & IMA ADPCM \\
  \hline
  \rowcolor{gris4}80 (0$\times$0050) & MPEG \\
  \hline
  \rowcolor{gris4}65,536 (0$\times$FFFF) & Experimental \\
  \hline
\end{tabular}
\end{center}

\subsubsection{Number of Channels}
\indent The number of channels specifies how many separate audio signals that are encoded in the wave data chunk. A value of 1 means
a mono signal, a value of 2 means a stereo signal, etc.

\subsubsection{Sample Rate}
\indent The number of sample slices per second. This value is unaffected by the number of channels.


\subsubsection{Average Bytes Per Second}
\indent This value indicates how many bytes of wave data must be streamed to a D/A converter per second in order to play the wave file.
This information is useful when determining if data can be streamed from the source fast enough to keep up with playback. This
value can be easily calculated with the formula:\\

$$\mbox{AvgBytesPerSec} = \mbox{SampleRate} \times \mbox{BlockAlign}$$


\subsubsection{Block Align}
\indent The number of bytes per sample slice. This value is not affected by the number of channels and can be calculated with the formula:\\

$$\mbox{BlockAlign} = \frac{\mbox{SignificantBitsPerSample} \times \mbox{NumChannels}}{8}$$


\subsubsection{Significant Bits Per Sample}
\indent This value specifies the number of bits used to define each sample. This value is usually 8, 16, 24 or 32. If the number of
bits is not byte aligned (a multiple of 8) then the number of bytes used per sample is rounded up to the nearest byte size
and the unused bytes are set to 0 and ignored.


\subsubsection{Extra Format Bytes}
\indent This value specifies how many additional format bytes follow. It does not exist if the compression code is 0 (uncompressed PCM file)
but may exist and have any value for other compression types depending on what compression information is need to decode the wave data.
If this value is not word aligned (a multiple of 2), padding should be added to the end of this data to word align it, but the value
should remain non-aligned.

\subsection{Data Chunk - "data"}
\indent The Wave Data Chunk contains the digital audio sample data which can be decoded using the format and compression method specified in the
Wave Format Chunk. If the Compression Code is 1 (uncompressed PCM), then the Wave Data contains raw sample values. This document explains
how an uncompressed PCM data is stored, but will not get into the many supported compression formats.

\begin{center}
\begin{tabular}{|c|c|c|c|p{5cm}|}
  \hline
  \rowcolor{gris1} \textbf{Offset} & \textbf{Length} & \textbf{Type}         & \textbf{Description} & \textbf{Value} \\
    \hline
  \rowcolor{gris2}  0$\times$00    & 4               & char\verb![!4\verb!]! & Chunk ID             & "data" (0$\times$64617461)\\
    \hline
  \rowcolor{gris2}  0$\times$04    & 4               & dword                 & Chunk size           & depends on sample length and compression\\
    \hline
  \rowcolor{gris3}  0$\times$08    & \multicolumn{4}{> {\columncolor{gris3}}c|}{Sample data} \\
  \hline
\end{tabular}
\end{center}

Multi-channel digital audio samples are stored as interlaced wave data which simply means that the audio samples of a multi-channel
(such as stereo and surround) wave file are stored by cycling through the audio samples for each channel before advancing to the next
sample time. This is done so that the audio files can be played or streamed before the entire file can be read. This is handy when
playing a large file from disk (that may not completely fit into memory) or streaming a file over the Internet.


\begin{center}
\begin{tabular}{|c|c|c|}
  \hline
  \rowcolor{gris3}\textbf{Time} & \textbf{Channel} & \textbf{Value}\\
  \hline
  \rowcolor{gris4}              & 1 (left)         & 0$\times$0053\\
  \rowcolor{gris4}\multirow{-2}{2cm}{0}
                                & 2 (right)        & 0$\times$0024\\

  \hline
  \rowcolor{gris4}              & 1 (left)         & 0$\times$0057\\
  \rowcolor{gris4}\multirow{-2}{2cm}{1}
                                & 2 (right)        & 0$\times$0029\\
  \hline
  \rowcolor{gris4}              & 1 (left)         & 0$\times$0063\\
  \rowcolor{gris4}\multirow{-2}{2cm}{2}
                                & 2 (right)        & 0$\times$003C\\
  \hline
\end{tabular}
\end{center}
One point about sample data that may cause some confusion is that when samples are represented with 8-bits, they are specified as unsigned
values. All other sample bit-sizes are specified as signed values. For example a 16-bit sample can range from -32,768 to +32,767 with a mid-point (silence) at~0.\\

As mentioned earlier, all RIFF chunks (including WAVE "data" chunks) must be word aligned. If the sample data uses an odd number of bytes,
a padding byte with a value of zero must be placed at the end of the sample data. The "data" chunk header's size should not include this byte.

\chapter{Sound mixing}

\indent In nature, two distinct sounds combine themselves as a simple addition
between their air pressure waves. In term of sound level, this combination
is not that simple (if the two sounds have the same sound level, the
result of the combination will not be twice louder) because of the
logarithmic scale of the unit.


\section{Combining sound levels}

\indent If there are two sound sources in a room - for example a radio producing
an average sound level of 62.0 dB, and a television producing a sound
level of 73.0 dB - then the total sound level is a logarithmic sum:
$$
\mbox{Combined sound level} = 10\times \log(\frac{1062}{10}+\frac{1073}{10})=73.3 \mbox{dB}\\
$$

Note: for two different sounds, the combined level cannot be more
than 3 dB above the higher of the two sound levels. However, if the
sounds are phase related there can be up to a 6dB increase in the
sound pressure level (SPL).\\


The formula for the sum of the sound pressure levels of n incoherent
radiating sources is:
$$
L_{\Sigma}=10\log_{10}\left(\displaystyle\frac{p_{1}^{2}+p_{2}^{2}+\dots+p_{n}^{2}}{p_{0}^{2}}\right) =
10\log_{10}\left(\left(\displaystyle\frac{p_{1}}{p_{0}}\right)^{2}+\left(\displaystyle\frac{p_{2}}{p_{0}}\right)^{2}+\dots+\left(\displaystyle\frac{p_{n}}{p_{0}}\right)^{2}\right)
$$

From the formula of the sound pressure level we find:
$$
\left(\displaystyle\frac{p_{1}}{p_{0}}\right)^{2} = 10^{\frac{L_{i}}{10}},~i=1,2,\dots,n
$$

This inserted in the formula for the sound pressure level to calculate
the sum level shows :\\
$$
L_{\Sigma} = 10 \log_{10}\left(10^{\frac{L_{1}}{10}}+10^{\frac{L_{2}}{10}}+\dots+10^{\frac{L_{n}}{10}}\right) \mbox{dB}
$$

$L_{\Sigma}$ Total level and L1, L2, ... Ln = sound pressure level
of the separate sources in dBSPL. Incoherent means: lacking cohesion,
connection, or harmony. It is not coherent.\\


Table for combining decibel levels Difference between the two levels
to be added in dB :\\
\\
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10\tabularnewline
\hline
\hline
3.0 & 2.5 & 2.1 & 1.8 & 1.5 & 1.2 & 1.0 & 0.8 & 0.5 & 0.5 & 0.4\tabularnewline
\hline
\end{tabular}\\


Amount to be added to the higher level in order to get the total level
in dB.\\
\\
Adding of equal loud sound sources :\\

\begin{figure}[h]
\centering
\includegraphics[scale=0.75]{Schallquellen}
\caption{Schallquellen plot}
\end{figure}

\begin{tabular}{|c|c|}
\hline
\multicolumn{2}{|c|}{Level increase $\Delta$L for n equal loud sound sources}\tabularnewline
\hline
Number of n equal loud sound sources & Level increase $\Delta$L in dB\tabularnewline
\hline
1 & 0\tabularnewline
\hline
2 & 3.0\tabularnewline
\hline
3 & 4.8\tabularnewline
\hline
4 & 6.0\tabularnewline
\hline
5 & 7.0\tabularnewline
\hline
6 & 7.8\tabularnewline
\hline
7 & 8.5\tabularnewline
\hline
8 & 9.0\tabularnewline
\hline
9 & 9.5\tabularnewline
\hline
10 & 10.0\tabularnewline
\hline
12 & 10.8\tabularnewline
\hline
16 & 12.0\tabularnewline
\hline
20 & \tabularnewline
\hline
\end{tabular}\\


Formulas:
$$
\Delta L \mbox{in dB} = 10 \log_{10}(n) = \mbox{level difference}
$$
or
$$
n = 10^{\frac{\Delta L \mbox{in dB}}{10}} = \mbox{number of equal loud sound sources}
$$

$n = 2$ equally loud incoherent sound sources result in a higher level
of $10 \times \log 2= +3.01 \mbox{dB}$ compared to the case that only
one source is available. $n = 4$ equally loud incoherent sound sources
result in a higher level of $10\times \log 4= +6.02 \mbox{dB}$ compared
to the case that only one source is available.


\section{Combining sound waves}

\indent If you're listening to waves from two sources at the same time:
\begin{itemize}
\item a high pressure from one will cancel out a low pressure from the other
\item two high pressures will reinforce each other
\item two low pressures will reinforce each other
\end{itemize}

You can get the overall effect by adding the waves' pressures together
at each point in time. (You have to treat the normal air pressure
as zero, so that a higher pressure is positive and a lower pressure
is negative.)

\begin{figure}[h]
\centering
\includegraphics[scale=0.75]{adding_sine_waves.jpg}
\caption{Adding sine waves}
\par\end{figure}

The two red waves added together produce the blue wave. At the first
green line, both red waves have high pressure and reinforce each other
to give an extra high pressure in the blue wave. At the second green
line, the two red waves have opposite pressures and cancel each other
out.\\

The following two sounds have frequencies of 300 Hz and 500 Hz:

\begin{figure}[h]
\centering
\includegraphics[scale=0.75]{slice_of_a_300_Hz_sine_wave.jpg} 
\caption{Slice of a 300Hz sine wave}
\end{figure}

\begin{figure}[h]
\centering
\includegraphics[scale=0.75]{slice_of_a_500_Hz_sine_wave.jpg} 
\caption{Slice of a 500Hz sine wave}
\end{figure}

They can be added together:
\begin{figure}[h]
\centering
\includegraphics[scale=0.75]{adding_300_and_500_Hz.jpg}
\caption{Adding 300 and 500 Hz waves}
\end{figure}
to produce a complex wave:
\begin{figure}[h]
\centering
\includegraphics[scale=0.75]{resulting_wave.jpg} 
\caption{Resulting wave of adding 300 and 500 Hz waves}
\end{figure}

This is important because any complex wave can be treated as a combination of simple sine waves.\\

We usually don't care about the actual complex wave itself. We're
only interested in the frequencies and amplitudes of the simple waves
that it's made up of.


\section{Combining sound data}

\indent Sound files have some limitations compared to real sounds making their
combination not exact. 
\begin{itemize}
\item Fixed maximum air pressure value 
\item Undefined sound intensity 
\end{itemize}
Because the sound intensity is not defined in a sound data (depends
on the sensitivity of the recording device), there is no method to
perform a real sound combination (from real recorded sounds).\\

With two sounds at the same scale of sound intensity, the resulting
combination should reflect reality.\\

Three methods to combine sound data:
\begin{itemize}
\item Simple addition
\item Mean
\item Normalized addition
\end{itemize}

\subsection{Simple addition}

\indent This is the most simple method.\\

It does like the sound waves combination but because of the maximum
sound intensity value inherent to the computer files, the resulting
sound may saturate by reaching, at some points, the maximum sound
pressure value allowed by the sound file data (sample size).


\subsection{Mean}

\indent This is an improvement of the simple addition method to avoid saturation.\\

A mean is performed between the sound pressure values of each sound
to combine.\\

This method unlike the simple addition prevents local saturations
where the sound pressure values should reach the maximum value allowed
by the sound file data.\\

The problem is the resulting sound level which is twice above the
expected result.\\

With this method, adding a normal sound with an empty sound would
result in the original normal sound at a lower sound level.\\


\subsection{Normalized addition}

\indent This method is a combination of the two others.\\

First, a simple addition is performed. Then the resulting sound is
scaled down to fit the maximum sound pressure level allowed by the
sound file data.\\


\chapter{Sound concatenation}

\indent In mathematics, Concatenation is the process of joining small matrices to make bigger ones. In fact, you made your first matrix by concatenating its individual elements. The pair of square brackets, [], is the concatenation operator.\\

WAVE Concatenation means to put end to end two sounds, more precisely, the resulting object will be the result of the assembly of two.\\

\begin{figure}
\centering
\tikzstyle{block} = [rectangle, draw,text width=6em, text centered, rounded corners]
\tikzstyle{line} = [draw, -latex']
\tikzstyle{void} = [rectangle]
\tikzstyle{info} = [rectangle, font=\itshape]

    
\begin{tikzpicture}[node distance = 2cm, auto]
	\node[block] (concat) {Concatenation Sound};
	\node[block, above of=concat] (SoundRes) {DATA[][] SoundRes};
	\node[block, above left of=SoundRes] (s2) {DATA[][] Sound2};
	\node[block, above right of=SoundRes] (s1) {DATA[][] Sound1};

	\node[block, above right of=s1] (ds1) {Data size 1};
	\node[block, above of=ds1] (sf1) {soundFile1};

	\node[block, above left of=s2] (ds2) {Data size 2};
	\node[block, above of=ds2] (sf2) {soundFile2};

	\path [line] (sf1) -- (ds1);
	\path [line] (sf2) -- (ds2);
	\path [line] (ds1) -- (s1);
	\path [line] (ds2) -- (s2);
	\path [line] (s1) -- (SoundRes);
	\path [line] (s2) -- (SoundRes);
	\path [line] (SoundRes) -- (concat);

\end{tikzpicture}
\caption{Concatenation of 2 sounds}

\end{figure}

Such as Sound Mixing, the combination operation is divided into 3 main parts:

\begin{itemize}
\item 	Channels sound concatenation,
\item	Waves sound concatenation,
\item	Data sound concatenation,
\end{itemize}

The concatenation of sounds begins by initializing the properties of the new sound created by combining the two waves.
\section{Channels sound concatenation}

\indent The number of channel varies from wave file to another, and it is difficult to handle this difference, imagine that we have two sounds, the first is coded on 1 channel and the second is coded on 2 channels , the resulting sound  by concatenation must be coded on 2 channels, the merged file has to have the maximum of channels of the source files to keep all the information.\\

    MAX(Number Of Channels of sound1 ,Number Of Channels of sound2)

\section{Waves sound concatenation}
\indent To concatenate wave files, we need to know the total length of all files to define ChunkSize and read NumChannels.\\
First we need to calculate the total length and data length of all files and then specify the channels, SampleRate and BitsPerSample of the output file.The last thing is to start reading data that is stored after byte number 44 and append it to the merged file. SampleRate and BitsPerSample.

\section{Data sound concatenation}
\indent The concatenation is mainly executed by adding the sounds successively in the merger sound,the data size is the sum of sizes.
Data of the first sound is combined with data of the second sound, without modifying any one of them, the problem that we can meet is when the files are coded in different frequencies, in this case we have to modify the frequency of one of them to have the real concatenate sound file.\\

Sound concatenation is not very difficult, except the problem of different frequencies and number of channels .
\chapter{Sound signal generation}

\indent The aim of a signal generator is to create a new signal. This is usually an electric or radio signal.
Here we are creating a new sound signal.
The typical artificial signal types that we have made available to the user are: sinusoidal, square, triangular and positive sinusoidal.\\

The user can choose signal type, signal frequency, signal amplitude, duration and sampling frequency.\\

Such a generator is particularly useful for sound as it allows creating music.
For example the Concert A (La 4) is a 440 Hz sinusoidal wave.
This is due to the fact that a musical note is made up of a sinusoidal waves at a given frequency.
For example the Concert A (La 4) is a 440 Hz sinusoidal wave.\\

To go from one octave to the one above, you need to double the frequency. For example a La 3 is a 220 Hz sinusoidal wave and a La 5 is a 880 Hz sinusoidal wave.\\

When generating a sound at a given frequency, special care must be given to the sampling frequency used to store it in order not to distort it.
According to the Nyquist-Shannon sampling theorem, the sampling frequency must be at least twice the sound frequency.\\


\chapter{FFT and other sound transforms}

\section{Introduction}

\indent The Fourier transform (FT) is certainly one of the best known of the integral transforms and one of the most generally useful.
Since its introduction by Fourier in 1800s, it has found use in innumerable application and has led to the development of other transforms.\\

Today the FT is a fundamental tool in engineering science. Its importance has been enhanced by the development in the twentieth century of
generalizations extending the set of functions that can be Fourier transformed and by the development of efficient algorithms for computing the discrete version f the FT.\\

This chapter will discuss the definition of the Fourier transform and introduce some of the ways it can be used.

\section{The Fourier Transform}
\subsection{Definition}
\indent The Fourier transform converts spatial coordinates into frequencies. Any
curve or surface can be expressed as the sum of some number (perhaps
infinitely many) of sine and cosine curves.\\

The Fourier transform breaks up an image, or in one dimension a signal,
into a set of sine and cosine components. It is important to keep these
components separate, and so a vector of the form (cosine, sine) is used at
each point in the frequency domain image; that is, the values of the pixels
in the frequency domain image are two component vectors. A convenient
way to represent these is as complex numbers.\\

If $\phi(s)$ is an absolutely integrable function on $(-\infty, +\infty)
$\footnote{i.e $\displaystyle\int_{-\infty}^{+\infty}\left|\phi(s)\right|ds<\infty$}, then the direct Fourier transform of $\phi(s)$, $\mathcal{F}(\phi)$, and the Fourier inverse transform of $\phi(s)$, $\mathcal{F}^{-1}(\phi)$, are the functions given by:

\begin{equation} \label{fto}
\mathcal{F}\left[\phi\right]|_{x}=\displaystyle\int_{-\infty}^{+\infty}\phi(s)e^{-jxs}ds
\end{equation}
and
\begin{equation} \label{ifto}
\mathcal{F}^{-1}\left[\phi\right]|_{x}=\frac{1}{2\pi}\displaystyle\int_{-\infty}^{+\infty}\phi(s)e^{jxs}ds
\end{equation}

In most applications involving Fourier transforms, the functions of time, $t$, or position, $x$, are denoted
using lower case letters, $f$ for example; whereas the Fourier transforms of these functions are denoted using the
corresponding upper case letters, $F=\mathcal{F}\left[f\right]$ for example. The transformed functions can be
viewed as functions of angular frequency, $\omega$. Thus, it is standard practice to view a signal as a pair
of functions, $f(t)$ and $F(\omega)$, with $f(t)$ being the \emph{time domain representation of the signal}
and $F(\omega)$ being the \emph{frequency domain representation of the signal}.\\

There are definitions other than formulas~\ref{fto} and~\ref{ifto} that are often used to define
$\mathcal{F}(\phi)$ and $\mathcal{F}^{-1}(\phi)$. Some of the other formula pairs commonly used are:
$$\mathcal{F}\left[\phi\right]|_{x}=\displaystyle\int_{-\infty}^{+\infty}\phi(s)e^{-j2\pi xs}ds,~\mathcal{F}^{-1}\left[\phi\right]|_{x}=\frac{1}{2\pi}\displaystyle\int_{-\infty}^{+\infty}\phi(s)e^{j2\pi xs}ds$$
and
$$\mathcal{F}\left[\phi\right]|_{x}=\frac{1}{\sqrt{2\pi}}\displaystyle\int_{-\infty}^{+\infty}\phi(s)e^{-jxs}ds,~
\mathcal{F}^{-1}\left[\phi\right]|_{x}=\frac{1}{\sqrt{2\pi}}\displaystyle\int_{-\infty}^{+\infty}\phi(s)e^{jxs}ds$$

Equivalent analysis can be performed using the theory arising from any of these pairs; however, the resulting formulas and equations will depend on which pair is used.

\subsection{General Identities}
\indent This section will introduce some of the more general identities commonly used in manipulating Fourier transforms
and inverse transforms.
\subsubsection{Invertibility}
\indent The Fourier transform and the Fourier inverse transform, $\mathcal{F}$ and $\mathcal{F}^{-1}$, are operational
inverses:
$$
\psi=\mathcal{F}\left[\phi\right] \Leftrightarrow \mathcal{F}^{-1}\left[\psi\right]=\phi
$$
Equivalently,
$$
\mathcal{F}^{-1}\left[\mathcal{F}\left[f\right]\right]=f,~\mbox{and}~
\mathcal{F}\left[\mathcal{F}^{-1}\left[F\right]\right]=F
$$
\subsubsection{Symmetry of the Tranforms}
\indent From a computational point of view, the classical formulas for $\mathcal{F}\left[\phi\right]|_{x}$ and
$\mathcal{F}^{-1}\left[\phi\right]|_{x}$ are virtually the same, differing only by the sign in the exponential
and the factor of $\displaystyle\frac{1}{2\pi}$. Observing that
$$
\displaystyle\int_{-\infty}^{+\infty}\phi(s)e^{-j2\pi xs}ds~=~
2\pi \left[\frac{1}{2\pi}\displaystyle\int_{-\infty}^{+\infty}\phi(s)e^{j2\pi(-x)s}ds\right]~=~
2\pi \left[\frac{1}{2\pi}\displaystyle\int_{-\infty}^{+\infty}\phi(s)e^{j2\pi x(-s)}ds\right]
$$
leads to the symmetry identity (or the "near equivalence" identity)
$$
\mathcal{F}\left[\phi(s)\right]|_{x}~=~
2\pi~\mathcal{F}^{-1}\left[\phi(s)\right]|_{-x}~=~
2\pi~\mathcal{F}^{-1}\left[\phi(-s)\right]|_{x}
$$
\subsubsection{Conjugation of Transforms}
\indent It can be observed that
$$
\left(\displaystyle\int_{-\infty}^{+\infty}f(t)e^{-j\omega t}dt\right)^{\ast}~=~
\displaystyle\int_{-\infty}^{+\infty}f^{\ast}(t)e^{j\omega t}dt
$$\footnote{$f^{\ast}$ denotes the complex conjugate of $f$}
Thus,
$$
\mathcal{F}\left[f\right]^{\ast}~=~2\pi \mathcal{F}^{-1}\left[f^{\ast}\right]
$$
and
$$
\mathcal{F}^{-1}\left[f\right]^{\ast}~=~\displaystyle\frac{1}{2\pi} \mathcal{F}\left[f^{\ast}\right]
$$
\subsubsection{Linearity}
\indent Using the linearity of the integral leads to the linearity of the transform:
$$
\mathcal{F}\left[\alpha f+\beta g\right]~=~
\alpha \mathcal{F}\left[f\right] + \beta\mathcal{F}\left[g\right]
$$
and
$$
\mathcal{F}^{-1}\left[\alpha F+\beta G\right]~=~
\alpha \mathcal{F}^{-1}\left[F\right] + \beta\mathcal{F}^{-1}\left[G\right]
$$
with $(\alpha,\beta)\in\mathbb{R}^{2}$.
\subsubsection{Scaling}
\indent With $\alpha\in\mathbb{R}^{\ast}$, using the substitution $\tau=\alpha t$ leads to
$$
\displaystyle\int_{-\infty}^{+\infty}f(\alpha t)e^{-j\omega t}dt~=~
\displaystyle\frac{1}{\left|\alpha\right|}\displaystyle\int_{-\infty}^{+\infty}f(\tau)e^{-j\frac{\tau\omega}{\alpha}}d\tau
$$
Letting $F(\omega)=\mathcal{F}\left[f(t)\right]|_{\omega}$, this can be rewritten as
$$
\mathcal{F}\left[f(\alpha t)\right]|_{\omega} = \displaystyle\frac{1}{\left|\alpha\right|}F\left(\frac{\omega}{\alpha}\right)
~\mbox{and}~
\mathcal{F}^{-1}\left[f(\alpha \omega)\right]|_{t} = \displaystyle\frac{1}{\left|\alpha\right|}f\left(\frac{t}{\alpha}\right)
$$
\subsubsection{Translation and Multiplication by Exponentials}
\indent If $F(\omega)=\mathcal{F}\left[f(t)\right]|_{\omega}$ and $\alpha\in\mathbb{R}$, then
\begin{equation} \label{tmex1}
\mathcal{F}\left[f(t-\alpha)\right]|_{\omega}~=~e^{-j\alpha\omega}F(\omega)
,~
\mathcal{F}^{-1}\left[F(\omega-\alpha)\right]|_{t}~=~e^{j\alpha t}f(t)
\end{equation}
and
\begin{equation} \label{tmex2}
\mathcal{F}\left[e^{j\alpha t}f(t)\right]|_{\omega}~=~F(\omega-\alpha)
,~
\mathcal{F}^{-1}\left[e^{j\alpha \omega}F(\omega-\alpha)\right]|_{t}~=~f(t+\alpha)
\end{equation}

\subsubsection{Modulation}
\indent Using the well-known formulas
$$
\cos\left(\omega_{0}t\right)~=~\frac{e^{j\omega_{0}t}+e^{-j\omega_{0}t}}{2}
~\mbox{and}~
\sin\left(\omega_{0}t\right)~=~\frac{e^{j\omega_{0}t}-e^{-j\omega_{0}t}}{2}
$$
the modulation formulas are easily derived from identity \ref{tmex2}
$$
\mathcal{F}\left[\cos\left(\omega_{0}t\right)f(t)\right]|_{\omega}~=~\displaystyle\frac{1}{2}
\left[F(\omega-\omega_{0})+F(\omega+\omega_{0})\right]
$$
and
$$
\mathcal{F}\left[\sin\left(\omega_{0}t\right)f(t)\right]|_{\omega}~=~\displaystyle\frac{1}{2j}
\left[F(\omega-\omega_{0})-F(\omega+\omega_{0})\right]
$$

\subsubsection{Products and Convolution}
\indent If $F=\mathcal{F}\left[f\right]$ and $G=\mathcal{G}\left[g\right]$ and provided the convolutions,
$F\ast G$ and $f \ast g$ exist, then the corresponding transforms of the products, $fg$ and $FG$
can be computed using the identities
$$
\mathcal{F}\left[fg\right]=\frac{1}{2\pi}F\ast G,~~
\mathcal{F}^{-1}\left[FG\right]=f\ast g
$$
and conversely,
$$
\mathcal{F}\left[f\ast g\right]=FG
,~~
\mathcal{F}^{-1}\left[F\ast G\right]=2\pi f\ast g
$$
\subsection{Transforms of Some Specific Functions}
\indent In many applications some specific classes of functions can be encountered
in which either the functions or their transforms satisfy certain particular properties.
\subsubsection{Even/Odd Functions}
\indent Assuming $f(t)$ is an integrable function:
$$
F(\omega)~=~\displaystyle\int_{-\infty}^{+\infty}f(t)e^{-j\omega t}dt~=~\displaystyle\int_{-\infty}^{+\infty}f(t)\cos(\omega t)dt-j\displaystyle\int_{-\infty}^{+\infty}f(t)\sin(\omega t)dt
$$

If $f(t)$ is an even function, then
$$
\displaystyle\int_{-\infty}^{+\infty}f(t)\sin(\omega t)dt=0
$$
and the equation becomes
$$
F(\omega)=\displaystyle\int_{-\infty}^{+\infty}f(t)\cos(\omega t)dt
=2\displaystyle\int_{0}^{+\infty}f(t)\cos(\omega t)dt
$$
$F(\omega)$ is clearly an even function of $\omega$ and is real valued whenever $f$ is real valued.\\

Likewise, if $f(t)$ is an odd function, then
$$
F(\omega)=-j\displaystyle\int_{-\infty}^{+\infty}f(t)\sin(\omega t)dt
=-2j\displaystyle\int_{0}^{+\infty}f(t)\cos(\omega t)dt
$$

And $F(\omega)$ is clearly an odd function of $\omega$ and is imaginary valued whenever $f$ is real valued.\\

These properties can be summarized into table~\ref{tab:fpro}\\
\begin{table}[t]
\centering
\begin{tabular}{|c!{$\Longleftrightarrow$}c|}
	\hline
	$f(t)$ is even               & $F(\omega)$ is even\\
	$f(t)$ is real and even      & $F(\omega)$ is real and even\\
	$f(t)$ is imaginary and even & $F(\omega)$ is imaginary and even\\
	$f(t)$ is odd                & $F(\omega)$ is odd\\
	$f(t)$ is real and odd       & $F(\omega)$ is imaginary and odd\\
	$f(t)$ is imaginary and odd  & $F(\omega)$ is real and odd\\
	\hline
\end{tabular}
\caption{Relation between odd/even $f(t)$ and its Fourier transform}
\label{tab:fpro}
\end{table}
\newline

On occasion, it is convenient to decompose a function into its even and add components, $f_{e}(t)$
and $f_{o}(t)$
$$f(t) = f_{e}(t)+f_{o}(t)$$
where
$$
f_{e}(t)=\frac{1}{2}\left(f(t)+f(-t)\right)~\mbox{and}~
f_{o}(t)=\frac{1}{2}\left(f(t)-f(-t)\right)
$$
\subsubsection{Periodic Functions}
\indent Let $f(t)$ be a $p$-periodic function. The Fourier series, $FS\left[f\right]|_{t}$, for such a function is given by
$$
FS\left[f\right]|_{t}=\displaystyle\sum_{n=-\infty}^{+\infty}c_{n}e^{jn\Delta\omega t}
$$
where
$$
\Delta\omega=\frac{2\pi}{p}~\mbox{and}~
\forall n, c_{n}=\frac{1}{p}\displaystyle\int_{t}^{t+p}f(t)e^{-jn\Delta\omega t}dt
$$
If $f(t)$ is piecewise smooth, its Fourier series will converge, and at every value of $t$ at which
$f(t)$ is continuous.
$$
f(t)=\displaystyle\sum_{n=-\infty}^{+\infty}c_{n}e^{jn\Delta\omega t}
$$
At points where $f(t)$ has jump discontinuity, the Fourier series converges to the midpoint
of the jump. In any immediate neighbourhood of a jump discontinuity any finite partial sum
of the Fourier series $\displaystyle\sum_{n=-N}^{N}c_{n}e^{jn\Delta\omega t}$
will oscillate wildly and will, at points, significantly overshoot the actual value of $f(t)$:
it is called the "Ringing" or Gibbs phenomena.
\section{Examples of Fourier Transforms}
\begin{center}
\begin{tabular}{|c|c|}
\hline
$f(t)$ & $F(\omega)$ \\
\hline
\includegraphics{ftex/f1.png} & \includegraphics{ftex/ft1.png}\\
\hline
\includegraphics{ftex/f2.png} & \includegraphics{ftex/ft2.png}\\
\hline
\includegraphics{ftex/f3.png} & \includegraphics{ftex/ft3.png}\\
\hline
\includegraphics{ftex/f4.png} & \includegraphics{ftex/ft4.png}\\
\hline
\includegraphics{ftex/f5.png} & \includegraphics{ftex/ft5.png}\\
\hline
\end{tabular}
\end{center}

\newpage
\section{Discrete Fourier Transform}
\indent The discrete Fourier transform is one of the most important tools in digital signal
processing. The DFT is a computational analog to the Fourier transform and is used
when dealing with finite collections of sampled data. Given a ordered sequence of $N$ values,
$\{f_{0}, f_{1}, \dots, f_{N-1}\}$, the corresponding $N^{th}$ order discrete transform is the sequence
$\{F_{0}, F_{1}, \dots, F_{N-1}\}$ given by the formula
\begin{equation} \label{dct}
F_{n}=\displaystyle\sum_{k=0}^{N-1}e^{-j\frac{2\pi}{N}nk}f_{k}
\end{equation}

This can be also written in matrix form, $F=\left[\mathcal{F}_{N}\right]f$, where
$$
F = \left(
      \begin{array}{c}
        F_{0} \\
        F_{1} \\
        \vdots \\
        F_{N-1} \\
      \end{array}
    \right),~
f = \left(
      \begin{array}{c}
        f_{0} \\
        f_{1} \\
        \vdots \\
        f_{N-1} \\
      \end{array}
    \right)
$$
and
$$
\left[\mathcal{F}_{N}\right] =
\left(
  \begin{array}{ccccc}
    1      & 1 & 1 & \dots & 1 \\
    1      & e^{-j\frac{2\pi}{N}}      & e^{-2j\frac{2\pi}{N}}        & \dots  & e^{-(N-1)j\frac{2\pi}{N}} \\
    1      & e^{-2j\frac{2\pi}{N}}     & e^{-2\times2j\frac{2\pi}{N}} & \dots  & e^{-2(N-1)j\frac{2\pi}{N}} \\
    \vdots & \vdots                    & \vdots                       & \ddots & \vdots \\
    1      & e^{-(N-1)j\frac{2\pi}{N}} & e^{-2(N-1)j\frac{2\pi}{N}}   & \dots  & e^{-(N-1)^{2}j\frac{2\pi}{N}} \\
  \end{array}
\right)
$$

The inverse to formula~\ref{dct} is given by
\begin{equation}\label{idct}
f_{k}=\frac{1}{N}\displaystyle\sum_{n=0}^{N-1}e^{j\frac{2\pi}{N}nk}F_{n}
\end{equation}
In matrix form this is $f=\left[\mathcal{F}_{N}\right]^{-1}F$, where $\left[\mathcal{F}_{N}\right]^{-1}$
$$
\frac{1}{N}
\left(
  \begin{array}{ccccc}
    1      & 1 & 1 & \dots & 1 \\
    1      & e^{j\frac{2\pi}{N}}      & e^{2j\frac{2\pi}{N}}        & \dots  & e^{(N-1)j\frac{2\pi}{N}} \\
    1      & e^{j2\frac{2\pi}{N}}     & e^{2\times2j\frac{2\pi}{N}} & \dots  & e^{2(N-1)j\frac{2\pi}{N}} \\
    \vdots & \vdots                    & \vdots                       & \ddots & \vdots \\
    1      & e^{(N-1)j\frac{2\pi}{N}} & e^{2(N-1)j\frac{2\pi}{N}}   & \dots  & e^{(N-1)^{2}j\frac{2\pi}{N}} \\
  \end{array}
\right)
$$

In practice the sample size $N$, is often quite large and the computations of the discrete transforms
directly from formulas~\ref{dct} and~\ref{idct} can be a time-consuming process even on fairly fast computers.
For this reason it is standard practice to make heavy use of symmetries inherent in the computations of the
discrete transforms for certain values of $N$ (e.g., $N=2^{M}$) to reduce the total number of calculations: such implementations are
called \emph{fast Fourier transforms}.\\

Besides the classical example of FFT convolution, the DFT can calculate
a signal's frequency spectrum. This is a direct examination of information encoded in the
frequency, phase, and amplitude of the component sinusoids. For example, human speech and
hearing use signals with this type of encoding.\\

Moreover, the DFT can find a system's frequency
response from the system's impulse response, and vice versa. This allows systems to be analysed
in the frequency domain, just as convolution allows systems to be analysed in the time domain.
Third, the DFT can be used as an intermediate step in more elaborate signal processing
techniques.\\

There are several ways to calculate the DFT, such as solving
simultaneous linear equations or the correlation method. The Fast
Fourier Transform is another method for calculating the DFT. While it produces the same
result as the other approaches, it is incredibly more efficient, often reducing the computation time
by hundreds.

\newpage
\section{Fast Fourier Transform}
\indent The fast Fourier transform (FFT) is a group of methods, given by J.W. Cooley and J.W. Tukey, that rearrange
the calculations in the DFT to allow significant computational savings.
Direct computation of the DFT requires a number of multiplies on the
order of $N^{2}$, while the FFT reduces that number to the order of $N\log N$.
The process uses the symmetry and periodicity of the exponential
factor to reduce the computations:

\begin{equation}
e^{-j\frac{2\pi}{N}k(N-n)}~=~\left(e^{-j\frac{2\pi}{N}kn}\right)^{\ast}
\end{equation}
and
\begin{equation}
e^{-j\frac{2\pi}{N}kn}~=~
e^{-j\frac{2\pi}{N}k(N+n)}~=~
e^{-j\frac{2\pi}{N}n(N+k)}
\end{equation}

The FFT works by recursively decomposing the $N$-point DFT into smaller DFTs. The implementation
is most common for powers of 2 because of the convenience in fitting the recursive
structure. However, the FFT method can be applied to any sequence length that is a product of smaller integer factors.\\

The decimation in time FFT algorithm begins by separating the $n$-length sequence
\begin{equation} \label{fft1}
F(k)~=~\displaystyle\sum_{n\in2\mathbb{N}}f(n)e^{-j\frac{2\pi}{N}kn}+
\displaystyle\sum_{n\in2\mathbb{N}+1}f(n)e^{-j\frac{2\pi}{N}kn}
\end{equation}
By substituting $n=2m$ for even $n$, and $n=2m+1$ for odd $n$, equation~\ref{fft1} can
be expressed as
\begin{equation}
F(k)~=~\displaystyle\sum_{m=0}^{\frac{N}{2}-1}f(2m)e^{-j\frac{2\pi}{N}k2m}+
\displaystyle\sum_{m=0}^{\frac{N}{2}-1}f(2m+1)e^{-j\frac{2\pi}{N}k(2m+1)}
\end{equation}
And thus,
\begin{equation} \label{fft2}
F(k)~=~\displaystyle\sum_{m=0}^{\frac{N}{2}-1}f(2m)e^{-j\frac{2\pi}{N/2}km}+
e^{-j\frac{2\pi}{N}k}\displaystyle\sum_{m=0}^{\frac{N}{2}-1}f(2m+1)e^{-j\frac{2\pi}{N/2}km}
\end{equation}

That way, both sums are rearranged into two $\displaystyle\frac{N}{2}$-point DFTs.\\

Following this algorithm, each $\displaystyle\frac{N}{2}$-point DFT is decomposed into two $\displaystyle\frac{N}{4}$-point DFTs,
and so on and so forth, until the whole DFT is decomposed into $\displaystyle\frac{N}{2}$ $2$-point DFTs.\\

\begin{figure}[h]
\centering
\begin{tikzpicture}
[level distance=10mm,
every node/.style={fill=red!60,rectangle,inner sep=4pt},
level 1/.style={sibling distance=56mm,nodes={fill=red!45}},
level 2/.style={sibling distance=28mm,nodes={fill=red!30}},
level 3/.style={sibling distance=14mm,nodes={fill=red!25}},
level 4/.style={sibling distance=7mm,nodes={fill=red!20}}
]
\node {0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15}
	child {node {0 2 4 6 8 10 12 14}%
		child {node {0 4 8 12}%
			child {node {0 8}%
				child {node {0}}%
				child {node {8}}%
			}
			child {node {4 12}%
				child {node {4}}%
				child {node {12}}%
			}
		}
		child {node {2 6 10 14}%
			child {node {2 10}%
				child {node {2}}%
				child {node {10}}%
			}
			child {node {6 14}%
				child {node {6}}%
				child {node {14}}%
			}
		}
	}
	child {node {1 3 5 7 9 11 13 15}
		child {node {1 5 9 13}%
			child {node {1 9}%
				child {node {1}}%
				child {node {9}}%
			}
			child {node {5 13}%
				child {node {5}}%
				child {node {13}}%
			}
		}
		child {node {3 7 11 15}%
			child {node {3 11}%
				child {node {3}}%
				child {node {11}}%
			}
			child {node {7 15}%
				child {node {7}}%
				child {node {15}}%
			}
		}
	}
;
\end{tikzpicture}
\caption{FFT decomposition}
\label{fft_decompo}
\end{figure} 
The figure~\ref{fft_decompo} shows an example of the time domain decomposition used in the FFT.\\

In this example, a 16 point signal is decomposed through four separate stages. The first stage breaks the 16 point signal into two signals each consisting of 8 points. The second stage decomposes the data into four signals of 4 points. This pattern continues until there are N signals composed of a single point. An interlaced decomposition is used each time a signal is broken in two, that is, the signal is separated into its even and odd numbered samples. There are $\log2N$ stages required in this decomposition, i.e., a 16 point signal (24) requires 4 stages, a 512 point signal (27) requires 7 stages, a 4096 point signal (212) requires 12 stages, etc.\\

\begin{table}[t]
\centering
\begin{tabular}{|cc|c|cc|}
\cline{1-2}\cline{4-5}
\multicolumn{2}{|m{3cm}|}{Sample numbers in normal order}  &  & \multicolumn{2}{m{3cm}|}{Sample numbers after bit reversal}\\
\cline{1-2}\cline{4-5}
\textit{Decimal} & \textit{Binary} & & \textit{Decimal} & \textit{Binary}\\
\cline{1-2}\cline{4-5}
0 & 0000 & & 0 & 0000\\
1 & 0001 & & 8 & 1000\\
2 & 0010 & & 4 & 0100\\
3 & 0011 & & 12 & 1100\\
4 & 0100 & & 2 & 0010\\
5 & 0101 & & 10 & 1010\\
6 & 0110 & & 6 & 0100\\
7 & 0111 & \begin{large}$\Longrightarrow$\end{large} & 14 & 1110\\
8 & 1000 & & 1 & 0001\\
9 & 1001 & & 9 & 1001\\
10 & 1010 & & 5 & 0101\\
11 & 1011 & & 13 & 1101\\
12 & 1100 & & 3 & 0011\\
13 & 1101 & & 11 & 1011\\
14 & 1110 & & 7 & 0111\\
15 & 1111 & & 15 & 1111\\
\cline{1-2}\cline{4-5}
\end{tabular}
\caption{The FFT bit reversal sorting}
\label{tab:fft_bit_decompo}
\end{table}

This decomposition can be greatly
simplified. The decomposition is nothing more than a reordering of the
samples in the signal. Table~\ref{tab:fft_bit_decompo} shows the rearrangement pattern required.
On the left, the sample numbers of the original signal are listed along with
their binary equivalents. On the right, the rearranged sample numbers are
listed, also along with their binary equivalents. The important idea is that the
binary numbers are the reversals of each other. For example, sample 3 (0011)
is exchanged with sample number 12 (1100). Likewise, sample number 14
(1110) is swapped with sample number 7 (0111), and so forth. The FFT time
domain decomposition is usually carried out by a bit reversal sorting
algorithm. This involves rearranging the order of the N time domain samples
by counting in binary with the bits flipped left-for-right (such as in the far right
column in table~\ref{tab:fft_bit_decompo}).\\

The next step in the FFT algorithm is to find the frequency spectra of the
1 point time domain signals: the frequency
spectrum of a 1 point signal is equal to itself, each of the 1 point signals being now a frequency spectrum, and not a time
domain signal.\\

The last step in the FFT is to combine the $N$ frequency spectra in the exact
reverse order that the time domain decomposition took place. In the first stage, 16
frequency spectra (1 point each) are synthesized into 8 frequency spectra (2
points each). In the second stage, the 8 frequency spectra (2 points each) are
synthesized into 4 frequency spectra (4 points each), and so on. The last stage
results in the output of the FFT, a 16 point frequency spectrum.\\

Figure~\ref{fft_syn} shows how two frequency spectra, each composed of 4 points,
are combined into a single frequency spectrum of 8 points. This synthesis
must undo the interlaced decomposition done in the time domain. In other
words, the frequency domain operation must correspond to the time domain
procedure of combining two 4 point signals by interlacing. Consider two
time domain signals, \texttt{abcd} and \texttt{efgh}. An 8 point time domain signal can be
formed by two steps: dilute each 4 point signal with zeros to make it an
8 point signal, and then add the signals together. That is, \texttt{abcd} becomes
\texttt{a0b0c0d0}, and \texttt{efgh} becomes \texttt{0e0f0g0h}. Adding these two 8 point signals
produces \texttt{aebfcgdh}. As shown in Figure~\ref{fft_syn}, diluting the time domain with zeros
corresponds to a duplication of the frequency spectrum. Therefore, the
frequency spectra are combined in the FFT by duplicating them, and then
adding the duplicated spectra together.\\

\begin{figure}[h]
\centering
\includegraphics[scale=0.7]{fft_synthesis.png}
\caption{The FFT synthesis}
\label{fft_syn}
\end{figure}

In order to match up when added, the two time domain signals are diluted with
zeros in a slightly different way. In one signal, the odd points are zero, while
in the other signal, the even points are zero. In other words, one of the time
domain signals (\texttt{0e0f0g0h}) is shifted to the right by one sample.
This time domain shift corresponds to multiplying the spectrum by a sinusoid.
To see this, recall that a shift in the time domain is equivalent to convolving
the signal with a shifted delta function. This multiplies the signal's spectrum
with the spectrum of the shifted delta function.\\

Figure~\ref{fft_flow} shows a flow diagram for combining two 4 point spectra into a
single 8 point spectrum. To reduce the situation even more, notice that Figure~\ref{fft_flow}
is formed from the basic pattern in Figure~\ref{fft_butterfly} repeated over and over.
\begin{figure}[h!]
\centering
\tikzstyle{n}= [circle, fill, minimum size=4pt,inner sep=0pt, outer sep=0pt]
\tikzstyle{mul} = [circle,draw,inner sep=-1pt]
\newcounter{x}\newcounter{y}
\begin{tikzpicture}[yscale=0.5, xscale=1.2, node distance=0.3cm, auto]
    \foreach \y in {0,...,15}
        \node[n, pin={[pin edge={latex'-,black}]left:$x(\y)$}]
              (N-0-\y) at (0,-\y) {};
    \foreach \y / \idx in {0/0,1/8,2/4,3/12,4/2,5/10,6,7/14,
                           8/1,9,10/5,11/13,12/3,13/11,14/7,15}
        \node[n, pin={[pin edge={-latex',black}]right:$X(\idx)$}]
              (N-10-\y) at (7,-\y) {};
    \foreach \y in {0,...,15}
        \foreach \x / \c in {1/1,2/3,3/4,4/6,5/7,6/9}
            \node[n, name=N-\c-\y] at (\x,-\y) {};
    \foreach \y in {0,...,15}
        \foreach \x / \c  in {1/2,4/5,7/8}
            \node[mul, right of=N-\x-\y] (N-\c-\y) {${\times}$};

    \foreach \y in {0,...,15}
        \foreach \x in {0,1,3,4,6,7,9}
        {
            \setcounter{x}{\x}\stepcounter{x}
            \path (N-\x-\y) edge[-] (N-\arabic{x}-\y);
       }
    \setcounter{y}{0}
    \foreach \i / \j in {0/0,1/0,2/0,3/0,4/0,5/0,6/0,7/0,
                            0/1,1/1,2/1,3/1,4/1,5/1,6/1,7/1}
    {
        \path (N-2-\arabic{y}) edge[-] node {\tiny $W^{\i\cdot\j}_{16}$}
                (N-3-\arabic{y});
        \stepcounter{y}
    }
    \setcounter{y}{0}
    \foreach \i / \j in {0/0,1/0,2/0,3/0,0/1,1/1,2/1,3/1,
                         0/0,1/0,2/0,3/0,0/1,1/1,2/1,3/1}
    {
        \path (N-5-\arabic{y}) edge[-] node {\tiny $W^{\i\cdot\j}_{8}$}
              (N-6-\arabic{y});
        \addtocounter{y}{1}
    }

    % Draw the W_4 coefficients
    \setcounter{y}{0}
    \foreach \i / \j in {0/0,1/0,0/1,1/1,0/0,1/0,0/1,1/1,
                            0/0,1/0,0/1,1/1,0/0,1/0,0/1,1/1}
    {
        \path (N-8-\arabic{y}) edge[-] node {\tiny $W^{\i\cdot\j}_{4}$}
              (N-9-\arabic{y});
        \stepcounter{y}
    }
    % Connect nodes
    \foreach \sourcey / \desty in {0/8,1/9,2/10,3/11,
                                   4/12,5/13,6/14,7/15,
                                   8/0,9/1,10/2,11/3,
                                   12/4,13/5,14/6,15/7}
       \path (N-0-\sourcey.east) edge[-] (N-1-\desty.west);
    \foreach \sourcey / \desty in {0/4,1/5,2/6,3/7,
                                   4/0,5/1,6/2,7/3,
                                   8/12,9/13,10/14,11/15,
                                   12/8,13/9,14/10,15/11}
        \path (N-3-\sourcey.east) edge[-] (N-4-\desty.west);
    \foreach \sourcey / \desty in {0/2,1/3,2/0,3/1,
                                   4/6,5/7,6/4,7/5,
                                   8/10,9/11,10/8,11/9,
                                   12/14,13/15,14/12,15/13}
        \path (N-6-\sourcey.east) edge[-] (N-7-\desty.west);
    \foreach \sourcey / \desty in {0/1,1/0,2/3,3/2,
                                   4/5,5/4,6/7,7/6,
                                   8/9,9/8,10/11,11/10,
                                   12/13,13/12,14/15,15/14}
        \path (N-9-\sourcey.east) edge[-] (N-10-\desty.west);

\end{tikzpicture}
\caption{FFT synthesis flow diagram}
\label{fft_flow}
\end{figure}

\begin{figure}[h!]
\centering
\begin{tikzpicture}
[inner sep=2mm,
input/.style={circle,draw=blue!50,fill=blue!20,thick},
output/.style={rectangle,draw=blue!50,fill=blue!20,thick},
void/.style={circle,draw=white,fill=white},]

\node[void] (v2) at (4,1)   {};
\node[void] (v3) at (4,-1)  {};
\node[void] (v0) at (-2,-1) {};
\node[void] (v1) at (-2,1)  {};

\node[output] (y1) at (2,-1)   {$y_{1} = x_{0}-x_{1}$};
\node[output] (y0) at (2,1)  {$y_{0} = x_{0}+x_{1}$};
\node[input] (x1) at (-1,-1) {$x_{1}$};
\node[input] (x0) at (-1,1)  {$x_{0}$};


\draw [->] (v0) to (x1);
\draw [->] (v1) to (x0);
\draw [->] (y0) to (v2);
\draw [->] (y1) to (v3);

\draw [->, color=red] (x0) to (y0);
\draw [->, color=blue] (x0) to node[above, pos=.8] {-} (y1);
\draw [->, color=red] (x1) to (y0);
\draw [->, color=blue] (x1) to node[below, pos=.8] {+} (y1);

\end{tikzpicture}
\caption{The FFT butterfly}
\label{fft_butterfly}
\end{figure}

This simple flow diagram is called a \textbf{butterfly} due to its winged appearance.
The butterfly is the basic computational element of the FFT, transforming two
complex points into two other complex points.\\

Figure~\ref{fft_chart} shows the structure of the entire FFT. The time domain
decomposition is accomplished with a bit reversal sorting algorithm.
Transforming the decomposed data into the frequency domain involves nothing
and therefore does not appear in the figure.\\

The frequency domain synthesis requires three loops. The outer loop runs
through the $\log$ stages (i.e., each level in figure~\ref{fft_decompo}, starting from the bottom $2N$
and moving to the top). The middle loop moves through each of the individual
frequency spectra in the stage being worked on (i.e., each of the boxes on any
one level in figure~\ref{fft_decompo}). The innermost loop uses the butterfly to calculate the
points in each frequency spectra (i.e., looping through the samples inside any
one box in figure~\ref{fft_decompo}). The overhead boxes in Fig. 12-7 determine the
beginning and ending indexes for the loops, as well as calculating the sinusoids
needed in the butterflies. Now we come to the heart of this chapter, the actual
FFT programs.

\begin{figure}[h!]
\centering
\tikzstyle{block} = [rectangle, draw, fill=blue!20, 
    text width=7em, text centered, rounded corners, minimum height=3em]
\tikzstyle{line} = [draw, -latex']
\tikzstyle{void} = [rectangle]
\tikzstyle{info} = [rectangle, font=\itshape]

    
\begin{tikzpicture}[node distance = 2cm, auto]
    % Place nodes
    \node [void] (TDD) {Time Domain Data};
    \node [block, below of=TDD] (BitRev) {Bit Reversal Data Sorting};
    \node [block, below of=BitRev] (over1) {Overhead};
    \node [block, below of=over1] (over2) {Overhead};
    \node [block, below of=over2] (butterfly) {Butterfly Calculation};
    \node [void,  below of=butterfly, node distance=3cm] (FDD) {Frequency Domain Data};
	\node [info,  left  of=butterfly, node distance=19mm, rotate=90] (b1) {Loop for each Butterfly};
	\node [info,  left  of=over2, node distance=26mm, rotate=90] (subdft) {Loop for each sub-DFT};
	\node [info,  left  of=over2, node distance=35mm, rotate=90] (stage) {Loop for $\log_{2}N$ stages};

	\path [line] (TDD) -- (BitRev);
	\path [line] (BitRev) -- (over1);
	\path [line] (over1) -- (over2);
	\path [line] (over2) -- (butterfly);
	\path [line] (butterfly) -- (FDD);

	\draw [->]
	($ (butterfly.south) + (-9mm,-5pt) $)
	-- ++(9mm,0)
	-| ($ (butterfly.north) - (18mm,-5pt) $)
	-- ++(18mm,0);


	\draw [->]
	($ (butterfly.south) + (-12mm,-8pt) $)
	-- ++(12mm,0)
	-| ($ (over2.north) - (24mm,-5pt) $)
	-- ++(24mm,0);
	
	\draw [->]
	($ (butterfly.south) + (-16mm,-11pt) $)
	-- ++(16mm,0)
	-| ($ (over1.north) - (32mm,-5pt) $)
	-- ++(32mm,0);

	\draw[decorate,decoration=brace] let \p1=(BitRev.north), \p2=(BitRev.south) in
    ($(2, \y1)$) -- ($(2, \y2)$) node[midway,right=2pt]{Time Domain Decomposition};

	\draw[decorate,decoration=brace] let \p1=(over1.north), \p2=(butterfly.south) in
    ($(2, \y1)$) -- ($(2, \y2)$) node[midway,right=2pt]{Frequency Domain Synthesis};


\end{tikzpicture}
\caption{Flow chart diagram of the FFT}
\label{fft_chart}
\end{figure}

\chapter{Equalization, quantization and sampling}

\section{Equalization}

\indent Equalization, equalisation or EQ is the process of using passive or active electronic elements or digital algorithms for the purpose of altering (originally flattening) the frequency response characteristics of a system.\\

Amplitude equalization is usually meant when it is stated without qualification but any frequency dependent response characteristic is capable of having equalization applied. Most notably there is phase and time-delay equalizations.\\

There is also spatial directivity equalization.\\

\subsection{Overview}

\indent There are many kinds of EQ. Each has a different pattern of attenuation or boost. A peaking equalizer raises or lowers a range of frequencies around a central point in a bell shape. A peaking equalizer with controls to adjust the level (Gain), bandwidth (Q) and center frequency (Hz) is called a parametric equalizer. If there is no control for the bandwidth (it is fixed by the designer) then it is called a quasi-parametric or semi-parametric equalizer.\\

A pass filter attenuates either high or low frequencies while allowing other frequencies to pass unfiltered. A high-pass filter modifies a signal only by taking out low frequencies; a low-pass filter only modifies the audio signal by taking out high frequencies. A pass filter is described by its cut-off point and slope. The cut-off point is the frequency where high or low-frequencies will be removed. The slope, given in decibels per octave, describes a ratio of how the filter attenuates frequencies past the cut-off point (eg. 12 dB per octave). A band-pass filter is a combination (in series) of one high-pass filter and one low-pass filter which together allow only a band of frequencies to pass, attenuating both high and low frequencies past certain cut-off points.\\

Shelving-type equalizers increase or attenuate the level of a wide range of frequencies by a fixed amount. A low shelf will affect low frequencies up to a certain point and then above that point will have little effect. A high shelf affects the level of high frequencies, while below a certain point, the low frequencies are unaffected.\\

Variable equalization was first used by John Volkman working at RCA in the 1920s. They were used to equalize a motion picture theater playback systems.\\

\subsection{Graphic equalizer}
\indent 31-band Behringer 1/3-octave graphic equalizer with LEDs that show signal presence per frequency band.\\

One common type of equalizer is the graphic equalizer which consists of a bank of sliders for boosting and cutting different bands (or frequencies ranges) of sound. The number and width of filters depends on application. A simple car audio equalizer might have one bank of filters controlling two channels for easy adjustment of stereo sound, and contain five to ten filter bands. A typical equalizer for professional live sound reinforcement has some 25 to 31 bands, necessary for quick control of feedback tones and room modes. Such an equalizer is called a 1/3-octave equalizer (spoken informally as "third-octave EQ") because the center frequency of each filter is spaced one third of an octave away from its neighbors, three filters to an octave. Equalizers with half as many filters per octave are common where less precise general tone-shaping is desired-this design is called a 2/3-octave equalizer. Stereo graphic equalizer, 2/3-octave, 15 bands per channel\\

Historically, the first use of slide controls in an equalizer was in the Langevin Model EQ-251A which featured two passive equalization sections, a bass shelving filter and a pass band filter. Each filter had switchable frequencies and used a 15 position slide switch to adjust cut or boost.The first true graphic equalizer was the type 7080 developed by Art Davis's Cinema Engineering. It featured 6 bands with a boost or cut range of 8 dB. It used a slide switch to adjust each band in 1 dB steps. Davis's second graphic equalizer was the Altec Lansing Model 9062A EQ. In 1967 Davis developed the first 1/3 octave variable notch filter set, the Altec-Lansing "Acousta-Voice" system.\\

\subsection{Uses}

\indent In Multitrack recording and sound reinforcement systems, individual channels have equalization for aesthetic reasons, while the combined mix of sound is processed through equalization for practical reasons. Any acoustic space will cause some sound frequencies to be louder than others. This is due to standing waves produced by the size of the room and the materials in it. Equalization is used to compensate for the discrepancies of a room's acoustics. Ideally, a sound system would produce a flat frequency response. The frequency response of a room is examined with a Spectrum analyzer and usually a graphic equalizer, with matching frequency bands, is used to compensate for the room acoustics. This is standard practice for sound recording studios, live sound reinforcement systems and some High fidelity sound systems.\\

One of the most direct uses of equalization is at a live event, where microphones and speakers operate simultaneously. An equalizer is used to ensure that there are no frequency bands where there is a round trip gain of greater than 1, as these are heard as audible feedback. Those frequencies are cut at the equalizer to prevent this.\\

Most audio records, have had equalization applied to the sound waveform before the consumers' record was made because of the limitations of equipment for recording and manufacturing the record. One scheme was used prior to 1940. Some 100 formulae were used until 1955, when the RIAA standard formula was implemented. As an example of the use of equalization in record production, low frequencies are reduced before the sound is imprinted onto the vinyl, making the groove take up less physical space so that more music can fit on the record. For this reason, record players boost the low frequencies back up to their original level before playback, to compensate for the reduction during printing.\\

Early telephone systems used equalization to correct for the reduced level of high frequencies in long cables, typically using Zobel networks. These kinds of equalizers can also be used to produce a circuit with a wider bandwidth than the standard telephone band of 300 Hz to 3.4 kHz. This was particularly useful for broadcasters who needed "music" quality, not "telephone" quality on landlines carrying program material. It is necessary to remove or cancel any loading coils in the line before equalization can be successful. Equalization was also applied to correct the response of the transducers, for example, a particular microphone might be more sensitive to low frequency sounds than to high frequency sounds, so an equalizer would be used to increase the volume of the higher frequencies (boost), and reduce the volume of the low frequency sounds (cut).\\

Modern digital telephone systems have less trouble in the voice frequency range as only the local line to the subscriber now remains in analog format, but DSL circuits operating in the MHz range on those same wires may suffer severe attenuation distortion which is dealt with by automatic equalization or by abandoning the worst frequencies. Picturephone circuits also had equalizers.\\

The individual channels of a mixing board and the sound of electric instruments are equalized for aesthetic reasons. Some guitar effects units, in particular, the wah-wah pedal is based on equalization. Equalization is used to manipulate the timbre of musical instruments and sounds.\\


\section{Quantization}

\indent In signal processing, quantization is the process of approximating a continuous range of values (or a very large set of possible discrete values) by a relatively-small set of discrete symbols or integer values. This article describes aspects of quantization related to sound signals.\\
After sampling, sound signals are usually represented by one of a fixed number of values, in a process known as pulse-code modulation (PCM). Some specific issues related to quantization of audio signals follow.\\

\subsection{Audio quantization}

\indent Telephony applications frequently use 8-bit quantization. That is, values of the analogue waveform are rounded to the closest of 256 distinct voltage values represented by an 8-bit binary number. This crude quantization introduces substantial quantization noise into the signal, but the result is still more than adequate to represent human speech.\\

By comparison, compact discs use a 16-bit digital representation, allowing 65,536 distinct voltage levels. This is far better than telephone quantization but CD audio representing low signal levels would still sound noticeably 'granular' because of the quantizing noise, were it not for the addition of a small amount of noise to the signal before digitization. This deliberately-added noise is known as dither. Adding dither eliminates this granularity, and gives very low distortion, but at the expense of a small increase in noise level. Measured using ITU-R 468 noise weighting, this is about 66dB below alignment level, or 84dB below FS (full scale) digital, which is somewhat lower than the microphone noise level on most recordings, and hence of no consequence (see Programme levels for more on this).\\

\subsection{Optimizing dither waveforms}

\indent In a seminal paper published in the AES Journal, Lipshitz and Vanderkooy pointed out that different noise types, with different probability density functions (PDF's) behave differently when used as dither signals, and suggested optimal levels of dither signal for audio.\\

Gaussian noise requires a higher level for full elimination of distortion than rectangular PDF or triangular PDF noise. Triangular PDF noise has the advantage of requiring a lower level of added noise to eliminate distortion and also minimizing 'noise modulation'. The latter refers to audible changes in the residual noise on low-level music that are found to draw attention to the noise.\\

\subsection{Noise shaping for lower audibility}

\indent An alternative to dither is noise shaping, which involves a feedback process in which the final digitized signal is compared with the original, and the instantaneous errors on successive past samples integrated and used to determine whether the next sample is rounded up or down. This smooths out the errors in a way that alters the spectral noise content.\\

By inserting a weighting filter in the feedback path, the spectral content of the noise can be shifted to areas of the 'equal-loudness contours' where the human ear is least sensitive, producing a lower subjective noise level (-68/-70dB typically ITU-R 468 weighted).\\

\subsection{24-bit quantization}

\indent 24-bit audio is sometimes used undithered, because for most audio equipment and situations the noise level of the digital converter can be louder than the required level of any dither that might be applied.\\

There is some disagreement over the recent trend towards higher bit-depth audio. It is argued by some that the dynamic range presented by 16-bit is sufficient to store the dynamic range present in almost all music. In terms of pure data storage this is often true, as a high-end system can extract an extremely good sound out of the 16-bits stored in a well-mastered CD. However, audio with very loud and very quiet sections can require some of the above dithering techniques to fit it into 16-bits. This is not a problem for most recently produced popular music, which is often mastered so that it constantly sits close to the maximum signal (see loudness war); however, higher resolution audio formats are already being used (especially for applications such as film soundtracks, where there is often a very wide dynamic range between whispered conversations and explosions).\\

For most situations, the advantage given by higher-resolution audio than 16-bits are mainly to do with processing the audio. No digital filter is perfect, but if the audio is upsampled and the audio is done in 24-bit or higher, then the distortion introduced by filtering will be much quieter (as the errors always creep into the least significant bits) and a well-designed filter can weight the distortion more towards the higher inaudible frequencies (but you need a sample rate higher than 48kHz so that these inaudible ultrasonic frequencies are available for soaking up errors).\\
There is also a good case for 24-bit (or higher) recording in the live studio, because it enables greater headroom (often 24dB or more rather than 18dB) to be left on the recording without encountering quantization errors at low volumes. This means that brief peaks are not harshly clipped, but can be compressed or soft-limited later to suit the final medium.\\

Environments where large amounts of signal processing are required (such as mastering or synthesis) can require even more than 24 bits. Some modern audio editors convert incoming audio to 32-bit (both for an increased dynamic range to reduce clipping, and to minimize noise in intermediate stages of filtering), and some DAW environments (such as recent versions of REAPER and SONAR) use 64-bit audio for their underlying engine.\\



\section{Sampling}

\indent In signal processing, sampling is the reduction of a continuous signal to a discrete signal. A common example is the conversion of a sound wave (a continuous-time signal) to a sequence of samples (a discrete-time signal).\\

A sample refers to a value or set of values at a point in time and/or space.\\

A sampler is a subsystem or operation that extracts samples from a continuous signal. A theoretical ideal sampler produces samples equivalent to the instantaneous value of the continuous signal at the desired points.\\

\subsection{Theory}

\indent For convenience, we will discuss signals which vary with time. However, the same results can be applied to signals varying in space or in any other dimension.\\

Let $x(t)$ be a continuous signal which is to be sampled, and that sampling is performed by measuring the value of the continuous signal every $T$ seconds, which is called the sampling interval. Thus, the sampled signal $x[n]$ given by:
$$
    x[n] = x(nT), with n = 0, 1, 2, 3, ...
$$ 
The sampling frequency or sampling rate $fs$ is defined as the number of samples obtained in one second, or $fs = \frac{1}{T}$. The sampling rate is measured in hertz or in samples per second.\\

We can now ask: under what circumstances is it possible to reconstruct the original signal completely and exactly (perfect reconstruction)?
A partial answer is provided by the Nyquist-Shannon sampling theorem, which provides a sufficient (but not always necessary) condition under which perfect reconstruction is possible. The sampling theorem guarantees that bandlimited signals (i.e., signals which have a maximum frequency) can be reconstructed perfectly from their sampled version, if the sampling rate is more than twice the maximum frequency. Reconstruction in this case can be achieved using the Whittaker-Shannon interpolation formula.\\

The frequency equal to one-half of the sampling rate is therefore a bound on the highest frequency that can be unambiguously represented by the sampled signal. This frequency (half the sampling rate) is called the Nyquist frequency of the sampling system. Frequencies above the Nyquist frequency $fN$ can be observed in the sampled signal, but their frequency is ambiguous. That is, a frequency component with frequency f cannot be distinguished from other components with frequencies $NfN + f$ and $NfN - f$ for nonzero integers $N$. This ambiguity is called aliasing. To handle this problem as gracefully as possible, most analog signals are filtered with an anti-aliasing filter (usually a low-pass filter with cut off near the Nyquist frequency) before conversion to the sampled discrete representation.

\subsection{Observation period}

\indent The observation period is the span of time during which a series of data samples are collected at regular intervals.More broadly, it can refer to any specific period during which a set of data points is gathered, regardless of whether or not the data is periodic in nature. Thus a researcher might study the incidence of earthquakes and tsunamis over a particular time period, such as a year or a century.\\

The observation period is simply the span of time during which the data is studied, regardless of whether data so gathered represents a set of discrete events having arbitrary timing within the interval, or whether the samples are explicitly bound to specified sub-intervals.

\subsection{Practical implications}

\indent In practice, the continuous signal is sampled using an analog-to-digital converter (ADC), a non-ideal device with various physical limitations. This results in deviations from the theoretically perfect reconstruction capabilities, collectively referred to as distortion.
Various types of distortion can occur, including:

\begin{itemize}
\item 	Aliasing. A precondition of the sampling theorem is that the signal be bandlimited. However, in practice, no time-limited signal can be bandlimited. Since signals of interest are almost always time-limited (e.g., at most spanning the lifetime of the sampling device in question), it follows that they are not bandlimited. However, by designing a sampler with an appropriate guard band, it is possible to obtain output that is as accurate as necessary.
\item	Integration effect or aperture effect. This results from the fact that the sample is obtained as a time average within a sampling region, rather than just being equal to the signal value at the sampling instant. The integration effect is readily noticeable in photography when the exposure is too long and creates a blur in the image. An ideal camera would have an exposure time of zero. In a capacitor-based sample and hold circuit, the integration effect is introduced because the capacitor cannot instantly change voltage thus requiring the sample to have non-zero width.
\item	Jitter or deviation from the precise sample timing intervals.
\item	Noise, including thermal sensor noise, analog circuit noise, etc.
\item	Slew rate limit error, caused by an inability for an ADC output value to change sufficiently rapidly.
\item	Quantization as a consequence of the finite precision of words that represent the converted values.
\item	Error due to other non-linear effects of the mapping of input voltage to converted output value (in addition to the effects of quantization).
\end{itemize}

The conventional, practical digital-to-analog converter (DAC) does not output a sequence of dirac impulses (such that, if ideally low-pass filtered, result in the original signal before sampling) but instead output a sequence of piecewise constant values or rectangular pulses. This means that there is an inherent effect of the zero-order hold on the effective frequency response of the DAC resulting in a mild roll-off of gain at the higher frequencies (a 3.9224 dB loss at the Nyquist frequency). This zero-order hold effect is a consequence of the hold action of the DAC and is not due to the sample and hold that might precede a conventional ADC as is often misunderstood. The DAC can also suffer errors from jitter, noise, slewing, and non-linear mapping of input value to output voltage.\\

Jitter, noise, and quantization are often analyzed by modeling them as random errors added to the sample values. Integration and zero-order hold effects can be analyzed as a form of low-pass filtering. The non-linearities of either ADC or DAC are analyzed by replacing the ideal linear function mapping with a proposed nonlinear function.

\subsection{Sampling rate}

\indent When it is necessary to capture audio covering the entire 20 to 20,000 Hz range of human hearing, such as when recording music or many types of acoustic events, audio waveforms are typically sampled at 44.1 kHz (CD), 48 kHz (professional audio), or 96kHz. The approximately double-rate requirement is a consequence of the Nyquist theorem.\\

There has been an industry trend towards sampling rates well beyond the basic requirements; 96 kHz and even 192 kHz are available.This is in contrast with laboratory experiments, which have failed to show that ultrasonic frequencies are audible to human observers; however in some cases ultrasonic sounds do interact with and modulate the audible part of the frequency spectrum (intermodulation distortion). It is noteworthy that intermodulation distortion is not present in the live audio and so it represents an artificial coloration to the live sound.\\

One advantage of higher sampling rates is that they can relax the low-pass filter design requirements for ADCs and DACs, but with modern oversampling sigma-delta converters this advantage is less important.

\subsection{Bit depth}

\indent Audio is typically recorded at 8-, 16-, and 20-bit depth, which yield a theoretical maximum signal to quantization noise ratio (SQNR) for a pure sine wave of, approximately, 49.93 dB, 98.09 dB and 122.17 dB. Eight-bit audio is generally not used due to prominent and inherent quantization noise (low maximum SQNR), although the A-law and u-law 8-bit encodings pack more resolution into 8 bits while increase total harmonic distortion. CD quality audio is recorded at 16-bit. In practice, not many consumer stereos can produce more than about 90 dB of dynamic range, although some can exceed 100 dB. Thermal noise limits the true number of bits that can be used in quantization. Few analog systems have signal to noise ratios (SNR) exceeding 120 dB; consequently, few situations will require more than 20-bit quantization.\\

For playback and not recording purposes, a proper analysis of typical programme levels throughout an audio system reveals that the capabilities of well-engineered 16-bit material far exceed those of the very best hi-fi systems, with the microphone noise and loudspeaker headroom being the real limiting factors.

\subsection{Speech sampling}

\indent Speech signals, i.e., signals intended to carry only human speech, can usually be sampled at a much lower rate. For most phonemes, almost all of the energy is contained in the 5Hz-4 kHz range, allowing a sampling rate of 8 kHz. This is the sampling rate used by nearly all telephony systems, which use the G.711 sampling and quantization specifications.
\chapter{Sound compression techniques and formats}


\section{Introduction}

\indent Audio compression is a form of data compression designed to reduce
the size of audio files. Audio compression algorithms are implemented
in computer software as audio codecs. Generic data compression algorithms
perform poorly with audio data, seldom reducing file sizes much below
87\% of the original, and are not designed for use in real time. Consequently,
specific audio \textquotedbl{}lossless\textquotedbl{} and \textquotedbl{}lossy\textquotedbl{}
algorithms have been created. \\

Lossy algorithms provide far greater compression ratios and are used
in mainstream consumer audio devices. As with image compression, both
lossy and lossless compression algorithms are used in audio compression,
lossy being the most common for everyday use.\\

In both lossy and lossless compression, information redundancy is
reduced, using methods such as coding, pattern recognition and linear
prediction to reduce the amount of information used to describe the
data.\\

The trade-off of slightly reduced audio quality is clearly outweighed
for most practical audio applications where users cannot perceive
any difference and space requirements are substantially reduced.


\section{Lossless audio compression}

\indent Lossless audio compression allows one to preserve an exact copy of
one's audio files, in contrast to the irreversible changes from lossy
compression techniques such as Vorbis and MP3. Compression ratios
are similar to those for generic lossless data compression, and substantially
less than for lossy compression.\\


\subsection{why ?}

\indent The primary use of lossless encoding are: ArchivesFor archival purposes,
one naturally wishes to maximize quality. Editing lossily compressed
data leads to digital generation loss, since the decoding and re-encoding
introduce artifacts at each generation. Thus audio engineers use lossless
compression. Lossless codecs completely avoid compression artifacts.
Audiophiles thus favor lossless compression. A specific application
is to store lossless copies of audio, and then produce lossily compressed
versions for a digital audio player. As formats and encoders improve,
one can produce updated lossily compressed files from the lossless
master. As file storage and communications bandwidth have become less
expensive and more available, lossless audio compression has become
more popular.


\subsection{Existing formats}

\indent Shorten was an early lossless format; newer ones include Free Lossless
Audio Codec (FLAC), Apple's Apple Lossless, MPEG-4 ALS, Monkey's Audio,
and TTA. Some audio formats feature a combination of a lossy format
and a lossless correction; this allows stripping the correction to
easily obtain a lossy file. Such formats include MPEG-4 SLS (Scalable
to Lossless), WavPack, and OptimFROG DualStream. Some formats are
associated with a technology, such as: Direct Stream Transfer, used
in Super Audio CD Meridian Lossless Packing, used in DVD-Audio, Dolby
TrueHD, Blu-ray and HD DVD.


\subsection{Difficulties}

\indent It is difficult to maintain all the data in an audio stream and achieve
substantial compression.\\

First, the vast majority of sound recordings are highly complex, recorded
from the real world. As one of the key methods of compression is to
find patterns and repetition, more chaotic data such as audio doesn't
compress well. In a similar manner, photographs compress less efficiently
with lossless methods than simpler computer-generated images do. But
interestingly, even computer generated sounds can contain very complicated
waveforms that present a challenge to many compression algorithms.
This is due to the nature of audio waveforms, which are generally
difficult to simplify without a (necessarily lossy) conversion to
frequency information, as performed by the human ear.\\

The second reason is that values of audio samples change very quickly,
so generic data compression algorithms don't work well for audio,
and strings of consecutive bytes don't generally appear very often.
However, convolution with the filter {[}-1 1{]} (that is, taking the
first difference) tends to slightly whiten (decorrelate, make flat)
the spectrum, thereby allowing traditional lossless compression at
the encoder to do its job; integration at the decoder restores the
original signal. Codecs such as FLAC, Shorten andTTA use linear prediction
to estimate the spectrum of the signal. At the encoder, the estimator's
inverse is used to whiten the signal by removing spectral peaks while
the estimator is used to reconstruct the original signal at the decoder.


\section{Lossy audio compression}


\subsection{why ?}

\indent Lossy audio compression is used in an extremely wide range of applications.
Lossy compression typically achieves far greater compression than
lossless compression, by discarding less-critical data. The innovation
of lossy audio compression was to use psychoacoustics to recognize
that not all data in an audio stream can be perceived by the human
auditory system.


\subsection{how ?}

\indent Most lossy compression reduces perceptual redundancy by first identifying
sounds which are considered perceptually irrelevant, that is, sounds
that are very hard to hear. Typical examples include high frequencies,
or sounds that occur at the same time as louder sounds. Those sounds
are coded with decreased accuracy or not coded at all. While removing
or reducing these 'unhearable' sounds may account for a small percentage
of bits saved in lossy compression, the real savings comes from a
complementary phenomenon: noise shaping. Reducing the number of bits
used to code a signal increases the amount of noise in that signal.\\


In psychoacoustics-based lossy compression, the real key is to 'hide'
the noise generated by the bit savings in areas of the audio stream
that cannot be perceived. This is done by, for instance, using very
small numbers of bits to code the high frequencies of most signals,
so that softer sounds 'hidden' there simply aren't heard. If reducing
perceptual redundancy does not achieve sufficient compression for
a particular application, it may require further lossy compression.\\

Depending on the audio source, this still may not produce perceptible
differences. Speech for example can be compressed far more than music.
Most lossy compression schemes allow compression parameters to be
adjusted to achieve a target rate of data, usually expressed as a
bit rate. Again, the data reduction will be guided by some model of
how important the sound is as perceived by the human ear, with the
goal of efficiency and optimized quality for the target data rate.
Hence, depending on the bandwidth and storage requirements, the use
of lossy compression may result in a perceived reduction of the audio
quality that ranges from none to severe, but generally an obviously
audible reduction in quality is unacceptable to listeners.\\

Because data is removed during lossy compression and cannot be recovered
by decompression, some people may not prefer lossy compression for
archival storage. In addition, the technology of compression continues
to advance, and achieving a state-of-the-art lossy compression would
require one to begin again with the lossless, original audio data
and compress with the new lossy codec. The nature of lossy compression
(for both audio and images) results in increasing degradation of quality
if data are decompressed, then recompressed using lossy compression.


\subsection{The two methods}


\subsubsection{Transform domain methods}

\indent In order to determine what information in an audio signal is perceptually
irrelevant, most lossy compression algorithms use transforms such
as the modified discrete cosine transform (MDCT) to convert time domainsampled
waveforms into a transform domain. Once transformed, typically into
the frequency domain, component frequencies can be allocated bits
according to how audible they are. Audibility of spectral components
is determined by first calculating a masking threshold, below which
it is estimated that sounds will be beyond the limits of human perception.
The masking threshold is calculated using the absolute threshold of
hearing and the principles of simultaneous masking and, in some cases,
temporal masking - where a signal is masked by another signal separated
by time. Equal-loudness contours may also be used to weight the perceptual
importance of different components. Models of the human ear-brain
combination incorporating such effects are often called psychoacoustic
models.


\subsubsection{Time domain methods}

\indent Other types of lossy compressors, such as the linear predictive coding
(LPC) used with speech, are source-based coders. These coders use
a model of the sound's generator (such as the human vocal tract with
LPC) to whiten the audio signal (i.e., flatten its spectrum) prior
to quantization. LPC may also be thought of as a basic perceptual
coding technique; reconstruction of an audio signal using a linear
predictor shapes the coder's quantization noise into the spectrum
of the target signal, partially masking it.


\part{Image}

\chapter{General image theory}

\indent Nowadays the term Image is usually used to design a 2D (two dimensional)
representation of an object or a person. In fact, images can be natural,
artificial or psychological:
\begin{itemize}
\item Natural image : In the book 6 \textquotedbl{}The republic\textquotedbl{}
written by Plato, the philosopher defines the image as being first
the shadows then the reflections we see in the waters, or at the surface
of opaque's bodies, polished and shining and all representations of
this kind.
\item Artificial: The image artificial may be registered or manufactured.
We are talking about images recorded when they are captured by an
optical device such as cameras, mirrors, lenses, telescopes, microscopes.
Otherwise we are talking about manufactures images when they are produced
manually such as a picture, drawing, or a Computer-Generated Imagery.
\item Psychological: Metaphor, mental representation, dream, imagination.
\end{itemize}

In our study, we will focus on artificial images and more precisely
on images that have been produced via an optical sensor. Now, we speak
about photography. If we analyze this term mark, it is composed of
two roots of Greek origin. The prefix \textquotedbl{} photo \textquotedbl{}
meaning light, clarity and the suffix \textquotedbl{}graphy\textquotedbl{}
which means paint, draw, write. We have literally \textquotedbl{}painted
with light\textquotedbl{}. Photography is the process of creating
a picture by recording radiation on a sensitive medium, such as film,
or an electronic sensor. 
\newpage{}
Digital imaging uses an electronic
image sensor to record the image as a set of electronic data rather
than as chemical changes on film.
\newline
\parpic{\includegraphics{Sensor}} \vspace{3mm}
A charge-coupled device (CCD) is a photosensitive electronic component
used to convert an electromagnetic radiation to analog signals (electric
charges). This signal is then amplified, and then digitized by an
Analog-digital converter and finally transformed to obtain a digital
image.\\


We'll discuss briefly here of the main formats commonly used for the
representation of digital images in two dimensions, defined by a matrix
of pixels. These representations are images called bitmap.\\
\newline
\newline
In the framework of this document, we'll rank image formats into four
categories:
\begin{itemize}
\item The raw format: the format is designed by a simple matrix; there is
no compression.
\item Lossless format: these formats carry out a compression on the image
matrix. The transformation between a raw format and a lossless compression
is a bijective function.\\
 \parpic{\includegraphics{bijection}} \emph{Every pixel 1,2,3,4
in the raw format (X set) has a one-to-one correspondence in the lossless
compression (Y set).}
 \\
 \\
 \\
 \\
 \\
 \\
 \\
 \\
\item The lossy compression: they permit to achieve better compression rate
but there is an image degradation. They generally use a quantization
on a frequency transformed, such as the Direct Cosine Transform (DCT)
for the JPEG format.
\item The composites: these formats may be considered as containers which
include multiple images.
\end{itemize}

In practice, image formats can generally be distinguished by their
header characteristic (with presence of magic word for example) or
by the extension of files names. These specifications allow the computer
to select the adequate viewer to open the image.


\section{Colorimetric Spaces}

\indent A pixel (basic unit on the matrix picture) is characterized by its
color. The color is a luminous perception received by the eye and
interpreted by the brain: this perception is different from an animal
to another or within a group of individuals of a same species. A color
corresponds physically to a mixture of lights issued by several wavelengths:
we have several colorimetric bases to represent colors visible by
human.\\

\parpic(7.5cm,4cm){\includegraphics[width=7.5cm,height=4cm]{Wavelength}}
In physics, wavelength is the distance between repeating units of
a waveform. \\
 \\
 \\
 \\
 \\
 \\
 \\
 \textbf{Wavelengths are a common way of describing light waves.}
\\
 \\
\indent The speed of light is the velocity of electromagnetic wave in
vacuum, which is 300,000 km/sec. Light travels slower in other media,
and different wavelengths travel at different speeds in the same media.
When light passes from one media to another, it changes speed, which
causes a deflection of light called refraction.
\newpage
Wavelength = Speed of light in vacuum/Frequency. The index of refraction, $n$, is defined as the following ratio:\\
$n$=Speed of light in vacuum / Speed of light in a specific medium.\\

\begin{figure}[h]
\centering
\includegraphics{visibleColors}
\caption{Visible Colors}
\end{figure}

The visible light is the part of the electromagnetic spectrum which
is visible to the human eye. There is no exact limit to the visible
spectrum: the human eye adapted to the light has generally a maximum
sensitivity to the wavelength of the light which is approximately
550 nm, which corresponds to a yellow-green. Generally, it considers
that the eye's response covers the wavelengths of 380 nm to 780 nm
although a range of 400 nm to 700 nm is more common. This range of
wavelength is important for us because of wavelengths shorter than
380 nm damage the structure of organic molecules while those longer
than 720 nm would be absorbed by water, which is an important element
for the human body.\\

\textbf{Approximate wavelength (in
vacuum) and frequency ranges for the various colors} \\
\includegraphics{waveToFrequency} 1 terahertz (THz) = 103 GHz
= 106 MHz = 1012 Hz\\
 1 nm = 10-9 um = 10-6 mm = 10-3 m.\\
The white light is a mixture of the colors of the visible spectra.

\section{The Human eye}
\indent The varying sensitivity of different cells in the retina to light
of different wavelengths allows us to distinguish several colors.
There are three types of color receptor cell (cones) in the retina.
\begin{itemize}
\item The cones L, sensitive to long-wavelength (700 nm), therefore red
\item The cones M, sensitive to medium wave (546 nm), the greens
\item The cones S, sensitive to short wave (436 nm), therefore the blues
\end{itemize}
\parpic{\includegraphics{oeil}} Although the composition of the
light is very complex, the eye reduced this complexity to three colors
thanks to these cones. The other type of light-sensitive cell in the
eye, the rod is only sensitive to the intensity of light and is active
only in the twilight (it's saturated from 500 photons per second).
The cones are activating themselves from 10 photons by seconds, which
explains why we see in black and white when the light is low. \newpage{}

\indent There are two principles of the colors restitution, the subtractive
calculation of colors and the addendum calculation of colors.

\begin{itemize}
\item The subtractive calculation of colors (or subtractive synthesis) is
the calculation made by the withdrawal of some wavelengths of light,
and therefore on what is not a source of light. For example, the grass
or flowers appear green, because they absorb the blue and red. These
are the waves that they use in the photosynthesis.
\item The additive calculation of colors (or additive synthesis) is the
calculation made by the addition of wavelengths of sources light.
\end{itemize}
The principle of the additive synthesis of the colors is to rebuild,
for a human eye, the equivalent (appearance) of any visible color
by the addition of lights from three monochromatic sources.
\newline
In observing the rainbow, we can see that the droplets of rain break
down light in six colors, as a prism could do.
\newline
\parpic{\includegraphics{prism}} Newton copied this phenomenon
by breaking down the sunlight through an optical prism (a prism made
from glass with a triangular basis). He succeeded in breaking down
the white light in all the different colors of the spectrum. \\
The physicist Young did the opposite of Newton. He re-composed
the light. He has made the six colors of the spectrum converged to
obtain the white light. He went even further by demonstrating that
the six colors of the spectrum could be reduced to three.\\

\input{rgb.tex}

Additive color mixing: adding red to green yields yellow; adding all
three primary colors together yields white. \\

He can reconstitute the white light with these three colors (Figure~\ref{rgb}). He
showed that by mixing two by two, he could obtain the other. It is
the way to distinguish the primary color from the secondary color.\\

This system of lights mixture means that the clarity is obtain
in adding more and more colors. For example, green and red give the
yellow unquestionably more clear. We are talking about additive system.\\

The three optimal wavelengths, the so-called primary colors meet
two criteria:
\begin{itemize}
\item Colors must correspond as closely as possible to wavelengths which
the cones are the most sensitive (like the green and the blue)
\item The wavelengths must enable in a specific way some cones (red)
\end{itemize}

The three primary colors are the following: red, green and blue. The
RGB system may also, in an equivalent way, be expressed according
to three other components which are the hue, the value and the saturation
and correspond in French in the TSL system (Teinte, Saturation et
Luminosit\'{e} ou valeur) and in English in the HSL system (according
to the three English words Hue, Saturation and Lightness).\\

There are mathematical formulas allowing us to transform the three
RGB components to the three TSL components (and conversely).\\

We called secondary colors (in the additive system) the lights
of saturated colors obtained by mixing two colors in pairs and in
equal shares the lights of primary colors. Complementary colors are
the colors which when combined; contain all colors of the spectrum.\\

The three secondary colors in the additive system are:
\begin{itemize}
\item Cyan (lights green and blue, complementary to the red);
\item Magenta (red lights and blue, complementary to the green);
\item Yellow (lights green and red, complementary to the blue);
\end{itemize}

These colors are in fact the primary colors of the subtractive system
and give the CMJ system (in the English language CMY or YMC). \\

When we mix more than two primary colors, these alter the color.
It thus loses in saturation and earns in value, to bring closer the
white.\\

In printing, painting and in the art of stained
glass, it's impossible to obtain a color throw an addition of lights.
The solution is to obtain the desired color by pigments of colors.\\

When they are enlightened, opaque objects reflect a part or all light
they receive and absorb the rest. We can obtain the colors of the
spectrum either by mixing pigments either by filtering a part of the
spectrum which illuminates the object.\\

The pigments that mix absorb more and more light and are becoming
darker. For example, the yellow and magenta give the orange-red.\\

We are talking about in this case of subtractive synthesis. And in
this case the primary colors associated to differentiate of primary
colors of the additive system because they correspond to secondary
colors of the additive system.\\

These colors, cyan, magenta, and yellow give the CMY system. In theory,
and if we had perfect pigments, the use of the three fundamental colors
would obtain:
\begin{itemize}
\item Blue by mixing cyan and magenta ;
\item The green by mixing cyan and yellow ;
\item The red in mixing the magenta and yellow.
\end{itemize}

\input{cmy.tex}

In practice, the subtractive synthesis from the currents coloring
agents does not allow to obtain all colors that are visible by the
human eye. In addition, even perfect dyes would continue to pose a
problem because they often overlap with a chemical reaction which
alters the color final.\\

Indeed, when we mix two colored materials, you get the desired hue,
but the color loses in vivacity, and the addition of white to offset
this loss is not acceptable because the white alter the shade, so
we don't obtain the desired value. That's why several ink-jet printers
added two pastel shades to the three fundamental colors in order to
obtain a better rendering. The addition of these pastel shades (around
2 to 5 colors) also permits to reduce the perception of the ink points
in light areas.\\

Finally, a black obtained by mixing of the three fundamental would
be of higher density, would achieve more details, but it is expensive
and the quality is not good if the proportion in mixing inks is inaccurate.
The inks overlay is never perfect, neither the opacity. In addition,
cyan, magenta and yellow do not give black. In printing, we use always
at least the black as the fourth color, which corresponds to the Quadrichromy.


\chapter{BMP storage and other formats}

\indent In computer science, an image can be built in different ways to have
an optimized storage. Indeed, a black and white image and a colored
image will not be stored in the same way.\\

A binary digital image is characterized by pixels coded on a byte
defined by two values: 0 or 1. The use of binary image can for example
be used for the representation of scanned documents into black and
white or be used to faxes some documents. Formats commonly used are
PBM (The Portable Bitmap Utilities) and TIFF (Tagged Image File Format)
for the faxes.

\parpic{\includegraphics{monochrome}}It's also possible to encode
a pixel by a single component of luminosity, for example to represent
images in grey levels. Cells of the human eyes specialized in the
perception of the brightness allow us to distinguished approximately
200 intensities. That's why we use generally 8 bytes (256 values)
for coding a monochromatic pixel. \\
 \\
We can see on the right a photograph rendered with a small monochrome
palette.\\
 \\
 \\
 For most of digital applications consisting to represent an image
on the screen, the additive synthesis RGB (Red, Blue, Green) is used.
Screens with cathode tubes or light-emitting diodes use these primary
colors for the synthesis of colors visible by human. Each color is
represented by a triplet of bit (0-255, 0-255, 0-255). The triplet
(255, 255, 255) is the white color and the triplet (0, 0, 0) is the
black color. \newpage{} \parpic{\includegraphics{slider}} A typical
RGB color selector in graphic software. Every slider ranges from 0
to 255.
 \\
 \\
\indent There are several raw formats in RGB basis. We quote here some
examples of raw formats using the RGB (this list has no vocation to
be exhaustive) in storing sequentially the pixels line by line. A
header is present for specifying the size and the depth of the picture
\section{BMP Format}
\indent The BMP file format has been created by Microsoft to represent graphics
images in any of several different display and compression options.
\newline
\indent The main advantage of BMP format is the fact that each pixel can be
modify independently of the others. This modification doesn't degrade
the image because lossy compression is not used.
\newline
\indent The main disadvantage of the BMP format is the file size. It's very
huge compared to JPEG, GIF or other lossy compression format.
\begin{itemize}
\item 2 colors BMP has one bit per pixel (1 byte per 8 pixels)
\item 16 colors BMP has four bits per pixel (1 byte per 2 pixels)
\item 256 colors BMP has eight bits per pixel, (1 byte per 1 pixel)
\item Image with reals colors, (3 bytes per 1 pixel)
\end{itemize}

BMP is the format we used for our tool. Multimedia-Toolbox can manipulate
24 and 32 bits BMP images. We have chosen this format to facilitate
the image manipulation in that is usually uncompressed and non-lossy.
So we will focus only with the uncompressed RGB color which has three
8-bit bytes per pixel.
\newline
\newpage{}
\indent The following table contains a description of the contents
of the bmp file. For every field, the file offset, the length and
the contents will be given. Lines in blue are informations used by
Multimedia-Toolbox to read or write a BMP file.


\begin{figure}[h]
\centering
\includegraphics[scale=0.8]{bmpData}
\caption{BMP Data}
\end{figure}

When we display this file in a viewer application, we obtain:

\begin{table}[h]
\centering
\includegraphics[scale=0.75]{picture}
\caption{RGB Data}
\end{table}

\newpage{}

Let's focus on the important information of the file:\\

Offset 0 is used by our tool to check the format of the file. In fact,
lot of images which are exchanged on the Internet possess a bad extension.
In order to not generate fatal errors, the program must verify the
information.\\

Offset 10 is used to know where the data will be read.\\

The width and the height in offset 18 and 22 are used to read all
the pixels of the Image.\\

Offset 26 is checked to verify if the image is a correct RGB or RGBa
(with alpha layer) format. 24 bits for a basic uncompressed image
and 32 bits for images with an alpha layer. The alpha layer is used
for the transparency of a pixel. Like the red, blue and green, the
layer is coded on one byte (0-255). When the alpha layer is set to
0, the pixel is completely transparent, and when the alpha is set
to 255, the pixel is completely opaque.\\


Offset 30 is used to check if the image is not compressed. Our tool
check if the value is 0(no compression).\\
Our interest deals with the 24-bit uncompressed RGB color mode.
In this mode, there is no palette of color. Each pixel consists of
an 8-bit blue byte, a green byte, and a red byte in that order. As
you can see, pixel are stored from left to right upwards line by line
starting at the lower left. The first pixel is red and the last pixel
is blue.
\newline
\newline
When you read a BMP file, there is crucial information that must be
attended. Each line must be divisible by 4. If you don't respect this
boundary, the display of your image will be shifted, teared, or an
error will be displayed. The solution is to add some padding bits
at the end the lines. Because three does not divide into four, zero,
one, two, or three padding bytes must be added to the end of each
line. The exact number of padding bits is set by the number of horizontal
pixels per line.
\newline
\newline
\indent For example, in BMP image which has 230 pixels in the width must have
(230 modulo 4 equals to 2) two adding bytes per line.


\chapter{DCT and other image transforms}

\section{Introduction to DCT}

\indent Direct Cosine Transform (DCT) is widely used in signal and image manipulation, especially as a component of a compression algorithm. This is because DCT is very efficient at grouping energy: most information is held by only a few coefficients, allowing the neglect of zero or near-zero coefficients and thus reduce the size of the stored data.\\

The different variants of DCT are used by such algorithms as JPEG, MPEG, AAC, Vorbis, MP3, etc.\\

DCT is similar to DFT (Discrete Fourier Transform). However, where DFT uses complex exponentials and therefore generates complex coefficients, DCT uses real numbers, generating real coefficients.\\

Our software allows the performing of DCT-II simultaneously with three $8\times8$ data structures (typically three $8\times8$ pixel matrix, each representing one color component) which is the most used in compression algorithms, including JPEG.\\

\section{DCT Optimization}

\indent The theoretical formula for this algorithm is:\\
$$
DCT(i,j) = \frac{1}{\sqrt{2}}
	C(i)C(j)
	\displaystyle\sum_{x=0}^{N-1}
	\displaystyle\sum_{y=0}^{N-1}
	pixel(x,y)
	\cos\left( \frac{(2x+1)i\pi}{2N}\right)
	\cos\left( \frac{(2y+1)j\pi}{2N}\right)
$$
where $C(x)=\frac{1}{\sqrt{2}}$ if $x$ equals 0 and 1 if $x>0$ \\

However, such a double loop is not optimized for efficient computation. To improve the computation speed, we have used a cosine transform matrix~$C$:\\
$$
C_{i,j} = \left\{
	\begin{array}{ll}
	\frac{1}{\sqrt{N}} & \mbox{if } i=0 \\
		\sqrt{\frac{2}{N}} \cos\left( \frac{(2j+1)i\pi}{2N}\right) & \mbox{if } i>0
	\end{array}
\right.
$$

Then, the DCT by block can be expressed as:\\
$$DCT = C \ast Pixels \ast Ct$$
This only requires $2N$ multiplications and $2N$ sums. We have obtained an $O(N)$ cost per block instead of an $O(N^{2})$ cost.

\section{Other image transforms}

\indent The Direct Cosine Transform (DCT) is not the only transform usable on images.
In particular, the Discrete Fourier Transform (DFT) and Discrete Wavelet Transform (DWT) are also used.

We have already discussed the DFT in the sound part of this report.

DWT, which is used in some particular JPEG formats, uses the same principles as DCT and DFT. However, as with other wavelet transforms, it captures both frequency and location information when most transforms only capture frequency.



\chapter{Lossless and lossy compression}

\indent An image stored in the form of matrix of pixels contains
many redundancies in its structure (a large band of uniform pixels,
grounds repeated) which may benefit from algorithms compression. The
theoretical limit of the performance of lossless compression is defined
by the entropy of the source: the obtaining of better rates of compression
requires the use of compression techniques with loss degrading the
information by quantification processes or transformation.\\

The use of lossy compression is very useful to reduce the size of
images such as the images from a real signal (photographs). We seek
to reduce the entropy of the image while minimizing the degradation
perceptible by the man (very subjective). We can use as a measure
of objective distortion the average quadratic error on the pixels.
\chapter{JPEG}

\section{Introduction to JPEG}

\indent JPEG is one of the most used image compression formats. It can be used either as a lossless algorithm (with a compression ratio around 2) or as a lossy algorithm (with compression ratios typically between 3 and 10 depending of image and quality).

There is many different variants of JPEG. We will only present it in a general manner, not going into the details of specific formats.

\section{JPEG algorithm}

\indent This compression algorithm uses a number of steps to obtain such compression. Decompression is made up of the same steps in reverse.

These steps typically are (they might vary depending on the exact JPEG algorithm used) :
\begin{enumerate}
  \item Cutting up the picture: The image is cut up in $8\times 8$ pixel blocks (or in some cases $16\times 16$).
  \item Transforming colors: Typically color storage of images is RGB (Red Green Blue). 
  However, YUV or YCbCr image storage (who store luma and chroma components separately) allow for better compression ratios because the eye is more sensitive to luma than to chroma. 
  Therefore, many JPEG algorithms transform JPEG to YUV or YCbCr before the actual compression.
  \item Under-sampling: When the image has been transformed into YUV or YCbCr luma and chroma data, chroma data can be under-sampled, typically by a factor of 2, to make use of the human's eye lesser sensitivity to chroma than to luma.
  \item DCT: A DCT (Direct Cosine Transform) is then applied on the image blocks. This is very calculation intensive and does not compress in itself. However, switching to the frequency domain is necessary for the next compression steps.
  \item Quantification: Quantification is the process of ignoring or reducing the importance of higher frequencies to which the human eye is less sensitive.
This allows to change a number of coefficients into a zero value, which will take much less space to code in the next steps of the compression than non-zero values would.
This is main step where data is lost and is tehrefore ignored when doing lossless compression.
  \item Diagonal coding: The data is coded in diagonal rather than in line sor columns.
  \item RLE: RLE (Run-Length Encoding) is applied on the 0 value (made very frequent by quantization).
For example this transforms 00000000 into $8\times 0$ which is much shorter.

  \item Entropic coding: A coding minimizing the storage size without losing data is then applied, typically a Huffmann or arithmetic coding.
 Huffmann coding is used most of the time because of its efficient both in compression and ratio and compression time.
\end{enumerate}


\part{Implementation}


\chapter{Qt}

\begin{figure}[h!]
\centering
\includegraphics[scale=0.75]{qt_logo.png} 
\caption{Qt logo}
\end{figure}

\section{Introduction}
\indent The Qt toolkit is a C++ class library and a set of tools for building multiplatform
GUI programs using a "write once, compile anywhere" approach. Qt lets
programmers use a single source tree for applications that will run on Windows
95 to XP, Mac OS X, Linux, Solaris, HP-UX, and many other versions of
Unix with X11.A version of Qt is also available for Embedded Linux, with the
same API.\\

In our project, we used Qt 4.4.3 version.

\section{History}
\indent Haavard Nord and Eirik Chambe-Eng (the original developers of Qt and the CEO and President, respectively, of Trolltech) began development of "Qt" in 1991, three years before the company was incorporated as Quasar Technologies, then changed the name to Troll Tech, and then to Trolltech.\\

The toolkit was called Qt because the letter Q looked appealing in Haavard's Emacs font, and "t" was inspired by Xt, the X toolkit.\\

Controversy erupted around 1998 when it became clear that KDE was going to become one of the leading desktop environments for Linux. As KDE was based on Qt, many people in the free software movement worried that an essential piece of one of their major operating systems would be proprietary.\\

This gave rise to two efforts: the Harmony toolkit, which sought to duplicate the Qt Toolkit under a free software license, and the GNOME desktop, which intended to supplant KDE entirely. The GNOME Desktop uses the GTK+ toolkit, which was originally written for the GIMP, and primarily uses the C programming language.\\

The first two versions of Qt had only two flavours: Qt/X11 for Unix and Qt/Windows for the Windows platform. The Windows platform was only available under the proprietary license which meant free/open source applications written in Qt for X11 could not be ported to Windows without purchasing the QPL edition. In the end of 2001, Trolltech released Qt 3.0 which added support for the Mac OS X platform. The Mac OS X support was available only in the proprietary license, until June 2003, where Trolltech released Qt 3.2 with Mac OS X support available under the GPL.\\

In 2002 members of the KDE on Cygwin project began porting the GPL licensed Qt/X11 code base to Windows.[20] This was in response to Trolltech's refusal to license Qt/Windows under the GPL on the grounds that Windows was not a free software/open source platform.[21][22] The project achieved reasonable success although it never reached production quality.\\

This was resolved when Trolltech released Qt/Windows 4 under the GPL in June 2005. Qt 4 now supports the same set of platforms in the free 
software/open source editions as in the proprietary edition, so it is now possible to create GPL-licensed free/open source applications using Qt on all supported platforms. Nokia acquired Trolltech ASA in 2008 and changed the name to Qt Software.\\

\section{License}
\indent Until version 1.45, source code for Qt was released under the FreeQt license - which was viewed as not compliant with the open source principle by the Open Source Initiative and the free software definition by Free Software Foundation, because while the source was available it did not allow the redistribution of modified versions.\\

With the release of version 2.0 of the toolkit, the license was changed to the Q Public License (QPL), a free software license but one regarded by the Free Software Foundation as incompatible with the GPL. Compromises were sought between KDE and Trolltech whereby Qt would not be able to fall under a more restrictive license than the QPL, even if Trolltech were bought out or went bankrupt. This led to the creation of the KDE Free Qt foundation, which guarantees that Qt would fall under a BSD-style license should no free software/open source version of Qt be released during 12 months.\\

Later Qt became available under a dual license, the GPL v2 or v3 with special exception[15] and a proprietary commercial license on all supported platforms. The commercial license allows the final application to be licensed under various free software/open source licenses such as the LGPL or the Artistic License, or a proprietary software license.\\

As announced on January 14, 2009, Qt version 4.5 adds another option, the LGPL, which should make Qt more suitable for non-GPL open source projects and for commercial users.\\

All editions support a wide range of compilers, including the GCC C++ compiler and the Visual Studio suite.

\section{Example}
\subsection{Hello Word}

\indent This program is a simple "Hello world" example. It contains only the bare minimum you need to get a Qt application up and running. The picture\ref{hw} is a screenshot of this program.

\begin{figure}
\centering
\includegraphics[scale=0.8]{helloword.png} 
\caption{"Hello Word" with Qt}
\label{hw}
\end{figure}

\lstset{
	backgroundcolor=\color{lbcolor},
	tabsize=4,
	rulecolor=,
	language=C++,
    basicstyle=\scriptsize,
    upquote=true,
    aboveskip={1.5\baselineskip},
    columns=fixed,
    showstringspaces=false,
    extendedchars=true,
    breaklines=true,
    prebreak = \raisebox{0ex}[0ex][0ex]{\ensuremath{\hookleftarrow}},
    frame=single,
    showtabs=false,
    showspaces=false,
    showstringspaces=false,
    identifierstyle=\ttfamily,
    keywordstyle=\color[rgb]{0,0,1},
    commentstyle=\color[rgb]{0.133,0.545,0.133},
    stringstyle=\color[rgb]{0.627,0.126,0.941},
}


\begin{lstlisting}
#include <QApplication>
#include <QLabel>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QLabel label("Hello, world!");
    label.show();
    return app.exec();
}
\end{lstlisting}


\subsection{Line by Line Walkthrough}

\begin{lstlisting}
#include <QApplication>
\end{lstlisting}

This line includes the \texttt{QApplication} class definition. There has to be exactly one \texttt{QApplication} object in every GUI application that uses Qt. QApplication manages various application-wide resources, such as the default font and cursor.

\begin{lstlisting}
#include <QPushButton>
\end{lstlisting}

This line includes the \texttt{QPushButton} class definition. For each class that's part of the public Qt API, there exists a header file of the same name that contains its definition.\\

\texttt{QPushButton} is a GUI push button that the user can press and release. It manages its own look and feel, like every other \texttt{QWidget}. A widget is a user interface object that can process user input and draw graphics. The programmer can change both the overall look and feel and many minor properties of it (such as color), as well as the widget's content. A \texttt{QPushButton} can show either a text or a \texttt{QIcon}.

\begin{lstlisting}
int main(int argc, char *argv[])
{
\end{lstlisting}

The \texttt{main()} function is the entry point to the program. Almost always when using Qt, \texttt{main()} only needs to perform some kind of initialization before passing the control to the Qt library, which then tells the program about the user's actions via events.\\
The \texttt{argc} parameter is the number of command-line arguments and \texttt{argv} is the array of command-line arguments. This is a standard C++ feature.\\

\begin{lstlisting}
	QApplication app(argc, argv);
\end{lstlisting}

The \texttt{app} object is this program's \texttt{QApplication} instance. Here it is created. We pass \texttt{argc} and \texttt{argv} to the \texttt{QApplication} constructor so that it can process certain standard command-line arguments.\\

The \texttt{QApplication} object must be created before any GUI-related features of Qt are used.

\begin{lstlisting}
	QPushButton hello("Hello world!");
\end{lstlisting}

Here, after the \texttt{QApplication}, comes the first GUI-related code: A push button is created.\\

The button is set up to display the text "\textbf{Hello world!}". Because a parent window is not specified (as second argument to the \texttt{QPushButton} constructor), the button will be a window of its own, with its own window frame and title bar.

\begin{lstlisting}
	hello.resize(100, 30);
\end{lstlisting}

The button is set up to be 100 pixels wide and 30 pixels high (excluding the window frame, which is provided by the windowing system). We could call \texttt{QWidget::move()} to assign a specific screen position to the widget, but instead we let the windowing system choose a position.

\begin{lstlisting}
	hello.show();
\end{lstlisting}

A widget is never visible when it is created. \texttt{QWidget::show()} must be called to make it visible.

\begin{lstlisting}
     return app.exec();
 }
\end{lstlisting}

This is where \texttt{main()} passes control to Qt. \texttt{QCoreApplication::exec()} will return when the application exits. (\texttt{QCoreApplication} is \texttt{QApplication}'s base class. It implements \texttt{QApplication}'s core, non-GUI functionality and can be used when developing non-GUI applications.)

\subsection{Compiling}
\indent To compile a C++ application, a makefile is needed. The easiest way to create a makefile for Qt is to use the \texttt{qmake} build tool supplied with Qt.

\begin{lstlisting}
qmake -project
qmake
\end{lstlisting}

The first command tells \texttt{qmake} to create a project file (a \texttt{.pro} file). The second command tells it to create a platform-specific makefile based on the project file.

\chapter{Qwt}

\section{Presentation}
\indent Qwt\footnote{Qt Widgets for Technical Applications} is a Qt extension. Its main widget, \texttt{QwtPlot}, is used de represent graphically data in 2D.\\

Once the object is created, items can be added to the drawing: curves, markers, grids, images, SVG objects, scales...\\
The flexibility of the library allows to easily add any new type of item.\\

A \texttt{QwtPlot} can have up to 4 axes (vertical left or right, and horizontal at the top or bottom), each item is attached to an X-axis (up or down) and a Y-axis (left or right) to map the actual item (e.g. $y=f(x)$ for a curve) the system of coordinates Qt (\texttt{QPoint(x, y)}).\\

Visually, each axis can be adjusted very precisely (extrema, setting the display graduations, mapping linear, logarithmic or user-defined).\\

A selection box can also select (with the mouse or keyboard) graphically a point, a rectangle, with the advantage that the items are not drawn again, which leads to good performance. The details screen selection (pixels) are converted into actual coordinates (for the definition of an item).
This method is used to implement a zoom.\\

There are other widgets than \texttt{QwtPlot}: the sliders with graduations, compasses, knobs, thermometers, etc. Scales graduations can be used independently of the subject \texttt{QwtPlot}.\\

Qwt is distributed under the terms of the Qwt License, Version 1.0, and it can be usable in all environments where Qt is installed. It is compatible with Qt 3.3.x and Qt 4.x.
\section{Some Screenshots}

\begin{figure}[h!]
\centering
\includegraphics[scale=.55]{sinus.png} 
\caption{Sinusoid plots}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=.55]{histogram.png} 
\caption{Histogram}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=.55]{spectrogram.png} 
\caption{Spectrogram}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=.55]{plot.png} 
\caption{A complex plot}
\end{figure}


\part{Conclusion}

\indent In this report, we described Multimedia Toolbox as a project that helps us to manipulate
sounds and images and display them graphically.\\

With all the accumulated effort invested in developing Multimedia Toolbox, there are reasons to believe that at
the end of the project that it is quite difficult to manipulate and compress sounds and images. We summarize the progress with respect to the main objectives of the project: simplicity, efficiency, effectiveness, and portability.\\

\begin{itemize}

\item  Simplicity:

 The main objective of our project was the simplicity of use, in other words, we sought to create a software easy to understand and to exploit by everybody. The pleasant user interface of the software clearly makes the display of sounds and images much more intuitive for the user. Besides,we were confronted to many problems but specialized solutions tailored for scheduling problems and optimizations of existing tools have led to significant improvements.

\item Efficiency and effectiveness:

 As we know, the two primary objectives of project management are that the project should be effective and efficient and this is what we were searching to do during the software programming. We were seeking for an efficient application which be able to exploit maximally our resources,to avoid unnecessary idle time, delays or wasted time brought about by undertaking tasks or activities. \\
 However, to have have an effective software, we had had to complete the project tasks, to establish objectives including the required needs of the user producing quality standards that have been specified to satisfy the needs. In addition, our application had had to be capable of responding to the changes in the environment in which the system will operate accordingly to the change in the requirement of the user.


\item Portability:

 The portability of the application represented the aim of our programming processus, we wanted an application able to be implemented in many OS, this is why we used C++ as programming language. In spite of that, we had had to test it many times in Linux and Windows to be sure that the software don't show any anomaly.

\end{itemize}

 This project has taught us how to manage a project even by sharing tasks between us or by helping each other to correct the various difficulties encountered or by learning from the mistakes of others.

\part{Glossary}

\begin{itemize}
\item Multimedia :media and content that uses a combination of different content forms. The term can be used as a noun (a medium with multiple content forms) or as an adjective describing a medium as having multiple content forms.
\item PCM: Pulse-code modulation is a digital representation of an analog signal where the magnitude of the signal is sampled regularly at uniform intervals, then quantized to a series of symbols in a numeric (usually binary) code.
\item Sound :is a traveling wave which is an oscillation of pressure transmitted through a solid, liquid, or gas, composed of frequencies within the range of hearing and of a level sufficiently strong to be heard, or the sensation stimulated in organs of hearing by such vibrations.
\item Compression: Data compression, the process of encoding information using fewer bits
    \begin{itemize}
    \item Image compression
    \item Audio data compression
    \item Video compression
    \end{itemize}
\item Audio data compression:form of data compression designed to reduce the size of audio files. Audio compression algorithms are implemented in computer software as audio codecs.
\item WAV (or WAVE): short for Waveform audio format, also known as Audio for Windows, is a Microsoft and IBM audio file format standard for storing an audio bitstream on PCs.
\item LPCM :Linear pulse code modulation is a method of encoding audio information digitally. The term also refers collectively to formats using this method of encoding. The term PCM, though strictly more general, is often used to describe data encoded as LPCM.
\item Chunk:A fragment of information which is used in many multimedia formats, such as PNG, MP3, AVI etc.
\item Riff: Resource Interchange File Format (RIFF) is a generic meta-format for storing data in tagged chunks.
\item Sound pressure: the difference between the average local pressure of the medium outside of the sound wave in which it is traveling through (at a given point and a given time) and the pressure found within the sound wave itself within that same medium.
\item Decibel:is a logarithmic unit of measurement that expresses the magnitude of a physical quantity (usually power or intensity) relative to a specified or implied reference level. Since it expresses a ratio of two quantities with the same unit, it is a dimensionless unit.
\item FFT: Fast Fourier transform (FFT) is an efficient algorithm to compute the discrete Fourier transform (DFT) and its inverse. There are many distinct FFT algorithms involving a wide range of mathematics, from simple complex-number arithmetic to group theory and number theory.
\item DFT: Discrete Fourier transform (DFT) is a specific kind of Fourier transform, used in Fourier analysis. It transforms one function into another, which is called the frequency domain representation, or simply the DFT, of the original function (which is often a function in the time domain). But the DFT requires an input function that is discrete and whose non-zero values have a limited (finite) duration.
\item Equalization:he process of using passive or active electronic elements or digital algorithms for the purpose of altering (originally flattening) the frequency response characteristics of a system.
\item Quantization :the procedure of constraining something from a continuous set of values (such as the real numbers) to a discrete set (such as the integers).
\item Sampling: converting a continuous signal into a discrete signal
\item Audio noise:unwanted residual electronic noise signal that gives rise to acoustic noise heard as 'hiss'. This signal noise is commonly measured using A-weighting or ITU-R 468 weighting
\item Lossless data compression: class of data compression algorithms that allows the exact original data to be reconstructed from the compressed data. The term lossless is in contrast to lossy data compression, which only allows an approximation of the original data to be reconstructed, in exchange for better compression rates.
\item Lossy compression: is the method where compressing data and then decompressing it retrieves data that is different from the original, but is close enough to be useful in some way. Lossy compression is most commonly used to compress multimedia data (audio, video, still images), especially in applications such as streaming media and internet telephony.
\item Image: is an artefact, or has to do with a two-dimensional (a picture), that has a similar appearance to some subject-usually a physical object or a person.
\item CCD: charge-coupled device is an analog shift register that enables the transportation of analog signals (electric charges) through successive stages (capacitors), controlled by a clock signal.
\item Raw image format:it contains minimally processed data from the image sensor of either a digital camera, image or motion picture film scanner. Raw files are so named because they are not yet processed and therefore are not ready to be used with a bitmap graphics editor or printed.
\item Pixel:(or picture element) is the smallest item of information in an image. Pixels are normally arranged in a 2-dimensional grid, and are often represented using dots, squares, or rectangles.
\item RGB color model: is an additive color model in which red, green, and blue light are added together in various ways to reproduce a broad array of colors. The name of the model comes from the initials of the three additive primary colors, red, green, and blue.
\item CMJ(FR)/CMYK(EN) system:  (short for cyan, magenta, yellow, and key (black), and often referred to as process color or four color) is a subtractive color model, used in color printing, also used to describe the printing process itself. Though it varies by print house, press operator, press manufacturer and press run, ink is typically applied in the order of the abbreviation.
\item BMP: Bitmap or DIB file format (for device-independent bitmap), is an image file format used to store bitmap digital images, especially on Microsoft Windows and OS/2 operating systems.
\item TIFF: Tagged Image File Format is a file format for storing images, including photographs and line art. It is as of 2009 under the control of Adobe Systems.
\item DCT: Discrete cosine transform  expresses a sequence of finitely many data points in terms of a sum of cosine functions oscillating at different frequencies. DCTs are important to numerous applications in science and engineering, from lossy compression of audio and images (where small high-frequency components can be discarded), to spectral methods for the numerical solution of partial differential equations.
\item JPEG:A commonly used method of compression for photographic images. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality. JPEG typically achieves 10:1 compression with little perceptible loss in image quality.
\item QT: Qt (pronounced as the English word "cute") is a cross-platform application development framework, widely used for the development of GUI programs (in which case it is known as a widget toolkit), and also used for developing non-GUI programs such as console tools and servers.
\item Qwt: library contains GUI Components and utility classes which are primarily useful for programs with a technical background.
\end{itemize}

\part{Bibliography}

\begin{itemize}
\item University of Manitoba website, \url{http://umanitoba.ca/}
\item zur Mikrofonaufnahmetechnik und Tonstudiotechnik website
\item A practical handbook of speech coders, Randy Goldberg, Lance Riek, CRC Press, 2000
\item The transforms and applications handbook Second edition, Alexander D. Poularikas, CRC Press, IEEE Press, 2000
\item The Scientist and Engineer's Guide to Digital Signal Processing, Second edition, Steven W. Smith, California Technical Publishing, 1999
\item \url{http://www.sonicspot.com/guide/wavefiles.html}
\item The Book of Qt 4: The Art of Building Qt Applications, Daniel Molkentin, Open source Press, No Strach Press, 2007
\item \url{http://en.wikipedia.org/wiki/Color}
\item \url{http://en.wikipedia.org/wiki/BMP_file_format}
\item \url{http://ezinearticles.com/?Project-Efficiency-and-Effectiveness:-The-IT-Project-Management&id=384733}
\item C++ GUI Programming with Qt 3, Jasmin Blanchette, Mark Summerfield
\end{itemize}

\end{document}
