One important measure of a cryptographic text is the {\em coincidence index}.
For random text (of uniformly distributed characters) in an alphabet of size 26,
the coincidence index is approximately $0.0385$.  For English text, this value
is closer to $0.0661$.  Therefore we should be able to pick out text which is
a simple substitution or a transposition of English text, since the
coincidence index remains unchanged under either operation.
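The statistic can be computed directly from character counts. As a minimal pure-Python sketch (independent of the Sage built-in, which behaves analogously), assuming the input is a plain string:

```python
from collections import Counter

def coincidence_index(text):
    """Probability that two characters drawn at random from the text
    (without replacement) are equal.  Non-letters are ignored."""
    counts = Counter(c for c in text.upper() if c.isalpha())
    n = sum(counts.values())
    if n < 2:
        return 0.0
    return sum(k * (k - 1) for k in counts.values()) / (n * (n - 1))
```

For uniformly random text over 26 letters this tends to $1/26 \approx 0.0385$; for English text it is near $0.0661$.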

\noindent
The Sage crypto string functions
\begin{center}
\verb!coincidence_index! and \verb!frequency_distribution!
\end{center}
provide functionality for analysis of the ciphertexts in the exercises.
Moreover, for a Sage string {\tt s} the $k$-th {\it decimation} of
period $m$ for that string is given by {\tt s[k::m]} (short for
{\tt s[k:len(s):m]}).
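The same slicing works on ordinary Python strings, for example:

```python
s = "ABCDEFGHIJKL"
# the k-th decimation of period m takes characters at indices k, k+m, k+2m, ...
d = s[1::3]   # indices 1, 4, 7, 10
print(d)      # -> "BEHK"
```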

\begin{exercise}
\label{ex:cryptanalysis:decipher_Vigenere_cipher}
Complete the deciphering of the Vigen\`ere ciphertext of Section~\ref{Vigenere-Cryptanalysis}.
What do you note about the relation between the text and the enciphering or deciphering key?
A useful tool for this task is the following JavaScript application for analyzing
Vigen\`ere ciphers:
\begin{center}
\url{http://echidna.maths.usyd.edu.au/~kohel/tch/Crypto/vigenere.html}
\end{center}
Consider those ciphertexts from previous exercises which come from a Vigen\`ere cipher, and
determine the periods and keys for each of the ciphertext samples.
\end{exercise}

\begin{exercise}
\label{ex:cryptanalysis:compute_coincidence_index}
For each of the cryptographic texts from the course web page, compute the coincidence
index of the ciphertexts.  Can you tell which come from simple substitution or
transposition ciphers? How could you distinguish the two?
\end{exercise}

\begin{exercise}
\label{ex:cryptanalysis:identify_periods}
For each of the cryptographic texts from the course web page, for
various periods $m$, extract the substrings consisting of the $(im+j)$-th characters.
For those which are not simple substitutions, can you identify a
period?
\end{exercise}

\begin{exercise}
\label{ex:cryptanalysis:consider_frequency_distribution}
For each of the ciphertexts which you have reduced to
simple substitutions, consider the frequency distribution of the
simple substitution texts.  Now recover the keys and original
plaintext.
\end{exercise}

%\course{ICE-EM/AMSI Summer School {\it Cryptography}}
%\heading{Summer}{Code Breaking II}{2007}

\begin{exercise}[Correlations of sequence translations]
\label{ex:cryptanalysis:correlation_sequence_translations}
Suppose that {\tt pt} and {\tt ct} are plaintext and ciphertext whose
frequency distributions are to be compared.  Assume we have defined:
\emph{\input{code/tutorial05.0.sage}}
The following code finds the correlations between the affine translations
of two sequences.
\emph{\input{code/tutorial05.1.sage}}
What does \verb!frequency_distribution! return, and what are the ciphers $e$
constructed in the {\tt for} loop?  What does \verb!translation_correlation!
return?  Note that \rY must be created as a discrete random variable on the
probability space \rX in order to compute their correlations.
\end{exercise}
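The idea of the exercise can be sketched in pure Python (helper names such as {\tt best\_shift} are hypothetical, not the Sage API): compare the frequency vector of the plaintext against that of the ciphertext decrypted under each of the 26 translations, and pick the translation of maximal correlation.

```python
from collections import Counter
import string

ALPHA = string.ascii_uppercase

def freqs(text):
    """26-dimensional relative frequency vector of an uppercase string."""
    counts = Counter(text)
    n = len(text)
    return [counts.get(c, 0) / n for c in ALPHA]

def correlation(p, q):
    """Pearson correlation of two frequency vectors."""
    n = len(p)
    mp, mq = sum(p) / n, sum(q) / n
    num = sum((a - mp) * (b - mq) for a, b in zip(p, q))
    dp = sum((a - mp) ** 2 for a in p) ** 0.5
    dq = sum((b - mq) ** 2 for b in q) ** 0.5
    return num / (dp * dq)

def best_shift(pt, ct):
    """Translation k maximizing the correlation between the frequencies
    of pt and of ct shifted back by k."""
    fp = freqs(pt)
    return max(range(26), key=lambda k: correlation(
        fp, freqs("".join(ALPHA[(ALPHA.index(c) - k) % 26] for c in ct))))
```

At the correct translation the two frequency vectors coincide, so the correlation peaks at (essentially) $1$.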

\begin{exercise}[Breaking Vigen\`ere ciphers]
\label{ex:cryptanalysis:break_Vigenere_ciphers}
A Vigen\`ere cipher is reduced to a translation cipher by the process
of decimation.  How does the above exercise solve the problem of finding
the affine translation?

Apply this exercise to the Vigen\`ere ciphertext sample {\tt cipher01.txt}
from the course web page, and break the enciphering.  Recall that you
will have to use the decimation (by {\tt ct[i::m]}) and \verb!coincidence_index!
to first reduce a Vigen\`ere ciphertext to the output of a monoalphabetic cipher.
\emph{\input{code/tutorial05.2.sage}}
\end{exercise}
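The reduction step can be sketched as follows (a pure-Python sketch with hypothetical helper names, assuming an uppercase alphabetic ciphertext): for each candidate period $m$, average the coincidence index over the $m$ decimations. At the true period each decimation is a translation of English text, so the average jumps toward $0.066$.

```python
from collections import Counter

def coincidence_index(text):
    """Coincidence index, assuming uppercase alphabetic input."""
    counts = Counter(text)
    n = len(text)
    if n < 2:
        return 0.0
    return sum(k * (k - 1) for k in counts.values()) / (n * (n - 1))

def average_ci(ct, m):
    """Average coincidence index over the m decimations ct[i::m].
    A translation preserves the index of each decimation, so at the
    true period this equals the plaintext's decimated index."""
    return sum(coincidence_index(ct[i::m]) for i in range(m)) / m
```

Scanning $m = 1, 2, 3, \dots$ and taking the smallest $m$ for which {\tt average\_ci(ct, m)} rises to the monoalphabetic range identifies the period.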

\begin{exercise}[Breaking substitution ciphers]
\label{ex:cryptanalysis:break_substitution_ciphers}
Suppose that rather than an affine translation, you have reduced to an
arbitrary simple substitution.  We need to undo an arbitrary permutation
of the alphabet.  For this purpose we define maps into Euclidean space:

\begin{enumerate}
\item
$\cA \rightarrow \cA^2 \rightarrow \R^2$ defined by
$$
x \longmapsto xx \longmapsto \big(P(x),P(xx)\big).
$$
\item
$\cA \rightarrow \cA^2 \rightarrow \R^3$ defined by
$$
x \longmapsto xy \longmapsto \big(P(x),P(xy\,|\,y),P(yx\,|\,y)\big),
$$
for some fixed character $y$.
\end{enumerate}
See the document
\begin{center}
\url{http://echidna.maths.usyd.edu.au/~kohel/tch/Crypto/digraph_frequencies.pdf}
\end{center}
for standard vectors for the English language.
\end{exercise}
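As a rough illustration of the first embedding, the following pure-Python sketch (the function name is hypothetical) estimates the point $\big(P(x),P(xx)\big)$ for each letter from a sample text; ciphertext letters can then be matched to English letters by nearest-neighbour search in the plane.

```python
from collections import Counter

def embed(text):
    """Map each letter x of text to the estimated point (P(x), P(xx)),
    using monograph counts and doubled-letter digraph counts."""
    n = len(text)
    mono = Counter(text)
    di = Counter(text[i:i + 2] for i in range(n - 1))
    return {x: (mono[x] / n, di[x + x] / (n - 1)) for x in mono}
```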

\begin{exercise}[Breaking transposition ciphers]
\label{ex:cryptanalysis:break_transposition_ciphers}
In order to break transposition ciphers it is necessary to find the
period $m$ of the cipher, and then to identify positions $i$ and $j$
within each block $1+km \le i,j \le (k+1)m$ which were adjacent prior
to the permutation of positions.  Suppose we guess that $m$ is the
correct period.  Then for a ciphertext sample $C = c_1c_2\dots$, and
a choice of $1 \le i < j \le m$, we can form the digraph decimation
sequence $c_ic_j, c_{i+m}c_{j+m}, c_{i+2m}c_{j+2m}, \dots$.

Two statistical measures that we can use on ciphertext to determine
if a digraph sequence is typical of the English language are a digraph
{\it coincidence index}
$$
\sum_{x\in\cA}\sum_{y\in\cA} \frac{n_{xy}(n_{xy}-1)}{N(N-1)}
$$
where $N$ is the total number of character pairs, and $n_{xy}$ is
the number of occurrences of the pair $xy$, and the
{\it coincidence discriminant}:
$$
\sum_{x\in\cA}\sum_{y\in\cA}
    \left(\frac{n_{xy}}{N} - \big(\sum_{z\in\cA}\frac{n_{xz}}{N}\big)
                             \big(\sum_{z\in\cA}\frac{n_{zy}}{N}\big)\right)^2.
$$
The first term is the frequency of $xy$, and the second is the product of
the frequencies of $x$ as a first character and $y$ as a second character.
The coincidence discriminant measures the discrepancy between the probability
space of pairs $xy$ and the product probability space.

What behavior do you expect for the coincidence index and coincidence discriminant
of the above digraph decimation, if $i$ and $j$ were the positions of originally
adjacent characters?  Test your hypotheses with decimations of ``real'' English
text, using the Sage implementations of \verb!coincidence_index! and
\verb!coincidence_discriminant!.

Why can we assume that $i < j$ in the digraph sequence?  What is the obstacle to
extending these statistical measures from two to more characters?
\end{exercise}
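Both statistics are straightforward to compute from a list of digraphs; a minimal pure-Python sketch (the function name is hypothetical, and the Sage built-ins behave analogously) applies directly to the decimation sequence $c_ic_j, c_{i+m}c_{j+m}, \dots$:

```python
from collections import Counter

def digraph_stats(pairs):
    """Coincidence index and coincidence discriminant of a sequence of
    two-character strings."""
    N = len(pairs)
    n = Counter(pairs)                        # counts n_xy of each pair
    first = Counter(p[0] for p in pairs)      # counts of first characters
    second = Counter(p[1] for p in pairs)     # counts of second characters
    ci = sum(k * (k - 1) for k in n.values()) / (N * (N - 1))
    # only pairs (x, y) with x occurring first and y second contribute
    cd = sum((n[x + y] / N - (first[x] / N) * (second[y] / N)) ** 2
             for x in first for y in second)
    return ci, cd
```

For pairs whose coordinates behave independently the discriminant vanishes, while correlated adjacent characters push it away from zero.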

\ignore{
\begin{exercise}[Breaking 3-time pads]
\label{ex:cryptanalysis:break_3_time_pads}
Given $\Delta_1 = XY \ominus ZW$ and $\Delta_2 = XY \ominus QR$,
the matrix of relative probabilities $P(XY|\Delta_1,\Delta_2)$
can be computed with this function.

%\input{code/tutorial06.1.m}
\input{code/tutorial06.1.sage}

Apply this function to find the plaintexts $PT_1$, $PT_2$, and
$PT_3$, where
$$
\begin{aligned}
\Delta_1 &= PT_1 \ominus PT_2 = {\tt AHXCOYFBAMKUE}, \\
\Delta_2 &= PT_1 \ominus PT_3 = {\tt XHXRGEUHPRAHN}.
\end{aligned}
$$
You may use {\it blackcat.txt} as the sample plaintext.
\end{exercise}

\begin{proof}[Solution]
Using the above function, the following lines of input:
%\input{code/solution06.1.m}
\input{code/solution06.1.sage}
generate the output:
%\input{code/solution06.2.m}
\input{code/solution06.2.sage}

By varying the value of {\tt eps} (epsilon), we piece together
likely matching strings for the original plaintexts.

\begin{center}
\begin{tabular}{cccccccccccccc} \hline
$PT_1:$&\tT&\tO&\tB&\tE&\tO&\tR&\xx&\xx&\xx&\xx&\xx&\xx&\xx\\
$PT_2:$&\tT&\tH&\tE&\tC&\tA&\tT&\xx&\xx&\xx&\xx&\xx&\xx&\xx\\
$PT_3:$&\tW&\tH&\tE&\tN&\tI&\tN&\xx&\xx&\xx&\xx&\xx&\xx&\xx\\ \hline
$PT_1:$&\xx&\xx&\xx&\xx&\xx&\tL&\tY&\tI&\tT&\tT&\tO&\xx&\xx\\
$PT_2:$&\xx&\xx&\xx&\xx&\xx&\tN&\tT&\tH&\tT&\tH&\tE&\xx&\xx\\
$PT_3:$&\xx&\xx&\xx&\xx&\xx&\tH&\tE&\tB&\tE&\tC&\tO&\xx&\xx\\\hline
$PT_1:$&\xx&\xx&\xx&\xx&\xx&\xx&\tN&\tO&\tT&\tT&\tO&\xx&\xx\\
$PT_2:$&\xx&\xx&\xx&\xx&\xx&\xx&\tI&\tN&\tT&\tH&\tE&\xx&\xx\\
$PT_3:$&\xx&\xx&\xx&\xx&\xx&\xx&\tT&\tH&\tE&\tC&\tO&\xx&\xx\\\hline
\end{tabular}
\end{center}

We conjecture that the correct plaintexts are {\tt TOBEORNOTTO**},
{\tt THECATINTHE**}, and {\tt WHENINTHECO**}, leaving the final
characters to pure guesswork.

\end{proof}
}


