%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% This file is part of the book
%%
%% Cryptography
%% http://code.google.com/p/crypto-book/
%%
%% Copyright (C) 2007--2010 David R. Kohel <David.Kohel@univmed.fr>
%%
%% See the file COPYING for copying conditions.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\chapter{Information Theory}
\label{InformationTheory}

Information theory concerns the measure of information contained in data.
The security of a cryptosystem is determined by the relative content of the
key, plaintext, and ciphertext.

For our purposes a {\it discrete probability space} -- a finite set $\rX$
together with a probability function on $\rX$ -- will model a language.
Such a probability space may represent a space of keys, of plaintext,
or of ciphertext; we refer to it as a space or a language, and to an
element $x$ as a {\em message}.
The probability function $P: \rX \rightarrow \R$ is defined to be a non-negative
real-valued function on $\rX$ such that
$$
\sum_{x\in\rX} P(x) = 1.
$$
For a naturally occurring language, we obtain a finite model by
considering the finite set of strings of length $N$ in that
language.  If $\rX$ models the English language, then the function $P$
assigns to each string the probability of its appearance, among all
strings of length $N$, in the English language.

\section{Entropy}

The {\em entropy} of a given space with probability function $P$ is
a measure of the information content of the language.  The formal
definition of entropy is
$$
H(\rX) = \sum_{x\in\rX} P(x) \log_2(P(x)^{-1}).
$$
For $0 < P(x) < 1$, the value $\log_2(P(x)^{-1})$ is a positive real
number, and we define $P(x) \log_2(P(x)^{-1}) = 0$ when $P(x) = 0$.
The following exercise justifies this definition.

\noindent{\bf Exercise.}  Show that the limit
$$
\lim_{x \rightarrow 0^+} x \log_2(x^{-1}) = 0.
$$
What is the maximum value of $x \log_2(x^{-1})$ on the interval
$0 < x \le 1$, and at what value of $x$ does it occur?

An {\em optimal encoding} for a probability space $\rX$ is an injective
map from $\rX$ to strings over some alphabet, such that the expected
string length of encoded messages is minimized.
The term $\log_2(P(x)^{-1})$ is the bit-length assigned to the message
$x$ in an optimal encoding, if one exists, and the entropy is the
expected number of bits in a random message in the space.

As an example, English text files written in $8$-bit ASCII can typically
be compressed to 40\% of the original size without loss of information,
since the structure of the language itself encodes the remaining information.
The human genome encodes data for producing sequences of 20 different
amino acids, each specified by a triple of letters in the alphabet
$\{\tA,\tT,\tC,\tG\}$.  The 64 possible ``words'' (codons in genetics)
include more than 3-fold redundancy in specifying one of these 20
amino acids.  Moreover, huge sections of the genome are repeats such
as $\tA\tA\tA\tA\tA\tA\dots$, whose information can be captured by an
expression like $\tA^n$.
More accurate models for the languages specified by English or by
human DNA sequences would permit greater compression rates for
messages in these languages.

\begin{example}
Let $\rX$ be the probability space $\{\tA,\tB,\tC\}$ of three
elements, and assume that $P(\tA) = 1/2$, $P(\tB) = 1/4$, and
$P(\tC) = 1/4$.  The entropy of the space is then
$$
P(\tA)\log_2(2) + P(\tB)\log_2(4) + P(\tC)\log_2(4) = 1.5.
$$
An optimal encoding is attained by the encoding of $\tA$ with $0$,
$\tB$ with $10$, and $\tC$ with $11$.  With this encoding one expects
to use an average of 1.5 bits to transmit a message.
\end{example}
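This example can be checked numerically.  The following is a minimal
Python sketch; the function name {\tt entropy} and the dictionary-based
representation of the space are our own choices for illustration.

```python
import math

def entropy(probs):
    # H(X) = sum of P(x) * log2(1/P(x)), with the convention
    # that a term with P(x) = 0 contributes 0.
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

# The space {A, B, C} with P(A) = 1/2, P(B) = 1/4, P(C) = 1/4.
P = {"A": 0.5, "B": 0.25, "C": 0.25}
H = entropy(P.values())          # 1.5 bits

# The code A -> 0, B -> 10, C -> 11 attains this bound:
# its expected length equals the entropy.
code = {"A": "0", "B": "10", "C": "11"}
expected_length = sum(P[m] * len(code[m]) for m in P)
```

Both the entropy and the expected encoded length come out to exactly
1.5 bits, as computed above.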

The following example gives methods by which we might construct
models for the English language.

\begin{example}[Empirical models for English]
First, choose a standard encoding --- this might be an encoding as
strings in the set $\{\tA,\dots,\tZ\}$ or as strings in the ASCII
alphabet.  Next, choose a sample text.  The text might be the complete
works of Shakespeare, the short story {\em The Black Cat} by Edgar Allan
Poe, or the U.S. East Coast version of the {\em New York Times} from
1 January 2000 to 31 January 2000.  The following are finite
probability spaces for English, based on these choices:
\begin{enumerate}
\item
Let $\rX$ be the set of characters of the encoding and set $P(c)$ to
be the probability that the character $c$ occurs in the sample text.
\item
Let $\rX$ be the set of character pairs over the encoding alphabet
and set $P(x)$ to be the probability that the pair $x = c_1c_2$
occurs in the sample text.
\item
Let $\rX$ be the set of words in the sample text, and set $P(x)$ to
be the probability that the word $x$ occurs in the sample text.
\end{enumerate}
For each of these we can extend our model for the English language
to strings of length $n$.
\end{example}
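Model (1) can be built mechanically from any sample text.  The
following Python sketch uses a toy pangram as the sample; any real
corpus could be substituted.

```python
from collections import Counter

def char_model(sample_text):
    # Model (1): P(c) is the relative frequency of the character c
    # among the alphabetic characters of the sample text.
    letters = [c for c in sample_text.upper() if c.isalpha()]
    counts = Counter(letters)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

# A toy "sample text"; the resulting model is only as good as the corpus.
P = char_model("THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG")
# The probabilities sum to 1 over the characters that occur.
```

Models (2) and (3) are built the same way, counting character pairs or
whole words in place of single characters.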

How well do you think each of these models the English language?

%\begin{center}{\Large\bf MATH3024: Lecture 08}\end{center}

\section{Rate and Redundancy}

Let $\rX$ be a discrete probability space.  We define the {\em rate}
of $\rX$ to be
$$
r(\rX) = \frac{H(\rX)}{\log_2(|\rX|)},
$$
and the {\em redundancy} to be $1-r({\rX})$.
\ignore{ % I don't know how to make this precise -- develop within the
% context of compression coding theory.  Note that the rate of a space
% is not the same as the rate of an embedding, and an optimal encoding
% should surject on (\cA^n)^* but may not surject on \cA^*.
Now suppose we have any encoding $\phi$ of $\rX$ in $\cA^*$, which extends
to an injective function $\phi: \rX^* \rightarrow \cA^*$. Then we define
the rate of $\phi$ to be
$$
r(\phi) = \limsup_{n\rightarrow\infty}
   \frac{H(\rX_{n,\phi})}{n\log_2(m)}
$$
where $\rX_{n,\phi} = \phi^{-1}(\cA^n)$ is the subspace of $\rX^*$
with the product probability.
% I just don't see how to define this product probability without
% some means of extending a probability to all of \rX^*.
We can now formally define an {\em optimal encoding} $\phi$ of $\rX$
to be any encoding such that $r(\phi) = 1$.
}%end ignore
The redundancy in a language derives from the structures such as
character frequency distributions, digram frequency distributions
(the probabilities of ordered, adjacent character pairs), and
more generally $n$-gram frequency distributions. Global structures
of a natural language such as vocabulary and grammar rules determine
yet more structure, adding to the redundancy of the language.
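The rate and redundancy follow directly from the definition above; a
minimal Python sketch, using two hypothetical four-character spaces:

```python
import math

def entropy(P):
    return sum(p * math.log2(1.0 / p) for p in P.values() if p > 0)

def rate(P):
    # r(X) = H(X) / log2(|X|); the redundancy is 1 - r(X).
    return entropy(P) / math.log2(len(P))

# A uniform space has rate 1 and redundancy 0 ...
uniform = {c: 0.25 for c in "ABCD"}

# ... while a skewed space, like a natural language, is redundant.
skewed = {"A": 0.7, "B": 0.1, "C": 0.1, "D": 0.1}
redundancy = 1 - rate(skewed)
```

The more concentrated the distribution, the lower the rate and the
greater the redundancy.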

\section{Conditional Probability}

We would now like to have a concept of conditional probability
for cryptosystems.  Let $E$ be a cryptosystem, $\cM$ a plaintext
space, $\cK$ a key space, and $\cC$ a ciphertext space.
For a symmetric key system the space of plaintext and ciphertext
coincide, but the probability distributions on them may differ
in the context of the cryptosystem.

We use $P$ for both the probability function on the plaintext
space $\cM$ and on $\cK$.  We can now define a probability function
on $\cC$ relative to the cryptosystem~$E$:
$$
P(y) = \sum_{K\in\cK} P(K) \!\!\!\!
       \sum_{\stackrel{\scr x\in\cM}{E_K(x)=y}} \!\!\!\! P(x).
$$

We can now define $P(x,y)$, for $x\in\cM$ and $y\in\cC$, to be the
probability that the pair $(x,y)$ appears as a plaintext--ciphertext
pair.  Assuming the independence of plaintext and key spaces, we can
define this probability as:
$$
P(x,y) = \!\!\!\!
\sum_{\stackrel{\scr K\in\cK}{E_K(x)=y}} \!\!\!\! P(K)P(x).
$$
The messages $x$ and $y$ are said to be {\em independent} if
$P(x,y) = P(x)P(y)$.
For ciphertext $y$ and plaintext $x$, define the conditional probability
$P(y|x)$ by
$$
P(y|x) = \left\{
\begin{array}{cl}
\frac{\dsp P(x,y)}{\dsp P(x)} & \hbox{if $P(x) \ne 0$} \\
\\
0           & \hbox{if $P(x) = 0$} \\
\end{array}
\right.
$$

\section{Conditional Entropy}

We can now define the conditional entropy $H(\cM|y)$ of the
plaintext space with respect to a given ciphertext $y \in \cC$.
$$
H(\cM|y) = \sum_{x\in\cM} P(x|y)\log_2(P(x|y)^{-1})
$$
The conditional entropy $H(\cM|\cC)$ of a cryptosystem (more
precisely, of the plaintext with respect to the ciphertext) is
defined as an expectation of the individual conditional entropies:
$$
H(\cM|\cC) = \sum_{y\in\cC} P(y) H(\cM|y)
$$
This is sometimes referred to as the {\em equivocation} of the
plaintext space $\cM$ with respect to the ciphertext space $\cC$.
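These definitions can be traced through on a toy cryptosystem.  The
Python sketch below uses a shift cipher on $\Z/3\Z$ with hypothetical
plaintext and key distributions chosen for illustration; since only
two of the three possible shifts are used as keys, the equivocation
comes out strictly less than $H(\cM)$, i.e.\ the ciphertext leaks
information about the plaintext.

```python
import math
from itertools import product

# Toy shift cipher on Z/3Z: E_K(x) = (x + K) mod 3.
# Hypothetical distributions, chosen for illustration.
M = {0: 0.5, 1: 0.25, 2: 0.25}   # plaintext probabilities P(x)
K = {0: 0.5, 1: 0.5}             # only two of the three shifts are used

def E(k, x):
    return (x + k) % 3

# P(y) = sum over K of P(K) * (sum over x with E_K(x) = y of P(x))
PY = {y: sum(K[k] * M[x] for k, x in product(K, M) if E(k, x) == y)
      for y in range(3)}

# Joint probability P(x, y) = sum over K with E_K(x) = y of P(K) P(x)
def joint(x, y):
    return sum(K[k] * M[x] for k in K if E(k, x) == y)

def H_M_given(y):
    # H(M | y) = sum over x of P(x|y) log2(1 / P(x|y))
    total = 0.0
    for x in M:
        p = joint(x, y) / PY[y]
        if p > 0:
            total += p * math.log2(1.0 / p)
    return total

equivocation = sum(PY[y] * H_M_given(y) for y in PY if PY[y] > 0)
H_M = sum(p * math.log2(1.0 / p) for p in M.values())
# Here equivocation < H_M: observing the ciphertext reduces uncertainty.
```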

\section{Perfect secrecy and one-time pads}

\noindent
{\bf Perfect Secrecy.}
A cryptosystem is said to have {\em perfect secrecy} if the
entropy $H(\cM)$ equals the conditional entropy $H(\cM|\cC)$.

Let $K = k_1 k_2\dots$ be a key stream of random bits, and let
$M = m_1 m_2\dots$ be the plaintext bits.  We define a ciphertext
$C = c_1 c_2\dots$ by
$$
c_i = m_i \oplus k_i,
$$
where $\oplus$ is the addition operation on bits in $\Z/2\Z$.
In the language of computer science, this is the {\tt xor} operator:
$$
\begin{array}{ccc}
0 \oplus 0 = 0, & & 1 \oplus 0 = 1, \\
0 \oplus 1 = 1, & & 1 \oplus 1 = 0. \\
\end{array}
$$
In general such a cryptosystem is called the {\em Vernam cipher}.
If the keystream bits are generated independently and randomly, then
this cipher is called a {\em one-time pad}.

Note that neither the Vernam cipher nor the one-time pad has to be defined
with respect to a binary alphabet.  The bit operation {\tt xor} can be
replaced by addition in $\Z/n\Z$, where $n$ is the alphabet size, using any
bijection of the alphabet with the set $\{0,\dots,n-1\}$.
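A minimal Python sketch of the Vernam cipher on bytes follows; the key
here is drawn from the standard library's {\tt secrets} module, and the
message is an arbitrary example.

```python
import secrets

def vernam(data: bytes, key: bytes) -> bytes:
    # XOR each data byte with the corresponding key byte; since
    # x ^ k ^ k = x, the same function both encrypts and decrypts.
    assert len(key) >= len(data)
    return bytes(b ^ k for b, k in zip(data, key))

message = b"ATTACK AT DAWN"
# One-time pad: the key bytes are independent, uniformly random,
# as long as the message, and must never be reused.
key = secrets.token_bytes(len(message))
ciphertext = vernam(message, key)
recovered = vernam(ciphertext, key)
```

That decryption is the same operation as encryption is exactly the
{\tt xor} table above applied bitwise.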

%\noindent{\bf Perfect secrecy of one-time pads.}
\subsection*{Perfect secrecy of one-time pads}

Analogously to $P(y|x)$, the conditional probability $P(x|y)$ is defined
to be $P(x,y)/P(y)$ if $P(y) \ne 0$ and zero otherwise.  If $\cM$ is the
plaintext space and $\cC$ the
ciphertext space (with probability function defined in terms of the
cryptosystem), then the conditional entropy $H(\cM|\cC)$ is defined to be:
$$
%H(\cM|\cC) = \sum_{x\in\cM} \sum_{y\in\cC} P(x,y) \log_2(P(x|y)^{-1}).
\begin{aligned}
H(\cM|\cC) & = \sum_{y\in\cC} P(y) H(\cM|y) \\
           & = \sum_{y\in\cC} P(y) \sum_{x\in\cM} P(x|y) \log_2(P(x|y)^{-1})\\
           & = \sum_{y\in\cC} \sum_{x\in\cM} P(x,y) \log_2(P(x|y)^{-1}).\\
\end{aligned}
$$
If for each $x\in\cM$ and $y\in\cC$ the joint probability $P(x,y)$
is equal to $P(x)P(y)$ (i.e.\ the plaintext and ciphertext spaces are
independent), and thus $P(x|y) = P(x)$, then the above expression
simplifies to:
$$
\begin{aligned}
H(\cM|\cC)
   & = \sum_{x\in\cM} \sum_{y\in\cC} P(x)P(y) \log_2(P(x)^{-1}) \\
   & = \Big(\sum_{y\in\cC} P(y)\Big) \sum_{x\in\cM} P(x) \log_2(P(x)^{-1}) \\
   & = \sum_{x\in\cM} P(x) \log_2(P(x)^{-1}) = H(\cM).
\end{aligned}
$$
Therefore the cryptosystem has perfect secrecy.
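This derivation can be checked numerically for a one-time pad on 2-bit
blocks with uniform keys; the plaintext distribution below is an
arbitrary hypothetical choice.

```python
import math
from itertools import product

M = {0b00: 0.4, 0b01: 0.3, 0b10: 0.2, 0b11: 0.1}  # any plaintext distribution
K = {k: 0.25 for k in range(4)}                   # uniform 2-bit keys

def H(P):
    return sum(p * math.log2(1.0 / p) for p in P.values() if p > 0)

# Ciphertext distribution under E_K(x) = x XOR K.
PY = {y: sum(K[k] * M[x] for k, x in product(K, M) if x ^ k == y)
      for y in range(4)}

HMC = 0.0                                         # H(M | C)
for x, y in product(M, range(4)):
    pxy = sum(K[k] * M[x] for k in K if x ^ k == y)   # joint P(x, y)
    if pxy > 0:
        HMC += pxy * math.log2(PY[y] / pxy)           # P(x,y) log2(1/P(x|y))
# Perfect secrecy: H(M | C) equals H(M).
```

With uniform keys every ciphertext is equally likely, $P(x|y) = P(x)$,
and the computed $H(\cM|\cC)$ agrees with $H(\cM)$ as derived above.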

\subsection*{Entropy of the key space}

It can be shown that perfect secrecy (or unconditional security) requires
the entropy $H(\cK)$ of the key space $\cK$ to be at least as large as
the entropy $H(\cM)$ of the plaintext space $\cM$.  If the key space is
defined to be the set of $N$-bit strings with uniform distribution, then
the entropy of $\cK$ is $N$, and this is the maximum entropy for a space
of $N$-bit strings (see exercise).  This implies that, in order to
achieve perfect secrecy, the bit-length $N$ of the keys in the key space
must be at least equal to the entropy $H(\cM)$ of the plaintext space.

\section*{Exercises}

\input{exercises/InformationTheory}
