%
% 6.857 homework template
%
% NOTE:
% Be sure to define your team members with the \team command
% Be sure to define the problem set with the \ps command
% Be sure to use the \answer command for each of your answers 
\documentclass[11pt]{article}

\newcommand{\team}{ Victor Chan \\ Perry Hung\\ Aleksander Zlateski}
\newcommand{\ps}{ Problem Set 3 }

%\pagestyle{headings}
\usepackage{color}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{latexsym}
\usepackage[pdftex]{graphicx}
\setlength{\parskip}{1pc}
\setlength{\parindent}{0pt}
\setlength{\topmargin}{-3pc}
\setlength{\textheight}{9.5in}
\setlength{\oddsidemargin}{0pc}
\setlength{\evensidemargin}{0pc}
\setlength{\textwidth}{6.5in}

\newcommand{\answer}[1]{
\newpage
\noindent
\framebox{
	\vbox{
		6.857 Homework \hfill {\bf \ps} \hfill \# #1  \\ 
		\team \hfill \today
	}
}
\bigskip

}

\begin{document}

\answer{3-1 - Side Channel Attacks on AES}
a) The approach is the following: we consider only the initial round of AES and the first subBytes routine after it. In the initial round, the plaintext is XORed with the round key, which for the initial round equals the cipher key itself.

First, we assign all possible values to each of the 16 bytes of the key; initially each byte can take any value 0-255. We then use the texts on which AES was performed to eliminate candidates. For each byte of the key we maintain a set of possible values. For each candidate value, we XOR the corresponding byte of the text with that value and check whether the S-box entry that would have been used in the first subBytes routine was actually accessed. If it was not, we conclude that this key byte cannot have that particular value. In this way, each text eliminates some candidate values for each byte of the key.

Finally we end up with very few possible values for each byte of the key, so we can try all remaining key candidates against the given ciphertext. After running our algorithm, the set of candidates was reduced to a single key, which turned out to be correct: it produced the given ciphertext.

b) On average we needed 12 triples to reduce the set of possible keys to a single key. We ran the algorithm on multiple sets of triples and plotted the average number of possible keys against the number of triples processed.

\includegraphics[scale=0.60]{problem1}\\

Since we use only the initial round and the first subBytes routine, we could improve the results by using more data. However, in order to use later rounds, we would have to know the whole key (in order to generate the corresponding round keys). This might be possible with more advanced techniques.

However, there are other improvements that can be made. For example, after processing $n$ triples we have a set of possible values for each key byte. We can then try all combinations of the key (the Cartesian product of the per-byte candidate sets), but we do not actually have to try every combination. For example, setting the first byte to one value and the second byte to another might require that some S-box entry be used twice, when we know that entry was used only once. This observation reduces the number of trials. In general we have a bipartite matching problem: each key byte connects to the S-box entries it would touch, and a valid key assignment requires a perfect matching (each key byte matched to a distinct S-box entry). We only need to perform a trial for each assignment that admits such a matching.
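As an illustration of the matching idea (the adjacency lists below are hypothetical, not taken from our actual traces), a standard augmenting-path matcher between key bytes and S-box entries might look like this:

```cpp
#include <vector>
using namespace std;

// adj[i] lists the S-box entries that key byte i (under a candidate value)
// would have touched. matchOf[j] = key byte currently matched to entry j, or -1.
bool augment(int i, const vector<vector<int>>& adj,
             vector<int>& matchOf, vector<bool>& seen) {
    for (int j : adj[i]) {
        if (seen[j]) continue;
        seen[j] = true;
        // Entry j is free, or its current owner can be rerouted elsewhere.
        if (matchOf[j] == -1 || augment(matchOf[j], adj, matchOf, seen)) {
            matchOf[j] = i;
            return true;
        }
    }
    return false;
}

// Size of a maximum matching between key bytes and S-box entries; a candidate
// key assignment is consistent only if every key byte can be matched.
int maxMatching(const vector<vector<int>>& adj, int nEntries) {
    vector<int> matchOf(nEntries, -1);
    int matched = 0;
    for (int i = 0; i < (int)adj.size(); i++) {
        vector<bool> seen(nEntries, false);
        if (augment(i, adj, matchOf, seen)) matched++;
    }
    return matched;
}
```

If `maxMatching` returns fewer matches than there are key bytes, the candidate assignment can be discarded without a trial.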

c) Our algorithm relies on the plaintexts being generated randomly, which in general would not be the case. An encrypted plaintext is usually drawn from only a subset of the possible values for each byte, in which case our algorithm reduces the number of possible keys more slowly. We also assume that all messages are at least 16 bytes long. If this is not true, for example if messages were only 1 byte long, this approach performs poorly, since we get very little information about the last 15 bytes of the key.

\subsection*{Source Code of the Algorithm}

\begin{scriptsize}\begin{verbatim}
#include <iostream>
#include <vector>
#include <fstream>
#include <cmath>
#include <cstdio>   // for sprintf

using namespace std;

int possible_[16][256];

// Eliminate candidate key bytes: candidate j for byte i is impossible if
// the S-box entry (j XOR text[i]) was never accessed in the trace (regs).
int reduce(int text[16], int cipher[16], int regs[256]) {

    int reduced = 0;

    for (int i=0;i<16;i++)
        for (int j=0;j<256;j++)
            if (possible_[i][j]) {
                int A = j ^ text[i];
                if (regs[A] == 0) {
                    possible_[i][j] = 0;
                    reduced++;
                }
            }

    return reduced;
}

int totKeys = 0;

void recSaveKey(int r, int* k) {

    if (r == 16) {
        char fname[1024];
        sprintf(fname, "testData/key.%d.bin", totKeys);
        ofstream fkey(fname);
        for (int j=0;j<16;j++)
            fkey << (unsigned char)k[j];
        fkey.close();
        totKeys++;
    } else {

        for (int i=0;i<256;i++)
            if (possible_[r][i]) {
                k[r] = i;
                recSaveKey(r+1, k);
            }

    }

}

double countPossibleKeys() {
    double a = 1;
    for (int i=0;i<16;i++) {
        double t = 0;
        for (int k=0;k<256;k++)
            if (possible_[i][k])
                t++;
        a += log(t);
    }
    return a;
}

int main() {

    // Mark all as possible.
    for (int i=0;i<16;i++)
        for (int j=0;j<256;j++)
            possible_[i][j] = 1;

    cout << "L = [";

    for (int i=0;i<20;i++) {

        int text[16];
        int cipher[16];
        int regs[256];


        for (int j=0;j<16;j++) {
            cin >> text[j];
        }

        for (int j=0;j<16;j++) {
            cin >> cipher[j];
        }

        int totReg = 0;
        for (int j=0;j<256;j++) { 
            cin >> regs[j];
            totReg += regs[j];
        }

        //cout << i << " " << totReg << endl;
        reduce(text, cipher, regs);

        /*
        for (int i=0;i<16;i++) {
            int tot = 0;
            cout << i << ": ";
            for (int j=0;j<256;j++) {
                tot += possible_[i][j];
                if (possible_[i][j]) {
                    cout << (unsigned char)j << " ";
                }
            }
            cout << endl;
        }
        */
        cout << countPossibleKeys() << " ";


    }

    cout << "]" << endl;
    
    for (int i=0;i<16;i++) {
        int tot = 0;
        cout << i << ": ";
        for (int j=0;j<256;j++) {
            tot += possible_[i][j];
            if (possible_[i][j]) {
                cout << (unsigned char)j << " ";
            }
        }
        cout << endl;
    }

}
\end{verbatim}
\end{scriptsize}



\answer{3-2 - Time/Memory Tradeoffs and Generic Attacks}
We consider the classic Hellman Time-Memory Tradeoff (TMTO) attack,
disregarding distinguished points. The classic algorithm describes a general attack against
a block cipher given a chosen plaintext and its corresponding ciphertext, recovering the key
in the attack. In this problem, we utilize the Hellman TMTO attack to invert a generic hash function
given a message digest to find preimages. 

The problem is as follows: 

Given a black-box function $h : \{0, 1\}^n \rightarrow \{0, 1\}^k$,
where $n$ is the block length and $k$ is the digest size, we want to invert $h$:
given a digest $y_0$, find an $x_0$ such that $h(x_0) = y_0$. To construct this as a TMTO problem, we restrict the
message space to $k$ bits and consider the one-way function
$$h: \{0, 1\}^k \rightarrow \{0, 1\}^k$$
Given this $h$, we would like to invert $h$ on arbitrary digests $y_1, y_2, \ldots, y_{128}$ with reasonably high probability.


The attack is divided into two portions: an offline precomputation phase and an online search phase.

\textbf{ Offline Portion }

The offline portion of this attack is a precomputation phase. In this phase,
we construct \textit{chains} of length $t + 1$ by randomly selecting $m$ starting points $SP_1, SP_2, \ldots, SP_{m}$, where
$SP_i \in \{0, 1\}^k$. For each $SP_i$, we compute the chain $h(SP_i), h^2(SP_i), \ldots, h^{t}(SP_{i})$ and store only the pair $(SP_i, h^t(SP_i))$. In order to avoid
the problem of \textit{overlap} [1], we further define simple reduction functions
$g_1, g_2, \ldots, g_r$, one per table, where each $g_i$ is a simple transformation such as dropping some bits of the digest; table $i$ iterates $g_i \circ h$ instead of $h$ alone.
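As a minimal sketch of the table construction (with an illustrative stand-in hash in place of the real black-box $h$, a trivial XOR reduction for $g_i$, and arbitrary constants; all of these are assumptions for the sketch):

```cpp
#include <cstdint>
#include <utility>
#include <vector>
using namespace std;

// Stand-in for the black-box hash h: the 64-bit murmur3 finalizer
// (illustrative only, NOT the real function).
uint64_t h(uint64_t x) {
    x ^= x >> 33; x *= 0xff51afd7ed558ccdULL;
    x ^= x >> 33; x *= 0xc4ceb9fe1a85ec53ULL;
    x ^= x >> 33;
    return x;
}

// Per-table reduction g_i: here just an XOR with the table index.
uint64_t g(int table, uint64_t y) { return y ^ (uint64_t)table; }

// Build one Hellman table: m chains of t steps each, iterating g_i o h.
// Only the (start point, end point) pairs are stored.
vector<pair<uint64_t, uint64_t>> buildTable(int table, uint64_t m, uint64_t t) {
    vector<pair<uint64_t, uint64_t>> chains;
    for (uint64_t i = 0; i < m; i++) {
        uint64_t sp = i * 0x9e3779b97f4a7c15ULL + table; // pseudo-random start
        uint64_t x = sp;
        for (uint64_t step = 0; step < t; step++)
            x = g(table, h(x));
        chains.push_back({sp, x});
    }
    return chains;
}
```

In the real attack the endpoint pairs would be sorted (or hashed) by endpoint to support the online lookups.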

At this point, from [1], we know that in order to achieve a success probability of about 0.55,
we should select $m = t = r = 2^{k/3} = 2^{64/3} \approx 2^{21}$. To choose exact values, we inspect the
available hardware. 

We attempt to utilize all of our available space.
We have 1024 machines with 64 GB of memory each: $64 \text{ GB} \times 1024 = 2^{49}$ bits. The total memory $M$
required for the tables is given by
$$ M = rme $$
where $e$ is the size of each entry, in this case $2k = 128$ bits, because we store both the start
and end point of each chain. If we choose the number of tables $r = 2^{21}$, then given $M = 2^{49}$ and $e = 2^{7}$, the number
of chains per table is $m = M/(re) = 2^{21}$. To cover the whole $2^{64}$ input space we need $rmt = 2^{64}$, so the chain length is $t = 2^{22}$.

The precomputational complexity is as follows:

In order to generate the tables, we must compute $rmt$ hashes and store $2mr$ points. Assuming
$1$ microsecond per hash or store operation, the total number of operations $T$ is
$$T = rmt + 2mr \approx 2^{64} = 1.84 \times 10^{19}$$
We are also given 1024 computers working in parallel, so the expected precomputation time is $T/1024$ microseconds,
giving us a computational time of about $570$ years.
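A quick sanity check of this arithmetic (using the parameters chosen above and the assumed 1 microsecond per operation):

```cpp
#include <cmath>

// Back-of-envelope estimate: r = m = 2^21, t = 2^22, one microsecond per
// hash, 1024 machines in parallel. The roughly 2^43 store operations are
// negligible next to the r*m*t = 2^64 hashes and are omitted here.
double precomputeYears() {
    double hashes  = pow(2, 21) * pow(2, 21) * pow(2, 22); // r * m * t = 2^64
    double micros  = hashes / 1024.0;                      // split across machines
    double seconds = micros / 1e6;                         // microseconds -> seconds
    return seconds / (3600.0 * 24 * 365);                  // seconds -> years
}
```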

\textbf{Online Portion}

The online portion of the attack involves up to $t$ hash evaluations and table lookups for each of the $r$ tables, so the time
complexity $T$ of this portion, given 1024 computers working in parallel, is
$$T = rt/1024 = 2^{21+22-10} = 2^{33}$$
At 1 microsecond per operation, this results in an expected time of $\approx 143$ minutes per digest.

Because we are given $128$ message digests, the time to invert all the digests with
probability 0.55 is $128 \times 143$ minutes, or about $12.7$ days.
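The online walk against a single table can be sketched as follows. The hash here is an illustrative stand-in (a 64-bit mixing function), not the real black-box $h$, the reduction is a simple XOR with the table index, and the chain parameters in the demo are arbitrary:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>
using namespace std;

// Stand-in hash (murmur3 finalizer) and per-table reduction, for illustration.
uint64_t h(uint64_t x) {
    x ^= x >> 33; x *= 0xff51afd7ed558ccdULL;
    x ^= x >> 33; x *= 0xc4ceb9fe1a85ec53ULL;
    x ^= x >> 33;
    return x;
}
uint64_t g(int table, uint64_t y) { return y ^ (uint64_t)table; }

// Online phase against one table: walk forward from g_i(y0) for up to t steps;
// on hitting a stored endpoint, rewalk that chain from its start point looking
// for a preimage of y0. eps maps endpoint -> start point.
bool searchTable(int table, uint64_t y0, uint64_t t,
                 const unordered_map<uint64_t, uint64_t>& eps, uint64_t& pre) {
    uint64_t x = g(table, y0);
    for (uint64_t step = 0; step < t; step++) {
        auto it = eps.find(x);
        if (it != eps.end()) {
            uint64_t z = it->second;           // rewalk from the start point
            for (uint64_t k = 0; k < t; k++) {
                if (h(z) == y0) { pre = z; return true; }
                z = g(table, h(z));
            }
            // false alarm (merged chain): keep walking forward
        }
        x = g(table, h(x));
    }
    return false;
}

// Tiny end-to-end demo: build one chain of length t, then invert a digest
// taken from the middle of that chain.
bool demoInvert() {
    const int table = 1;
    const uint64_t t = 8, sp = 12345;
    vector<uint64_t> chain = {sp};
    uint64_t x = sp;
    for (uint64_t i = 0; i < t; i++) { x = g(table, h(x)); chain.push_back(x); }
    unordered_map<uint64_t, uint64_t> eps = {{x, sp}};
    uint64_t y0 = h(chain[3]);                 // a digest the table covers
    uint64_t pre = 0;
    return searchTable(table, y0, t, eps, pre) && h(pre) == y0;
}
```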

\textbf{References}

[1] J. Borst, B. Preneel, J. Vandewalle. \textit{On the time-memory trade-off between exhaustive key search and table precomputation}. Proc. of the 19th Symposium on Information Theory in the Benelux, WIC, 1998.

\answer{3-3 - Meet-in-the-Middle Attacks and Collision Finding}

a) Let $f(K,b)$ be a function that takes an $n$-bit key $K$ and a single bit $b$, and outputs an element of $\{0,1\}^n\times\{0,1\}$. $f$ is defined as follows:\\
 \begin{center}
	$f(K,b) = (\, R[\,AES_K(P) \,],\; s[\,AES_K(P) \,] \,) \quad \text{if } b = 1$ \\
	$f(K,b) = (\, R[\,AES_K^{-1}(C) \,],\; s[\,AES_K^{-1}(C) \,] \,) \quad \text{if } b = 0$
 \end{center}

where $P$ and $C$ are a given plaintext/ciphertext pair, fixed for the function $f$;
$R[\:\:]$ truncates the output of the AES encryption or decryption to length $n$ (the same length as $K$); and $s[\:\:]$ extracts a single fixed bit of that output. (The appended bit must be derived from the output rather than set to a constant: if $f(\cdot,1)$ always ended in one constant bit and $f(\cdot,0)$ in the other, a collision between them would be impossible.)\\

We see that for a pair of keys $i, j$: if $AES_i(P) = AES_j^{-1}(C)$, then $R[\,AES_i(P) \,] = R[\,AES_j^{-1}(C) \,]$ and $s[\,AES_i(P) \,] = s[\,AES_j^{-1}(C) \,]$, so $f(i,1) = f(j,0)$. Our problem is now simply to find a pair $i, j$ that produces a collision in $f$ with $b_i \not= b_j$. Since AES output is close to uniformly random, $f$'s output should be sufficiently random. Note that because $f$ uses a truncated AES output, a key pair that collides in $f$ need not be the true key pair; however, this is easily verified by applying the double encryption to $P$ and checking against $C$.\\
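The shape of this construction can be sketched in code, with a toy 16-bit Feistel cipher standing in for AES (purely illustrative, as are all constants here), $R[\:\:]$ reduced to the identity because the toy key and block sizes coincide, and the appended bit taken from the output itself:

```cpp
#include <cstdint>
#include <utility>
using namespace std;

// One Feistel round function for the toy cipher (illustrative only).
uint8_t F(uint8_t k, uint8_t x) { return (uint8_t)((uint8_t)(x ^ k) * 167u + 13u); }

// Toy 16-bit, 4-round Feistel cipher standing in for AES.
uint16_t enc(uint16_t key, uint16_t p) {
    uint8_t L = (uint8_t)(p >> 8), R = (uint8_t)(p & 0xFF);
    for (int r = 0; r < 4; r++) {
        uint8_t nl = R, nr = (uint8_t)(L ^ F((uint8_t)(key + r), R));
        L = nl; R = nr;
    }
    return (uint16_t)((L << 8) | R);
}

uint16_t dec(uint16_t key, uint16_t c) {
    uint8_t L = (uint8_t)(c >> 8), R = (uint8_t)(c & 0xFF);
    for (int r = 3; r >= 0; r--) {        // undo the rounds in reverse order
        uint8_t pl = (uint8_t)(R ^ F((uint8_t)(key + r), L)), pr = L;
        L = pl; R = pr;
    }
    return (uint16_t)((L << 8) | R);
}

// f(K, b): encrypt P under K if b = 1, decrypt C under K if b = 0.
// R[.] is the identity here (toy key and block are both 16 bits), and the
// appended bit is the low-order bit of the output, so encrypt-side and
// decrypt-side values can genuinely collide.
pair<uint16_t, int> f(uint16_t K, int b, uint16_t P, uint16_t C) {
    uint16_t y = b ? enc(K, P) : dec(K, C);
    return make_pair(y, (int)(y & 1));
}
```

With $C$ set to a double encryption of $P$, the two true keys do collide: $f(K_1, 1) = f(K_2, 0)$.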

b) We use a Hellman-style chain approach to find a collision in $f$. Let $x_i = f(x_{i-1})$, where each $x$ is a tuple containing a key $K$ and a bit $b$. We compute $x_i$ for $i = 1, 2, \ldots, t$; once we have $x_t$, we store $\langle x_0, x_t \rangle$ in memory, then choose a new random $x_0$ and repeat the process $w$ times. If at any point we find $x_i = x_t$ for some previously stored $x_t$, two chains have merged; to recover the colliding pair, we retrace the application of $f$ from the two starting points $x_0$ until the step just before the chains first agree, and if the colliding inputs have $b_i \not= b_j$ they yield a candidate key pair. The probability of finding the correct key pair depends on $w$ and $t$: it is roughly $wt/2^{n+1}$, where $2^{n+1}$ is the size of $f$'s domain and $wt$ is the number of inputs we examine. If $wt = 2^{n+1}$ and the examined inputs were all distinct, we would be guaranteed to find the key pair, since we would have tried every possible input to $f$.
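The chain procedure can be sketched generically. The iterated map below is a toy stand-in for $f$ over a deliberately tiny range (chosen so that a collision is certain: $x$ and $251-x$ always map to the same value), and endpoints are kept in a hash map:

```cpp
#include <cstdint>
#include <unordered_map>
using namespace std;

// Toy iterated map standing in for f: x -> x^2 + 1 (mod 251).
uint32_t step(uint32_t x) { x %= 251; return (x * x + 1) % 251; }

// Run w chains of t steps from distinct starting points, storing only the
// pairs <x0, xt>. When two chains share an endpoint, rewalk them in lockstep
// to the first point where they agree; their predecessors collide under step.
bool findCollision(uint32_t w, uint32_t t, uint32_t& outA, uint32_t& outB) {
    unordered_map<uint32_t, uint32_t> endToStart; // xt -> x0
    for (uint32_t c = 0; c < w; c++) {
        uint32_t x0 = c, x = c;
        for (uint32_t i = 0; i < t; i++) x = step(x);
        auto it = endToStart.find(x);
        if (it != endToStart.end() && it->second != x0) {
            uint32_t a = it->second, b = x0;  // distinct starts, same endpoint
            for (uint32_t i = 0; i < t; i++) {
                uint32_t na = step(a), nb = step(b);
                if (na == nb) { outA = a; outB = b; return true; }
                a = na; b = nb;
            }
        }
        endToStart.insert({x, x0}); // keep the first chain seen per endpoint
    }
    return false;
}
```

In the real attack, the colliding pair would then be checked for $b_i \not= b_j$ and verified against the double encryption of $P$.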


c) For a given success probability $p = wt/2^{n+1}$, the time and space complexity depend only on $w$ and $t$. We evaluate the complexity in the case $p = 1$, i.e. when the true keys are guaranteed to be among the inputs evaluated by $f$. Here we can exploit the birthday paradox: we expect a collision to occur within about $2^{(n+1)/2}$ applications of $f$.


If everything were stored in memory, meaning $t = 1$, the space complexity would be $w = 2^{n+1}$: every output of $f$ is stored, and the time to backtrack and find the colliding keys drops to constant. At the other extreme, $w = 2$ and $t = 2^n$: the space complexity is small, but the backtracking time is $t = 2^n$. The right compromise lies in between, at $w = t = 2^{(n+1)/2}$; in this case the backtracking also takes $2^{(n+1)/2}$ steps, so the total time complexity is $2^{(n+1)/2}$ plus the backtracking cost.


Meeting in the middle has a time complexity of $2^{n+1}$ and a space complexity of $2^n$ when space is unbounded. To compare this to the collision-finding attack, we look at the case where the space complexity is $w = 2^{(n+1)/2}$: the expected time complexity is then $2^{(n+1)/2} + 2^{(n+1)/2} = 2 \cdot 2^{(n+1)/2}$. In this example both the time and space complexity are orders of magnitude lower than those of the meet-in-the-middle attack, which can be attributed to the collision-finding attack reducing to a birthday attack.


\end{document}

