\documentclass{article}

\marginparwidth 0pt
\oddsidemargin  0pt
\evensidemargin  0pt
\marginparsep 0pt
\topmargin   -.50in
\textwidth   6.5in
\textheight  9in

\usepackage{pdfpages}
\usepackage{cite}

\let\OLDthebibliography\thebibliography
\renewcommand\thebibliography[1]{
    \OLDthebibliography{#1}
    \setlength{\parskip}{0pt}
    \setlength{\itemsep}{0pt plus 0.3ex}
}

\usepackage{titling}
\setlength{\droptitle}{-4em}     % Eliminate the default vertical space
\addtolength{\droptitle}{-24pt}   % Only a guess. Use this for adjustment

%\usepackage{titlesec}
%\titlespacing\section{0pt}{12pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt}
%\titlespacing\subsection{0pt}{8pt plus 4pt minus 2pt}{1pt plus 2pt minus 2pt}
%\titlespacing\subsubsection{0pt}{6pt plus 4pt minus 2pt}{1pt plus 2pt minus 2pt}
%\renewcommand{\section}{\subsubsection}

\title{CIS700 Project Final Report: \\ {\large Identifying Random Number Generation Flaws in Practice}}
\author{Shaanan Cohney and Luke Valenta}
\date{}

\begin{document}
\maketitle

\section*{Problem}
We propose to analyse captured encrypted SSL/TLS traffic for potential sources of insecurity. While SSL/TLS is considered a sufficiently strong and practical protocol in theory, there is a large gap between protocol design and implementation in both hardware and software. It is currently unknown which commonly used implementations suffer from such flaws, and we aim to provide insight into this question.

\section*{Approach}
Our focus is on flaws in the random number generation that supplies fields such as the TLS client/server random values and TCP sequence numbers. We aim to process around 400GB of pcap (packet capture) files made available to us by an academic source. To detect randomness issues, we will look for duplication in captured values, count the duplicates, and sort them by frequency. We will then manually examine the hostnames and software behind the most frequent values to see if we can identify the largest sources of implementation error. Time permitting, we will also attempt to factor keys to determine the extent to which security is broken.
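The core counting step above (deduplicate, tally, sort by frequency) can be sketched as follows. This is a minimal illustration, not our full pipeline: it assumes the random values have already been extracted from the pcap files (e.g. with a packet-parsing library) into hex-encoded strings, and the sample values are hypothetical.

```python
from collections import Counter

def rank_duplicates(randoms):
    """Tally each captured value and return those seen more than once,
    most frequent first."""
    counts = Counter(randoms)
    return [(value, n) for value, n in counts.most_common() if n > 1]

# Hypothetical captured client randoms (32-byte values, hex-encoded).
captured = [
    "00" * 32,          # an all-zero value
    "00" * 32,
    "ab" * 32,
    "cd" * 32,          # unique, so it will be filtered out
    "ab" * 32,
    "ab" * 32,
]
print(rank_duplicates(captured))
```

The values at the top of this ranking are the ones we would investigate by hand, looking up the hostnames and server software that produced them.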

\section*{Timeline}
\begin{itemize}
\item Nov 3: Proposal completed
\item Nov 9: Basic analysis tools coded
\item Nov 16: All data sorted
\item Nov 30: Most common sources analysed
\item Dec 1: Presentation complete
\item Dec 2: Review paper complete
\end{itemize}

\section*{Expected Results}
We expect to see many thousands of duplicated ``random'' values. From a cursory inspection of the data, we also expect to see a number of values consisting of all zero bytes. Additionally, we expect a large number of duplicate values arising from attempts to scan servers to extract their keys, since the easiest way to build such a scanner is to repeat fixed client values. Beyond these, we expect to be able to pinpoint a number of configurations that result in poor randomness.
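Separating these expected cases during analysis might look like the following hypothetical sketch: trivial all-zero values are flagged on their own, and a value repeated across several distinct source hosts suggests a shared implementation flaw rather than one scanner replaying the same value. The input format (a map from hex value to the set of hosts observed sending it) and the example hostnames are assumptions for illustration.

```python
def classify(value_to_hosts):
    """Split duplicated values into trivial (all-zero) values and values
    seen from multiple distinct hosts, which hint at a shared flawed
    implementation rather than a single repeating scanner."""
    trivial, cross_host = [], []
    for value, hosts in value_to_hosts.items():
        if set(value) <= {"0"}:          # hex string of all zero bytes
            trivial.append(value)
        elif len(hosts) > 1:             # same value from distinct hosts
            cross_host.append(value)
    return trivial, cross_host

# Hypothetical observations: hex value -> set of source hosts.
observations = {
    "00" * 32: {"a.example", "b.example"},
    "ab" * 32: {"a.example"},                # one host only: ignored here
    "cd" * 32: {"c.example", "d.example"},
}
print(classify(observations))
```
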

\nocite{*}
\bibliographystyle{plain}
\bibliography{citations}

\end{document}
