\section{Approach}

For the first part of this ongoing project, our focus has been on searching for `interesting' events, such as random values repeated more frequently than expected or weak parameter choices for cryptographic protocols. Our hope is that this analysis will allow us to identify flaws in random number generation or other implementation issues. Our primary data source is 2.5TB of pcap (packet capture) files made available to us by an academic source. The files contain the first [how many] bytes of data from each TCP session, filtered using tcpdump, the standard packet capture tool. In total, the pcaps contained 3,571,230,315 packets. \\

A significant challenge we faced was designing infrastructure that maintained privacy and data control while still achieving analysis run-times on the order of days rather than weeks or months. Restrictions on the transport of the data precluded us from parallelizing across a cluster or multiple machines, so optimizing a single-machine workflow was key. Concretely, we needed a multi-core pipeline that could run within our system's 64GB of RAM and make efficient use of its twenty-four 2.2GHz cores. \\

We performed our analysis in multiple stages. First, we parsed the pcap files into a more usable format, pulling out fields that could potentially be useful for our analysis. We then further reduced the size of our dataset by parsing the generated CSVs and extracting only the fields necessary to analyse a given interesting value (such as random bytes or sequence numbers). \\ 

Next, we searched for interesting values within these fields. Depending on the field, a value could be considered interesting if it was either seen repeated across multiple packets, or was a weak parameter choice such as a low RSA public exponent. \\

To detect repeats of a value, we invoke the collision-detection procedure described below. Once repeats are found, we count them and sort by frequency. The rationale for sorting by frequency is that the most common sources of implementation errors are likely to produce the most repeated values and will therefore appear near the top of the sorted list. Repeats can also be clustered by frequency to aid analysis. \\
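As an illustration of the counting and sorting step (a minimal sketch, not the pipeline's actual code; the sample values are hypothetical), repeated values can be tallied and ranked as follows:

```python
from collections import Counter

def rank_repeats(values):
    """Count occurrences of each value and return only those seen
    more than once, most frequent first."""
    counts = Counter(values)
    return [(v, n) for v, n in counts.most_common() if n > 1]

# Toy example: the value repeated three times rises to the top.
observed = ["deadbeef", "0a1b2c3d", "deadbeef", "deadbeef", "0a1b2c3d", "cafef00d"]
print(rank_repeats(observed))  # [('deadbeef', 3), ('0a1b2c3d', 2)]
```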

To find weak parameter choices, we extract the relevant fields from the parsed files and sort by frequency. Another technique that we have not yet employed is to analyse packets pairwise. For example, we can compute pairwise GCDs of RSA moduli to detect shared prime factors, or employ Coppersmith's method to decrypt messages encrypted by RSA with a low public exponent, repeated random padding, and the same modulus. \\
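The pairwise-GCD idea can be sketched as follows; the moduli here are tiny illustrative integers rather than real RSA keys. Any nontrivial common factor between two moduli immediately factors both:

```python
from itertools import combinations
from math import gcd

def shared_factors(moduli):
    """Return nontrivial common factors found between pairs of distinct
    moduli. A shared factor yields the private keys for both."""
    hits = []
    for n1, n2 in combinations(moduli, 2):
        g = gcd(n1, n2)
        if 1 < g < min(n1, n2):
            hits.append((n1, n2, g))
    return hits

# Toy example: 77 = 7*11 and 91 = 7*13 share the prime 7; 323 = 17*19 shares nothing.
print(shared_factors([77, 91, 323]))  # [(77, 91, 7)]
```

Note that this naive approach is quadratic in the number of keys; at the scale of our dataset, a batch-GCD computation using product trees would be the practical choice.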

  We now describe each of the above steps more fully. \\

\paragraph{Parsing.} 
Our first task was to parse the pcap files and extract relevant fields. To achieve this we utilized tshark, the open-source command-line version of Wireshark\footnote{\url{https://code.wireshark.org/review/gitweb?p=wireshark.git;a=summary}}, branched at commit \texttt{I113d60b}. This efficiently extracted all useful fields that we might wish to filter on, while reducing the amount of extraneous data collected. We additionally utilized standard BPF (Berkeley Packet Filter) instructions to filter out retransmissions and corrupted packets. We output to CSV format and compressed the results, as our disk space was limited. A small piece of custom software then further reduced the size of the results on a per-analysis basis by extracting only the fields relevant to a given query.
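The per-analysis reduction step can be sketched as below. This is an illustrative reimplementation, not our actual tool, and the column names shown in the usage comment are hypothetical examples of tshark field names:

```python
import csv

def reduce_csv(in_path, out_path, keep_fields):
    """Copy only the columns needed for one analysis from a parsed
    pcap CSV, dropping rows in which every kept field is empty."""
    with open(in_path, newline="") as fin, open(out_path, "w", newline="") as fout:
        reader = csv.DictReader(fin)
        writer = csv.DictWriter(fout, fieldnames=keep_fields)
        writer.writeheader()
        for row in reader:
            kept = {f: row.get(f, "") for f in keep_fields}
            if any(kept.values()):
                writer.writerow(kept)

# e.g. keep only source address and handshake random for a repeat search:
# reduce_csv("parsed.csv", "ssl_random.csv", ["ip.src", "ssl.handshake.random"])
```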

\paragraph{Collision-detection.}
The next step in the pipeline is a process we refer to as binning. To find repeats of a particular field, we invoked a collision-detection procedure. Given the size of our dataset, using a single hash table to find collisions over the entire dataset was not feasible. Instead, we first assigned values to bins based on the first two bytes of the field, in essence performing a truncated radix sort. Each bin was small enough to fit into memory, so we then used a hash table to find duplicates within each bin.
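The binning procedure can be sketched as follows (a simplified in-memory version, assuming field values are hex strings as emitted by the parsing stage):

```python
from collections import defaultdict

def find_collisions(values, prefix_bytes=2):
    """Two-pass collision detection: bin values by their first
    `prefix_bytes` bytes (a truncated radix sort), then find exact
    duplicates within each bin via an in-memory hash table."""
    bins = defaultdict(list)
    for v in values:
        bins[v[: 2 * prefix_bytes]].append(v)  # two hex chars per byte

    collisions = {}
    for bucket in bins.values():
        seen = {}
        for v in bucket:
            seen[v] = seen.get(v, 0) + 1
        collisions.update({v: n for v, n in seen.items() if n > 1})
    return collisions

vals = ["deadbeef01", "deadbeef01", "deadc0de02", "0badf00d03"]
print(find_collisions(vals))  # {'deadbeef01': 2}
```

In the actual pipeline, bins are materialized on disk and processed one at a time, so that only a single bin need be resident in memory.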

\paragraph{Analysis.}
We focused on finding duplicates in the following fields: TCP sequence numbers, SSL handshake random values, and Diffie-Hellman key exchange values. Additionally, we looked for weak parameter choices in the RSA public exponent field.
