\section{Results}

Unfortunately, late in the process our access to the data was restricted, so we can present only survey results rather than more in-depth analysis. We expect our access to be restored in the coming weeks. \\

We now present the results of the analysis that we have performed thus far. For each of the fields we investigated, we describe our findings and, where possible, potential explanations for the observed behaviour. \\

Our final toolchain completed each analysis run (one per value type of interest) in under forty-eight hours. However, due to our limited access to the data, we were only able to perform two such runs: one on the TCP sequence numbers and one on the random values. Broken down further, the initial parsing takes approximately two minutes per gigabyte, as does the extraction of the values of interest. Binning takes a further four hours, and, depending on specific needs, sorting the values can take up to one more hour. \\
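The binning stage can be sketched as follows (a minimal illustration in Python; the function names and the prefix width are our own choices for exposition, not the actual toolchain's):

```python
from collections import defaultdict

def bin_values(values, prefix_bytes=4):
    """Group observed byte strings into bins keyed by a short prefix.

    Binning by prefix keeps each bucket small enough to sort
    independently, so exact-duplicate detection parallelises well.
    """
    bins = defaultdict(list)
    for v in values:
        bins[v[:prefix_bytes]].append(v)
    return bins

def find_repeats(values):
    """Return {value: count} for every value seen more than once."""
    repeats = {}
    for bucket in bin_values(values).values():
        bucket.sort()
        i = 0
        while i < len(bucket):
            j = i
            # Walk over the run of identical values starting at i.
            while j < len(bucket) and bucket[j] == bucket[i]:
                j += 1
            if j - i > 1:
                repeats[bucket[i]] = j - i
            i = j
    return repeats
```

Duplicates can only collide inside a single bucket, which is why the sort-and-scan step never needs to compare values across buckets.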

\subsection{Client/Server Random Values}
As part of the SSL handshake, the client and server each generate a 32-byte nonce consisting of a 4-byte timestamp and 28 random bytes. These nonces are used, along with a client-generated \emph{pre-master secret}, as input to a pseudo-random function to generate the \emph{master secret} for a session. The master secret is then used to generate the keys that the client and server use throughout the communication session. The random bytes field helps to ensure that the master secret is generated with randomness from both the client and the server. There are legitimate reasons why the random bytes field may be non-random or repeated, but repetition can also indicate an implementation flaw. The following are some explanations for this behaviour.
\begin{enumerate}
\item Internet scans. For activities such as scanning, secure randomness is not necessary. Using the same value in the random bytes field is faster and cheaper than generating cryptographically random values for each new connection.
\item Broken SSL implementations. We witness a large number of random bytes fields that contain only zeros; these implementations fail to set the field at all.
\item Value unused by application. In some instances, we discovered that the negotiation of keys occurred at the application layer, and the random bytes field was not used.
\item Weak random number generation. This is the most interesting cause of repeats. An implementation could be using a weak source of randomness to set the random bytes. Our hope is that our system is able to discover these issues so that we can investigate what else is affected and fixes can be made.
\end{enumerate}
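The field layout underlying this analysis can be sketched as follows (the split follows the TLS ClientHello/ServerHello definition in RFC 5246; the helper name is ours):

```python
import struct

def split_hello_random(random_bytes: bytes):
    """Split the 32-byte Hello random into (gmt_unix_time, 28 random bytes).

    RFC 5246 defines the field as a 4-byte big-endian UNIX timestamp
    followed by 28 bytes that should come from a secure RNG.
    """
    if len(random_bytes) != 32:
        raise ValueError("Hello random must be exactly 32 bytes")
    (gmt_unix_time,) = struct.unpack(">I", random_bytes[:4])
    return gmt_unix_time, random_bytes[4:]
```

Because the timestamp prefix legitimately varies across handshakes, a repeat of the full 32 bytes is an even stronger anomaly signal than a repeat of the trailing 28 bytes alone.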

The repeated values we had time to evaluate by hand came from a number of sources. The top source of repeats was a value hard-coded into the Microsoft Skype client, accounting for approximately 35\% of all the repeats captured. This is not an immediate vulnerability in the protocol, however, as SSL in this case only wraps the already-encrypted Skype traffic. \\

The next most frequent value was all zeros. Identifying the source of this repeated value was difficult: we expect it arises from numerous distinct issues, none of which obviously accounts for the bulk of the results. Aside from potential implementation issues, another likely source is scans such as those undertaken by Heninger \emph{et al.} \cite{heninger2012mining}. \\

The third most frequent value originated from \texttt{talk.google.com} domains; although one of our colleagues reached out to Google for comment, no explanation has yet been found. Similarly, the fourth most frequent value stemmed from the \texttt{gotomypc} service, a remote desktop product sold by Citrix. We also found more troubling repeats from other assorted Google services, including DoubleClick and YouTube. \\

The repeat that was potentially most interesting for identifying broken implementations or images originated from the IP range assigned to Linode, a cloud hosting provider. All the servers repeating the same value were running the same versions of Ubuntu and nginx, which may indicate a software flaw, but we have not yet investigated this further. \\

Many of the other common repeats we identified as scans, based on the source IPs and the published ranges from which various research groups operate their scans. \\

A final curiosity, found before our access was cut off, was a series of repeated values with cipher suite \texttt{0x0003}: RSA Export (a weakened version of RSA for export outside the USA) with RC4 and MD5, both of which are known to be broken for this use case. An additional oddity was that, on initiating a connection, the client would always send \texttt{CLIENT\_HELLO} and \texttt{FINISHED} in the same packet (across two records), without even waiting to receive the server's HELLO. This seems to rule out scans, but other potential causes for this behaviour are unknown at present. \\

One potentially interesting source of repeated values is the restoration of virtual machines from RAM snapshots. After a reboot or a restore from a snapshot, an OS needs time to collect entropy before the values its RNG produces can be considered random for the purposes of SSL. Snapshots are commonly used in cloud environments to replicate the exact same server state across numerous instances. We suspect that using such snapshots without reseeding the PRNG accounts for some of the repeated values, across both the sequence numbers and the client/server randoms. \\
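The effect of restoring a cloned RNG state is easy to illustrate with a toy seeded generator (this stands in for the OS RNG, which works differently but suffers the same fate when its internal state is duplicated):

```python
import random

def nonce_after_restore(snapshot_state: int) -> bytes:
    """Simulate a PRNG whose internal state was captured in a snapshot:
    every instance restored from that snapshot replays the same stream."""
    rng = random.Random(snapshot_state)  # stand-in for the cloned RNG state
    return bytes(rng.getrandbits(8) for _ in range(28))

# Two VMs restored from the same snapshot emit identical "random" nonces
# until the RNG is reseeded with fresh entropy.
vm_a = nonce_after_restore(12345)
vm_b = nonce_after_restore(12345)
assert vm_a == vm_b
```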

Figure \ref{fig:random-cdf} shows the distribution of the number of times each random value is seen to be repeated in our data set. Note the log scale on the x-axis. We see that most of the repeats occur fewer than 10 times, but a few values are repeated hundreds of thousands of times. Table \ref{table:random} shows the top repeated values, as well as our best explanations for their causes.

\begin{figure}[ht]
\centering
\includegraphics[width=8cm]{../images/2-dups-cdf.png}
\caption{Distribution of duplicate counts for client/server random values}
\label{fig:random-cdf}
\end{figure}

\begin{table}[ht]
    \centering
    \begin{tabular}{cll}
    Count & Prefix & Explanation \\
    \hline \\
    3445702     & 02ade029...   & Skype statically set \tablefootnote{\url{http://is.muni.cz/th/324682/fi_b_b1/thesis.pdf}} \\
    637005  & 00000000...   & Broken SSL implementation \\
    461506  & 27a95da0...   & Google Talk \\
    317593  & 23f73a4e...   & Seen on Linode servers \\
    152524  & ac9e552e...   & gotomeeting - server never fully negotiates key exchange \\
    119603  & 06632731...   & Unknown. Seen repeated from many different IP addresses \\
    113199  & 3c47d1aa...   & Scanning \\
    80834   & 9d9b720b...   & Hard coded into a Heartbleed vulnerability detection scanning tool \tablefootnote{\url{https://github.com/sensepost/heartbleed-poc/blob/master/ssl-heartbleed.nse}}\\
    70054   & 73225705...   & Unknown \\
    47301   & 6b7b0a7c...   & Unknown \\
    \end{tabular}
    \caption{Most frequent client/server random values}
    \label{table:random}
\end{table}

\subsection{TCP Sequence Number Repeats}

In terms of sequence numbers, the distribution was more encouraging. Though there were repeated values, they accounted for a far lower percentage of the total sequence numbers. The mode was sequence number one, with 1038 repeats (1\% of total repeats). The RFC specifies that initial sequence numbers should be chosen unpredictably (though not uniformly at random) due to known attacks \cite{rfc6528}. \\
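The RFC 6528 scheme can be sketched as follows (the secret key and helper names here are illustrative; the RFC leaves the choice of PRF to the implementation, giving MD5 as one example):

```python
import hashlib
import struct
import time

SECRET_KEY = b"per-boot-host-secret"  # placeholder; chosen once per boot

def initial_sequence_number(src_ip: bytes, src_port: int,
                            dst_ip: bytes, dst_port: int) -> int:
    """RFC 6528 ISN selection: ISN = M + F(4-tuple, secret) mod 2^32,
    where M is a timer that ticks roughly every 4 microseconds and F
    is a keyed pseudo-random function of the connection 4-tuple."""
    m = int(time.time() * 250_000) & 0xFFFFFFFF  # ~4 us granularity
    f_input = (src_ip + struct.pack(">H", src_port) +
               dst_ip + struct.pack(">H", dst_port) + SECRET_KEY)
    f = int.from_bytes(hashlib.md5(f_input).digest()[:4], "big")
    return (m + f) & 0xFFFFFFFF
```

Because $F$ is keyed per host and per boot, off-path attackers cannot predict the ISN, yet each 4-tuple still gets a monotonically advancing sequence space.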
 
Interestingly, the more common sequence numbers were all below two million, as seen in Table \ref{table:sequence}. A notable exception was a number of captured repeats near $2^{32}$, again likely due to poor implementations. \\

For analysis, we grouped repeats by their frequency of repetition. As displayed in Figure \ref{fig:sequence}, each frequency group accounted for a roughly equal share of packets. Values repeated 7 or fewer times were left out of the analysis for simplicity. In retrospect, one explanation for the low-frequency repeats is that our capture of each flow spans multiple sequence numbers, so we would expect more repeats here than for the SSL random values. \\

\begin{table}[ht]
    \centering
    \begin{tabular}{cc}
    Count & Sequence Number \\
    \hline \\
    1038 & 1 \\
    401 &1090161 \\
    400 & 1603177 \\
    358 & 1731431 \\
    356 & 1026034 \\
    355 & 2436828 \\
    341 & 1474923 \\ 
    338 & 1667304 \\
    335 & 1539050 \\ 
    \end{tabular}
    \caption{Most frequent TCP sequence numbers}
    \label{table:sequence}
\end{table}

\begin{figure}[ht]
\centering
\includegraphics[width=8cm]{../images/sn-cdf.png}
\caption{Cumulative Distribution of TCP sequence number frequencies}
\label{fig:sequence}
\end{figure}

\subsection{RSA Public Exponents}
When we examined the RSA public exponents used in practice in server certificates, we noticed a significant number of low exponents. Although encryption with a low exponent is more efficient, it can enable attacks such as Coppersmith's method. We present some interesting public exponents in Table \ref{table:exponent}.
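To illustrate the risk, the textbook failure mode for $e = 3$ is encrypting an unpadded message small enough that $m^3 < n$: the ciphertext is then simply $m^3$ over the integers, and an attacker recovers $m$ with an integer cube root (a toy sketch with deliberately tiny primes; Coppersmith's method extends this idea to partially known or padded messages):

```python
def integer_nth_root(x: int, n: int) -> int:
    """Largest r with r**n <= x, found by binary search."""
    lo, hi = 0, 1 << (x.bit_length() // n + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** n <= x:
            lo = mid
        else:
            hi = mid - 1
    return lo

# Toy RSA parameters: e = 3 and a message small enough that m**3 < n,
# so the modular reduction never actually happens.
p, q = 999983, 1000003   # small primes for illustration only
n, e = p * q, 3
m = 4242
c = pow(m, e, n)         # "encrypt" with no padding
recovered = integer_nth_root(c, e)
assert recovered == m    # plaintext recovered without the private key
```

Proper padding (e.g. OAEP) defeats this particular attack even with $e = 3$, which is why the danger lies mainly in homegrown, unpadded uses of low exponents.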

\begin{table}[ht]
    \centering
    \begin{tabular}{cc}
    Count & Exponent \\
    \hline \\
    29277901 & 65537 \\
    38839 & 3 \\
    16365 & 17 \\
    11337 & 47 \\
    6391 & 59 \\
    1807 & 35 \\
    1121 & 4097 \\
    1035 & 65535 \\
    2 & 2102467769 \\
    1 & 65536 \\
    1 & 1 \\
    \end{tabular}
    \caption{RSA exponents from server certificates}
    \label{table:exponent}
\end{table}

\subsection{Algorithm Selection}
Another interesting fact that we discovered when examining server certificates is the selection of public key and signature algorithms used. Tables \ref{table:signature} and \ref{table:publickey} show the counts for some of the algorithms that we witnessed. A large number of these, such as those based on SHA-1 and MD5, are considered weak and no longer recommended for use\footnote{\url{http://googleonlinesecurity.blogspot.com/2014/09/gradually-sunsetting-sha-1.html}}.

\begin{table}[ht]
    \centering
    \begin{tabular}{cl}
    Count & Signature Algorithm \\
    \hline \\
    47152859 &    sha1WithRSAEncryption \\
    8595136 &    md5WithRSAEncryption \\
    2690044 &    sha256WithRSAEncryption \\
    203079 &    sha1WithRSA \\
    26090 &    shaWithRSAEncryption \\
    24574 &    sha512WithRSAEncryption \\
    13762 &    dsaWithSHA1 \\
    13530 &    rsassaPss \\
    1092 &    GOST \\
    774 &    sha384WithRSAEncryption \\
    184 &    ecdsa-with-SHA1 \\
    \end{tabular}
    \caption{Signature algorithms from server certificates}
    \label{table:signature}
\end{table}

\begin{table}[ht]
    \centering
    \begin{tabular}{cl}
    Count & Public Key Algorithm \\
    \hline \\
    29355288 & rsaEncryption \\
    4778 & dsaEncryption\\
    486 & GOST\\
    215 & id-ecPublicKey \\
    7 & dsaEncryption-old \\
    4 & Id-tc26-sign \\
    3 & Id-tc26-gost3410-12-256 \\
    3 & 1.3.6.1.4.1.12656.1.33 \\
    1 &    1.2.840.1335.1.1.1.1 \\
    1 &    1.6.840.113549.1.1.1 \\
    1 & itu-t \\
    \end{tabular}
    \caption{Public key algorithms from server certificates}
    \label{table:publickey}
\end{table}

\subsection{Diffie-Hellman public keys}
In the Diffie-Hellman key exchange, after the public parameters $g$ (the group generator) and $p$ (the prime modulus) are chosen, each party chooses a random value $r$ as its secret key, generates its public key $g^r \bmod p$, and sends it across the network. Since the secret keys are generated freshly at random for each key exchange, the public keys should essentially never repeat. However, we observed a significant number of repeats of this value in our collected data. Table \ref{table:dh} shows counts of the top repeated values. \\

We have not yet had a chance to find an explanation for these repeats. We expect that some can be attributed to multiple packets from the same flow, but this has not yet been verified.
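For reference, the key-generation step in question can be sketched with toy parameters (real deployments use much larger primes; the values here are illustrative only):

```python
# Toy Diffie-Hellman parameters: a small well-known prime and generator.
p = 2147483647  # the Mersenne prime 2**31 - 1; real groups are 1024+ bits
g = 5

def dh_public_key(r: int) -> int:
    """Each party's public key is g^r mod p; a fresh random secret r per
    handshake means public keys should essentially never repeat."""
    return pow(g, r, p)

# Two handshakes that reuse the same secret r put identical public keys
# on the wire -- exactly the kind of repeat we observed.
assert dh_public_key(123456) == dh_public_key(123456)
```

A repeated public key therefore implies a repeated secret exponent, so whoever learns $r$ once can decrypt every session that reused it.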

\begin{table}
    \centering
    \begin{tabular}{ccl}
    Count & Size(bytes) & Prefix \\
    \hline \\
    17075 & 64 & 91139dbc...\\
    5958 & 128 & 700ff09b... \\
    4499 & 64 & 44a991dd... \\
    3325 & 128 & 126b3f8f... \\
    3041 & 128 & 3bf7e2fe... \\
    2642 & 128 & 54533ffe... \\
    2367 & 128 & 8689d6a6... \\
    1639 & 128 & 4a7b527f... \\
    1454 & 128 & 13d42079... \\
    1285 & 128 & 7a7d546d... \\
    \end{tabular}
    \caption{Most frequent Diffie Hellman public keys}
    \label{table:dh}
\end{table}
