\documentclass[10pt,a4paper,onecolumn]{article}
\usepackage{url}
\usepackage{graphicx}
\usepackage{listings}
\usepackage[english]{babel}
\begin{document}
\title{GSoC 2010}
\date{\today}
\author{Neha Jain}
\maketitle
\section{Goals of the proposal}
The initial goal of the proposal was to wrap libAnomaly, the anomaly detection library designed by the Wepawet team, in Cython and feed it features extracted from the JavaScript collected from web pages in order to train its classifiers; the trained classifiers would then perform the classification. The idea is influenced by the 2010 Wepawet white paper. As suggested in Wepawet as well as in other papers, this method could detect zero-day attacks while attaining low false positive and false negative rates.
\subsection{Issues of the Proposal}
The steps I had in mind to work out the proposal were: install libAnomaly on a system, wrap it in Cython, and integrate it with the PhoneyC feature extractor. There were, however, a couple of issues: libAnomaly is written in C++ compatible only with g++ 3, and recent developments in gcc and g++ made it unusable on any recent Linux distribution (I tried several Ubuntu releases, in fact all of them). I did get it installed on Knoppix 3.8 in a virtual machine, but installing Cython and PhoneyC on Knoppix 3.8 was not trivial. After discussing these issues with my mentor, we decided to revise the proposal: we dropped the idea of training libAnomaly and decided to write a naive Bayes classifier for PhoneyC itself.
\subsection{Revised Proposal}
By the start of June we had the new idea and I started working on it. The goal was to design a naive Bayesian classifier that feeds on features extracted from the scripts collected from the URLs of web pages submitted to PhoneyC. The design of the new proposal is illustrated in Figure 1.
As shown in the figure, the goal is to identify features which might distinguish safe JavaScript from malicious JavaScript, write an extractor for those features, and feed them to a classifier, making the design end to end: feed in a URL, get the features out.
\section{Design of the Solution}
\subsection{Malicious Javascript detection using Bayesian Classifier}
\begin{figure}[hb]
\includegraphics[scale=0.5]{figure1.png}
\caption[Design of the solution]{Design of the solution of the Bayesian Classifier for Javascript and Iframe Classifier}
\end{figure}
Figure 1 explains the design of the solution. The user issues the command with the anomaly detection switch set to a value of 1, 3, or 5. Based on the value issued, the Bayesian classifier is turned on in one of three modes: learning malicious file features, learning safe file features, or classifying the file features, respectively.
Features are extracted from the JavaScript obtained from the web page at two different places. This is done to catch features that would otherwise escape evaluation due to obfuscation. The script is collected from PageParser.py, Window.py and Document.py. The result is summarised below.
\begin{enumerate}
	\item The script extracted in do\_end\_script() of PageParser.py is in its original form: packed, obfuscated, or neither.
	\item The scripts resulting from eval() or document.write() calls are deobfuscated and in the clear.
\end{enumerate}
The extraction of features is done in js\_features.py by a handful of regular expressions, reproduced here for completeness.
\lstset{language=Python}
\begin{lstlisting}[frame=single]
func = re.compile(r"(\w+|\w+\.\w+|\w+\.\w+\.\w+)\s*\(")
function_call2 = re.compile(r"\[\'?\"?(\w+)\'?\"?\]\(")
var = re.compile(r"var\s+(\w+)\s*[=;]\s*(.*?)[,;]")
\end{lstlisting}
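As an illustration of how these patterns behave, here is a minimal example; the sample script is made up and is not part of js\_features.py.

```python
import re

# The patterns above (as raw strings), applied to a made-up sample script
func = re.compile(r"(\w+|\w+\.\w+|\w+\.\w+\.\w+)\s*\(")
function_call2 = re.compile(r"\[\'?\"?(\w+)\'?\"?\]\(")
var = re.compile(r"var\s+(\w+)\s*[=;]\s*(.*?)[,;]")

sample = "var s = 'abc'; document.write(unescape(s)); window['eval'](s);"

calls = func.findall(sample)                    # ['document.write', 'unescape']
bracket_calls = function_call2.findall(sample)  # ['eval']
variables = var.findall(sample)                 # [('s', "'abc'")]
```

Note that the bracket-notation pattern is what catches calls like window['eval'](), which the plain call pattern misses.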


The call\_from variable in js\_features() specifies where the script is coming from. The following features are extracted from the JavaScript in the present implementation.
\begin{enumerate}
	\item No. of redirections:\\
	An exploit may not be served directly on the page but rather on some other page to which the user is redirected from the current page, using redirection mechanisms such as HTTP status codes, the document.location and location.replace properties, iframes, or \textless meta\textgreater\ tag redirects. We count the number of redirections that occur whenever a page is visited by counting the presence of such properties.
	\item String definition to string use ratio:\\
	An exploit page has a high string definition to string use ratio. String definition functions include string.fromCharCode, string.split, etc., and string uses include functions such as eval, document.write and unescape.
	\item No. of dynamic execution calls:\\
	Though most web pages these days use dynamic execution calls to provide an interactive front end for users, malicious web pages use a great many such calls, and specific calls such as eval, document.write and arguments.callee.toString are found in prominence on those pages. The current implementation scores each function call equally, but I think the calls found mainly on malicious web pages could be scored a bit higher (comments?).
	\item Argument length of the dynamic functions:\\
	Pages that serve malware pass unusually long arguments to the eval() and unescape() functions. This feature focuses only on the argument length of the eval(), unescape() and document.write() functions; I do not yet have a mechanism in place to catch the length of strings that may be passed into these functions indirectly.
	\item Percentage of human readable function and variable names:\\
	This feature is designed to measure the readability of user-defined function and variable names. The implementation does not score readable and unreadable names equally: I found that the function names captured by the regular expression also include built-in JS functions like document.write(), so the result did not reflect the percentage of user-defined names' readability. I therefore score readable functions half as much as non-readable functions and variables. A name is readable when:\begin{enumerate}
                \item length \textless 15 chars
                \item 20\% \textless vowels \textless 60 \%
                \item not more than 2 repetitions of the same character
                \item more than 70 \% alphabetical
        \end{enumerate}
I had started by implementing this feature as the percentage of human readable names, scoring readable names 0.5, but even this did not seem to capture the feature exactly: malicious JS has variable and function names which are arbitrary and make little sense to the reader, their actual purpose being to obfuscate the code. So I have changed the way this feature is evaluated; it now stores the percentage of function and variable names which are not readable.
	\item No. of hidden iframes that redirect to another domain:\\
	This feature uses the ``score'' evaluated by the iframe classifier (see the Iframe Classifier subsection). The classifier scores iframes that are both hidden and redirecting to another domain 3 (2+1), and iframes with a score of 3 are the ones generally present in malicious pages. The number of such iframes in a page is evaluated and stored by this feature.
	\item No. of ActiveX instances:\\
	This feature stores the number of ActiveX instances created in a web page. The count alone does not convey anything special, as ActiveX controls are nowadays used to enhance the interactivity and functionality of web pages; a refinement of this feature could be based on the type of ActiveX plugin being instantiated. For the current Gaussian-model-based Bayesian classification, however, it seemed only the number of ActiveX instances could be evaluated. The feature gets its value from ActiveX.py: when an ActiveX control is instantiated, the \_\_init\_\_() function of ActiveX.py runs and makes a call to js\_features.py with the call\_from variable set to ``activex''; the count in js\_features.py is incremented by 1 whenever such a call is made.
\end{enumerate}
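The readability criteria listed above can be sketched as a small helper. This is an illustrative approximation, not the actual js\_features.py code; in particular, ``not more than 2 repetitions of the same character'' is interpreted here as no run of three identical characters, which is an assumption.

```python
def is_readable(name):
    """Heuristic readability check following the criteria above.

    NOTE: illustrative sketch, not the exact js_features.py code;
    'repetition' is interpreted as a run of the same character.
    """
    if not name or len(name) >= 15:             # length < 15 chars
        return False
    alpha = sum(c.isalpha() for c in name)
    if alpha / len(name) <= 0.7:                # more than 70% alphabetical
        return False
    vowels = sum(c in "aeiouAEIOU" for c in name)
    if not 0.2 < vowels / len(name) < 0.6:      # 20% < vowels < 60%
        return False
    runs = any(name[i] == name[i + 1] == name[i + 2]
               for i in range(len(name) - 2))
    return not runs                             # no char repeated 3+ times in a row
```

On this heuristic a name like ``counter'' passes, while obfuscated identifiers with no vowels or excessive length fail.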
\subsection{Iframe Classifier}
It has been found that malicious web pages employ iframes frequently, as hidden iframes provide a neat and easy way to redirect the user unknowingly to another page which either serves an exploit directly or fingerprints the user's browser and then redirects to a page which exploits a vulnerability in the browser or its plugins.
The iframe classifier raises alert when it discovers: \begin{enumerate}
\item An iframe that has been deliberately hidden by setting one of the following properties \begin{itemize}\item \textless iframe style='visibility: hidden' \textgreater
\item \textless iframe height = [0/1] width = N frameborder = 0 \textgreater
\item \textless iframe height = N width = [0/1] frameborder = 0 \textgreater
\end{itemize}
\item An iframe that redirects to a page in another domain.
\end{enumerate}
The ``score'' which this classifier assigns to each iframe it comes across can be broken into two main constituents:
\begin{enumerate}\item Hidden: 
\lstset{language=Python}
\begin{lstlisting}[frame=single]
if hidden:
    score = 2
else:
    score = 0
\end{lstlisting}
	\item Cross Domain: An iframe that redirects to a domain other than the parent domain is a cross-domain iframe. The score for this is:
\lstset{language=Python}
\begin{lstlisting}[frame=single]
if Xdomain:
    score = 1
else:
    score = 0
\end{lstlisting}
\end{enumerate}
Thus clearly, we end up with 4 situations:
\begin{center}
\begin{tabular}{|l|p{9cm}|l|}
\hline
score & type & status\\
\hline
3 & Hidden iframe redirecting to another domain & Alert\\
\hline
2 & Hidden iframe redirecting to the same domain & Warning\\
\hline
1 & Visible iframe redirecting to another domain & Warning\\
\hline
0 & Visible iframe redirecting to same domain & Debug message\\
\hline
\end{tabular}
\end{center}
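The two score constituents and the table above can be combined into a few lines; the function names here are illustrative, not the actual PhoneyC code.

```python
def iframe_score(hidden, cross_domain):
    """Combine the two score constituents: hidden contributes 2, cross-domain 1."""
    return (2 if hidden else 0) + (1 if cross_domain else 0)

def iframe_status(score):
    """Map a combined score to the status column of the table above."""
    return {3: "Alert", 2: "Warning", 1: "Warning", 0: "Debug message"}[score]
```

For example, a hidden iframe redirecting to another domain scores 2 + 1 = 3 and raises an alert.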
\subsection{Bayesian Classifier}The Bayesian classifier trained here is a naive Bayes classifier, which assumes class-conditional independence. The training and testing of the classifier are done in the following manner:
\subsubsection{Training}The extracted feature values are maintained separately for two sorted file sample corpuses, mal\_corpus.txt and safe\_corpus.txt. For each corpus, the mean and standard deviation of each extracted feature value are learned. The final result of the learning is stored as a dictionary in the files mal\_gauss\_dist.pkl and safe\_gauss\_dist.pkl. The dictionary is of the form:
\{\_\_redirect:\{mean:4,std\_deviation:0.98\}\ldots\}
The gauss\_dist() method of the Bayesian\_Classifier carries out the task of learning for the classifier. It is called when the value of the anomaly detection switch is 1 or 3.
\subsubsection{Classification of samples}The features extracted from a test JS sample are continuous-valued, so they are assigned probabilities based on the following Gaussian probability function:
\[
g(x,\mu,\sigma) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)
\]
The probability of a feature conditioned on its being in one class is thus:
\[
P(x_k|C_i) = g(x_k,\mu_{C_i},\sigma_{C_i})
\]
The prior probability is estimated from the number of samples in each corpus, in the usual manner. Finally, the probability that the sample belongs to the malicious class and to the safe class is calculated using Bayes' theorem, and the sample is assigned to the class for which this probability is higher.
The classify\_script() method carries out this classification task by calling classify(), which calls the estimate\_prob() method from both the safe\_post\_prob() and mal\_post\_prob() methods. The classifier operates in classification mode when the anomaly detection switch is set to 5.
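The decision rule described above can be sketched as follows; this is an illustrative implementation of the same Gaussian naive Bayes computation, not the actual classify\_script()/classify()/estimate\_prob() code.

```python
import math

def gaussian(x, mu, sigma):
    """g(x, mu, sigma) as given by the formula above."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def classify(features, mal_dist, safe_dist, mal_prior, safe_prior):
    """Naive Bayes decision: product of per-feature likelihoods times the class prior.

    NOTE: illustrative sketch. `features` maps feature name -> observed value;
    `mal_dist`/`safe_dist` are the learned {"mean", "std_deviation"} dictionaries.
    """
    def score(dist, prior):
        p = prior
        for name, x in features.items():
            d = dist[name]
            p *= gaussian(x, d["mean"], d["std_deviation"])
        return p
    return "malicious" if score(mal_dist, mal_prior) > score(safe_dist, safe_prior) else "safe"
```

In practice, summing log-probabilities instead of multiplying raw probabilities avoids floating-point underflow when there are many features.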
\subsection{Persistent storage}
I have used the pickle module of Python for serialization of the dictionaries generated by the learning of the classifier. The extracted features of each JS file sample are stored as a Python dictionary with the feature name as the key and the value of the feature as the value. This dictionary is further assigned as the value of another dictionary which has the sample number (the JS file sample \#) as its key. The final dictionary is pickled and written to a binary file. The dictionary structure is like:
\{\_\_redirects:[2,3,2\ldots],\_\_dynamic\_calls:[34,45,23\ldots]\ldots\}
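A minimal round trip of this storage scheme, with made-up values in the shape shown above:

```python
import pickle

# Illustrative feature dictionary in the shape shown above (values are made up)
features = {"__redirects": [2, 3, 2], "__dynamic_calls": [34, 45, 23]}

with open("safe_list.pkl", "wb") as f:
    pickle.dump(features, f)       # serialise to a binary file

with open("safe_list.pkl", "rb") as f:
    restored = pickle.load(f)      # read the learned dictionary back
```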

\section{Testing the solution}
I have tested the solution by training on a set of malicious pages and a set of safe pages.
\subsection{Safe Set}Getting a safe set is easy. I tested some of the Alexa top 300 sites; of these I left out sites from the same source hosted in different countries, e.g. I used \url{www.orkut.com} and not \url{www.orkut.br}, and similarly for other domains. The final URLs used to train the classifier were only those marked benign by both PhoneyC and Wepawet and containing JavaScript functionality, i.e. the presence of a \textless script \textgreater\ tag. I found around 3 web pages in the Alexa list, which I scanned from 12 August 2010 to 17 August 2010, that did not meet this bar; I used these 3 web pages to train for the malicious set.
In all, I trained the classifier with 204 URLs collected using the above approach from the Alexa top 500 list.
\subsection{Malicious Set} Collecting malicious JavaScript is not at all a trivial job. I collected some scripts in the following manner (anybody with a better idea could consider suggesting it to me):
\begin{itemize}
\item 6 scripts from various blogs, such as the ISC diaries, Marco Cova's blog, and Niels Provos's blog. I did not use these to train the classifier, but they were useful in getting me to grips with some of the techniques attackers commonly employ to launch a drive-by-download (DBD) attack. I have put the collected scripts in the corpus-mal directory inside the test/ directory in my svn branch.
\item I searched various blogs and forums known to carry links and information about exploits, such as malwaredomainlist.com, malekal (in French, I think; I used Google Translate to make sense of it), Dancho Danchev's blog, contagiodump, etc. I found a lot of links and scanned many of them, but none turned out to be active at the time of my testing. I also found a list on MDL of URLs and IPs; the URLs did not seem to be launching DBD attacks, but from the ip.txt file I found 6 malicious links out of 867 IPs. This IP list is quite extensive, with over 2000 IPs, and I am still running the scan to find more malicious IPs.
\item A simple Google search. I searched for the keywords ``Top ten websites that use Javascript''. This returns some millions of results, the first of which is a blog listing 10 such websites. I checked those links with PhoneyC and Wepawet: Wepawet reports the links as benign, but PhoneyC raises \textgreater 3 heap spray alerts for the first 5 URLs of the top 10 web pages! Now that is something. Looking into the HTML, things appear quite normal, and the features extracted for each of these URLs also look like those of a safe web page. So what is fishy?
Well, PhoneyC has a VBScript test in place: it uses vb2py to convert VBScript to Python and then tests it like JS for heap-spray and shellcode alerts. That is the most probable reason why PhoneyC found alerts in a seemingly safe website. Now that is a cool find!
\end{itemize}
Summarising the collection of the malicious set:\\
\fbox{1. Script posted to blogs.  2. Script and links from forums.  3. Google search}\\
\subsubsection{Some Discoveries}
While training the current classifier I came across some web pages that raised heap spray alerts with PhoneyC, yet the features that js\_features() calculated for the scripts found on those pages were quite similar to those of safe JS; this was also supported by the fact that Wepawet marked all of these web pages ``benign''. Some of the URLs are \url{http://booking.com/}, \url{http://adcapitalindustries.com}, \url{http://51.com}, etc.
This discovery shows that complete dependence on the anomaly detection approach alone would lead to an increased number of false negatives.
\subsection{Training result}
After testing the URLs I selected around 201 URLs for training the classifier on safe JS samples. The feature dictionary of the safe JS samples can be found in safe\_list.pkl in the log/ directory; the standard deviation and mean of the learned values are given in the table below. In parallel, I am scanning an IP list obtained from the Malware Domain List forum, which has 2267 IPs. I scheduled this scan for around 4 days starting 13 August, when I found the list; the machine I am testing on worked continuously day and night, and in the process its power adapter burnt out. I bought a new one to keep the scan going; still \textgreater 900 IPs remain to be scanned. Of the scanned IPs I found some 6 malicious at the time of scan, so I trained the classifier for malicious samples. I have been able to collect only 11 URLs that cause heap spray alerts, and no URL or JS that raises shellcode alerts. Despite the small malicious sample size and the immature design, the results, compiled in the table below, show that the approach is quite promising. Had the training set for the malicious samples been as broad as it was for the safe set, we could have got a better and more complete picture.
\begin{center}
\begin{tabular}{p{3cm}llll}
\hline
& \multicolumn{2}{c}{SAFE} & \multicolumn{2}{c}{MAL}\\
\cline{2-5}
Feature & Std deviation & Mean & Std deviation & Mean\\
\hline
Str definition: Str use & 165.570434318 &	120.916158388 & 97.7222499321 &	150.658654755 \\ \hline  
\# redirects & 18.0443844973  &	3.52558535085 &	4.24875621891 &	2.54545454545\\ \hline  
\# dynamic execution calls & 408.884862548	 & 435.446327499 &	252.995024876	 & 408.363636364\\ \hline	
\# ActiveX instances & 14.7183318129 &	2.03685059113 &	3.30348258706 &	1.18181818182\\ \hline  
\# hidden Iframes & 0.812881124995 &	0.962091385842 &	0.189054726368 &	0.727272727273\\ \hline  
Avg Argument length & 1615.27649359	 & 544.407165656 &	936.232429221 &	501.877922078\\ \hline  
\% non-readable names & 1443.06000014 &	1159.06622169 &	981.698792943 &	1561.51877981\\
\hline
\end{tabular}
\end{center}
\section{Background}
The work done so far may not be comparable to what has already been done by others, but it can be considered successful if learning is taken as the goal of this program.

The program taught me a lot, most importantly to work things out patiently. Everything I have been doing so far, and will continue to do, is pure experimentation: you never know what techniques the dark side is employing to get its job done. Thus the JS features which work for some people at one point in time may not work for others at a different point in time. The learning of such systems is a continuous process, failing which the product becomes outdated and less useful. The Bayesian classifier implemented in this program is based on a Gaussian model. It has led to the identification of a number of characteristics of the features and properties which might work in distinguishing a malicious JS sample from a safe JS sample.

Apart from this, the program taught me a lot of cool stuff. This has been like a dream come true. Since 2009, when I started using Linux and other open source software, I have had a desire to contribute to and serve this community, and I have been successful in learning how. Learning about regular expressions and the power they bring to text processing is simply great. \LaTeX, Python, Vim and many others are programs with unusual raw potential just waiting to be utilised by skilled hands\ldots

I am deeply obliged to my mentor, Jose Nazario, and to the other PhoneyC contributors, particularly Zhijie and Angelo Dell'Aera, for their encouragement and patience. They never let me feel an outcast because of my limited experience and knowledge. I will always remember my mentor's words: ``Everybody starts as a novice. I just want you to have fun learning.'' I think I did that.

\end{document}
