\documentclass[letterpaper,twocolumn,10pt]{article}
\usepackage{graphicx}
\usepackage{algpseudocode}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{authblk}
\usepackage[usenames,dvipsnames]{color}
\usepackage{hyperref}
\usepackage{array}
\newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}

\graphicspath{{/home/harper/Repository/research-papers-hao/phishing-detection/image/}}

\usepackage{usenix,epsfig,endnotes}

\begin{document}
\date{}

\title{\Large \bf Feature-based phishing website detection}

%for single author (just remove % characters)
\author{
{\rm Hao Jiang}\\
Department of Computer Science\\
Clarkson University\\
hajiang@clarkson.edu
\and
{\rm Jeanna N. Matthews}\\
Department of Computer Science\\
Clarkson University\\
jnm@clarkson.edu
% copy the following lines to add more authors
% \and
% {\rm Name}\\
%Name Institution
} % end author

\maketitle

\thispagestyle{empty}

%#################
% JSW - 2/20/14 - Started reviewing and making changes here
%#################

\subsection*{Abstract}
Phishing website creators and anti-phishing defenders are in an arms race. Cloning a website is fairly easy and can be automated by any junior programmer. 
Automated phishing website detection, on the other hand, attempts to recognize phishing links posted in the wild, e.g., on social media sites or in email.
Detection of phishing websites can also be done manually through voluntary user reporting, but this is much slower.
Thus the speed and accuracy of automated phishing website detection systems is fundamentally important for the defenders to win.
We propose a new method of detecting phishing websites. Our method uses image feature recognition to extract the most prominent visual parts of a webpage, and uses these parts as the basis for comparison with suspect webpages. Compared to existing methods, our approach has a much wider application range and higher detection accuracy. We successfully recognized 90\% of the suspicious webpages that previously evaded detection. We also show that our SVM-based text identification method achieves over 98\% accuracy in detecting text embedded in images on these potentially malicious sites while maintaining good performance.

\section{Introduction}
Phishing is one of the most successful and prominent attack methods. It also requires very little technical skill on the part of the attacker. Unlike many other attack methods, 
phishing does not even require infiltrating a victim's machine, where it could leave traces of its methods and activity. This alone makes phishing sites extremely hard to detect. A victim may not even be aware that they are under attack. 

Creating a phishing webpage is simple and cheap. A Google search for the phrase ``create a phishing website'' returns approximately 208,000 results, most of which are detailed step-by-step tutorials. With a hosting service and free tools that copy a given URL, anyone can set up a phishing website in minutes. With a private server and readily available tools, this process can even be automated. 

To deal with phishing websites that are constantly sprouting up all across the Internet, we need an efficient and automated method for identifying them quickly and accurately. Since scanning the entire URL address space is impractical, there have been many innovative ideas about where to look for suspicious phishing URLs. A simple but effective approach is to ask everyone to report suspicious URLs they encounter. PhishTank (\url{http://www.phishtank.com}) is a website that allows users to submit these URLs and verify the status of those submitted by others. Other parties are working to simplify the reporting process. Netcraft (\url{http://www.netcraft.com/}) has developed a Firefox plugin that enables users to report a suspicious URL with a single click. 

Some researchers are more interested in automating the URL collection work. J. White et al. \cite{spie2012_jwhite} search for suspicious URLs in Twitter data. I. Jeun et al. \cite{springer2013_jil} create a honeypot (the ``SpamTrap'') to collect URLs from spam emails.

Given reliable methods for collecting suspect URLs, we still need a fast and reliable way to determine whether these websites are truly phishing pages. 

\begin{figure*}[pt]
\centering
\includegraphics[width=0.8\textwidth]{google_drive_phishing.png}
\caption{A phishing website imitating Google Drive}
\label{gdrive_phishing}
\end{figure*}

\section{Previous Work}

Different methods have been used to compare the similarity of two webpages and thus identify a phishing website. We categorize these methods into two types: structure-based comparison and visual-based comparison. 

Structure-based comparison rests on the presumption that similar webpages have similar structure, which for webpages is the DOM tree. It first parses the HTML of the webpages to be compared into DOM trees and then compares those trees. 

Visual-based comparison, on the other hand, cares more about the appearance of the webpage. It captures screenshots of both the original webpage and the suspicious webpage and compares them using image processing techniques. We review some of the previous work below, along with its pros and cons. 

We also want to mention an interesting research direction: the application of machine learning techniques to the detection of phishing websites. Most recent work in this direction focuses on the analysis of content-based features of webpages, such as the links included in a page, its JavaScript, etc. This differs from methods that focus on image-based feature extraction and analysis. In this sense, the application of machine learning techniques to phishing detection can be a good complement to these existing online detection methods.

\subsection{DOM Tree Comparison}
Rosiello et al. \cite{securecomm2007_rosiello} describe a method that compares the similarity of webpages' DOM trees. They compare the HTML tags in the DOM tree, looking for similar sub-tree structures. To account for differing nodes between sub-trees, they assign weights to nodes based on type, location and other properties, and then calculate a penalty value for the difference. By properly choosing the penalty threshold, this method can avoid the interference an attacker introduces by intentionally adding or removing DOM nodes from the tree.

This method is easy to implement and executes efficiently. It is effective against phishing websites that directly copy the content of the original websites. However, it has an obvious disadvantage: the appearance of a webpage is not uniquely defined by its DOM tree structure. An attacker can easily evade such detection by using different HTML tags to generate webpages that look similar. For example, using \textless DIV\textgreater\ instead of \textless TABLE\textgreater\ elements for page layout can produce exactly the same visual effect with a totally different DOM tree. In addition, attackers can dynamically generate the DOM tree with a scripting language at runtime; in this case, directly fetching the webpage URL does not retrieve the DOM tree required, which leads to false negatives.
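To make the weighted-penalty idea concrete, a toy version of such a DOM-tree comparison could look like the sketch below. This is an illustration only, not Rosiello et al.'s actual algorithm: trees are simplified to (tag, children) tuples, and the weight table is an assumption (the cited work derives node weights from type, location and other properties).

```python
def subtree_weight(t, weights):
    """Total penalty for an entire extra/missing subtree."""
    return weights.get(t[0], 1.0) + sum(subtree_weight(c, weights) for c in t[1])

def dom_penalty(t1, t2, weights=None):
    """Toy weighted DOM-tree penalty: trees are (tag, children) tuples.

    Mismatched tags add a tag-dependent weight; children are compared
    pairwise, and unmatched children are charged as whole subtrees.
    A page pair is flagged as similar when the penalty is below a threshold.
    """
    weights = weights or {}
    penalty = 0.0
    if t1[0] != t2[0]:
        penalty += weights.get(t1[0], 1.0) + weights.get(t2[0], 1.0)
    c1, c2 = t1[1], t2[1]
    for a, b in zip(c1, c2):
        penalty += dom_penalty(a, b, weights)
    for extra in c1[len(c2):] + c2[len(c1):]:
        penalty += subtree_weight(extra, weights)
    return penalty
```

With this scheme, swapping a \textless TABLE\textgreater\ for a \textless DIV\textgreater\ still incurs a penalty even though the rendered pages look identical, which is exactly the weakness discussed above.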

\subsection{Visual Similarity}
Another type of phishing site detection is based on the visual similarity of two webpages. More specifically, screenshots of the webpages are captured using tools such as a headless browser. Comparison between these images is then conducted using various image processing techniques. 

Visual comparison methods overcome some of the disadvantages of DOM-based methods because they compare the visual output of the webpage, which is exactly the image seen by end users. These methods are especially effective against phishing websites that look the same as the original website.

J. White et al. use the hash values of images in their work \cite{spie2012_jwhite}. In their paper, the authors also describe capturing suspicious links from Twitter messages, which is interesting in its own right; here, however, we focus on the method they use to compare webpage content. The authors first capture a screenshot of the webpage using CutyCapt, a headless browser based on WebKit. They then calculate the pHash of the image file and evaluate the image difference using the Hamming distance between two hash values. Based on their experimental results, the authors claim that a small change to the original image leads to only a small increase in the Hamming distance.
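As a concrete illustration of hash-based comparison, the sketch below uses the simpler average hash rather than the DCT-based pHash of the cited work; the $8\times8$ size and bit packing are assumed defaults, but the comparison step, a Hamming distance between hashes, is the same.

```python
import numpy as np

def ahash(img, size=8):
    """Average hash: a simpler cousin of pHash (which applies a DCT first).

    Downscale the grayscale image to size x size with nearest-neighbour
    sampling, threshold at the mean, and pack the bits into an integer.
    """
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    small = img[rows][:, cols].astype(float)
    bits = (small > small.mean()).ravel()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(h1, h2):
    """Bitwise Hamming distance between two hashes."""
    return bin(h1 ^ h2).count("1")
```

Two screenshots are then judged similar when the Hamming distance between their hashes falls below a threshold; a small visual change perturbs only a few bits.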

A. Fu et al. \cite{ieee2006_fu} use the Earth Mover's Distance to evaluate the difference between two webpages. The Earth Mover's Distance (EMD) \cite{iccv1998_rubner} is a metric of the similarity between two probability distributions over a region. If we think of an image file as a distribution of pixels with different colors, the EMD between two images can be defined as the minimal cost of moving pixels to make the two images look the same. Generally speaking, the closer two images are, the smaller the EMD value. This makes it a good candidate for comparing webpage screenshots.

The works described above are effective against phishing websites that look exactly like the original websites. However, in practice we have found phishing examples that do not resemble the original websites at all and thus easily evade this kind of detection. Figure \ref{gdrive_phishing} shows an example phishing website for Google Drive retrieved from PhishTank, alongside the actual Google Drive login page. The phishing webpage does not look like the original legitimate Google Drive page, which uses Google's Single Sign-On system. After many manual observations, we have found that this is not a rare case. 

Even a ``low-quality'' phishing webpage that looks very different from the original can still catch victims, especially users who are unfamiliar with the actual webpage they intended to access. This likelihood rises when we consider that not all web surfers have enough knowledge to compare the phishing webpage with the legitimate one. Failing to catch this type of phishing website is a major disadvantage of the methods discussed above.

%#################
%JSW - 2/22/14 - Started reviewing and making changes here:
%#################

\subsection{Layout-based  Comparison}

Bohunsky et al. describe their work on using visual structure for webpage comparison and clustering \cite{www10_bohunsky}. Instead of comparing the entire picture, they split the webpage into small rectangular areas which they call ``visual boxes''. By comparing the visual-box structure of two webpages, they attempt to detect correlation.

However, during this process of converting everything into images, the text information contained in the website is lost. This means two webpages that have a similar layout but totally different text content may be categorized as similar. Figure \ref{page_sim} shows one such example; the images are from two different news websites. They each have a title picture and an abstract. Despite the visual similarity, the content they talk about may be completely different. This shows that relying only on visual layout of a webpage for clustering may introduce a high rate of false positives. 

\begin{figure}[h]
	\includegraphics[scale=0.5]{page_layout.png}
	\caption{Page Layout Similarity}
	\label{page_sim}
\end{figure}

\subsection{Application of Machine Learning in Phishing Detection}
Machine learning techniques have proven effective in many areas, including phishing detection. Y. Zhang et al. \cite{www07_zhang} create a content-based phishing website detection system using TF-IDF \cite{Jones72_tfidf}, an algorithm widely adopted in text mining and information retrieval, to extract keyword information from webpage text. They also use other features such as domain age, suspicious URLs, suspicious links and the existence of HTML forms. 

R. Basnet et al. use feature sets similar to those discussed above, but apply different machine learning techniques, including SVM, neural networks and SOM, and evaluate their performance \cite{scai08_basnet}. C. Whittaker et al. present the automatic maintenance of Google's blacklist of phishing websites using TF-IDF and classifiers \cite{ndss10_google}.

Comparisons between different machine learning techniques have also been conducted. S. Abu-Nimeh et al. present a performance comparison of different methods for detecting phishing email \cite{cml2007_abu}. They show that different methods have different advantages and that there is no single best one. D. Miyamoto \cite{anip2009_miyamoto} did similar work on phishing website detection and showed that AdaBoost \cite{Freund1997119} works best in their particular case.  

These methods represent significant progress in applying machine learning techniques to phishing detection. Content-based phishing detection has a performance advantage over image-based methods. However, by their nature, content-based methods have higher error and false positive rates, which limits their accuracy. We believe that combining content-based and visual-based features can yield better results in both performance and accuracy. This is central to our future research interest.


\section{Feature-based Comparison}

In this section, we describe the details of a new method, feature-based comparison. 

%Based on our analysis of previous works, we have found that visual similarity is a potentially effective tool in phishing website detection. We begin with this, but to overcome the problems that we discussed in the previous sections, we propose our new method, feature-based comparison.

%\subsection{Overview}


We define a ``feature'' of a webpage to be its most prominent part when comparing it to others. One of the features that best represents a webpage is its logo. In our observation, most phishing websites, even ``low-quality'' ones that do not look like the original webpage, will at least put the logo of the original website on their pages. Features can also be other images that are rarely used by any other website. 

To extract these features, we first split the screenshot into small pieces separated by whitespace. We do this by applying an edge detection algorithm to the original image, which results in a grayscale image with background colors removed. In the resulting image, only non-black pixels contain valid information, which makes our subsequent work much easier.

\begin{figure}
\centering
    \includegraphics[scale=0.60]{process.png}
    \caption{System Process}
    \label{sys_proc}
\end{figure}

The next step is to apply our split algorithm to the image, which uses straight lines and rectangles to split the non-black area into small pieces. We then calculate the minimal rectangular boundary of each piece, that is, the smallest rectangle that contains all non-black pixels in a given area. This process gives us a series of rectangular regions, to which we apply a collection of filters. These filters are the core of our method: they check the given regions from different aspects and filter out the less important ones. Finally, for each rectangle that survives the filters, we crop the original image to obtain the extracted features. Figure \ref{sys_proc} shows this process. We describe each step in detail in the following sections.

\subsection{Edge Detection}

Edge detection is done by calculating the gradient of each pixel based on its $3\times3$ neighbors. Consider an image described by $f(x,y)$.
For each $3\times3$ matrix $\mathbf{X}$ = 
\[ \left[ \begin{array}{lll}
f(x-1,y-1) & f(x,y-1) & f(x+1,y-1)  \\
f(x-1,y) & f(x,y) & f(x+1,y) \\
f(x-1,y+1) & f(x,y+1) & f(x+1,y+1)\end{array} \right]\] 
we calculate the gradient using the vector $\mathbf{t}$ =$[ -1 , 0 , 1 ]^T$. Thus we have 
\begin{align*}
&\nabla_x(x,y) = f(x+1,y) - f(x-1,y)\\
&\nabla_y(x,y) = f(x,y+1)-f(x,y-1)
\end{align*} We then calculate the gradient value at point $(x,y)$ as 
\begin{displaymath}
\nabla = \frac{1}{\sqrt{2}} \sqrt{\nabla_x^2 + \nabla_y^2}
\end{displaymath}
which is then rounded to an integer between 0 and 255. For an RGB image, we repeat the calculation for each of the three color channels and take the largest value as the final result. We then generate a new grayscale image $h(x,y)$ by setting $h(x,y) = \nabla$.

Edge detection is good at removing a constant background color. However, some webpages use a gradient background, which causes $\nabla$ at background points to be a small integer greater than 0. To remove such interference, we set up a threshold $T$ and write $h(x,y)$ as a piecewise function.
\begin{displaymath}
   h(x,y) = \left\{
     \begin{array}{lr}
       \nabla & : \nabla \ge T\\
       0 & : \nabla < T
     \end{array}
   \right.
\end{displaymath} 
By properly choosing the value of $T$, we ensure that $h(x,y)$ at background points is 0.
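The computation above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's Java implementation; the threshold value $T=16$ and the axis convention are assumptions.

```python
import numpy as np

def edge_detect(f, T=16):
    """Grayscale edge map via central differences, as described above.

    f: 2-D array of pixel intensities (one color channel).
    T: background-suppression threshold (an assumed value; the text only
       says T is chosen so that gradient backgrounds map to 0).
    """
    f = f.astype(float)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    # nabla_x(x,y) = f(x+1,y) - f(x-1,y); nabla_y analogously
    gx[1:-1, :] = f[2:, :] - f[:-2, :]
    gy[:, 1:-1] = f[:, 2:] - f[:, :-2]
    g = np.sqrt(gx**2 + gy**2) / np.sqrt(2)
    h = np.rint(np.clip(g, 0, 255)).astype(int)
    h[h < T] = 0  # piecewise threshold removes gradient backgrounds
    return h
```

For an RGB image this would be run once per channel and the per-pixel maximum taken, as in the text.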

\subsection{Split Region}
The goal of region splitting is to separate the image into small pieces, each a region containing non-black pixels. We naturally choose rectangular regions, as most, if not all, HTML elements are rectangular. In practice, a webpage's layout is organized in rectangular blocks.

Our splitting method tries to draw a horizontal or vertical straight line, or a rectangle, that passes through only black pixels, splitting the image into smaller regions. This process is repeated until no more splitting can be done. Figure \ref{split_algorithm} shows pseudo-code for the region-splitting algorithm.
\begin{figure}[h]
\fbox{
\parbox{0.48\textwidth}{
\begin{algorithmic}
\Function{split}{Rectangle region, List result}
\State\textcolor{OliveGreen}{;; Remove excessive space}
\State \Call{lowerBound}{region};
\State\textcolor{OliveGreen}{;; First try to split the region using lines}
\State  vline $\gets$ \Call{vline}{region};
\State  hline $\gets$ \Call{hline}{region};
\State  line $\gets$ \Call{maxMargin}{vline, hline};
\If {line $\neq$ NULL}
	\State region1, region2 $\gets$ \Call{lineSplit}{region, line};
	\State \Call{split}{region1, result};
	\State \Call{split}{region2, result};
	\State \Return;
\EndIf
\State\textcolor{OliveGreen}{;; Split the region use rectangle}
\State region $\gets$ \Call{rectSplit}{region};
\If{region $\neq$ NULL} 
	\State \Call{split}{region,result};
	\State \Return;
\EndIf
\State\textcolor{OliveGreen}{;; Not splittable, add the region to result list}
\State result.\Call{add}{region};
\State \Return
\EndFunction

\end{algorithmic}
}
}
\caption{Algorithm of Region Splitting}
\label{split_algorithm}
\end{figure}

Given a region to be split, we first calculate the lower bound of the region, that is, the smallest rectangle that contains all the non-black pixels in the region. We then try to split the region with a line segment, in either the vertical or horizontal direction. There may be multiple possible lines, and we choose the one that gives the maximal margin. Figure \ref{split} shows an example. If such a line can be found, we use it to split the given region into two sub-rectangles, and then repeat the process on each of them.

If no line is found, for example in a region surrounded by a rectangular border, we switch to rectangular splitting, which tries to find the biggest sub-rectangle in the given region. Figure \ref{rect_split} shows how rectangular splitting works.

If neither method works, we consider the region unsplittable and add it to the result list. These unsplittable rectangles are the candidates for feature extraction.
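A minimal sketch of the max-margin line search, shown for the vertical case only (the full algorithm in Figure \ref{split_algorithm} also tries horizontal lines and falls back to rectangle splits):

```python
import numpy as np

def best_vertical_split(edge):
    """Find the x coordinate of a vertical split line with maximal margin.

    edge: 2-D edge-detected array, 0 = background (black).
    A candidate split is a run of all-black columns with content on both
    sides; we pick the widest run and return its midpoint, or None.
    """
    blank = (edge == 0).all(axis=0)  # columns containing no content
    runs, in_run, start = [], False, 0
    for x, b in enumerate(blank):
        if b and not in_run:
            in_run, start = True, x
        elif not b and in_run:
            in_run = False
            runs.append((start, x - 1))
    if in_run:
        runs.append((start, len(blank) - 1))
    # a usable split line must have content on both sides
    runs = [(s, e) for s, e in runs if s > 0 and e < len(blank) - 1]
    if not runs:
        return None
    s, e = max(runs, key=lambda r: r[1] - r[0])  # maximal margin
    return (s + e) // 2
```

Repeating this on each resulting half (plus the horizontal analogue) yields the recursive split of Figure \ref{split_algorithm}.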
\begin{figure}
\includegraphics[scale=0.77]{split.png}
\caption{Max Margin Line-Split}
\label{split}
\end{figure}

%###############
% JSW - 2/22/14 - Stopped reviewing here
%###############


\subsection{Filter Region and Combine Regions}
Region splitting gives us a set of regions containing non-black pixel groups. We now need to determine which of them are important and which are not. 

A region that is too small or too narrow cannot contain any valid information. One example is the rectangle that encloses a horizontal rule created by the \textless HR\textgreater\ tag. We set up a threshold $T$ for the dimensions of the rectangles, and ignore all those with a height or width less than $T$.
\begin{figure}
\includegraphics[scale=0.85]{split_rect.png}
\caption{Maximal Sub-Rect Split}
\label{rect_split}
\end{figure}
We also noticed that at first sight of a webpage, what catches people's eyes is not text but images. This gives us fair reason to prefer regions containing images over regions containing text. We adopt two methods to recognize whether a region contains text or an image. 

First, we noticed that image regions generally contain multiple colors, while the color of a text region is generally monotone. By treating the region as a distribution of pixels with different colors and calculating the entropy of that distribution, we get a quantitative measurement of how colorful the region is. Given a region $R$ with width $w$ and height $h$, the entropy is defined as follows:

\begin{displaymath}
H(R) = -\sum_{c\in R}p(c)\log p(c)
\end{displaymath}
where $c$ ranges over the colors contained in the region, and
\begin{displaymath}
p(c) = \frac{\text{number of pixels with color } c}{wh}
\end{displaymath}

A region with low entropy has a high probability of being a text region.
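The entropy measurement follows directly from the definition; a sketch:

```python
import math
from collections import Counter

def color_entropy(pixels):
    """Entropy H(R) = -sum_c p(c) log p(c) of a region's colors.

    pixels: flattened list of the region's w*h color values, so
    p(c) = count(c) / (w*h). A monotone (likely text) region gives a
    low value; a colorful image region gives a high one.
    """
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((k / n) * math.log(k / n) for k in counts.values())
```

A single-color region scores 0, while a region of many equally frequent colors approaches $\log$ of the color count.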

Another method we employ is SVM classification. We noticed that regions containing only text have interesting characteristics. First, the distribution is sparse: text characters generally occupy a large area, but only a small fraction of the pixels in that area are non-white. For example, the average fraction of non-white pixels when drawing the lower-case letters of the English alphabet on a white region in the Arial font is 0.248; for upper-case letters, this value is 0.267.
 
Second, the vertical distribution of a character image shows an interesting pattern. Consider writing on a ruled piece of paper: only characters such as ``j'', ``g'' and ``y'' occupy the lower part of the text region, and only characters such as ``h'', ``i'' and ``j'' occupy the upper part. Most characters occupy the central part. Thus, if we calculate the percentage of non-black pixels in each row of a text region, we can expect a consistent pattern that can be learned by an SVM.

To deal with text of different heights, we first scale the candidate region to a parameterized height $H$, preserving the width-to-height ratio. Then, for each row of pixels, we calculate the percentage of non-black pixels. This gives us a vector of $H$ elements, which is used for classification after normalization.
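The descriptor construction can be sketched as follows. Nearest-neighbour scaling here is an assumed stand-in for whatever interpolation the implementation uses:

```python
import numpy as np

def row_profile(region, H=50):
    """Vertical-distribution descriptor for text detection.

    Scales the region to height H (preserving the width-to-height
    ratio), then returns the per-row fraction of non-black pixels,
    normalized to unit length as the SVM input vector.
    region: 2-D array, 0 = black/background.
    """
    h, w = region.shape
    new_w = max(1, int(round(w * H / h)))
    rows = np.arange(H) * h // H          # nearest-neighbour resize
    cols = np.arange(new_w) * w // new_w
    scaled = region[rows][:, cols]
    frac = (scaled != 0).mean(axis=1)     # per-row non-black fraction
    norm = np.linalg.norm(frac)
    return frac / norm if norm > 0 else frac
```

Rows covered by ascenders or descenders then produce the characteristic low values at the top and bottom of the vector that the classifier learns.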

We noticed that many companies use text-based logos (Google, eBay, Flickr, etc.). To avoid treating these logos as text, we filter out a candidate region only when it is reported as text by both of the filters mentioned above. Thus colorful text-based logos, such as those of Google and eBay, will not be incorrectly excluded.

\begin{figure}[h]
\centering
\includegraphics[scale=0.3]{credit_card_logo.jpg}
\caption{Credit Card Logos}
\label{visa_logo}
\end{figure}

Another of our filtering methods is based on a data-driven learning technique. If an icon appears on many webpages, it is less likely to be a feature specific to any one webpage. One example is the credit card logos shown in Figure \ref{visa_logo}. These logos are displayed on almost all finance-related websites, yet they clearly should not be used to identify any of them. To recognize this situation, we maintain a database recording processed websites and their features. If a feature has appeared on too many webpages, it is filtered out of the candidates.

Our splitting method works well with most images, extracting image parts from their background. However, we have a problem of over-splitting, in which a region that should be kept together is split incorrectly. For example, Google's logo has a large vertical gap between the ``G'' and the first ``o''; this logo will be split by our algorithm if the resolution is high enough.

To solve this problem, we add a step that tries to re-combine over-split regions into a whole. To avoid over-combining, i.e., merging separate features into one, we set up some combination rules. First, the two regions must be close enough. Second, the combination must not introduce too much whitespace. Figure \ref{combine_space} compares cases where large and small amounts of whitespace are introduced; we set a threshold on the percentage of newly introduced whitespace allowed. With these rules, we first group the regions, then for each group draw a covering rectangle as the combined result.
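The two combination rules might be sketched as follows; the gap and whitespace thresholds here are assumed values for illustration, not the paper's parameters:

```python
def should_combine(r1, r2, max_gap=10, max_new_whitespace=0.3):
    """Check the two combination rules for a pair of regions.

    r1, r2: rectangles as (x, y, w, h).
    Rule 1: the regions must be close enough (gap along each axis).
    Rule 2: the covering rectangle must not introduce too much new
    whitespace relative to its own area.
    """
    x1, y1, w1, h1 = r1
    x2, y2, w2, h2 = r2
    # gap between the rectangles along each axis (0 if they overlap)
    gap_x = max(0, max(x1, x2) - min(x1 + w1, x2 + w2))
    gap_y = max(0, max(y1, y2) - min(y1 + h1, y2 + h2))
    if max(gap_x, gap_y) > max_gap:
        return False
    # whitespace newly introduced by the covering rectangle
    bx, by = min(x1, x2), min(y1, y2)
    bw = max(x1 + w1, x2 + w2) - bx
    bh = max(y1 + h1, y2 + h2) - by
    new_ws = max(0.0, (bw * bh - w1 * h1 - w2 * h2) / (bw * bh))
    return new_ws <= max_new_whitespace
```

Two side-by-side letter fragments pass both rules, while diagonally offset regions fail the whitespace rule even when they are close.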

\begin{figure}[h]
\centering
\includegraphics[scale=1]{combine.png}
\caption{Combination of Regions introduces whitespace}
\label{combine_space}
\end{figure}

\subsection{Image Feature Comparison}
Finally, we describe our method of feature image comparison. We first restate the problem to be solved: given the features extracted from the original website and those from a suspicious webpage, we want to know whether they are the same.

Our method is again based on SVM classification. In \cite{cvpr2005_dalal}, N. Dalal and B. Triggs propose the Histogram of Oriented Gradients (HOG) image feature descriptor for the purpose of object detection. It has proven extremely effective in human detection and image categorization. In \cite{sa11_shrivastava}, Shrivastava et al. demonstrate the use of HOG to cluster pictures of natural scenes. In our method, we use HOG for feature image comparison.

Recall that in the ``Edge Detection'' section we described how to calculate the gradient value $\nabla(x,y)$ of a given point $(x,y)$. The result there is a scalar. In HOG processing, we also take the direction of the gradient into account and obtain a normalized vector. 
\begin{displaymath}
\vec\nabla(x,y) =\frac{1}{\sqrt{2}} [\nabla_x,\nabla_y]                                                      
\end{displaymath}
We split the image into an $m \times n$ grid, where each cell of the grid is a $k \times k$ square. For each cell, we calculate the gradient vector of each pixel in that cell, giving $\vec\nabla_1$ to $\vec\nabla_{k^2}$. These $k^2$ vectors are separated into $w$ buckets based on their angle, where $w$ is a predefined value. For example, when $w$ is 4, we have four buckets containing the vectors with angles in $[0, \frac{\pi}{2}), [\frac{\pi}{2}, \pi), [\pi, \frac{3\pi}{2})\text{ and }[\frac{3\pi}{2}, 2\pi)$. This forms the HOG of that cell.

For each cell, we sum the values in each bucket and normalize them to get $w$ values; for the entire image, we get $m\times n\times w$ values. These values form the descriptor we use for SVM classification.
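A simplified sketch of this descriptor follows. Real HOG implementations such as Dalal-Triggs add bin interpolation and block normalization, which are omitted here:

```python
import numpy as np

def hog_descriptor(img, k=50, w=9):
    """Simplified HOG: k-by-k cells, per-pixel gradient vectors bucketed
    by angle into w bins (signed gradients over [0, 2*pi)), each cell
    histogram normalized. Returns a vector of m*n*w values."""
    f = img.astype(float)
    gx = np.zeros_like(f); gy = np.zeros_like(f)
    gx[1:-1, :] = f[2:, :] - f[:-2, :]
    gy[:, 1:-1] = f[:, 2:] - f[:, :-2]
    mag = np.sqrt(gx**2 + gy**2)
    ang = np.arctan2(gy, gx) % (2 * np.pi)
    bins = np.minimum((ang / (2 * np.pi) * w).astype(int), w - 1)
    m, n = f.shape[0] // k, f.shape[1] // k
    desc = []
    for i in range(m):
        for j in range(n):
            cell = (slice(i * k, (i + 1) * k), slice(j * k, (j + 1) * k))
            hist = np.bincount(bins[cell].ravel(),
                               weights=mag[cell].ravel(), minlength=w)
            total = hist.sum()
            desc.append(hist / total if total > 0 else hist)
    return np.concatenate(desc)
```

With the parameters of Table 1 (cell size 50px, 9 buckets), a $500\times500$ input yields the $10\times10\times9 = 900$-element descriptor used for classification.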

Given a fixed cell size, the size of the HOG descriptor is proportional to the image size. Thus, to compare image features of different sizes, we need to scale them to a predefined fixed dimension $[M, N]$, whose value is discussed in the ``Implementation'' section. 

To correctly match image features that are the same in content but different in size, we prepare the training data by first stretching the source image to different dimensions, then scaling them all back to dimension $[M,N]$ to generate positive training samples. We then use the target image to generate the test set. The classification result tells us whether two image features are the same. This process is repeated for all pairs of feature images to be compared.
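The training-sample generation can be sketched as below, with a nearest-neighbour resize standing in for the implementation's scaler; the particular scale range is an assumption for illustration:

```python
import numpy as np

def nn_resize(img, new_h, new_w):
    """Nearest-neighbour resize using plain indexing (no external deps)."""
    h, w = img.shape
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return img[rows][:, cols]

def positive_samples(feature, M=500, N=500, n_scales=100):
    """Generate positive SVM training samples for one feature image.

    Stretch the feature to n_scales different dimensions, then scale
    each back to the fixed M-by-N comparison size, as described above.
    The step size and starting dimension are assumed values.
    """
    samples = []
    for i in range(n_scales):
        s = 100 + 10 * i
        stretched = nn_resize(feature, s, s)
        samples.append(nn_resize(stretched, M, N))
    return samples
```

Each per-feature model is then trained on these samples, and the target feature (after the same $[M,N]$ scaling) forms the test input.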

\begin{table*}[ht!]
\renewcommand{\arraystretch}{1.3}%
\centering
\begin{tabular}[t]{|L{2cm}|L{1.5cm}|L{4cm}|L{3cm}|}
\hline
\textbf{Category}& \textbf{Type} & \textbf{Name} & \textbf{Value} \\ \hline
Filter& Size& Minimal Width & 5px\\ \hline 
Filter& Size&Minimal Height & 5px \\ \hline
Filter& Size& Minimal Area & 100px$^2$ \\ \hline
Filter& Text& Height Threshold & 25px \\ \hline
Filter& Entropy & Entropy Threshold & 0.72 \\ \hline
SVM& Image&  Dimension & 500$\times$500px$^2$ \\ \hline 
SVM& Image& HOG Bucket Size & 9 \\ \hline
SVM& Image& HOG Cell Size & 50px \\ \hline
SVM& Text & Height & 50px \\ \hline
\end{tabular}
\caption{Parameter values}
\end{table*}

\section{Implementation}
In the previous section, we gave a thorough introduction to the algorithms we use. In this section, we discuss our implementation, including the environment, the external tools we used, the parameter choices, and some optimization work we have done. 

The main program is written in Java and JavaScript. All the source code is readily available on our website. For webpage screenshot retrieval, we use PhantomJS \cite{phantomjs}, developed by A. Hidayat, a headless browser based on the WebKit kernel that provides a JavaScript programming API. For SVM classification, we use LIBSVM \cite{2011_libsvm} by Chang et al.

\subsection{Parameter values}
In this section we discuss the parameter values used in our algorithm. Table 1 lists the relevant parameters and their current values.

The first three parameters control the behavior of the size filter. Any region whose width or height is less than 5px, or whose area is less than 100px$^2$, is considered unable to hold valid information and is ignored.

To decrease the false positive rate in text identification, we introduce a height threshold for text detection. In our observation, most webpages use a text font size between 10 and 16px. Given a candidate, we first try to split it into rows and apply the text-filtering SVM to each row. If a region cannot be split horizontally and its height exceeds 25px, we assume it is not normal text and skip the text filter step.

Our claim that the entropy value can help distinguish text from image features is supported by the experimental results shown in Figure \ref{entropy}: the entropy values for text features and image features have distinct distributions. We choose 0.72, the value that gives the maximal margin, as the threshold distinguishing a text feature from an image feature.

\begin{figure}
\centering
\includegraphics[scale=0.51]{entropy.png}
\caption{Image \& Text Entropy Distribution}
\label{entropy}
\end{figure}

The next two parameters are for the HOG descriptor. As described in the previous section, the size of the HOG descriptor is proportional to the size of the input image; thus, to compare different images, we first scale them to the same size. A size that is too big leads to a larger descriptor and hurts the performance of SVM classification; a size that is too small leads to substantial information loss and hurts accuracy. Table 2 shows the length of HOG descriptors with the same cell size (50$\times$50px$^2$) under some common dimensions. 

Figure \ref{svm_train_time} shows the training time required for training sets with different descriptor sizes. All data is collected from a training set with 10,089 rows. The time needed for SVM training is proportional to the descriptor size: a feature with dimension 1024$\times$768 requires more than three times as long to train. Considering that the training process is repeated for each feature, and that we already achieve nearly 100\% accuracy using dimension 500$\times$500, we consider a larger size not worthwhile.

\begin{table}
\renewcommand{\arraystretch}{1.2}%
\begin{tabular}{|c|l|}
\hline
\textbf{Dimension} & \textbf{Descriptor Length}\\
\hline
1024$\times$768 & 2646\\ 
\hline
800$\times$600 & 1728\\
\hline
500$\times$500 & 900\\
\hline
\end{tabular}
\centering
\caption{HOG descriptor length for different dimensions}
\end{table}
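The lengths in Table 2 can be reproduced by a simple cell count -- a sketch assuming one 9-bin histogram per complete cell and no overlapping blocks:

```python
def hog_descriptor_length(width, height, cell=50, bins=9):
    """Descriptor length assuming one `bins`-bin histogram per complete
    `cell` x `cell` cell and no overlapping blocks (our assumption
    about the configuration used here)."""
    return (width // cell) * (height // cell) * bins

print(hog_descriptor_length(800, 600))  # 16 * 12 * 9 = 1728
print(hog_descriptor_length(500, 500))  # 10 * 10 * 9 = 900
```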

\begin{figure}
\includegraphics[scale=0.5]{svm_train_speed.png}
\centering
\caption{SVM Training Time}
\label{svm_train_time}
\end{figure}

Similarly, we tried different sizes for the HOG cells and chose the largest one that does not severely impact accuracy. In \cite{cvpr2005_dalal}, Dalal and Triggs suggest using unsigned gradients and a bucket size of 9, which performed best in their human-detection tests. We start with the same setting, obtain satisfactory results, and therefore keep this parameter unchanged.
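With these parameters fixed (9 unsigned-orientation buckets, 50$\times$50px cells), a descriptor for a 500$\times$500 input can be computed, for example, with scikit-image; treating each cell as its own 1$\times$1 block mirrors the per-cell histograms assumed above:

```python
import numpy as np
from skimage.feature import hog

image = np.random.rand(500, 500)   # stand-in for a scaled feature image

descriptor = hog(
    image,
    orientations=9,           # unsigned gradients, 9 buckets (Dalal & Triggs)
    pixels_per_cell=(50, 50),
    cells_per_block=(1, 1),   # per-cell histograms, no block grouping
    feature_vector=True,
)
# 10 x 10 cells x 9 bins = 900 values, matching Table 2
```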

The last parameter we discuss here is the height $H$ to which regions are scaled when we use the SVM to identify text regions. In this case the model can be pre-trained, so performance is not a major concern, but a descriptor that is too short hurts accuracy. We double the 25px height threshold used in text filtering, which gives a setting of $H = 50$.

\subsection{SVM training data}
Our method relies heavily on SVM classification, both to identify text regions and to compare feature images.
In both cases, generating positive training data is easy and straightforward.

For text identification, we generate positive samples by creating images that contain strings of random length, font and size. We use four of the most common fonts: Georgia, Sans-Serif, Arial and Courier. The font size varies between 12 and 20, which we believe covers the most common font sizes in webpages. We use both text from books (e.g. the Bible) and random words from a dictionary as our content. We trained our model with 45541 positive samples.
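A minimal sketch of the positive-sample generation (using PIL's built-in bitmap font for portability, rather than the four fonts listed above, which require platform-specific font files):

```python
import random
import string
from PIL import Image, ImageDraw, ImageFont

def make_text_sample(width=400, height=24):
    """Render a random string of random-length words onto a white canvas
    as one positive training sample for the text-identification SVM."""
    text = " ".join(
        "".join(random.choices(string.ascii_lowercase, k=random.randint(3, 9)))
        for _ in range(random.randint(2, 6))
    )
    img = Image.new("L", (width, height), color=255)
    ImageDraw.Draw(img).text((2, 4), text, fill=0, font=ImageFont.load_default())
    return img
```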

For feature image comparison, we scale the feature image under comparison into different dimensions. For each image to be compared, we scale it into 100 different dimensions, from 100$\times$10 to 1100$\times$1100. These scaled images are then used as positive training samples for recognizing that image, which means each model is trained with 100 positive samples.

For negative training data, we need random pictures, which we gather from Flickr (\url{http://www.flickr.com/}).
Flickr is a picture-sharing website where people around the world upload and share the photos they take, making it a good source of random pictures. Flickr does not provide a bulk-download function, so we used PhantomJS to automatically download pictures from Flickr photo streams. Our script keeps refreshing the webpage, getting a new set of pictures each time, and retrieves image URLs by looking for \textless img\textgreater\ tags with the CSS class ``defer'', which Flickr uses to hold its photos. By repeating this process we collected 10843 unique random pictures, from which the negative training samples for image feature comparison are generated. The JavaScript used to download these pictures can also be found in our source code.

\section{Experimental Results}
We conduct our experiments on a test machine with an AMD A10-6800K quad-core APU and 8GB of memory, running Ubuntu Desktop 13.10 64-bit and Oracle JDK 1.7.0\_45 64-bit for Ubuntu.

Our experimental results consist of three parts. First, we test the accuracy of our text detection algorithm. Second, we apply the entire algorithm to phishing websites extracted from PhishTank.com to verify the effectiveness of our method. Third, we test the speed of our method and discuss its suitability for deployment in real-time phishing detection scenarios.

\subsection{Accuracy of Text Filtering and Image Comparison}
Our test data for the text identification algorithm consists of two parts: 9468 rows of data generated from images containing a single row of text, and 11942 rows generated from random pictures downloaded from Flickr. Each text image contains a single row of 80 characters. All text is extracted from Mark Twain's \textit{The Adventures of Tom Sawyer}, Lewis Carroll's \textit{Alice's Adventures in Wonderland} and Leo Tolstoy's \textit{War and Peace}. The test results are listed in Table 3.

\begin{table}
\renewcommand{\arraystretch}{1.2}%
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Type} & \textbf{Input} & \textbf{Correct} & \textbf{Accuracy} \\
\hline
Text Image & 9468 & 9465 & 99.97\% \\
\hline
Non-text Image & 11942 & 11764 & 98.51\% \\
\hline
\end{tabular}
\caption{Text Identification Accuracy}
\end{table}

For image comparison accuracy, our test is designed as follows: in each round we randomly choose a picture from the image library as the original. We then generate images of different sizes by scaling the original, and use them, together with the pre-calculated negative set generated from random pictures, to train a model. This model is then applied to a test set consisting of the original image in different sizes and random pictures from the image library. We generate 300 positive samples, used together with 9997 negative samples. We repeated the test 200 times and always obtained 100\% accuracy, which strongly supports the effectiveness of our algorithm.
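The per-feature model described above can be sketched with a linear SVM; the descriptor dimensionality and sample counts here are illustrative only:

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_feature_model(positive_descriptors, negative_descriptors):
    """One model per feature image: positives are HOG descriptors of the
    feature at many scales, negatives come from the random-picture pool."""
    X = np.vstack([positive_descriptors, negative_descriptors])
    y = np.concatenate([np.ones(len(positive_descriptors)),
                        np.zeros(len(negative_descriptors))])
    return LinearSVC().fit(X, y)
```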

\subsection{Application to phishing websites}

We test our detection method using 25 URLs fetched from PhishTank and inspect the results manually. These phishing webpages all differ from the original webpages, which means they cannot be detected by traditional methods. We consider our method successful if the logo of the website is included in the extracted features. In our test, 24 out of 25 examples succeeded. The single failure was caused by an irregular (non-rectangular) logo, but even in that case we were still able to extract a rectangle that contains the logo.

This test also provides evidence that our method improves on existing methods. Figure \ref{paypal_detect} shows a phishing website for PayPal that does not look like the real PayPal site, so traditional methods that compare it to the original would fail to recognize it. Our method, however, successfully located the PayPal logo, marked by the red rectangle. This shows that our method overcomes the inherent limitation of existing methods and is immune to webpage layout changes: as long as a phishing website uses the logo of the original website -- which we believe it almost surely will -- our method can locate the logo and thus detect the phishing website.


\begin{figure}
\centering
\includegraphics[scale=0.3]{paypal_detect.png}
\caption{Detection of Paypal phishing website}
\label{paypal_detect}
\end{figure}
We also analyzed the other features extracted by our method, which are not useful for phishing detection. We extracted on average 6 image features from each webpage, with a maximum of 11 and a minimum of 2. Of these features, 62\% are text rendered in a large font that the text filter failed to remove; the rest are small pictures contained in the webpage.

\subsection{Performance Analysis}
For our method to be usable for online real-time phishing detection, we also test its performance. Running on the test machine, feature extraction takes 38 seconds on average, with a maximum of 52 seconds and a minimum of 16 seconds. We noticed that the time required for feature extraction is primarily related to the complexity of the page layout.

Feature image comparison is the primary performance bottleneck of our method, because comparing any two features requires training a separate model. Dataset generation plus model training takes 35 seconds on average. Assuming we extract 6 features from each webpage, the average time needed to compare two webpages is $6 \times 6 \times 35 = 1260$ seconds, around 21 minutes. This is too slow for an online phishing detection system.

To overcome this shortcoming, we regularly train a multi-class classification model. Initially the model contains only well-known logos, such as Google, PayPal and Amazon, that are often copied by attackers. Similarly, any organization interested in identifying phishing versions of its own site could simply compare against its own logo and website characteristics. If we can identify an image feature using this pre-trained model, we can directly mark the webpage as phishing. Otherwise, we put the features into a database and wait for human inspectors to process them manually. Whenever suspected webpages are confirmed to be phishing websites, we add the new features to our training data and re-train the model. This method performs much better than the previous one: with the model pre-trained, recognizing a known feature takes less than 1 second, which means we can check all features of a webpage within 6 seconds on average. In total it thus takes about 44 seconds to determine whether a suspicious website is phishing: 38 seconds for feature extraction and 6 seconds for comparison against known features.

\section{Future Work}
Our method employs only a linear, rectangular splitting pattern. It works well for most websites organized in a rectangular layout, but may encounter problems with websites organized in irregular shapes. We plan to build a framework on top of our existing code that supports dynamically adding new split patterns and switching between them, allowing us to build more powerful and easily extendable detectors.

Processing speed is also crucial when dealing with large numbers of suspicious phishing websites. Although we have discussed some performance improvements in this paper, we are still considering ways to improve analysis speed. One option is migrating to OpenCL: by taking advantage of the processing power of modern video cards, we hope to make our algorithm ready for production-level applications.

In the section ``Filter Region'', we mentioned a data-driven filter that determines how important a given feature is by checking whether the feature also appears in other webpages. We believe this is potentially the most powerful filter in our system because of its learning ability. However, training this filter requires an ongoing effort of data collection and analysis, which is still in progress. We plan to focus on training this filter in future work.

We also note that our text identification is tuned to the distribution of English text. It may also work for other languages written in the Latin alphabet, but may not work well for Asian or Arabic scripts. We would like to extend our text identification method to other language families.

\section{Conclusion}

Phishing websites are one of the major threats to Internet security. To fight phishing websites that sprout up all around the Internet, fast and reliable automatic detection methods are crucial. Most traditional methods rely on the overall similarity of two webpages, which introduces a high false negative rate when dealing with phishing webpages that do not look similar to the original. We developed a feature-based method that extracts visual features from suspicious webpages and compares them to the features of known websites.

In our experiments, we show that our new method successfully detects over 90\% of phishing websites that evade detection by traditional methods, which we believe sufficiently demonstrates its advantage. We also show that our SVM-based text-in-image identification achieves over 98\% accuracy, demonstrating a simple but reliable way to detect text in a picture. Finally, we discuss the possibility of using data-driven methods to further increase detection speed and accuracy.



\bibliographystyle{plain}
\bibliography{reference}
\end{document}