\documentclass[letterpaper,twocolumn,10pt]{article}

\usepackage{graphicx}
\usepackage{algpseudocode}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{authblk}
\usepackage[usenames,dvipsnames]{color}
\usepackage{hyperref}
\usepackage{array}
\usepackage{usenix,epsfig,endnotes}

\newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\graphicspath{ {../} }
\begin{document}
\date{}

\title{\Large \bf Feature based phishing website detection}

%for single author (just remove % characters)
\author{
{\rm Hao Jiang}\\
Department of Computer Science\\
Clarkson University\\
hajiang@clarkson.edu
\and
{\rm Jeanna N. Matthews}\\
Department of Computer Science\\
Clarkson University\\
jnm@clarkson.edu
% copy the following lines to add more authors
% \and
% {\rm Name}\\
%Name Institution
} % end author

\maketitle

\thispagestyle{empty}

\subsection*{Abstract}

Phishing website creators and anti-phishing defenders are in an arms race. Cloning a website is fairly easy and can be automated by a programmer of any skill level, so the accuracy of automated phishing website detection systems is fundamentally important for defenders. We propose a new method of detecting phishing websites that builds upon a number of prior works. Our method uses image feature recognition to extract the most prominent visual parts of a web-page and uses these parts as the basis for comparison with suspect web-pages. Compared to existing methods, our approach has a much wider application range and higher detection accuracy, detecting 90\% of phishing websites that would otherwise go undetected. We also show that our SVMs achieve 98\% accuracy when detecting text within images on these potentially malicious sites. 

\section{Introduction}
Despite its low technology threshold, phishing is one of the most successful and prominent attack methods. Unlike attacks that directly target victim-owned servers and more or less leave traces, phishing does not need to compromise the victim's systems at all. This alone makes phishing sites extremely hard to detect; a victim may not even be aware that they are under attack. 

Creating a phishing web-page is simple and cheap. A Google search for the phrase ``create a phishing website'' returns approximately 208,000 results, most of which are detailed step-by-step tutorials. With a hosting service and free tools that copy a given URL, anyone can set up a phishing website in minutes. Depending on the level of server access, this process can even be automated. 

To deal with the number of phishing websites constantly sprouting up across the internet, we need an efficient automated method that scans for and identifies them quickly. As scanning the entire URL address space is impractical, there have been many innovative ideas about where to look for suspicious phishing URLs. A simple but effective way is to ask everyone to report suspicious URLs they encounter: PhishTank (\url{http://www.phishtank.com}) is a website that allows users to submit these URLs and verify the status of those submitted by others. Other parties are working on simplifying the reporting process: Netcraft (\url{http://www.netcraft.com/}) has developed a Firefox plugin that enables users to report a suspicious URL with a single click. 

Some researchers are more interested in automating the URL collection work: J. White et al. describe searching for suspicious URLs in Twitter data \cite{spie2012_jwhite}, and I. Jeun et al. created a honeypot (the ``SpamTrap'') to collect URLs from spam emails \cite{springer2013_jil}.

Now that we have reliable methods for collecting suspect URLs, we need a fast and reliable method to identify whether these sites are truly phishing pages. %In the next section, we will talk about some previous work in this area.

%\begin{figure*}[pt]
%\centering
%\includegraphics[width=0.8\textwidth]{google_drive_phishing.png}
%\caption{A phishing website of Google Drive}
%\label{gdrive_phishing}
%\end{figure*}

\section{Previous Work}

Different methods have been used to compare the similarity of two web-pages and thus identify a phishing website. We categorize these methods into two types: structure-based comparison and visual-based comparison. 

With the presumption that similar web-pages should have similar structure, which for a web-page is its DOM tree, structure-based comparison first parses the HTML of the web-pages to be compared into DOM trees and then compares those trees. 

Visual-based comparison, on the other hand, cares more about the appearance of the web-page. It captures screenshots of both the original web-page and the suspicious web-page, then compares them using image processing techniques. 

We review some of these previous works below, as well as their pros and cons. 

\subsection{DOM Tree Comparison}
Rosiello et al. \cite{securecomm2007_rosiello} describe a method that compares the similarity of web-pages' DOM trees. They compare the tags within the trees, looking for similar sub-tree structures. To account for differing nodes between sub-trees, they assign weights to the nodes based on node type, location, and other properties, and then calculate a penalty value for the difference. By properly choosing the penalty threshold, this method can avoid the interference an attacker introduces by intentionally adding or removing DOM nodes.

This method is easy to implement and efficient to execute. It is effective against phishing websites that directly copy the content of the original websites. However, it has an obvious disadvantage: the appearance of a web-page is not uniquely defined by its DOM tree structure. An attacker can easily evade such detection by using different HTML tags to generate web-pages that look similar. For example, using \textless DIV\textgreater\ instead of \textless TABLE\textgreater\ elements for page layout can produce exactly the same visual effect while maintaining a totally different DOM tree. Additionally, attackers can choose to generate the DOM tree dynamically with scripts at runtime. In that case, fetching the web-page URL directly does not yield the DOM tree that is actually rendered, which leads to false negatives.

\subsection{Visual Similarity}
Another type of phishing site detection through comparison is based on the visual similarity of two web-pages. More specifically, screenshots of the web-pages are captured using techniques such as a headless browser, and comparison between these images is then conducted using various image processing techniques. 

Visual comparison methods overcome some of the disadvantages of DOM-based methods, as they focus on comparing the visual output of the web-page, which is exactly the image seen by end users. These methods are especially effective against phishing websites that look the same as the original website.

J. White et al. describe using the hash values of images in their work \cite{spie2012_jwhite}. In their paper, the authors also introduce their work on capturing suspicious links from Twitter messages, which is interesting in its own right; here, however, we are only concerned with the method used to compare web-page content. The authors first capture a screenshot of the web-page using CutyCapt, a headless web browser based on WebKit. They then calculate the pHash of the image file and evaluate the image difference using the Hamming distance between two hash values. Based on their experimental results, the authors claim that a small change to the original image leads to only a small increase in the Hamming distance.
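To make this comparison step concrete, the following is a minimal sketch of hash-based screenshot comparison (our own illustration, not code from the cited work). It assumes the pHash fits in a 64-bit value; the similarity threshold is illustrative.

```java
// Sketch of perceptual-hash comparison (illustration only, not the
// cited authors' code): each screenshot's pHash is assumed to be a
// 64-bit long, and two pages are compared by Hamming distance.
public class HashCompare {
    // Number of bit positions at which the two hashes differ.
    static int hammingDistance(long a, long b) {
        return Long.bitCount(a ^ b);
    }

    // The threshold is illustrative; in practice it is tuned empirically.
    static boolean looksSimilar(long a, long b, int threshold) {
        return hammingDistance(a, b) <= threshold;
    }
}
```

A small Hamming distance then indicates that the suspicious page's screenshot is visually close to the original.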

A. Fu et al. describe their work using the Earth Mover's Distance to evaluate the difference between two web-pages \cite{ieee2006_fu}. The Earth Mover's Distance (EMD) is a metric of the similarity between two probability distributions over a region \cite{iccv1998_rubner}. If we think of an image file as a distribution of pixels of different colors, the EMD between two images can be defined as the minimal cost of moving pixels to make the two images look the same. Generally speaking, the closer two images are, the smaller the EMD, so it is also a good candidate for comparing web-page screenshots.

The works described above are effective against phishing websites that look exactly the same as the original websites. However, in practice we have found phishing examples that do not look like the original websites at all, and these easily evade such detection. Figure \ref{gdrive_phishing} shows an example Google Drive phishing website we retrieved from \url{phishtank.com}, alongside the actual Google Drive login page. It can be seen that this phishing web-page does not look like the original, legitimate Google Drive page, which uses Google's Single-Sign-On system. After many manual observations we have found that this is not a rare case. 

Even with such a ``low-quality'' phishing web-page, one that looks far from the original source, we believe there is still a possibility that it will catch some victims, especially those who are not familiar with Google's actual login page. This possibility can be quite high considering that not all web-surfers have enough knowledge to compare a phishing web-page with a legitimate one. Failing to catch this type of phishing website is a major disadvantage of the existing methods discussed above.

\subsection{Layout-based  Comparison}

In \cite{www10_bohunsky}, Bohunsky et al. describe their work using visual structure for web-page comparison and clustering. Instead of comparing the entire picture, they split the web-page into small rectangular areas they call ``visual boxes''. By comparing the visual box structures of two web-pages, they try to determine how correlated the two pages are.

However, during the process of converting everything into images, the text information contained in the website is lost, which means two web-pages that have similar layouts but totally different text content may be categorized as similar. Figure \ref{page_sim} shows such an example. The images are from two different news websites. Both have a title picture and an abstract, yet despite the visual similarity, the content they discuss may be entirely different. This shows that relying only on visual layout for clustering may introduce a high rate of false positives. 

%\begin{figure}[h]
%	\includegraphics[scale=0.55]{page_layout.png}
%	\caption{Page Layout Similarity}
%	\label{page_sim}
%\end{figure}

\section{System Description}
From our analysis of previous work, we found that visual similarity is a potentially effective tool for phishing website detection, and we decided to base our work on it. To overcome the problems mentioned in the previous section, we propose a new method: feature-based comparison.

\subsection{Overview}



We define a ``feature'' of a web-page to be a part of the page that is prominent compared to the rest. One feature that can best represent a web-page is its logo. In our observation, most phishing websites, including the ``low-quality'' ones that do not look like the original web-page, will at least put the logo of the original website on their pages. A feature can also be any other image that is rarely used by other websites. 

To extract these features, we first need to split the screenshot into small pieces separated by whitespace. We do this by first applying an edge detection algorithm to the original image to obtain a gray-scale image. Edge detection removes background colors, and in the resulting gray-scale image only non-black pixels contain valid information, which makes our subsequent work much easier. 
%\begin{figure}
%\centering
%    \includegraphics[scale=0.60]{process.png}
%    \caption{System Process}
%    \label{sys_proc}
%\end{figure}
The next step is to apply our splitting algorithm to the image, which uses straight lines and rectangles to split the non-black area into small pieces. We then calculate the minimal rectangular boundary of each piece, that is, the smallest rectangle that contains all non-black pixels in a given area. This process gives us a series of rectangular regions.

We then apply a collection of filters to the region set. These filters are the core of our method: they check the given regions from different aspects and filter out the less important ones. Finally, with the rectangles that survive the filters, we crop the original image and obtain the extracted features. Figure \ref{sys_proc} shows the process. We describe the details of each step in the following sections.

\subsection{Edge Detection}

We perform edge detection by calculating the gradient at each pixel based on its $3\times3$ neighborhood. Consider an image described by $f(x,y)$.
For each $3\times3$ matrix $\mathbf{X}$ = 
\[ \left[ \begin{array}{lll}
f(x-1,y-1) & f(x,y-1) & f(x+1,y-1)  \\
f(x-1,y) & f(x,y) & f(x+1,y) \\
f(x-1,y+1) & f(x,y+1) & f(x+1,y+1)\end{array} \right]\] 
we calculate the gradient using the vector $\mathbf{t} = [-1, 0, 1]^T$. Thus we have 
\begin{align*}
&\nabla_x(x,y) = f(x+1,y) - f(x-1,y)\\
&\nabla_y(x,y) = f(x,y+1)-f(x,y-1)
\end{align*}
We then calculate the gradient magnitude at point $(x,y)$ as 
\begin{displaymath}
\nabla = \frac{1}{\sqrt{2}} \sqrt{(\nabla_x)^2 + (\nabla_y)^2}
\end{displaymath}
which is then rounded to an integer between 0 and 255. For an RGB image, we repeat the calculation for each of the three color channels and take the largest value as the final result. We then generate a new gray-scale image $h(x,y)$ by setting $h(x,y) = \nabla$.

Edge detection is good at removing constant background colors. However, some web-pages use a gradient-colored background, which causes $\nabla$ at background points to be a small integer greater than 0. To remove such interference, we set up a threshold $T$ and write $h(x,y)$ as a piecewise function.
\begin{displaymath}
   h(x,y) = \left\{
     \begin{array}{lr}
       \nabla & : \nabla \ge T\\
       0 & : \nabla < T
     \end{array}
   \right.
\end{displaymath} 
By properly choosing the value of $T$, we can ensure that $\nabla$ is 0 at background pixels.
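The edge-detection step above can be sketched in Java as follows. This is a minimal single-channel version (an RGB image would repeat it per channel and take the maximum); the class and method names are illustrative, not taken from our released code.

```java
// Minimal sketch of the edge-detection step: per-pixel gradient
// magnitude with a background-suppression threshold T. The image is
// a grayscale int matrix indexed as f[y][x]; names are illustrative.
public class EdgeDetect {
    // Gradient magnitude at (x, y), scaled by 1/sqrt(2) into 0..255.
    static int gradient(int[][] f, int x, int y) {
        int gx = f[y][x + 1] - f[y][x - 1];   // horizontal central difference
        int gy = f[y + 1][x] - f[y - 1][x];   // vertical central difference
        return (int) Math.round(Math.sqrt(gx * gx + gy * gy) / Math.sqrt(2));
    }

    // Thresholded gray-scale edge image h(x, y); border pixels are left 0.
    static int[][] edges(int[][] f, int t) {
        int h = f.length, w = f[0].length;
        int[][] out = new int[h][w];
        for (int y = 1; y < h - 1; y++)
            for (int x = 1; x < w - 1; x++) {
                int g = gradient(f, x, y);
                out[y][x] = (g >= t) ? g : 0;   // suppress gentle gradients
            }
        return out;
    }
}
```

A flat region yields 0 everywhere, while a sharp color step yields a strong edge response, which is exactly what the threshold $T$ relies on.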

\subsection{Split Region}
The goal of region splitting is to separate the image into small regions that contain non-black pixels. Naturally, we choose rectangular regions, as most, if not all, HTML elements are rectangular. In practice, we also find that web-page layouts are organized in rectangular blocks.

Our splitting method tries to draw a horizontal or vertical straight line, or a rectangle, that passes through only black pixels, splitting the image into smaller regions. This process is repeated until no more splits can be made. Figure \ref{split_algorithm} shows the pseudo-code of our region-splitting algorithm.
\begin{figure}[h]
\fbox{
\parbox{0.48\textwidth}{
\begin{algorithmic}
\Function{split}{Rectangle region, List result}
\State\textcolor{OliveGreen}{;; Remove excessive space}
\State \Call{lowerBound}{region};
\State\textcolor{OliveGreen}{;; First try to split the region using lines}
\State  vline $\gets$ \Call{vline}{region};
\State  hline $\gets$ \Call{hline}{region};
\State  line $\gets$ \Call{maxMargin}{vline, hline};
\If {line $\neq$ NULL}
	\State region1, region2 $\gets$ \Call{lineSplit}{region, line};
	\State \Call{split}{region1, result};
	\State \Call{split}{region2, result};
	\State \Return;
\EndIf
\State\textcolor{OliveGreen}{;; Split the region using a rectangle}
\State region $\gets$ \Call{rectSplit}{region};
\If{region $\neq$ NULL} 
	\State \Call{split}{region,result};
	\State \Return;
\EndIf
\State\textcolor{OliveGreen}{;; Not splittable; add the region to the result list}
\State result.\Call{add}{region};
\State \Return
\EndFunction

\end{algorithmic}
}
}
\caption{Algorithm of Region Splitting}
\label{split_algorithm}
\end{figure}

Given a region to split, we first calculate the lower bound of the region, that is, the smallest rectangle containing all of its non-black pixels. We then try to split the region with a line segment, in either the vertical or horizontal direction. There may be multiple possible lines, and we choose the one that gives the maximal margin. Figure \ref{split} shows an example. If such a line can be found, we use it to split the given region into two sub-rectangles and repeat the process on each.
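The lower-bound step, computing the minimal rectangular boundary of a region, can be sketched as follows; this is a minimal illustration assuming the region is given as a grayscale matrix, with names of our own choosing.

```java
import java.awt.Rectangle;

// Sketch of the lowerBound step from the splitting algorithm: find the
// smallest rectangle containing every non-black pixel of the region.
// The region is a grayscale matrix indexed as img[y][x].
public class BoundingBox {
    static Rectangle lowerBound(int[][] img) {
        int top = Integer.MAX_VALUE, left = Integer.MAX_VALUE;
        int bottom = -1, right = -1;
        for (int y = 0; y < img.length; y++)
            for (int x = 0; x < img[0].length; x++)
                if (img[y][x] != 0) {            // non-black pixel carries information
                    top = Math.min(top, y);      left = Math.min(left, x);
                    bottom = Math.max(bottom, y); right = Math.max(right, x);
                }
        if (bottom < 0) return null;             // all-black region: nothing to bound
        return new Rectangle(left, top, right - left + 1, bottom - top + 1);
    }
}
```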

If no line can be found (for example, in a region surrounded by a rectangular border), we switch to rectangular splitting. This method tries to find the biggest sub-rectangle in the given region. Figure \ref{rect_split} shows how rectangular splitting works.

If neither method works, we consider the region un-splittable and add it to the result list. These un-splittable rectangles are the candidates for feature extraction.
%\begin{figure}
%\includegraphics[scale=0.77]{split.png}
%\caption{Max Margin Line-Split}
%\label{split}
%\end{figure}

\subsection{Filter Region and Combine Regions}
Region splitting gives us a set of regions containing non-black pixel groups. Now we need to determine which of them are important and which are not. 

It is natural to observe that a region that is too small or too narrow cannot contain any valid information. One example is the rectangle that encloses a horizontal rule created by an \textless HR\textgreater\ tag. We set up a threshold $T$ for the dimensions of the rectangles and ignore all those whose height or width is less than $T$.
%\begin{figure}
%\includegraphics[scale=0.85]{split_rect.png}
%\caption{Maximal Sub-Rect Split}
%\label{rect_split}
%\end{figure}
We also noticed that at first sight of a web-page, what catches people's eyes is not text but images. This gives us fair reason to prefer regions containing images over regions containing text. We adopt two methods to recognize whether a region is text or image. 

First, we noticed that image regions generally contain multiple colors, while the color of a text region is generally monotone. By treating the region as a distribution of pixels over different colors and calculating the entropy of that distribution, we get a quantitative measurement of how colorful the region is. Given a region $R$ with width $w$ and height $h$, the entropy is defined as follows:

\begin{displaymath}
H(R) = -\sum_{c\in R}p(c)\log(p(c))
\end{displaymath}
where $c$ ranges over the colors contained in the region, and
\begin{displaymath}
p(c) = \frac{\text{number of pixels with color } c}{wh}
\end{displaymath}

A region with low entropy has a high probability of being a text region.
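This entropy computation can be sketched as follows, a minimal version assuming the region is given as a matrix of packed color values; the names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Color-entropy sketch for the text/image filter: treat the region as
// a distribution over colors and compute its Shannon entropy. A
// near-monotone (text-like) region scores low; a colorful one scores high.
public class RegionEntropy {
    static double entropy(int[][] region) {
        Map<Integer, Integer> counts = new HashMap<>();
        int total = 0;
        for (int[] row : region)
            for (int c : row) { counts.merge(c, 1, Integer::sum); total++; }
        double h = 0;
        for (int n : counts.values()) {
            double p = (double) n / total;   // p(c) = pixels of color c / (w*h)
            h -= p * Math.log(p);
        }
        return h;
    }
}
```

A single-color region yields entropy 0, while a region split evenly between two colors yields $\ln 2$, matching the intuition that colorful regions score higher.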

Another method we employ is SVM classification. We notice that a region containing only text has the following interestingly unique characteristics. First, the pixel distribution is sparse: text characters generally occupy a large area but draw only a small number of pixels. When we draw the lower-case letters of the English alphabet on a white region using the font Arial, the average percentage of non-white pixels is 0.248; for upper-case letters, this value is 0.267.
 
Second, the vertical distribution of a character image shows an interesting pattern. Consider writing on a ruled piece of paper: only characters with descenders such as ``j, g, y'' occupy the lower part of the text region, and only characters such as ``h, i, j'' reach the upper part. Most characters sit in the center. Thus if we calculate the percentage of non-black pixels in each row of a text region, we can expect a consistent pattern that can be learned by an SVM.

To deal with text of different heights, we first scale the candidate region to a parameterized height $H$, preserving the aspect ratio. Then, for each row of pixels, we calculate the percentage of non-black pixels. This gives us a vector of $H$ elements, which, after normalization, is used for classification.
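This feature-vector construction can be sketched as follows, a minimal version using nearest-neighbor scaling for simplicity; the names are illustrative.

```java
// Sketch of the text-detection feature vector: scale the region to a
// fixed height H (nearest-neighbor, aspect ratio preserved) and record,
// per row, the fraction of non-black pixels.
public class RowProfile {
    static double[] profile(int[][] region, int targetH) {
        int h = region.length, w = region[0].length;
        int targetW = Math.max(1, w * targetH / h);   // keep aspect ratio
        double[] v = new double[targetH];
        for (int y = 0; y < targetH; y++) {
            int srcY = y * h / targetH;               // nearest-neighbor source row
            int nonBlack = 0;
            for (int x = 0; x < targetW; x++)
                if (region[srcY][x * w / targetW] != 0) nonBlack++;
            v[y] = (double) nonBlack / targetW;       // row density in [0, 1]
        }
        return v;
    }
}
```

The resulting $H$-element vector is what the SVM classifies after normalization.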

We noticed that many companies use text-based logos (Google, eBay, Flickr, etc.). To reduce false negatives, in which these logos are treated as text and ignored, we filter a region out of the candidate set only when both of the filters mentioned above report it as text. Thus the colorful logos of Google and eBay will not be wrongly excluded.

%\begin{figure}[h]
%\centering
%\includegraphics[scale=0.3]{credit_card_logo.jpg}
%\caption{Credit Card Logos}
%\label{visa_logo}
%\end{figure}

The last method we use for filtering is a data-driven learning technique. Naturally, if an icon appears on many web-pages, it is less likely to be a feature of one specific web-page. One example is the set of credit card logos shown in Figure \ref{visa_logo}, which are displayed on almost all finance-related websites; clearly, they should not be used to identify any particular one of these websites. To recognize this situation, we maintain a database that records processed websites and their features. If a feature has appeared on too many web-pages, it is filtered out of the candidates.

Our splitting method works well with most images, extracting image parts from their background. However, we have the problem of over-splitting, in which a split region contains incomplete information. For example, Google's logo has a big horizontal space between the ``G'' and the first ``o'', so our algorithm will split this logo if it is rendered at a high resolution.

To solve this problem, we add a step that tries to re-combine these over-split regions into a whole. To avoid over-combining, i.e., merging separate features into one, we set up some combination rules. First, the two regions must be close enough. Second, the combination must not introduce too much whitespace. Figure \ref{combine_space} compares cases in which large and small amounts of whitespace are introduced; we set a threshold for the percentage of newly introduced whitespace that is allowed. With these rules, we first group the regions, then for each group draw a covering rectangle as the combined result.

%\begin{figure}[h]
%\centering
%\includegraphics[scale=0.6]{combine.png}
%\caption{Combination of Regions introduces whitespace}
%\label{combine_space}
%\end{figure}

\subsection{Image Feature Comparison}
Finally, we describe our method of feature image comparison. We first restate the problem to be solved: given the features extracted from the original website and those from a suspicious web-page, we want to know whether they are the same.

Our method is again based on SVM classification. In \cite{cvpr2005_dalal}, N. Dalal and B. Triggs propose the Histogram of Oriented Gradients (HOG) image feature descriptor for the purpose of object detection. It has proved extremely effective in human detection and image categorization. In \cite{sa11_shrivastava}, Shrivastava et al. demonstrate their work using HOG to cluster pictures of natural scenes. In our method, we make use of HOG for feature image comparison.

Recall that in the ``Edge Detection'' section we described how to calculate the gradient value $\nabla(x,y)$ at a given point $(x,y)$. The result there is a scalar. In HOG processing, we also take the direction of the gradient into account and get a normalized vector. 
\begin{displaymath}
\vec\nabla(x,y) =\frac{1}{\sqrt{2}} [\nabla_x,\nabla_y]                                                      
\end{displaymath}
We split the image into an $m \times n$ grid, where each cell is a $k \times k$ square. For each cell, we calculate the gradient vector of every pixel in the cell, giving $\vec\nabla_1$ to $\vec\nabla_{k^2}$. These $k^2$ vectors are separated into $w$ buckets based on their angles, where $w$ is a predefined value. For example, when $w$ is 4, we have four buckets containing the vectors with angles in $[0, \frac{\pi}{2}), [\frac{\pi}{2}, \pi), [\pi, \frac{3\pi}{2})\text{ and }[\frac{3\pi}{2}, 2\pi)$. This forms the HOG of that cell.

For each cell, we sum the values in each bucket and normalize them to get $w$ values; for the entire image, we get $m\times n\times w$ values. These values form the descriptor we use for SVM classification.
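The per-cell computation can be sketched as follows, a minimal version assuming signed gradients bucketed over $[0, 2\pi)$ as in the example above; the names are illustrative.

```java
// Sketch of one HOG cell: per-pixel gradient vectors are binned by
// angle into w buckets, magnitudes summed and L1-normalized. The cell
// is the k-by-k square with top-left corner (x0, y0) in image f[y][x].
public class HogCell {
    static double[] histogram(int[][] f, int x0, int y0, int k, int w) {
        double[] bins = new double[w];
        for (int y = y0; y < y0 + k; y++)
            for (int x = x0; x < x0 + k; x++) {
                double gx = f[y][x + 1] - f[y][x - 1];
                double gy = f[y + 1][x] - f[y - 1][x];
                double mag = Math.sqrt(gx * gx + gy * gy);
                if (mag == 0) continue;                // flat pixel: no vote
                double angle = Math.atan2(gy, gx);     // in (-pi, pi]
                if (angle < 0) angle += 2 * Math.PI;   // map to [0, 2*pi)
                bins[(int) (angle / (2 * Math.PI / w)) % w] += mag;
            }
        double sum = 0;
        for (double b : bins) sum += b;
        if (sum > 0) for (int i = 0; i < w; i++) bins[i] /= sum;  // normalize
        return bins;
    }
}
```

Concatenating the $m \times n$ cell histograms yields the full descriptor; in our implementation we follow Dalal and Triggs' unsigned-gradient setting discussed in the ``Implementation'' section.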

The size of the HOG descriptor is proportional to the image size for a fixed cell size; thus, to compare image features of different sizes, we need to scale them to a predefined fixed dimension $[M, N]$, whose value is discussed later in the ``Implementation'' section. 

To correctly match image features that are the same in content but different in size, we prepare the training data by first stretching the source image to different dimensions, then scaling them all back to dimension $[M,N]$ to generate positive training samples. We then use the target image to generate the test set. From the classification result we can tell whether the two image features are the same. This process is repeated for every pair of feature images to be compared.

\begin{table*}[ht!]
\renewcommand{\arraystretch}{1.3}%
\centering
\begin{tabular}[t]{|L{2cm}|L{1.5cm}|L{4cm}|L{3cm}|}
\hline
\textbf{Category}& \textbf{Type} & \textbf{Name} & \textbf{Value} \\ \hline
Filter& Size& Minimal Width & 5px\\ \hline 
Filter& Size&Minimal Height & 5px \\ \hline
Filter& Size& Minimal Area & 100px$^2$ \\ \hline
Filter& Text& Height Threshold & 25px \\ \hline
Filter& Entropy & Entropy Threshold & 0.72 \\ \hline
SVM& Image&  Dimension & 500$\times$500px$^2$ \\ \hline 
SVM& Image& HOG Bucket Size & 9 \\ \hline
SVM& Image& HOG Cell Size & 50px \\ \hline
SVM& Text & Height & 50px \\ \hline
\end{tabular}
\caption{Parameter values}
\end{table*}

\section{Implementation}
In the previous sections, we gave a thorough introduction to the algorithms we use. In this section, we discuss our implementation, including the environment, the external tools we used, our parameter choices, and some of the optimization work we have done. 

The main program is written in Java and JavaScript. All the source code is readily available on our website. For web-page screenshot retrieval, we make use of PhantomJS \cite{phantomjs}, developed by A. Hidayat: a headless browser based on the WebKit engine that provides a JavaScript programming API. For SVM classification, we use LIBSVM \cite{2011_libsvm} from Chang et al.

\subsection{Parameter values}
In this section we discuss the parameter values used in our algorithm. Table 1 lists the parameters and their values.

The first three parameters control the behavior of the size filter. Any region that has a width or height less than 5px, or an area less than 100px$^2$, is considered unable to hold valid information and is thus ignored.

In order to decrease false positives in text identification, we introduce a height threshold for text detection. From our observations, most web-pages use text font sizes between 10 and 16px. Given a candidate, we first try to split it into rows and apply the text-filtering SVM to each row. If a region cannot be split horizontally and has a height greater than 25px, we assume it is not normal text and skip the text filter step.

Our claim that the entropy value can help distinguish text from image features is supported by the experimental results shown in Figure \ref{entropy}. In the figure, the entropy values for text features and image features have distinct distributions. We choose the value 0.72, which gives the maximal margin, as the threshold distinguishing a text feature from an image feature.

%\begin{figure}
%\centering
%\includegraphics[scale=0.51]{entropy.png}
%\caption{Image \& Text Entropy Distribution}
%\label{entropy}
%\end{figure}

The next two parameters are for the HOG descriptor. As described in the previous section, the size of the HOG descriptor is proportional to the size of the input image, so to compare different images we must first scale them to the same size. Too large a size leads to a bigger descriptor and hurts the performance of SVM classification; too small a size leads to severe information loss and hurts accuracy. Table 2 shows the length of the HOG descriptor with the same cell size (50$\times$50px$^2$) under some common dimensions. 

Figure \ref{svm_train_time} shows the training time required for training sets with different descriptor sizes. All data is collected from a training set with 10089 rows. It can be seen that the time needed for SVM training is proportional to the size of the descriptor; a feature with dimension 1024$\times$768 thus requires more than 3 times as much time to train. Considering that the training process is repeated for each feature, and that we already get nearly 100\% accuracy in tests at dimension 500$\times$500, we do not consider a bigger size worthwhile.

\begin{table}
\renewcommand{\arraystretch}{1.2}%
\begin{tabular}{|c|c|}
\hline
\textbf{Dimension} & \textbf{Descriptor Length}\\
\hline
1024$\times$768 & 2646\\ 
\hline
800$\times$600 & 1728\\
\hline
500$\times$500 & 900\\
\hline
\end{tabular}
\centering
\caption{HOG descriptor size under different dimensions}
\end{table}

%\begin{figure}
%\includegraphics[scale=0.5]{svm_train_speed.png}
%\centering
%\caption{SVM Training Time}
%\label{svm_train_time}
%\end{figure}

Similarly, we tried different HOG cell sizes and chose the biggest one that does not severely impact performance. In \cite{cvpr2005_dalal}, Dalal and Triggs suggest using unsigned gradients and a bucket size of 9, which performed best in their human-detection tests. We started with the same settings, got satisfying results, and therefore kept these parameters unchanged.

The last parameter we mention here is the height $H$ used when we apply the SVM to identify text regions. In this case the model can be pre-trained, so performance is not a major concern. Worried that too small a descriptor length would affect accuracy, we doubled the height threshold used in text filtering, which gives us $H = 50$.  

\subsection{SVM training data}
Our method relies heavily on SVM classification, both to identify text regions and to compare feature images.
In both cases, it is easy and straightforward to generate positive training data. 

For text identification, we generate positive samples by creating images containing strings of random length, font, and size. We use four of the most common fonts: Georgia, Sans-Serif, Arial, and Courier. The font size varies between 12 and 20, which we believe covers the most common font sizes on web-pages. We use both text from books (the Bible, in this case) and random words from a dictionary as content. We trained our model with 45541 positive samples.  

For feature image comparison, we scale the feature image under comparison into different dimensions. For each image to be compared, we scale it into 100 different dimensions, from 100$\times$10 to 1100$\times$1100. These scaled images are then used to generate positive training samples for recognizing this image, which means each model is trained with 100 positive samples.
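The positive-sample generation can be sketched as follows; the stretch schedule below is illustrative, and the helper names are our own.

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Sketch of positive-sample generation for feature comparison:
// stretch the source feature to a range of dimensions, then scale
// each variant back to the fixed SVM input size [M, N]. The stretch
// schedule (100 steps of 10px) is illustrative.
public class SampleGen {
    static BufferedImage scale(BufferedImage src, int w, int h) {
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = out.createGraphics();
        g.drawImage(src, 0, 0, w, h, null);   // resample to the target size
        g.dispose();
        return out;
    }

    static BufferedImage[] positives(BufferedImage feature, int m, int n) {
        BufferedImage[] samples = new BufferedImage[100];
        for (int i = 0; i < 100; i++) {
            int d = 100 + i * 10;                        // stretched dimension
            samples[i] = scale(scale(feature, d, d), m, n); // back to [M, N]
        }
        return samples;
    }
}
```

Each of the 100 resulting images is then converted to a HOG descriptor and used as a positive row in the model's training set.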

For the negative training data, we need random pictures, which we gather from Flickr (\url{http://www.flickr.com/}).
Flickr is a picture-sharing website where people around the world upload and share the pictures they shoot, which makes it a perfect place to retrieve random pictures. Unfortunately, it does not provide a bulk download function, so we use PhantomJS to automatically download pictures from Flickr photo streams. Our script keeps refreshing the web-page, getting a new set of pictures each time. We retrieve the image URLs by looking for \textless IMG\textgreater\ tags with CSS class ``defer'', which is the tag Flickr uses to hold its photos. Repeating this process, we collected 10843 unique random pictures, from which the negative training samples for image feature comparison are generated. The JavaScript used to download these pictures can also be found in our source code.

\section{Experimental Results}
We conduct our experiments on a test machine with an AMD A10-6800K quad-core APU and 8GB of memory, running Ubuntu Desktop 13.10 64-bit and Oracle JDK 1.7.0\_45 64-bit for Ubuntu. 

Our experimental results consist of three parts. First, we test the accuracy of our text detection algorithm, which is one of the most critical parts of our method. Then we apply the entire algorithm to phishing websites extracted from PhishTank.com to verify its effectiveness. Finally, we perform a performance analysis to assess whether our method is suitable for online phishing detection.

\subsection{Accuracy of Text Filtering and Image Comparing}
Our test data for the text identification algorithm consist of two parts: 9468 rows of data generated from images containing a single row of text and 11942 rows of data generated from random pictures downloaded from Flickr. Each text image contains a single row of 80 characters. All the text is extracted from Mark Twain's \textit{The Adventures of Tom Sawyer}, Lewis Carroll's \textit{Alice's Adventures in Wonderland} and Leo Tolstoy's \textit{War and Peace}, to make sure the distribution of our test set conforms to that of real English literature. The test results are listed in Table 3.

\begin{table}
\renewcommand{\arraystretch}{1.2}%
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Type} & \textbf{Input} & \textbf{Correct} & \textbf{Accuracy} \\
\hline
Text Image & 9468 & 9465 & 99.97\% \\
\hline
Non-text Image & 11942 & 11764 & 98.51\% \\
\hline
\end{tabular}
\caption{Text Identification Accuracy}
\end{table}

For image comparison accuracy, our test is designed as follows: in each round we randomly choose a picture from the image library as the original. We then generate images of different sizes by scaling the original, and use them together with the pre-calculated negative set generated from random pictures to train a model. This model is then applied to a test set consisting of the original image in different sizes and random pictures from the image library. We generate 300 positive samples, used together with 9997 negative examples. We repeated the test 200 times and always obtained 100\% accuracy, which we believe provides sufficient support for our method. 
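To make the round structure of this protocol concrete, the sketch below re-creates it in miniature, with a plain perceptron standing in for the per-image SVM and two-dimensional toy vectors standing in for image descriptors; it illustrates the train-then-evaluate loop, not our actual pipeline.

```python
import random

random.seed(0)  # deterministic toy data for this sketch

def train_linear(pos, neg, epochs=50, lr=0.1):
    """Tiny perceptron used as a stand-in for the per-image SVM."""
    dim = len(pos[0])
    w, b = [0.0] * dim, 0.0
    data = [(x, 1) for x in pos] + [(x, -1) for x in neg]
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:  # misclassified: nudge the boundary
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(model, x):
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

def sample(center, n, spread=0.3):
    """Toy vectors clustered around a center, standing in for descriptors
    of scaled logo images (positive) or random pictures (negative)."""
    return [[c + random.uniform(-spread, spread) for c in center]
            for _ in range(n)]

# One round: train on scaled "originals" vs. random negatives, then
# evaluate on held-out samples of both kinds and record the accuracy.
pos_train, neg_train = sample([1.0, 1.0], 100), sample([-1.0, -1.0], 100)
model = train_linear(pos_train, neg_train)
pos_test, neg_test = sample([1.0, 1.0], 50), sample([-1.0, -1.0], 50)
correct = (sum(predict(model, x) == 1 for x in pos_test)
           + sum(predict(model, x) == -1 for x in neg_test))
accuracy = correct / 100
```

In the real test, this round is repeated 200 times with a freshly chosen original image each time.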

\subsection{Application to phishing website}

We test our detection method using 25 URLs fetched from PhishTank and inspect the results manually. These phishing web-pages all differ from the original web-pages, which means they cannot be detected by traditional methods. We consider our method successful if the logo of the website is included in the extracted features. In our test, 24 out of 25 examples succeed. The single failure is caused by an irregular (non-rectangular) logo, but even in that case we are still able to extract a rectangle that contains the logo.

This test also provides evidence that our method outperforms existing methods. Figure \ref{paypal_detect} shows a phishing website for PayPal that does not look like the real PayPal site. Traditional methods, which compare a page to the original, would therefore fail to recognize it as phishing. Our method, however, successfully locates the PayPal logo, marked by the red rectangle. This shows that our method overcomes the inherent limitation of existing methods and is immune to web-page layout changes: as long as a phishing website uses the logo of the original website, which we believe it almost always will, our method can locate the logo and thus detect the phishing website.

\begin{figure}
\centering
\includegraphics[scale=0.3]{paypal_detect.png}
\caption{Detection of a PayPal phishing website}
\label{paypal_detect}
\end{figure}
We also analyze the other features extracted by our method, which turn out not to be useful for phishing detection. We extract on average 6 image features per web-page, with a maximum of 11 and a minimum of 2. Of these features, 62\% are text rendered in large fonts that our text filter fails to remove; the rest are small pictures contained in the web-page. 

\subsection{Performance Analysis}
For our method to be usable for online phishing detection, its performance matters, so we also measure it. Running on the test machine, feature extraction takes 38 seconds on average, with a maximum of 52 seconds and a minimum of 16 seconds. We observe that the extraction time is primarily related to the complexity of the page layout.

Image feature comparison is the primary performance bottleneck of our method, because each comparison of two features requires training a separate model. Generating the dataset and training the model takes 35 seconds on average. Assuming we extract 6 features from each web-page, comparing two web-pages requires $6 \times 6 = 36$ pairwise comparisons, or about $36 \times 35 = 1260$ seconds, which is around 21 minutes. This is too slow for an online phishing detection system.

To overcome this limitation, we regularly train a multi-class classification model. Initially, the model contains only well-known logos such as Google, PayPal, Amazon, etc. If we can identify an image feature using this pre-trained model, we directly mark the web-page as phishing. Otherwise, we put the feature into a database, where it waits for human inspectors to process it manually. Whenever a suspected web-page is confirmed to be a phishing website, we add the new feature to our training data and re-train the model. This approach performs much better than the previous one: with the model pre-trained, recognizing a known feature takes less than 1 second, which means we can recognize a phishing website within 6 seconds on average. In total, it then takes 44 seconds to determine whether a suspicious website is phishing.
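As an illustration of this lookup flow (not our actual implementation), the sketch below uses cosine similarity against stored reference vectors as a stand-in for the multi-class SVM's decision scores; the threshold value and the three-dimensional vectors are hypothetical.

```python
def classify_feature(feature, known_logos, threshold=0.8, review_queue=None):
    """Match an extracted feature against pre-trained known-logo models.
    Each "model" here is just a reference vector; cosine similarity is a
    stand-in for the multi-class SVM's decision score."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    best_name, best_score = None, 0.0
    for name, ref in known_logos.items():
        score = cos(feature, ref)
        if score > best_score:
            best_name, best_score = name, score
    if best_score >= threshold:
        return best_name                 # known logo: flag page as phishing
    if review_queue is not None:
        review_queue.append(feature)     # unknown: defer to human inspectors
    return None

# Hypothetical reference vectors for two known logos.
known_logos = {"paypal": [1.0, 0.0, 0.0], "google": [0.0, 1.0, 0.0]}
review_queue = []
match = classify_feature([0.9, 0.1, 0.0], known_logos,
                         review_queue=review_queue)
# match == "paypal"; an unmatched feature is appended to review_queue instead
```

Confirmed phishing features from the review queue are what we fold back into the training data before re-training.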

\section{Future Work}
Our method employs only linear and rectangular splitting patterns. It works well with most websites organized in rectangular layouts, but may encounter problems with websites organized in irregular shapes. We plan to build a framework on top of the existing code that supports dynamically adding new split patterns and switching between them. This will allow us to build more powerful and easily extensible detectors. 

Processing speed is also crucial when dealing with large numbers of suspicious phishing websites. Although we achieved acceptable performance in our experiments, we are still exploring ways to improve analysis speed. Migrating to OpenCL is one option we are considering: by taking advantage of the parallel processing power of modern GPUs, we hope to make our algorithm ready for production-level applications.

In the section ``Filter Region'', we described a data-driven filter that determines how important a given feature is by checking whether the feature also appears in other web-pages. We believe this is potentially the most powerful filter in our system because of its learning ability. However, training this filter requires a sustained effort of data collection and analysis, which is still ongoing. We would like to focus on training this filter in future work.

We also note that our text identification is based on the distribution of English text; it may work with other Latin-script languages but may not work well with Asian or Arabic scripts. We would therefore like to extend our text identification method to other language families. 


\section{Conclusion}

Phishing websites are among the major threats to Internet security. To fight phishing websites that spring up all across the Internet, fast and reliable automatic detection methods are crucial. Most traditional methods rely on the overall similarity of two web-pages, which introduces a high false negative rate for phishing web-pages that do not look similar to the original. We develop a feature-based method that extracts visual features from suspicious web-pages and compares them to the features of known websites.

In our experiments, we show that our new method successfully detects over 90\% of phishing websites that evade detection by traditional methods, which we believe is sufficient to demonstrate its advantage. We show that our SVM-based text-in-image identification achieves over 98\% accuracy, demonstrating a simple but reliable way to detect text in a picture. We also discuss the possibility of using data-driven methods to further increase detection speed and accuracy. 



\bibliographystyle{plain}
\bibliography{reference}
\end{document}