\documentclass[times,twocolumn,10pt]{article}
%DIF LATEXDIFF DIFFERENCE FILE
%DIF DEL source_JNM_2.tex   Tue Jul  8 10:35:50 2014
%DIF ADD source.tex         Tue Jul  8 12:13:33 2014

\usepackage{graphicx}
\usepackage{algpseudocode}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{authblk}
\usepackage[usenames,dvipsnames]{color}
\usepackage{hyperref}
\usepackage{array}
\usepackage{verbatim}
\usepackage{latex8}
\usepackage{times}

\newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}

\graphicspath{{/home/harper/Repositories/research-papers-hao/phishing-detection/image/}}
%\pagestyle{empty}
%DIF PREAMBLE EXTENSION ADDED BY LATEXDIFF
%DIF UNDERLINE PREAMBLE %DIF PREAMBLE
\RequirePackage[normalem]{ulem} %DIF PREAMBLE
\RequirePackage{color}\definecolor{RED}{rgb}{1,0,0}\definecolor{BLUE}{rgb}{0,0,1} %DIF PREAMBLE
\providecommand{\DIFaddtex}[1]{{\protect\color{blue}\uwave{#1}}} %DIF PREAMBLE
\providecommand{\DIFdeltex}[1]{{\protect\color{red}\sout{#1}}}                      %DIF PREAMBLE
%DIF SAFE PREAMBLE %DIF PREAMBLE
\providecommand{\DIFaddbegin}{} %DIF PREAMBLE
\providecommand{\DIFaddend}{} %DIF PREAMBLE
\providecommand{\DIFdelbegin}{} %DIF PREAMBLE
\providecommand{\DIFdelend}{} %DIF PREAMBLE
%DIF FLOATSAFE PREAMBLE %DIF PREAMBLE
\providecommand{\DIFaddFL}[1]{\DIFadd{#1}} %DIF PREAMBLE
\providecommand{\DIFdelFL}[1]{\DIFdel{#1}} %DIF PREAMBLE
\providecommand{\DIFaddbeginFL}{} %DIF PREAMBLE
\providecommand{\DIFaddendFL}{} %DIF PREAMBLE
\providecommand{\DIFdelbeginFL}{} %DIF PREAMBLE
\providecommand{\DIFdelendFL}{} %DIF PREAMBLE
%DIF END PREAMBLE EXTENSION ADDED BY LATEXDIFF
%DIF PREAMBLE EXTENSION ADDED BY LATEXDIFF
%DIF HYPERREF PREAMBLE %DIF PREAMBLE
\providecommand{\DIFadd}[1]{\texorpdfstring{\DIFaddtex{#1}}{#1}} %DIF PREAMBLE
\providecommand{\DIFdel}[1]{\texorpdfstring{\DIFdeltex{#1}}{}} %DIF PREAMBLE
%DIF END PREAMBLE EXTENSION ADDED BY LATEXDIFF

\begin{document}
\date{}

\title{An Image-based Feature Extraction Approach for Phishing Website Detection}

%\author{Hao Jiang\\
%Clarkson University\\
%hajiang@clarkson.edu\\
%\and
%Joshua S. White\\
%whitejs@clarkson.edu\\
%\and
%Jeanna N. Matthews\\
%jmatthews@clarkson.edu\\
%}

\author[1]{Hao Jiang}
\author[1]{Joshua S. White}
\author[1]{Jeanna N. Matthews}
\affil[1]{Clarkson University}

\maketitle
\thispagestyle{empty}

%----------------------------------------------------------------------
\begin{abstract}
Phishing website creators and anti-phishing defenders are in an arms race. Cloning a website is fairly easy and can be automated by any junior programmer. Attempting to recognize the numerous phishing links posted in the wild, e.g., on social media sites or in email, is a constant game of escalation. Automated phishing website detection systems need both speed and accuracy to win. We present a new method of detecting phishing websites and a prototype system, LEO (Logo Extraction and cOmparison), that implements it. LEO uses image feature recognition to extract ``visual hotspots'' of a webpage and compares these parts with known logo images. LEO can recognize phishing websites that have a different layout from the original websites, or whose logos are embedded in images. Compared to existing visual similarity-based methods, our method has a much wider application range and higher detection accuracy. Our method successfully recognized 24 of 25 random URLs from PhishTank that previously evaded detection by other visual similarity-based methods. 
%DIF < We also show that our method of SVM-based text identification is capable of achieving over 98\% accuracy for detecting text embedded in images on these potentially malicious sites while maintain a good performance.
\end{abstract}

%----------------------------------------------------------------------
\section{Introduction}
Phishing is one of the most successful and prominent attack methods\cite{Moore}. It also requires little technical skill on the part of the attacker. Unlike other attack methods, phishing does not require infiltration of a victim's machine, which would leave traces of malicious activity. This alone makes phishing extremely hard to detect: a victim may never know that they are under attack. 

Creating a phishing webpage is simple and cheap. A Google search for the phrase ``create a phishing website'' returns approximately 208,000 results, most of which are detailed step-by-step tutorials. With a hosting service (or access to compromised machines) and free tools that copy a given URL, anyone can set up a phishing website in minutes. With readily available tools, this process can even be automated. 

To deal with phishing websites that are constantly sprouting up all across the Internet, we need an efficient and automated method for identifying them quickly and accurately. As scanning the entire URL address space is impractical, there have been many innovative ideas of where to look for suspicious phishing URLs. A simple but effective way is to ask everyone to report suspicious URLs that they encounter. PhishTank (\DIFdelbegin %DIFDELCMD < \url{http://www.phishtank.com}%%%
\DIFdelend \DIFaddbegin \DIFadd{http://www.phishtank.com}\DIFaddend ) is a website that allows users to submit these suspicious URLs and also to verify the status of URLs submitted by others. Netcraft (\DIFdelbegin %DIFDELCMD < \url{http://www.netcraft.com/}%%%
\DIFdel{) offers }\DIFdelend \DIFaddbegin \DIFadd{http://www.netcraft.com/) offers }\DIFaddend a Firefox plugin that enables users to report a suspicious URL with a single click. Some researchers are more interested in automating the URL collection work. J. White et al.\cite{spie2012_jwhite} describe searching for suspicious URLs in Twitter data. I. Jeun et al.\cite{springer2013_jil} create a honeypot (the ``SpamTrap'') to collect URLs from spam emails.

Beyond reliable methods for collecting suspicious URLs, we need a fast and reliable method to identify whether these websites are truly phishing pages. This paper presents an innovative image-based feature extraction method for phishing website recognition. Our method works by first taking a screenshot of a target webpage, then locating ``visual hotspots'' in it. A visual hotspot is a contiguous rectangular region that contains non-text visual information. These hotspots represent image features of the target webpage. The features are then compared with a pre-built logo library. If any of these features match a logo in the library, the target webpage is flagged as a phishing suspect. 

To evaluate our algorithm, we implement a prototype system, LEO (Logo Extraction and cOmparison), run it against real-world phishing websites, and evaluate its accuracy and performance. Most parts of LEO can be run in parallel, which grants it the scalability necessary for large-scale online phishing detection.

\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{google_drive_phishing.png}
\caption{A phishing website of Google Drive}
\label{gdrive_phishing}
\end{figure*}

The rest of the paper is organized as follows. Section 2 reviews previous work on phishing detection and compares it with our new method. Section 3 describes our new algorithm in more detail and Section 4 presents an overview of our system implementation. Section 5 evaluates the accuracy and performance of LEO. %DIF < Section 6 discusses some possibility of extending this work and
Section 6 presents our conclusion.

%----------------------------------------------------------------------
\section{Previous Work}

Different methods have been used to compare the similarity of two webpages and thus serve as the basis for identifying phishing websites. We categorize these methods into three high-level types: structure-based comparison, visual-based comparison and content-based comparison. 

Structure-based comparison works under the assumption that similar webpages will have a similar underlying DOM tree structure. Structure-based comparison methods first parse the HTML webpages to construct the corresponding DOM trees, and then use various algorithms to compare them. Rosiello et al. \cite{securecomm2007_rosiello} compare the HTML tags in the DOM tree, looking for similar sub-tree structures. This method is effective against phishing websites that directly copy the content of original websites. However, it has an obvious disadvantage: the appearance of a webpage is not uniquely defined by its DOM tree structure. Thus an attacker can easily avoid such detection by using different HTML tags to generate webpages that look similar. For example, using \textless DIV\textgreater\ instead of \textless TABLE\textgreater\ elements for page layout can lead to exactly the same visual effect, while maintaining a totally different DOM tree. In addition, attackers can choose to dynamically generate DOM trees via scripts at runtime. In this case, directly fetching the webpage URL does not yield the required DOM tree, which will lead to false negatives.

Visual-based comparison, on the other hand, focuses on the final visual effect rather than on the underlying DOM structure. Specifically, screenshots of webpages are captured using techniques such as a headless browser. Comparison between these images is then conducted using various image processing techniques. Visual comparison methods overcome some of the disadvantages of DOM-based methods by comparing the visual output of a webpage, which is the exact image seen by end users. 

A. Fu et al. describe their work \cite{ieee2006_fu} of using Earth Mover's Distance (EMD)\cite{iccv1998_rubner} to evaluate the difference between two webpages. EMD is a metric of the similarity between two probability distributions over a region. The closer two images are, the smaller the EMD value will be. J. White et al. use the hash value of images in their work\cite{spie2012_jwhite}. The authors calculate the pHash of a screenshot and evaluate the difference using the Hamming distance between two hash values. They present experimental results illustrating that a small change to the original image leads to only a small increase in the Hamming distance.

Bohunsky et al.\cite{www10_bohunsky} describe their work of using the visual image to do webpage comparison and clustering. Instead of comparing the entire picture, they split the webpage into small rectangular areas which they call ``visual boxes''. By comparing the visual box structures of two webpages, they attempt to detect correlation. However, two webpages that have a similar layout but totally different text content may be categorized as similar. For example, almost all news websites present news with a title picture and an abstract. Despite the visual similarity, the content may be completely different. This shows that relying only on the visual layout of a webpage for clustering may introduce a high rate of false positives. 

These works are effective against phishing websites that look exactly the same as the original websites. However, in practice we have found phishing examples that do not look like the original websites at all. Thus they evade this kind of detection easily. Figure \ref{gdrive_phishing} shows an example phishing website of Google Drive we retrieved from PhishTank, as well as the actual Google Drive login page. We can see that this phishing webpage does not actually look like the original legitimate Google Drive page. %DIF < that uses Google's Single-Sign-On system. 
We have observed over 200 samples from PhishTank and noticed that this is not an isolated case. 

We believe that ``low-quality'' phishing webpages, i.e., those that look very different from the original source, still have a chance of catching victims. Certainly, \DIFdelbegin \DIFdel{no }\DIFdelend \DIFaddbegin \DIFadd{not }\DIFaddend all web-surfers have enough knowledge to compare the phishing webpage to the legitimate one. This is especially true if victims are not familiar with the actual webpage they intend to access. Failing to catch this type of phishing website is a big disadvantage of the discussed methods.

G. Wang et al.\ described an idea that is closest to our effort in \cite{WLBWBSS11}. Noticing the importance of logos in the detection of phishing websites, they create a Firefox plugin named Verilogo that is capable of extracting image files from \texttt{<IMG>} tags in webpages. The extracted images are then compared with known website logos using the SIFT\cite{sift} method. If a webpage contains images that are identical to some known logos and the webpage is not authorized to use the logo, the webpage is flagged as a phishing suspect. In the case that phishing websites use logos as separate image files, we believe that this method works as well as ours. However, it fails to deal with ``embedded'' logos, where logo images are included as part of the background image, as we can see in Figure \ref{gdrive_phishing}. The \DIFaddbegin \DIFadd{new }\DIFaddend method we propose is capable of extracting logos from the background image, thus overcoming this problem.

\DIFdelbegin \DIFdel{Contentl-based }\DIFdelend \DIFaddbegin \DIFadd{Content-based }\DIFaddend comparison focuses on the webpage text, often using machine learning techniques.
Y. Zhang et al.\cite{www07_zhang} create a content-based phishing website detection system in which they apply TF-IDF\cite{Jones72_tfidf}, an algorithm widely adopted in text mining and information retrieval, to the webpage text.
R. Basnet et al. \cite{scai08_basnet} apply different machine learning techniques, including SVM, neural networks and SOM, \DIFaddbegin \DIFadd{to such content-based features }\DIFaddend and evaluate their performance. C. Whittaker et al.\cite{ndss10_google} present the automatic maintenance of Google's blacklist of phishing websites using classification with similar feature sets.
\DIFaddbegin 

\DIFaddend S. Abu-Nimeh et al.\cite{cml2007_abu} present a performance comparison of different methods for detection of phishing email. D. Miyamoto \cite{anip2009_miyamoto} did similar work on phishing website detection and showed that AdaBoost\cite{Freund1997119} worked best in their particular case.  

Content-based phishing detection also has a performance advantage compared to image-based methods. However, content-based methods typically have higher error rates and higher false positive rates, which limits their accuracy. We believe that the combination of both content-based and image-based features will yield a better result. This is central to our future research interests.

%----------------------------------------------------------------------
\section{Algorithm Description}
In this section, we describe the details of our new method of visual-based feature comparison. We first focus on the extraction of ``features''\DIFaddbegin \DIFadd{, }\DIFaddend or prominent visual elements, from a webpage. Our goal is to isolate recognizable logos. We observe that most phishing websites, even ``low-quality'' ones that do not look like the original webpage, will at least include a logo. This provides us a perfect target for phishing detection.

\subsection{Feature Extraction}

To extract features, we first distinguish valid graphical information from the background using edge detection, then split it into small rectangular regions. We then apply a collection of filters to the region set in order to identify the \DIFdelbegin \DIFdel{most }\DIFdelend features most likely to contain a logo. Figure \ref{sys_proc} shows this process. We describe each step in detail in the following paragraphs.
\begin{figure}[t]
\centering
    \includegraphics[width=0.39\textwidth]{process.png}
    \caption{Feature Extraction Process}
    \label{sys_proc}
\end{figure}

Edge detection is done by calculating the gradient of each pixel based on its 3$\times$3 neighborhood. We first calculate the horizontal and vertical gradients using the central difference vector $\mathbf{t} = [-1, 0, 1]^T$. Thus we have 
\begin{align*}
&\nabla_x(x,y) = f(x+1,y) - f(x-1,y)\\
&\nabla_y(x,y) = f(x,y+1)-f(x,y-1)
\end{align*} We then calculate the gradient value at point $(x,y)$ as 
\begin{displaymath}
\nabla = \frac{1}{\sqrt{2}} \sqrt{(\nabla_x)^2 + (\nabla_y)^2}
\end{displaymath}
 The result is then rounded to an integer between 0 and 255. For an RGB image, we repeat the calculation for the three color channels and take the largest value as the final result. 

Edge detection is good at removing \DIFdelbegin \DIFdel{a }\DIFdelend constant background colors. However, some webpages use a gradient image as background, which gives $\nabla$ a small non-zero value in the background region. To remove such interference, we set up a threshold $T$ and write $h(x,y)$ as a piece-wise function.
\begin{displaymath}
   h(x,y) = \left\{
     \begin{array}{lr}
       \nabla & : \nabla \ge T\\
       0 & : \nabla < T
     \end{array}
   \right.
\end{displaymath} 
By properly choosing the value of $T$, we make sure $\nabla$ at the background will be 0, which gives us a grayscale image that we use as the input for region splitting.
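As a concrete illustration, the edge-detection step above can be sketched as follows. This is a minimal Python sketch for a single grayscale channel (the actual LEO implementation is in Java); border pixels are simply left at zero here, which is our simplification.

```python
import math

def edge_detect(img, T):
    """img: 2D list of grayscale values (0-255).
    Returns the thresholded gradient image h(x, y)."""
    rows, cols = len(img), len(img[0])
    h = [[0] * cols for _ in range(rows)]
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            gx = img[y][x + 1] - img[y][x - 1]  # horizontal central difference
            gy = img[y + 1][x] - img[y - 1][x]  # vertical central difference
            g = round(math.sqrt(gx * gx + gy * gy) / math.sqrt(2))
            h[y][x] = g if g >= T else 0        # suppress gradient backgrounds
    return h
```

For an RGB input, one would run this once per channel and keep the largest value, as described above.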

The goal of region splitting is to separate the image into smaller regions that contain non-black pixels. Given a region to be split, we first calculate the lower bound of this region, i.e., the smallest rectangle that encloses all the non-black pixels in the region. We then try to draw a line to split this region\DIFaddbegin \DIFadd{, }\DIFaddend in either the vertical or horizontal direction. There may be multiple possible lines, and we choose the one that gives a maximal margin. If such a line can be found, we split the given region into two sub-rectangles, and then repeat the process on each of these sub-rectangles. If no line can be found, we switch to rectangular splitting. This method of \DIFdelbegin \DIFdel{rectangular }\DIFdelend \DIFaddbegin \DIFadd{rectangle }\DIFaddend splitting tries to find the biggest sub-rectangle in the given region. If none of the previous methods work, we consider the region unsplittable and add it to the result list. These unsplittable rectangles are the input to the filtering step. Figure \ref{split_algorithm} shows the pseudo-code of the algorithm we use for region splitting.
\begin{figure}[h]
\centering
\fbox{
\parbox{0.45\textwidth}{
\begin{algorithmic}
\Function{split}{Rectangle region, List result}
\State\textcolor{OliveGreen}{;; Remove excessive space}
\State \Call{lowerBound}{region};
\State\textcolor{OliveGreen}{;; First try to split the region using lines}
\State  vline $\gets$ \Call{vline}{region};
\State  hline $\gets$ \Call{hline}{region};
\State  line $\gets$ \Call{maxMargin}{vline, hline};
\If {line $\neq$ NULL}
	\State region1, region2 $\gets$ \Call{lineSplit}{region, line};
	\State \Call{split}{region1, result};
	\State \Call{split}{region2, result};
	\State \Return;
\EndIf
\State\textcolor{OliveGreen}{;; Split the region using a rectangle}
\State region $\gets$ \Call{rectSplit}{region};
\If{region $\neq$ NULL} 
	\State \Call{split}{region,result};
	\State \Return;
\EndIf
\State\textcolor{OliveGreen}{;; Not splittable, add the region to the result list}
\State result.\Call{add}{region};
\State \Return
\EndFunction

\end{algorithmic}
}
}
\caption{Algorithm of Region Splitting}
\label{split_algorithm}
\end{figure}
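The line-splitting recursion of Figure \ref{split_algorithm} can be sketched in Python roughly as follows. This is an illustrative sketch only: the rectSplit fallback is omitted, and regions are $(x_0, y_0, x_1, y_1)$ tuples with exclusive upper bounds over the thresholded edge image.

```python
def lower_bound(img, region):
    """Shrink region = (x0, y0, x1, y1) to the smallest rectangle enclosing
    all non-zero (non-black) pixels; returns None for an empty region."""
    x0, y0, x1, y1 = region
    pts = [(x, y) for y in range(y0, y1) for x in range(x0, x1) if img[y][x]]
    if not pts:
        return None
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    return (min(xs), min(ys), max(xs) + 1, max(ys) + 1)

def best_gap(empty):
    """Longest run of empty columns (or rows): (length, midpoint) or None."""
    best, start = None, None
    for i, e in enumerate(empty + [False]):          # sentinel ends final run
        if e and start is None:
            start = i
        elif not e and start is not None:
            if best is None or i - start > best[0]:
                best = (i - start, (start + i) // 2)
            start = None
    return best

def split(img, region, result):
    region = lower_bound(img, region)                # remove excessive space
    if region is None:
        return
    x0, y0, x1, y1 = region
    col_empty = [all(img[y][x] == 0 for y in range(y0, y1)) for x in range(x0, x1)]
    row_empty = [all(img[y][x] == 0 for x in range(x0, x1)) for y in range(y0, y1)]
    vgap, hgap = best_gap(col_empty), best_gap(row_empty)
    if vgap and (hgap is None or vgap[0] >= hgap[0]):  # maximal-margin line
        c = x0 + vgap[1]
        split(img, (x0, y0, c, y1), result)
        split(img, (c, y0, x1, y1), result)
    elif hgap:
        c = y0 + hgap[1]
        split(img, (x0, y0, x1, c), result)
        split(img, (x0, c, x1, y1), result)
    else:
        result.append(region)   # unsplittable: a candidate feature region
```

After `lower_bound`, the borders of a region always contain non-black pixels, so any empty run found by `best_gap` is strictly interior and yields a valid split line.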

We then apply a set of filters to the split regions. These filters are designed to remove regions that are unlikely to \DIFaddbegin \DIFadd{contain }\DIFaddend logos or other important features. First, we observe that it would be difficult for a region that is too small or too narrow to contain any valid information. One example is the rectangle that encloses a horizontal rule created by the \textless HR\textgreater\ tag. We set up a threshold $T$ for the dimensions of the rectangles, and ignore all those that have a height or width less than $T$.

Second, we prefer to filter out regions that contain text rather than images. We observe that when users first see a webpage, the first thing that \DIFdelbegin \DIFdel{catches their eye is images }\DIFdelend \DIFaddbegin \DIFadd{catches their eye is an image }\DIFaddend rather than text. We have trained an SVM model to identify regions that contain only text. To do this, we notice that the vertical distribution of a character image shows an interesting pattern. More specifically, consider that when we write on a ruled piece of paper, only characters such as ``j, g, y'' will occupy the lower part of the text region, and only characters such as ``h, i, j'' will occupy the higher part. Most of the characters are in the center part. Thus if we calculate the percentage of non-black pixels in each row of a text region, we can expect to get a consistent pattern which can be learned by an SVM. Figure \ref{text_histo} shows this distribution pattern\DIFdelbegin \DIFdel{in average}\DIFdelend . This algorithm is specific to text written in a Latin-based alphabet, but we are interested in developing similar filters for other alphabets such as Cyrillic, Arabic or Asian characters.

\begin{figure}[t]
\centering
    \includegraphics[width=0.45\textwidth]{text_histo.png}
    \caption{\DIFdelbeginFL \DIFdelFL{Histogram showing text-image }\DIFdelendFL \DIFaddbeginFL \DIFaddFL{Text-image row scanning }\DIFaddendFL distribution pattern}
    \label{text_histo}
\end{figure}

To deal with fonts of different sizes or heights, we first scale the candidate region to a parameterized height $H$, preserving the width-to-height ratio. Next, for each row of pixels, we calculate the percentage of non-black pixels. This gives us a histogram of $H$ bins, which, after normalization, is used as the feature descriptor for classification. 
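This descriptor computation can be sketched as follows. Nearest-neighbour scaling and normalizing the histogram to sum to one are our illustrative simplifications, not necessarily the choices made in the implementation.

```python
def row_histogram(img, H):
    """Scale the region to height H (preserving aspect ratio) and return the
    normalized per-row fraction of non-black pixels as an H-bin histogram."""
    rows, cols = len(img), len(img[0])
    W = max(1, round(cols * H / rows))            # keep width/height ratio
    hist = []
    for r in range(H):
        sr = min(rows - 1, r * rows // H)         # nearest source row
        nonblack = sum(1 for c in range(W)
                       if img[sr][min(cols - 1, c * cols // W)] > 0)
        hist.append(nonblack / W)
    total = sum(hist)
    return [v / total for v in hist] if total else hist
```

The resulting $H$-dimensional vector is what would be fed to the text-identification SVM.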

Our splitting method works well with most images, extracting image parts from their background. However, we have a problem of over-splitting, in which a region that should be kept together is split incorrectly. For example, Google's logo has a big vertical gap between ``G'' and the first ``o''. This logo will be split by our algorithm if the resolution is too high.

To solve this problem, we add a step that combines these over-split regions into a whole. We balance the tension between over-splitting and over-combining in this way. First, we have a \DIFdelbegin \DIFdel{threshhold }\DIFdelend \DIFaddbegin \DIFadd{threshold }\DIFaddend for how close two regions must be in order to combine them. Second, we check that the combination will not introduce too much whitespace: we set up a threshold for the percentage of newly introduced whitespace that is allowed. With these rules, we first group the regions, then for each such group, draw a rectangle to cover it as the combined result.
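The two merge rules above can be sketched as a pairwise test. The threshold values here are illustrative placeholders, not the ones used by LEO.

```python
DIST = 10      # illustrative: max pixel gap between two regions to merge them
WS_FRAC = 0.3  # illustrative: max fraction of newly introduced whitespace

def area(r):
    x0, y0, x1, y1 = r
    return (x1 - x0) * (y1 - y0)

def gap(a, b):
    """Smallest axis-aligned gap between rectangles (0 if they overlap)."""
    dx = max(b[0] - a[2], a[0] - b[2], 0)
    dy = max(b[1] - a[3], a[1] - b[3], 0)
    return max(dx, dy)

def try_merge(a, b):
    """Bounding box of a and b if both merge rules pass, else None."""
    m = (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))
    new_ws = area(m) - area(a) - area(b)   # upper bound on added whitespace
    if gap(a, b) <= DIST and new_ws / area(m) <= WS_FRAC:
        return m
    return None
```

Grouping then amounts to repeatedly applying `try_merge` over the region list until no pair can be merged.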

\subsection{Feature Comparison}
\DIFdelbegin %DIFDELCMD < 

%DIFDELCMD < %%%
\DIFdelend Now that we have covered our algorithm for feature identification, we are ready to discuss the method we use for comparing features. Restating the problem to be solved: given the image features extracted from a suspicious webpage, we want to know whether they are identical to known logos. 

Our method is based on SVM classification \DIFaddbegin \DIFadd{using the HOG descriptor}\DIFaddend . In \cite{cvpr2005_dalal}, N. Dalal and B. Triggs propose the Histogram of Oriented Gradients (HOG) feature descriptor for images, for the purpose of object detection. It has proven extremely effective in human detection. In \cite{sa11_shrivastava}, Shrivastava et al. demonstrate the effectiveness of using HOG to cluster pictures of natural scenes. 

Recall that in the previous section we described how we calculate the gradient value $\nabla(x,y)$ of a given point $(x,y)$. The result we get there is a scalar. In HOG processing, we also take the direction of the gradient into account and get a normalized vector. 
\begin{displaymath}
\vec\nabla(x,y) =\frac{1}{\sqrt{2}} [\nabla_x,\nabla_y]                                                      
\end{displaymath}
We split the image into an $m \times n$ grid. Each cell of the grid is a $k \times k$ square. For each cell, we calculate the gradient vector of each pixel in that cell, giving $\vec\nabla_1$ to $\vec\nabla_{k^2}$. These $k^2$ vectors are separated into $w$ buckets based on their angles. For example, when $w$ is 4, we have four buckets that contain vectors with angles in $[0, \frac{\pi}{2})$, $[\frac{\pi}{2}, \pi)$, $[\pi, \frac{3\pi}{2})$ and $[\frac{3\pi}{2}, 2\pi)$. This forms the HOG descriptor of that cell.

For each cell, we sum up the values in each bucket and normalize them to get $w$ values; for the entire image, we get $m\times n\times w$ values. These values form the descriptor we use for SVM classification. The size of the HOG descriptor is proportional to the image size for a fixed cell size; thus, to compare image features of different sizes, we need to scale them to a predefined fixed dimension $[M, N]$. 
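The descriptor computation described above can be sketched as follows. This simplified Python sketch follows the signed-gradient, $[0, 2\pi)$ bucketing described in this section; boundary pixels use clamped differences, which is our simplification.

```python
import math

def hog(img, k, w):
    """HOG descriptor of a grayscale image: per k-by-k cell, magnitudes of
    pixel gradients binned into w angular buckets, each cell L1-normalized."""
    rows, cols = len(img), len(img[0])
    desc = []
    for cy in range(0, rows - k + 1, k):          # m x n grid of k x k cells
        for cx in range(0, cols - k + 1, k):
            buckets = [0.0] * w
            for y in range(cy, cy + k):
                for x in range(cx, cx + k):
                    gx = img[y][min(x + 1, cols - 1)] - img[y][max(x - 1, 0)]
                    gy = img[min(y + 1, rows - 1)][x] - img[max(y - 1, 0)][x]
                    mag = math.hypot(gx, gy) / math.sqrt(2)
                    ang = math.atan2(gy, gx) % (2 * math.pi)  # in [0, 2*pi)
                    buckets[int(ang / (2 * math.pi / w)) % w] += mag
            total = sum(buckets)
            if total:
                buckets = [b / total for b in buckets]
            desc.extend(buckets)                  # m*n*w values in total
    return desc
```

An image with a purely vertical edge, for instance, puts all of its gradient mass into the first bucket of every cell.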

To correctly match image features that contain the same content but differ in size, we use a scale-invariant method. We prepare the training data by first stretching the source image to different dimensions, then scaling them all back to dimension $[M,N]$ to generate positive training samples. We then use the target image to generate the test set. From the classification result \DIFdelbegin \DIFdel{, }\DIFdelend we can tell whether two image features are the same. 
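The positive-sample generation just described amounts to a stretch-and-rescale loop. A minimal sketch, with a nearest-neighbour resize standing in for whatever scaling method the implementation uses:

```python
def resize(img, W, H):
    """Nearest-neighbour resize of a 2D pixel array to W x H."""
    rows, cols = len(img), len(img[0])
    return [[img[r * rows // H][c * cols // W] for c in range(W)]
            for r in range(H)]

def positive_samples(logo, M, N, sizes):
    """Stretch the logo to each (w, h) in sizes, then scale every variant
    back to the fixed classifier dimension [M, N]."""
    samples = []
    for w, h in sizes:
        stretched = resize(logo, w, h)       # a differently-sized occurrence
        samples.append(resize(stretched, M, N))
    return samples
```

Each rescaled variant would then be converted to its HOG descriptor before training.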

To detect phishing websites based on logo recognition, we first prepare a logo library that contains collected logo images of commonly phished sites, then train a multi-class SVM classifier based on features generated from these images. Each logo image is represented by a separate class in this classifier. If a test image falls into any of these classes, we consider the webpage that contains it a phishing candidate. We emphasize that any organization could prepare a logo library specific to their interests (e.g., looking for any logos they use in their legitimate media campaigns).

\begin{table}[ht!]
\renewcommand{\arraystretch}{1.1}%
\centering
\begin{tabular}[t]{|L{1.55cm}|L{3cm}|L{2.1cm}|}
\hline
\textbf{Category}& \textbf{Name} & \textbf{Value} \\ \hline
Filter& Minimal Width & 5px\\ \hline 
Filter& Minimal Height & 5px \\ \hline
Filter& Minimal Area & 100px$^2$ \\ \hline
Filter& Height Threshold & 25px \\ \hline
SVM& Dimension & 500$\times$500px$^2$ \\ \hline 
SVM& HOG Bucket Size & 9 \\ \hline
SVM& HOG Cell Size & 50px \\ \hline
SVM& Height & 50px \\ \hline
\end{tabular}
\caption{Parameter values}
\end{table}

\section{Implementation}
In this section, we present the implementation of LEO, including \DIFdelbegin \DIFdel{the }\DIFdelend our settings of tunable parameters, the training data, the logo library, the environment in which we ran our experiments, and the external tools we used.

The main program is written in Java and Javascript, containing around 4000 lines of code. All the source code is readily available on our website. We use PhantomJS\cite{phantomjs} for webpage screenshot image retrieval and Libsvm\cite{2011_libsvm} for SVM classification.

\subsection{Parameter values}
In this section \DIFdelbegin \DIFdel{, }\DIFdelend we explain the parameter values used in our algorithm. Table 1 lists relevant parameters and their current values.

The first three parameters control the behavior of the size filter. Any region that has a width or height less than 5px, or an area less than 100px$^2$, is considered unable to hold valid information and is thus ignored.

In order to decrease false positives in text identification, we introduce a height threshold for text detection. In our observation, most webpages have a text font size between 10 and 16px. Given a candidate, we first try to split it into rows, and apply the text-filtering SVM to each row. If a region cannot be split horizontally and has a height greater than 25px, we assume it is not normal text and skip the text filter step.

The next two parameters are for the HOG descriptor. As described in the previous section, the size of the HOG descriptor is proportional to the size of the input image. Thus, in order to compare different images, we need to first scale them to the same size. Setting the size too large leads to a bigger descriptor and hurts performance, while setting the size too small \DIFdelbegin \DIFdel{size }\DIFdelend leads to substantial information loss and hurts accuracy. We tried different dimensions (1024$\times$768, 800$\times$600, 500$\times$500) with the same cell size (50$\times$50px$^2$), and chose the smallest dimension that maintains accuracy.

Similarly, we tried different sizes for the HOG cells, and chose the biggest one that does not severely impact accuracy. In \cite{cvpr2005_dalal}, Dalal and Triggs suggest using unsigned gradients and setting the bucket size to 9, which their experiments suggest offers the best performance for human detection. We started with the same setting and got a satisfying result; thus we kept this parameter unchanged.

The last parameter we want to mention here is the height $H$ used by the SVM to identify text regions. In this case, the model can be pre-trained, so performance is not a big problem, but a small descriptor length would affect accuracy. We double the threshold value for text height used in text filtering, which gives us $H = 50$.  

\subsection{SVM training data}
Our method relies heavily on SVM classification, both to identify text regions and to compare feature images, which requires a sufficiently large training set to function properly.

For text identification, we generate positive samples by creating images that contain strings with random length, font and size. We use four of the most common fonts: Georgia, Sans-Serif, Arial and Courier. The font size varies between 12 and 20, which we believe covers the most common font sizes in webpages. We use both text from books (e.g., the Bible) and random words from a dictionary to generate 45,541 positive samples. For feature image comparison, we scale the feature image under comparison into 100 different dimensions, from 100$\times$100 to 1100$\times$1100. These scaled images are then used to generate positive training samples for recognizing this image, which means each model is trained with 100 positive samples.

We also need random pictures to serve as negative training data, which we gather from Flickr (\url{http://www.flickr.com/}).
Flickr is a picture-sharing website where people around the world upload and share the pictures they take, which makes it an ideal source of random pictures. Flickr does not provide a bulk download function, so we used PhantomJS to automatically download pictures from Flickr photo streams. Repeating this process, we collected 10843 unique pictures, which serve as negative training samples for image feature comparison. The JavaScript code we used to download these pictures can also be found in our source code.

\subsection{Logo library}
According to Kaspersky Lab's technical report \cite{kaspersky} on phishing attacks in 2013, over 90\% of phishing attacks target social networks, financial services and mail services. Beyond these, we also notice a trend of attacks against personal cloud service providers. As a demo system, we choose 15 logos from top companies in these fields. The full list is provided in Table \ref{logo_library}.
The process of comparing a given feature to these logos is fully parallelized. Thus, in a production system, the library can grow to a large number of logos without increasing detection latency. We discuss this further when analyzing the performance of our system.
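The parallel comparison can be sketched with a thread pool. Everything here is illustrative: the dot-product `compare` is a toy stand-in for the per-logo SVM model, and the function names are ours:

```python
from concurrent.futures import ThreadPoolExecutor

def compare(feature, logo):
    """Toy stand-in for the per-logo SVM comparison; returns a match score."""
    return sum(f * l for f, l in zip(feature, logo))

def best_match(feature, logo_library, workers=4):
    """Score one extracted feature against every logo concurrently."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(lambda logo: compare(feature, logo), logo_library))
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]

# Three fake "logos"; the extracted feature is closest to logo 0.
library = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
idx, score = best_match([0.9, 0.1], library)
```

Because each logo comparison is independent, the map scales with the number of workers, which is what allows the library to grow without hurting per-page latency.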

\section{Experimental Results}
We conduct our experiments on a test machine with an AMD A10-6800K quad-core APU and 8GB of memory, running Ubuntu Desktop 13.10 64-bit and Oracle JDK 1.7.0\_45 64-bit for Ubuntu. All test screenshots are captured at 1920$\times$1080 resolution.

\subsection{Accuracy of Text Filtering and Image Comparison}
We first present the accuracy of our text-filtering algorithm. We prepared two test sets. The positive test set consists of generated images, each containing a text string from English literature; our algorithm is considered successful if it recognizes these images as ``image contains only text'', in other words, if it answers ``yes'' to them. The negative test set uses pictures downloaded from Flickr; mostly landscape and portrait photos, these images are unlikely to contain only text strings, so we expect our algorithm to answer ``no'' to them.
\begin{table}[t]
\renewcommand{\arraystretch}{1.3}%
\centering
\begin{tabular}{|l|p{5cm}|}
\hline
\textbf{Field} & \textbf{Logo} \\
\hline
Financial System & eBay, Amazon, PayPal, HSBC, IRS, BOA\\ 
\hline
Social Network& Facebook, Twitter, LinkedIn\\ 
\hline
Mail Service & Gmail, Outlook, Yahoo!\\ 
\hline
Cloud Service & Google Drive, Dropbox, Box \\
 \hline
\end{tabular}
\caption{Logo Library content}
\label{logo_library}
\end{table}

The positive test set contains 9468 images, each containing a single row of 80 characters of text; all sentences are extracted from Leo Tolstoy's \textit{War and Peace}. The negative test set contains 11942 pictures downloaded from Flickr. The results are shown in Table \ref{text_rec_accuracy}: our text-filtering algorithm has a false-positive rate of 1.49\% (178 of the 11942 non-text images) and a false-negative rate of 0.03\% (3 of the 9468 text images). This shows that our text-filtering algorithm detects images that contain only text with high accuracy.

To assess image comparison accuracy, we designed the following test: we randomly choose a picture from Flickr and insert it into LEO's logo library, then run LEO on a test input that contains both the scaled original picture and other irrelevant pictures. The test succeeds if LEO recognizes the original picture. We repeated the test 200 times and achieved 100\% accuracy every time, which strongly supports the effectiveness of our algorithm.

\subsection{Application to Phishing Websites}
We manually choose 25 URLs fetched from PhishTank as our test set, following these guidelines: the phishing webpage contains the logo of the original website; the logo is included in our logo library; and the visual appearance of the phishing webpage differs from the original. Previous detection methods based on visual features cannot handle such phishing websites. Our method successfully identified 24 of the 25 test cases as phishing websites. The only failure occurred because the logo was partially overlapped by a floating layer on the webpage; even so, we were still able to extract a rectangle enclosing the logo.

\begin{table}[t]
\renewcommand{\arraystretch}{1.2}%
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Type} & \textbf{Input} & \textbf{Correct} & \textbf{Accuracy} \\
\hline
Text Image & 9468 & 9465 & 99.97\% \\
\hline
Non-text Image & 11942 & 11764 & 98.51\% \\
\hline
\end{tabular}
\caption{Text Identification Accuracy}
\label{text_rec_accuracy}
\end{table}

This test also shows the improvement of our method over existing visual similarity-based methods. Figure \ref{paypal_detect} shows a PayPal phishing website that does not look like the real one; moreover, the logo is not a separate image file but is embedded in the background image. Nevertheless, our method successfully located the PayPal logo (the region marked by the red rectangle) and thus detected this phishing website. This shows that our method overcomes the limitations of existing visual similarity-based methods and is immune to webpage layout changes: as long as a phishing website uses the logo of the original website, which we believe it almost certainly will, our method can locate the logo and detect the phish.

\begin{figure}[ht]
\centering
\includegraphics[width=0.45\textwidth]{paypal_detect.png}
\caption{Detection of PayPal phishing}
\label{paypal_detect}
\end{figure}

\subsection{Performance Analysis}

In this section we present the performance test results of LEO. Running on the test machine, the time required to process one 1920$\times$1080 webpage is 3.11 seconds on average, with a maximum of 9.7 seconds and a minimum of 2.1 seconds. We observed that the time required for image extraction is primarily related to the complexity of the page layout.

In a production system, throughput can be increased by simply adding more machines to process different webpages in parallel. Performance can be further improved by parallelizing some of the steps: because we adopt a top-down method when splitting regions, split regions do not overlap and can be processed in parallel. As an example, we provide a parallel version of the split/combine operations in LEO's source code.
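The key property that enables this is that a top-down split produces disjoint regions, so each half can go to an independent worker. A minimal sketch (the `(x, y, w, h)` region representation and the halve-the-longer-side rule are our simplification of LEO's split step):

```python
from concurrent.futures import ThreadPoolExecutor

def split(region):
    """Top-down split: cut an (x, y, w, h) region into non-overlapping halves."""
    x, y, w, h = region
    if w >= h:
        return [(x, y, w // 2, h), (x + w // 2, y, w - w // 2, h)]
    return [(x, y, w, h // 2), (x, y + h // 2, w, h - h // 2)]

def area(region):
    """Placeholder for per-region analysis work."""
    return region[2] * region[3]

# The two halves of a 1920x1080 page never overlap, so they can be
# analyzed concurrently and their results combined afterwards.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(area, split((0, 0, 1920, 1080))))
```

Applying `split` recursively to each half gives the full top-down decomposition, with every level of the recursion parallelizable in the same way.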

The process of logo recognition can also be parallelized. By preparing different logo libraries and distributing them to different machines in a cluster, an image feature can be compared with multiple logos concurrently. This allows the system to scale fully as the logo library grows. We include a simple distributed framework in LEO's source code that lets users build a cluster for logo recognition; it can be easily extended to support other distributed frameworks such as Apache Hadoop.

\section{Conclusion}

In this paper we presented a new image-based feature extraction method for phishing website detection, together with a prototype system, LEO, that implements the algorithm. LEO recognizes phishing websites by extracting logos from webpage screenshots, which makes it immune to attacks such as layout changes or logo embedding. We applied LEO to real-world phishing examples; the results show that LEO handles different types of phishing while maintaining high accuracy and scalable performance. We believe LEO can be used in conjunction with existing content-based methods to further increase the accuracy of phishing detection.

\bibliographystyle{latex8}
\bibliography{reference}
\end{document}
