\documentclass[twocolumn]{article}
\usepackage{amsmath,amssymb}
\usepackage{pstricks}
\usepackage{graphicx}
\usepackage{xspace}
\usepackage{multirow}
\usepackage{subfig}
\usepackage{array}
\usepackage{url}
\usepackage{pdfpages}
\usepackage{booktabs}
\usepackage{balance}
\usepackage{authblk}
\usepackage{xcolor}  % provides \textcolor, used by \mycomment
\usepackage{bbm}     % provides \mathbbm, used by \ind
\newcommand{\vct}[1]{\ensuremath{\boldsymbol{#1}}}
\newcommand{\mat}[1]{\ensuremath{\mathtt{#1}}}
\newcommand{\set}[1]{\ensuremath{\mathcal{#1}}}
\newcommand{\con}[1]{\ensuremath{\mathsf{#1}}}
\newcommand{\T}{\ensuremath{^\top}}
\newcommand{\ind}[1]{\ensuremath{\mathbbm 1_{#1}}}
\newcommand{\argmax}{\operatornamewithlimits{\arg\,\max}}
\newcommand{\argmin}{\operatornamewithlimits{\arg\,\min}}
\newcommand{\mycomment}[1]{\textcolor{red}{#1}}
\newcommand{\mycommentfixed}[1]{\textcolor{green}{#1}}
\newcommand{\myparagraph}[1]{\smallskip \noindent \textbf{#1}}
\newcommand{\ie}{\emph{i.e.}\xspace}
\newcommand{\eg}{\emph{e.g.}\xspace}
\newcommand{\etal}{\emph{et al.}\xspace}
\newcommand{\etc}{\emph{etc.}\xspace}
\newcommand{\aka}{\emph{a.k.a.}\xspace}
\newcommand{\deltaphish}{\texttt{$\delta$Phish}\xspace}
\begin{document}
\title{DeltaPhish: Detecting Phishing Webpages\\in Compromised Websites\thanks{Preprint version of the work accepted for publication at ESORICS 2017.}}
\author[1,2]{Igino Corona}
\author[1,2]{Battista Biggio}
\author[2]{Matteo Contini}
\author[1,2]{Luca Piras}
\author[2]{Roberto Corda}
\author[2]{Mauro Mereu}
\author[2]{Guido Mureddu}
\author[1,2]{Davide Ariu}
\author[1,2]{Fabio Roli}
\affil[1]{Pluribus One, via Bellini 9, 09123 Cagliari, Italy}
\affil[2]{DIEE, University of Cagliari, Piazza d'Armi 09123, Cagliari, Italy}
\date{} \setcounter{Maxaffil}{0}
\maketitle
\abstract{The large-scale deployment of modern phishing attacks relies on the automatic exploitation of vulnerable websites in the wild, to maximize profit while hindering attack traceability, detection and blacklisting.
To the best of our knowledge, this is the first work that specifically leverages this adversarial behavior for detection purposes. We show that phishing webpages can be accurately detected by highlighting HTML code and visual differences with respect to other (legitimate) pages hosted within a compromised website.
Our system, named DeltaPhish, can be installed as part of a web application firewall, to detect the presence of anomalous content on a website after compromise, and possibly prevent access to it.
DeltaPhish is also robust against adversarial attempts in which the HTML code of the phishing page is carefully manipulated to evade detection. We empirically evaluate it on more than 5,500 webpages collected in the wild from compromised websites, showing that it is capable of detecting more than 99\% of phishing webpages, while misclassifying less than 1\% of legitimate pages. We further show that the detection rate remains higher than 70\% even under very sophisticated attacks carefully designed to evade our system.}
\section{Introduction}
In spite of more than a decade of research, phishing is still a concrete, widespread threat that leverages social engineering to acquire confidential data from victim users~\cite{Beardsley2005}. Phishing scams are often part of a profit-driven economy, where stolen data is sold in underground markets~\cite{Han2016,Bursztein2014}. They may even be used to achieve political or military objectives~\cite{Hong2012,Khonji2013}. To maximize profit, as with most current cybercrime activities, modern phishing attacks are deployed automatically and on a large scale, exploiting vulnerabilities in publicly-available websites through the so-called~\emph{phishing kits}~\cite{Han2016,Bursztein2014,Cova2008,Invernizzi2012,APWG2015}. These toolkits automate the creation of phishing webpages on hijacked legitimate websites, and advertise the newly-created phishing sites to attract potential victims through dedicated spam campaigns.
The data harvested by the phishing campaign is then typically sold on the black market, and part of the profit is reinvested to further support the scam campaign~\cite{Han2016,Bursztein2014}.
To grasp the scale of this underground economy, note that, according to the most recent Global Phishing Survey by APWG, published in 2014, $59,485$ out of the $87,901$ domains linked to phishing scams (\ie, $71.4\%$) were actually pointing to legitimate (compromised) websites~\cite{APWG2015}.
\begin{figure*}[t]
\centering
\includegraphics[height=0.25\textwidth]{figs/ex-home.pdf} \hspace{1pt}
\includegraphics[height=0.25\textwidth]{figs/ex-legit.pdf} \hspace{1pt}
\includegraphics[height=0.25\textwidth]{figs/ex-phish.pdf}
\caption{Homepage (\emph{left}), legitimate (\emph{middle}) and phishing (\emph{right}) pages hosted in a compromised website.}
\label{fig:examples}
\end{figure*}
Compromising vulnerable, legitimate websites not only enables large-scale deployment of phishing attacks; it also provides several other advantages for cyber-criminals.
First, it spares them the need to register domains and to deal with hosting services to deploy their scam. This also circumvents recent approaches that detect malicious domains by evaluating abnormal domain behaviors (\eg, burst registrations, typosquatting domain names) induced by the need to automate domain registration~\cite{hao16-ccs}.
On the other hand, website compromise is only a \emph{pivoting} step towards the final goal of the phishing scam. In fact, cyber-criminals normally leave the \emph{legitimate} pages hosted in the compromised website \emph{intact}. This allows them to hide the presence of website compromise not only from the eyes of its legitimate owner and users, but also from blacklisting mechanisms and browser plug-ins that rely on reputation services (as legitimate sites tend to have a good reputation)~\cite{Han2016}.
For these reasons, malicious webpages in compromised websites typically remain undetected for longer periods of time. This has also been highlighted in a recent study by Han~\etal~\cite{Han2016}, in which the authors have exposed vulnerable websites (\ie, honeypots) to host and monitor phishing toolkits.
They have reported that the first victims usually connect to phishing webpages within a couple of days after the hosting website has been compromised, while the phishing website is blacklisted by common services like \texttt{Google Safe Browsing} and \texttt{PhishTank} after approximately twelve days, on average.
The same authors have also pointed out that the most sophisticated phishing kits include functionalities to evade blacklisting mechanisms. The idea is to redirect the victim to a randomly-generated subfolder within the compromised website, where the attacker has previously installed another copy of the phishing kit.
Even if the victim realizes that he/she is visiting a phishing webpage, he/she will likely report the randomly-generated URL of the visited webpage (and not that of the redirecting one), which renders blacklisting unable to stop this scam.
To date, several approaches have been proposed for phishing webpage detection (Sect.~\ref{sect:rel-work}). Most of them are based on comparing the candidate phishing webpage against a set of known targets~\cite{Basnet2014,Medvet2008}, or on extracting some generic features to discriminate between phishing and legitimate webpages~\cite{Chen2014,Blum2010}.
To our knowledge, this is the first work that leverages the adversarial behavior of cyber-criminals to detect phishing pages in compromised websites, while overcoming some limitations of previous work. The key idea behind our approach, named \texttt{DeltaPhish} (or \deltaphish, for short), is to compare the HTML code and the \emph{visual} appearance of potential phishing pages against the corresponding characteristics of the homepage of the compromised (hosting) website (Sect.~\ref{sect:deltaphish}). In fact, phishing pages normally exhibit a much more significant difference in appearance and structure with respect to the website homepage than the other \emph{legitimate} pages of the website do. The underlying reason is that phishing pages should resemble the appearance of the website targeted by the scam, while legitimate pages typically share the same style and appearance as their homepage (see, \eg, Fig.~\ref{fig:examples}).
Our approach is also robust to well-crafted manipulations of the HTML code of the phishing page aimed at evading detection, such as those performed in~\cite{Liang2016} to mislead Google's Phishing Pages Filter embedded in the \emph{Chrome} web browser.
This is achieved through two distinct \emph{adversarial fusion} schemes that combine the outputs of our HTML and visual analyses while accounting for potential attacks against them.
We consider attacks targeting the HTML code of the phishing page, since altering its visual appearance may significantly affect the effectiveness of the phishing scam. Preserving the visual similarity between a phishing page and the website targeted by the scam is indeed a fundamental \emph{trust-building} tactic used by miscreants to attract new victims~\cite{Beardsley2005}.
In Sect.~\ref{sect:exp}, we simulate a case study in which \deltaphish is deployed as a module of a web application firewall, used to protect a specific website. In this setting, our approach can be used to detect whether users are accessing potential phishing webpages that are uploaded to the monitored website after its compromise. To simulate this scenario, we collect legitimate and phishing webpages hosted in compromised websites from \texttt{PhishTank}, and compare each of them with the corresponding homepage (which can be set as the reference page for \deltaphish when configuring the web application firewall).
We show that, under this setting, \deltaphish is able to correctly detect more than 99\% of the phishing pages while misclassifying less than 1\% of legitimate pages. We also show that \deltaphish can retain detection rates higher than $70\%$ even in the presence of adversarial attacks carefully crafted to evade it. To encourage reproducibility of our research, we have also made our dataset of $1,012$ phishing and $4,499$ legitimate webpages publicly available, along with the classification results of \deltaphish.
We conclude our work in Sect.~\ref{sect:conclusions}, highlighting its main limitations and related open issues for future research.
\section{Phishing Webpage Detection}
\label{sect:rel-work}
We categorize here previous work on the detection of phishing webpages along two main axes, depending on $(i)$ the detection approach, and $(ii)$ the features used for classification. The detection approach can be \emph{target-independent}, if it exploits generic features to discriminate between phishing and legitimate webpages, or \emph{target-dependent}, if it compares the suspect phishing webpage against known phishing targets. In both cases, features can be extracted from the webpage URL, its HTML content and visual appearance, as detailed below.
\myparagraph{Target-independent.} These approaches exploit features computed from the webpage URL and its domain name~\cite{Garera2007,Blum2010,Le2011,Marchal2012}, from its HTML content and structure, and from other sources, including search engines, HTTP cookies, website certificates~\cite{Pan2006,Xu2013,Basnet2014,Whittaker2010,Xiang2010,Xiang2011,Britt2012,Jo2010}, and even publicly-available blacklisting services like \texttt{Google Safe Browsing} and \texttt{PhishTank}~\cite{Ludl2007}.
Another line of work has considered the detection of phishing emails by analyzing their content along with that of the linked phishing webpages~\cite{Fette2007}.
\myparagraph{Target-dependent.} These techniques typically compare the potential phishing page to a set of known targets (\eg, \texttt{PayPal}, \texttt{eBay}).
HTML analysis has also been exploited to this end, often complemented by the use of search engines to identify phishing pages with similar text and page layout~\cite{Britt2012,Wardman2011}, or by the analysis of the pages linked to (or by) the suspect pages~\cite{Wenyin2012}. The main difference with respect to target-independent approaches is that most target-dependent approaches have considered measures of \emph{visual similarity} between webpage \emph{snapshots} or embedded images, using a wide range of image analysis techniques, mostly based on computing low-level visual features, including color histograms, two-dimensional Haar wavelets, and other well-known image descriptors normally exploited in the field of computer vision~\cite{Chen2009a,Fu2006,Chen2014,Chen2010}. Notably, only a few works have considered the combination of both HTML and visual characteristics~\cite{Medvet2008,Afroz2011}.
\myparagraph{Limitations and Open Issues.} The main limitations of current approaches and the related open research issues can be summarized as follows.
Although \emph{target-dependent} approaches are normally more effective than \emph{target-independent} ones, they require a-priori knowledge of the set of websites that may potentially be targeted by phishing scams, or otherwise try to retrieve such targets during operation by querying search engines.
This clearly makes them unable to detect phishing scams against unknown, legitimate services.
On the other hand, \emph{target-independent} techniques are, in principle, easier to evade, as they exploit generic characteristics of webpages to discriminate between phishing and legitimate pages, instead of making an explicit comparison between webpages. In particular, as shown in~\cite{Liang2016}, it is not only possible to infer enough information on how a publicly-available, \emph{target-independent} anti-phishing filter (like Google's Phishing Pages Filter) works, but it is also possible to exploit this information to evade detection, by carefully manipulating phishing webpages to resemble the characteristics of the legitimate webpages used to learn the classification system.
Evasion becomes clearly more difficult if visual analysis is also performed, as modifying the visual appearance of the phishing page tends to compromise the effectiveness of the phishing scam~\cite{Beardsley2005}.
However, mainly due to the higher computational complexity of this kind of analysis, only a few approaches have combined HTML and visual features for target-dependent phishing detection~\cite{Medvet2008,Afroz2011}, and it is not clear to what extent they are robust against well-crafted adversarial attacks.
Another relevant limitation is that no dataset has been made publicly available for comparing different detection approaches on a common benchmark, and this clearly hinders research reproducibility.
Our approach overcomes many of the aforementioned limitations. First, it does not require any knowledge of legitimate websites potentially targeted by phishing scams.
Although it may thus be considered a target-independent approach, it is not based on extracting generic features from phishing and legitimate webpages, but rather on comparing the characteristics of the phishing page to those of the homepage hosted in the compromised website.
This makes it more robust than other target-independent approaches against evasion attempts in which, \eg, the HTML code of the phishing webpage is obfuscated, as this would make the phishing webpage even more \emph{different} from the homepage.
Furthermore, we adopt a security-by-design approach while engineering our system, explicitly accounting for well-crafted attacks against it. As we will show, our \emph{adversarial fusion} mechanisms guarantee high detection rates even under worst-case changes to the HTML code of phishing pages, by effectively leveraging the role of the visual analysis.
Finally, we publicly release our dataset to encourage research reproducibility and benchmarking.
\vspace{-10pt}
\section{DeltaPhish} \label{sect:deltaphish}
\vspace{-5pt}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.9\textwidth]{figs/deltaphish.pdf}
\caption{High-level architecture of \deltaphish.}
\label{fig:detection}
\end{center}
\end{figure*}
In this section we present \texttt{DeltaPhish} (\deltaphish). Its name derives from the fact that it determines whether a certain URL contains a phishing webpage by evaluating HTML and visual \emph{differences} between the input page and the website homepage.
The general architecture of \deltaphish is depicted in Fig.~\ref{fig:detection}.
We denote with $x \in \set X$ either the URL of the input webpage or the webpage itself, interchangeably.
Accordingly, the set $\set X$ represents all possible URLs or webpages. The homepage hosted in the same domain of the visited page (or its URL) is denoted with $x_{0} \in \set X$.
Initially, our system receives the URL of the input webpage $x$ and retrieves that of the corresponding homepage $x_{0}$.
Each of these URLs is received as input by a \emph{browser automation} module (Sect.~\ref{sect:bro-auto}), which downloads the corresponding page and outputs its HTML code and a snapshot image.
The HTML code of the input page and that of the homepage are then used to compute a set of HTML features (Sect.~\ref{sub-sec:HTML-Based}).
Similarly, the two snapshot images are passed to another feature extractor that computes a set of visual features (Sect.~\ref{sub-sec:Snapshot-Based}).
The goal of these feature extractors is to map the input page $x$ onto a vector space suitable for learning a classification function.
Recall that both feature sets are computed based on a \emph{comparison} between the characteristics of the input page $x$ and those of the homepage $x_{0}$.
We denote the two mapping functions implemented by the HTML and by the visual feature extractor with $\delta_{1}(x) \in \mathbb R^{\con d_{1}}$ and $\delta_{2}(x) \in \mathbb R^{\con d_{2}}$, respectively, where $\con d_{1}$ and $\con d_{2}$ denote the dimensionalities of the two vector spaces.
For compactness of our notation, we do not explicitly highlight the dependency of $\delta_{1}(x)$ and $\delta_{2}(x)$ on $x_{0}$, even if it should be clear that such functions depend on both $x$ and $x_{0}$.
These two vector-based representations are then used to learn two distinct classifiers, \ie, an HTML- and a Snapshot-based classifier. During operation, these classifiers will respectively output a \emph{dissimilarity} score $s_{1}(x) \in \mathbb R$ and $s_{2}(x) \in \mathbb R$ for each input page $x$, which essentially measure how \emph{different} the input page is from the corresponding homepage. Thus, the higher the score, the higher the probability of $x$ being a phishing page. These scores are then combined using different (standard and adversarial) \emph{fusion} schemes (Sect.~\ref{sub-sec:class-fusion}), to output an aggregated score $g(x) \in \mathbb R$. If $g(x) \geq 0$, the input page $x$ is classified as a phish, and as legitimate otherwise.
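For concreteness, the following minimal Python sketch illustrates this decision flow. All function and variable names here are ours (hypothetical), and the simple average used as a fusion rule is only a placeholder for the standard and adversarial fusion schemes of Sect.~\ref{sub-sec:class-fusion}.
{\footnotesize
\begin{verbatim}
def deltaphish_decision(x, x0, delta1, delta2,
                        s1, s2, fuse):
    # delta1/delta2: HTML and visual feature
    # extractors (comparison against homepage x0);
    # s1/s2: trained classifiers returning
    # dissimilarity scores.
    s_html = s1(delta1(x, x0))
    s_vis = s2(delta2(x, x0))
    g = fuse(s_html, s_vis)  # aggregated score g(x)
    return "phish" if g >= 0 else "legitimate"

def average_fusion(s_html, s_vis):
    # placeholder fusion rule (simple average)
    return 0.5 * (s_html + s_vis)
\end{verbatim}}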
Before delving into the technical implementation of each module, it is worth remarking that \deltaphish can be implemented as a module in web application firewalls, and, potentially, also as an online blacklisting service (to filter suspicious URLs). Some implementation details that can be used to speed up the processing time of our approach are discussed in Sect.~\ref{sect:exp-res}.
\subsection{Browser Automation}
\label{sect:bro-auto}
The browser automation module launches a browser instance using \emph{Selenium}\footnote{\url{http://docs.seleniumhq.org}} to gather the snapshot of the landing web page and its HTML source, even if the latter is dynamically generated with (obfuscated) JavaScript code. This is indeed a common case for phishing webpages.
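As an illustration, a minimal Python sketch of this module is given below, using the Selenium bindings with a headless Chrome instance; the actual browser and driver configuration used by \deltaphish may differ.
{\footnotesize
\begin{verbatim}
from selenium import webdriver

def fetch_page(url, snapshot_path):
    # Return the rendered HTML (after JavaScript
    # execution) and save a snapshot of the page.
    opts = webdriver.ChromeOptions()
    opts.add_argument("--headless")
    driver = webdriver.Chrome(options=opts)
    try:
        driver.get(url)
        html = driver.page_source
        driver.save_screenshot(snapshot_path)
    finally:
        driver.quit()
    return html, snapshot_path
\end{verbatim}}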
\subsection{HTML-Based Classification}
\label{sub-sec:HTML-Based}
For HTML-based classification, we define a set of $11$ features, obtained by comparing the input page $x$ and the homepage $x_{0}$ of the website hosted in the same domain. They will be the elements of the $\con d_{1}$-dimensional feature vector $\delta_{1}(x)$ (with $\con d_{1}=11$) depicted in Fig.~\ref{fig:detection}.
We use the Jaccard index $J$ as a similarity measure to compute most of the feature values. Given two sets $A, B$, it is defined as the cardinality of their intersection divided by the cardinality of their union:
\begin{equation}
J(A, B)={\lvert A \cap B \rvert} / {\lvert A \cup B \rvert} \in [0,1] \, .
\label{eq:jaccard}
\end{equation}
If $A$ and $B$ are both empty, $J(A,B)=1$.
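A direct Python implementation of Eq.~\ref{eq:jaccard}, including the empty-set convention, is shown below; it is reused in the sketches of the individual features that follow.
{\footnotesize
\begin{verbatim}
def jaccard(a, b):
    # Jaccard index of two sets, as defined above;
    # equals 1 when both sets are empty.
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)
\end{verbatim}}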
The $11$ HTML features used by our approach are described below.
\myparagraph{$(1)$ URL.} We extract all URLs corresponding to hyperlinks in $x$ and $x_{0}$ through the inspection of the \texttt{href} attribute of the \texttt{<a>} tag,\footnote{Recall that the \texttt{<a>} tag defines a hyperlink and the \texttt{href} attribute is its destination.} and create a set for each page. URLs are considered once in each set without repetition.
We then compute the Jaccard index (Eq.~\ref{eq:jaccard}) of the two sets extracted.
For instance, let us assume that $x$ and $x_{0}$ respectively contain these two URL sets:
\begin{enumerate}
\item[$U_{x}:$] \begin{itemize}
\item[] \{\texttt{https://www.example.com/p1/}, \texttt{https://www.example.com/p2/},
\item[] \texttt{https://support.example.com/}\}
\end{itemize}
\item[$U_{x_{0}}:$] \begin{itemize}
\item[] \{\texttt{https://support.example.com/p1}, \texttt{https://www.example.com/p2/},
\item[] \texttt{https://support.example.com/en-us/ht20}\}
\end{itemize}
\end{enumerate}
In this case, since only one element is exactly the same in both sets (\ie, \texttt{https://www.example.com/p2/}), the Jaccard index is $J(U_{x}, U_{x_{0}})=0.2$.
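A possible implementation of this feature is sketched below in Python, assuming an HTML parser such as BeautifulSoup (not prescribed by our system) and reusing the \texttt{jaccard} helper defined above; \texttt{html\_x} and \texttt{html\_x0} denote the HTML sources of $x$ and $x_{0}$.
{\footnotesize
\begin{verbatim}
from bs4 import BeautifulSoup

def href_set(html):
    # Unique hyperlink destinations, i.e., the
    # href attributes of <a> tags in the page.
    soup = BeautifulSoup(html, "html.parser")
    return {a["href"]
            for a in soup.find_all("a", href=True)}

# Feature (1): Jaccard index of the two URL sets.
f_url = jaccard(href_set(html_x),
                href_set(html_x0))
\end{verbatim}}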
\myparagraph{$(2)$ 2LD.} This feature is similar to the previous one, except that we consider the second-level domains (2LDs) extracted from each URL instead of the full link. The 2LDs are considered once in each set without repetition. Let us now consider the example given for the computation of the previous feature. In this case, both $U_{x}$ and $U_{x_{0}}$ will contain only \texttt{example.com}, and, thus, $J(U_{x}, U_{x_{0}})=1$.
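The sketch below illustrates this feature under a naive 2LD extraction that simply keeps the last two labels of the hostname; correctly handling public suffixes such as \texttt{.co.uk} would require a public-suffix list, which we omit here for brevity.
{\footnotesize
\begin{verbatim}
from urllib.parse import urlparse

def second_level_domain(url):
    # Naive 2LD extraction, e.g.
    # https://support.example.com/p1 -> example.com
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    return ".".join(parts[-2:])

# Feature (2): Jaccard index of the 2LD sets.
f_2ld = jaccard(
    {second_level_domain(u)
     for u in href_set(html_x)},
    {second_level_domain(u)
     for u in href_set(html_x0)})
\end{verbatim}}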
\myparagraph{$(3)$ SS.} To compute this feature, we extract the content of the \texttt{