%\documentclass[times, 10pt,twocolumn]{article}
\documentclass[10pt, conference, compsocconf]{IEEEtran}
\usepackage{times}
\usepackage{verbatim}
\usepackage{slashbox}
\usepackage{multirow}

\usepackage{amsmath}
\usepackage{amsthm}

\usepackage{graphicx}
\usepackage{epsfig}
\usepackage{subfigure}
\usepackage{ccaption}
\usepackage{algorithmic}
\usepackage{algorithm}

\usepackage{url}



\hyphenation{op-tical net-works semi-conduc-tor}



%-------------------------------------------------------------------------
% take the % away on next line to produce the final camera-ready version
\pagestyle{empty}

%-------------------------------------------------------------------------
\begin{document}
\title{TwitterCop: Catching Cyber-criminals in Twitter}

\author{Chao Yang, Bobby Harkreader and Guofei Gu \\
 Texas A\&M University\\
\{\}@cse.tamu.edu}


\maketitle
\thispagestyle{empty}
\newcounter{count1}


%-------------------------------------------------------------------------
\section{Introduction}

Social networking sites are now part of many people's daily routine: posting recent experiences, finding out what friends are up to, and viewing photos or videos. Sophos experts note that unprecedented amounts of information are updated on these social networking sites every minute. Frequent use of social networking sites makes users targets of cybercriminals intent on stealing identities, spreading malware or bombarding users with spam. Twitter, one of the most popular micro-blogging systems, also suffers from spammers. In August 2009, for example, nearly 11 percent of all Twitter posts were spam~\cite{ref1}. In one specific case, in January 2009, many Twitter users received direct messages from their online followers enticing them to visit a phishing website that attempted to steal their usernames and passwords~\cite{ref2}.




Our goal is to keep Twitter a spam-free community, rather than one flooded with spam that wastes users' time on advertisements or pornographic messages, or even leads them to phishing sites. In order to gain enough attention, a Twitter spammer tends to start by following a large group of target users, hoping a fraction of those users will follow back and become his victims. Thus, not only the content of the spam tweets, but also the following behavior itself can be quite annoying, since most users enable email notifications when someone follows them, and these notification emails can themselves be regarded as a kind of spam. It is therefore beneficial to be able to tell whether or not a specific user is a spammer, and to filter out following notifications and spamming tweets from such spammers.
Also, since Twitter is an open community, many of its users are not English speakers. This makes analyzing the content of a tweet alone insufficient for detecting spammers: a keyword list in English (or Spanish) will miss spammers who write in other languages, and collecting a keyword list covering every language Twitter users might speak is infeasible. In addition, keyword matching alone suffers from a high false negative rate when spammers are shrewd enough to avoid the blacklisted keywords.
Our solution is to identify a set of significant features that can be used to classify users as spammers or non-spammers, by performing experiments that analyze user profile data together with the content of tweets (Twitter messages) fetched from Twitter. Then, using machine learning methods, a model based on these features is learned and serves as a spammer detector that classifies users as spammers or non-spammers. Finally, the detector is evaluated both on Twitter users pre-classified by humans and on unclassified Twitter users.
We structure the remainder of this paper as follows. Section 2 discusses related work. Section 3 presents our proposed solution, including the techniques and methodologies used for spammer detection and the dataset we collected. Section 4 presents our evaluation, and Section 5 gives conclusions and future work.


The source or root of social spam lies in social networking sites and services. Broadly defined, social spam is simply unwanted requests or messages that come from other people or entities within the social network one uses.
In short, our paper makes the following contributions:
\begin{itemize}
\item We design a crawler that collects user profile data and tweets starting from Twitter's public timeline; with it we gathered data on over 500,000 users and more than 20 million tweets.
\item We introduce the notion of trustable users, together with a trust-score algorithm based on Twitter's verified accounts, which substantially reduces the number of users whose tweets need to be analyzed.
\item We design a two-phase URL analysis pipeline that combines a fast machine-learning classifier over URL features with a second check against Google Safe Browsing, and use it to identify malicious users.
\end{itemize}

%-------------------------------------------------------------------------
\section{Related Work}
The goal of this system is to detect spammers -- users who post tweets containing URLs linking to phishing or other malicious web pages on Twitter.

The system consists of three main components: a tweet crawler, a trustable-user detector, and a malicious-URL analyzer.

To classify URLs, we implement and evaluate a Naive Bayes learner and test it on a dataset from the UCI Repository. The program, implemented in Python, consists of three modules: Dataset, Train and Test.
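As a minimal sketch of such a learner (not the actual three-module program; the function names and the add-one smoothing choice here are illustrative assumptions), a categorical Naive Bayes classifier can be written as follows:

```python
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Train a categorical Naive Bayes model: class priors plus
    per-feature, per-class value counts."""
    priors = Counter(labels)
    counts = defaultdict(Counter)  # (feature_index, class) -> value counts
    for row, label in zip(rows, labels):
        for i, value in enumerate(row):
            counts[(i, label)][value] += 1
    return priors, counts

def predict_nb(model, row):
    """Return the class maximizing P(class) * prod_i P(value_i | class)."""
    priors, counts = model
    total = sum(priors.values())
    best, best_p = None, -1.0
    for label, prior in priors.items():
        p = prior / total
        for i, value in enumerate(row):
            c = counts[(i, label)]
            # Add-one (Laplace) smoothing avoids zero probabilities
            # for feature values unseen in this class.
            p *= (c[value] + 1) / (sum(c.values()) + len(c) + 1)
        if p > best_p:
            best, best_p = label, p
    return best
```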




%-------------------------------------------------------------------------
\section{Proposed solution}
This section presents our proposed solution to the problem of finding malicious users on the social networking website Twitter. To find such users, traits of malicious users must be observed. This paper assumes that a malicious user posts one or more links to malicious websites for the purpose of either propagating malware or phishing for identity theft. The solution can be broken into four steps: crawling Twitter for data, extracting URLs from tweets, filtering out trustable users, and analyzing the URLs.

\subsection{Crawling Twitter}
The first step is to collect data from Twitter. Our crawler collects information including user screen names, verified status, follower count, following count and other profile fields. Tweets are also saved so that URLs can be extracted and analyzed. The data is collected starting from Twitter's public timeline, which returns the 20 most recent tweets and the users who posted them. The crawler then collects data on these 20 users, the users who follow them and the users they follow. Using this method, data on over 500,000 users was collected and over 20 million tweets were saved.
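The traversal described above amounts to a breadth-first crawl outward from the public-timeline seed users. In the sketch below, `fetch_followers` and `fetch_followings` are hypothetical placeholders for the Twitter API calls; they are injected as parameters so the traversal logic can be shown (and tested) offline:

```python
from collections import deque

def crawl(seed_users, fetch_followers, fetch_followings, max_users=500000):
    """Breadth-first crawl starting from public-timeline seed users.

    fetch_followers/fetch_followings stand in for Twitter API calls
    that return lists of user identifiers.
    """
    seen = set(seed_users)
    queue = deque(seed_users)
    collected = []
    while queue and len(collected) < max_users:
        user = queue.popleft()
        collected.append(user)  # profile and tweets would be saved here
        # Expand the frontier with both followers and followings.
        for neighbor in fetch_followers(user) + fetch_followings(user):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return collected
```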

\subsection{Extracting URLs}
The next step is to find users who post malicious URLs. To do this, the URLs must first be extracted from the tweets. Shortened URLs pose an additional problem: because Twitter limits the length of a tweet, long links are impractical, and many services have been created that issue a short URL which simply redirects to the real one. Since our method requires analysis of the real URL, each URL extracted from a tweet is resolved to its final destination before being analyzed further.
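A minimal sketch of these two steps, assuming a simple regular expression suffices for tweet text (real tweets may need a more careful tokenizer), might look like:

```python
import re
import urllib.request

# Simple URL pattern; a production extractor would handle trailing
# punctuation and other edge cases.
URL_RE = re.compile(r'https?://[^\s]+')

def extract_urls(tweet_text):
    """Return all URLs appearing in a tweet."""
    return URL_RE.findall(tweet_text)

def expand_url(short_url):
    """Follow redirects to recover the real URL behind a shortener.

    urllib follows HTTP redirects automatically; the final URL is
    available from the response object. (Requires network access.)
    """
    with urllib.request.urlopen(short_url) as resp:
        return resp.geturl()
```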

\subsection{Filtering Trustable Users}
The problem with this naive approach is the volume of users that have been collected. In order to reduce the number of users that need to be processed, this paper introduces the idea of a trustable user. Twitter already has verified user accounts, whose owners have been verified by Twitter. This paper assumes that verified users are trusted and are not malicious. The users that these verified users follow are also partially trustable, since they are followed by trusted users. To determine whether a user is trustable enough, a score is assigned to each user. The score for each verified user is 1. Each non-verified user, $U_{n}$, increases their trustable score for each verified user, $U_{v}$, that follows $U_{n}$. The amount by which the score of $U_{n}$ increases is inversely proportional to the number of users that the verified user, $U_{v}$, follows. If $U_{v}$ follows many users, only a small amount of trust will be given to a non-verified user, $U_{n}$, who is followed by $U_{v}$. On the other hand, if $U_{v}$ follows only a few users, $U_{n}$ will gain a large amount of trust. In order to assign a score, $S_{u}$, to a non-verified user, $U_{n}$, the following algorithm is used:

\begin{algorithm}
\caption{Trust Score Calculation Algorithm}
\label{alg1}
\begin{algorithmic}
\STATE \textbf{function} CalcTrustScore($U_{n}$)
\STATE $followers \gets getFollowers(U_{n})$
\STATE $score \gets 0.0$
\FORALL{$F_{U_{n}} \in followers$}
    \IF{$isVerified(F_{U_{n}})$}
        \STATE $score \gets score + 1/numFollowings(F_{U_{n}})$
    \ENDIF
\ENDFOR
\RETURN $score$
\end{algorithmic}
\end{algorithm}

If the user, $U_{n}$, has a score above a chosen threshold, the user is considered trusted and this user's tweets do not need to be analyzed.
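Algorithm~\ref{alg1} translates directly into Python. The accessor functions below are hypothetical stand-ins for lookups against the collected dataset:

```python
def calc_trust_score(user, get_followers, is_verified, num_followings):
    """Compute the trust score of Algorithm 1.

    get_followers, is_verified and num_followings are injected
    accessors over the crawled data, so the computation can run
    against any stored dataset.
    """
    score = 0.0
    for follower in get_followers(user):
        if is_verified(follower):
            # A verified follower contributes trust inversely
            # proportional to how many accounts it follows.
            score += 1.0 / num_followings(follower)
    return score
```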

\subsection{Analyzing URLs}
Once all of the URLs have been extracted from the tweets, they are processed in a two-phase analysis step. The first phase is a fast analyzer that examines only the URL itself: it uses a machine learning technique to determine whether the URL is malicious based on features of the URL, following the approach of Justin Ma et al.~\cite{ref}. The features include the URL's length, the number of dots it contains, WHOIS information and others. Because this phase has a high false positive rate, it is supplemented by a second analysis using Google Safe Browsing, which checks URLs against a blacklist built from Google's web indexer. If a URL is flagged as malicious by both the machine learning algorithm and Google Safe Browsing, the URL and the user who tweeted it are considered malicious, and such users are saved.
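The lexical part of the phase-one feature extraction can be sketched as follows (the exact feature set is illustrative; host-based features such as WHOIS data would require external lookups and are omitted):

```python
from urllib.parse import urlparse

def lexical_features(url):
    """Extract lexical features of the kind used by the fast,
    URL-only first phase of the analyzer."""
    parsed = urlparse(url)
    return {
        "length": len(url),                       # overall URL length
        "num_dots": url.count("."),               # number of '.' characters
        "host_length": len(parsed.netloc),        # length of the hostname
        "path_depth": parsed.path.count("/"),     # directory depth
        # A bare numeric IP as the host is a common phishing signal.
        "has_ip_host": parsed.netloc.replace(".", "").isdigit(),
    }
```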

%-------------------------------------------------------------------------
\section{Evaluation}


%-------------------------------------------------------------------------
\section{Conclusion}
In this paper, we have proposed TwitterCop, a lightweight technique for detecting spammers on Twitter who post links to malicious web pages. We described a crawler for collecting user profiles and tweets, a trust-score filter that excludes trustable users from analysis, and a two-phase URL analysis pipeline that combines a fast machine-learning classifier over URL features with Google Safe Browsing. As future work, we plan to evaluate the detector on our collected dataset of over 500,000 users.



\section*{Acknowledgment}

We thank Mahesh Sabbavarapu and Radu Stoleru for helpful discussions on an early version of this paper.

%-------------------------------------------------------------------------

\bibliography{twitter}
\bibliographystyle{abbrv}



\end{document}

