{
"paper_id": "I13-1014",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:14:33.871484Z"
},
"title": "Detecting Spammers in Community Question Answering",
"authors": [
{
"first": "Zhuoye",
"middle": [],
"last": "Ding",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fudan University",
"location": {
"addrLine": "12110240006,zhouyaqian,qz"
}
},
"email": ""
},
{
"first": "Yeyun",
"middle": [],
"last": "Gong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fudan University",
"location": {
"addrLine": "12110240006,zhouyaqian,qz"
}
},
"email": ""
},
{
"first": "Yaqian",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fudan University",
"location": {
"addrLine": "12110240006,zhouyaqian,qz"
}
},
"email": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fudan University",
"location": {
"addrLine": "12110240006,zhouyaqian,qz"
}
},
"email": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fudan University",
"location": {
"addrLine": "12110240006,zhouyaqian,qz"
}
},
"email": "xjhuang@fudan.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "As the popularity of Community Question Answering(CQA) increases, spamming activities also picked up in numbers and variety. On CQA sites, spammers often pretend to ask questions, and select answers which were published by their partners or themselves as the best answers. These fake best answers cannot be easily detected by neither existing methods nor common users. In this paper, we address the issue of detecting spammers on CQA sites. We formulate the task as an optimization problem. Social information is incorporated by adding graph regularization constraints to the text-based predictor. To evaluate the proposed approach, we crawled a data set from a CQA portal. Experimental results demonstrate that the proposed method can achieve better performance than some state-of-the-art methods.",
"pdf_parse": {
"paper_id": "I13-1014",
"_pdf_hash": "",
"abstract": [
{
"text": "As the popularity of Community Question Answering(CQA) increases, spamming activities also picked up in numbers and variety. On CQA sites, spammers often pretend to ask questions, and select answers which were published by their partners or themselves as the best answers. These fake best answers cannot be easily detected by neither existing methods nor common users. In this paper, we address the issue of detecting spammers on CQA sites. We formulate the task as an optimization problem. Social information is incorporated by adding graph regularization constraints to the text-based predictor. To evaluate the proposed approach, we crawled a data set from a CQA portal. Experimental results demonstrate that the proposed method can achieve better performance than some state-of-the-art methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Due to the massive growth of Web 2.0 technologies, user-generated content has become a primary source of various types of content. Community Question Answering (CQA) services have also attracted continuously growing interest. They allow users to submit questions and answer questions asked by other users. A huge number of users contributed enormous questions and answers on popular CQA sites such as Yahoo! Answers 1 , Baidu Zhidao 2 , Facebook Questions 3 , and so on. According to a statistic from Yahoo, Yahoo! Answers receives more than 0.82 million questions and answers per day 4 .",
"cite_spans": [
{
"start": 585,
"end": 586,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On CQA sites, users are primary contributors of content. The volunteer-driven mechanism brings many positive effects, including the rapid growth in size, great user experience, immediate response, and so on. However, the open access and reliance on users have also made these systems becoming targets of spammers. They post advertisements or other irrelevant answers aiming at spreading advertise or achieving other goals. Some spammers directly publish content to answer questions asked by common users. Additionally, another kind of spammers (we refer them as \"best answer spammers\") create multiple user accounts, and use some accounts to ask a question, the others to provide answers which are selected as the best answers by themselves. They deliberately organize themselves in order to deceive readers. This kind of spammers are even more hazardous, since they are neither easily ignored nor identifiable by a human reader. Google Confucius CQA system also reported that best answer spammers may generate amounts of fake best answers, which could have a non-trivial impact on the quality of machine learning model (Si et al., 2010) .",
"cite_spans": [
{
"start": 1120,
"end": 1137,
"text": "(Si et al., 2010)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With the increasing requirements, spammer detection has received considerable attentions, including e-mails(L. Gomes et al., 2007; C.Wu et al., 2005) , web spammer (Cheng et al., 2011) , review spammer (Lim et al., 2010; N.Jindal and B.Liu, 2008; ott et al., 2011) , social media spammer (Zhu et al., 2012; Bosma et al., 2012; Wang, 2010) . However, little work has been done about spammers on CQA sites. Filling this need is a challenging task. The existing approaches of spam detection can be roughly into two directions. The first direction usually relied on costly human-labeled training data for building spam classifiers based on textual features (Y. Liu et al., 2008; Y.Xie et al., 2008; Ntoulas et al., 2006; Gyongyi and Molina, 2004) . However, since fake best answers are well designed and lack of easily identifiable textual patterns, text-based methods cannot achieve satisfactory performance. Another direction relied solely on hyperlink graph in the web (Z. Gyongyi et al., 2004; Krishnan and Raj, 2006; Benczur et al., 2005) . Although making good use of link information, link-based methods neglect the contentbased information. Moreover, unlike the web, there is no explicit link structure on CQA sites. So two intuitive research questions are: (1) Is there any useful link-based structure for spammer detection in CQA? (2) If so, can the two techniques, i.e., content-based model and link-based model, be integrated together to complement each other for CQA spammer detection?",
"cite_spans": [
{
"start": 111,
"end": 130,
"text": "Gomes et al., 2007;",
"ref_id": null
},
{
"start": 131,
"end": 149,
"text": "C.Wu et al., 2005)",
"ref_id": "BIBREF6"
},
{
"start": 164,
"end": 184,
"text": "(Cheng et al., 2011)",
"ref_id": "BIBREF4"
},
{
"start": 202,
"end": 220,
"text": "(Lim et al., 2010;",
"ref_id": "BIBREF14"
},
{
"start": 221,
"end": 246,
"text": "N.Jindal and B.Liu, 2008;",
"ref_id": "BIBREF16"
},
{
"start": 247,
"end": 264,
"text": "ott et al., 2011)",
"ref_id": "BIBREF18"
},
{
"start": 288,
"end": 306,
"text": "(Zhu et al., 2012;",
"ref_id": "BIBREF26"
},
{
"start": 307,
"end": 326,
"text": "Bosma et al., 2012;",
"ref_id": "BIBREF2"
},
{
"start": 327,
"end": 338,
"text": "Wang, 2010)",
"ref_id": "BIBREF21"
},
{
"start": 657,
"end": 674,
"text": "Liu et al., 2008;",
"ref_id": null
},
{
"start": 675,
"end": 694,
"text": "Y.Xie et al., 2008;",
"ref_id": "BIBREF23"
},
{
"start": 695,
"end": 716,
"text": "Ntoulas et al., 2006;",
"ref_id": "BIBREF17"
},
{
"start": 717,
"end": 742,
"text": "Gyongyi and Molina, 2004)",
"ref_id": "BIBREF9"
},
{
"start": 972,
"end": 993,
"text": "Gyongyi et al., 2004;",
"ref_id": "BIBREF9"
},
{
"start": 994,
"end": 1017,
"text": "Krishnan and Raj, 2006;",
"ref_id": "BIBREF11"
},
{
"start": 1018,
"end": 1039,
"text": "Benczur et al., 2005)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address the problems, in this paper, we first investigate the link-based structure in CQA. Then we formulate the task as an optimization problem in the graph with an efficient solution. We learn a content-based predictor as an objective function. The link-based information is incorporated into textual predictor by the way of graph regularization. Finally, to evaluate the proposed approach, we crawled a large data set from a commercial CQA site. Experimental results demonstrate that our proposed method can improve the accuracy of spammer detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The major contributions of this work can be summarized as follows: (1) To the best of our knowledge, our work is the first study on spammer detection on CQA sites; (2) Our proposed optimization model can integrate the advantages of both content-based model and link-based model for CQA spammer detection. (3) Experimental results demonstrate that our method can improve accuracy of spammer detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remaining of the paper is organized as follows: In section 2, we review a number of the state-of-the-art approaches in related areas. Section 3 analyzes the social network of CQA sites. Section 4 presents the proposed method. Experimental results in test collections and analysis are shown in section 5. Section 6 concludes this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most of current studies on spam detection can be roughly divided into two categories: contentbased model and link-based model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Content-based method targets at extracting ev-idences from textual descriptions of the content, treating the text corpus as a set of objects with associated attributes, and applying some classification methods to detect spam(P. Heymann et al., 2007; C.Castillo et al., 2007; Y.Liu et al., 2008; Y.Xie et al., 2008) . Fetterly proposed quite a few statistical properties of web pages that could be used to detect content spam(D. Fetterly et al., 2004) . Benevenuto went a step further by addressing the issue of detecting video spammers and promoters and applied the state-of-the-arts supervised classification algorithm to detect spammers and promoters (Benevenuto et al., 2009) . Lee proposed and evaluated a honeypot-based approach for uncovering social spammers in online social systems (Lee et al., 2010) . Wang proposed to improve spam classification on a microblogging platform (Wang, 2010) . An alternative web spam detection technique relies on link analysis algorithms, since a hyperlink often reflects some degree of similarity among pages (Gyngyi and Garcia-Molina, 2005; Gyongyi et al., 2006; Zhou et al., 2008) . Corresponding algorithms include TrustRank(Z. Gyongyi et al., 2004) and AntiTrustRank (Krishnan and Raj, 2006) , which used a seed set of Web pages with labels of trustiness or badness and propagate these labels through the link graph. Moreover, Benczur developed an algorithm called SpamRank which penalized suspicious pages when computing PageRank (Benczur et al., 2005) .",
"cite_spans": [
{
"start": 228,
"end": 249,
"text": "Heymann et al., 2007;",
"ref_id": null
},
{
"start": 250,
"end": 274,
"text": "C.Castillo et al., 2007;",
"ref_id": "BIBREF3"
},
{
"start": 275,
"end": 294,
"text": "Y.Liu et al., 2008;",
"ref_id": "BIBREF22"
},
{
"start": 295,
"end": 314,
"text": "Y.Xie et al., 2008)",
"ref_id": "BIBREF23"
},
{
"start": 428,
"end": 450,
"text": "Fetterly et al., 2004)",
"ref_id": null
},
{
"start": 653,
"end": 678,
"text": "(Benevenuto et al., 2009)",
"ref_id": "BIBREF1"
},
{
"start": 790,
"end": 808,
"text": "(Lee et al., 2010)",
"ref_id": "BIBREF12"
},
{
"start": 884,
"end": 896,
"text": "(Wang, 2010)",
"ref_id": "BIBREF21"
},
{
"start": 1050,
"end": 1082,
"text": "(Gyngyi and Garcia-Molina, 2005;",
"ref_id": "BIBREF8"
},
{
"start": 1083,
"end": 1104,
"text": "Gyongyi et al., 2006;",
"ref_id": "BIBREF10"
},
{
"start": 1105,
"end": 1123,
"text": "Zhou et al., 2008)",
"ref_id": "BIBREF25"
},
{
"start": 1172,
"end": 1193,
"text": "Gyongyi et al., 2004)",
"ref_id": "BIBREF9"
},
{
"start": 1212,
"end": 1236,
"text": "(Krishnan and Raj, 2006)",
"ref_id": "BIBREF11"
},
{
"start": 1476,
"end": 1498,
"text": "(Benczur et al., 2005)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Before analyzing the social network in CQA, we introduce some definitions. We refer users on C-QA sites are someone who ask at least one question or answer at least one question. Moreover, users are divided into two categories: spammers and legitimate users. We define spammers as users who post at least one question or one answer intent to create spam.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis on Social Network",
"sec_num": "3"
},
{
"text": "A CQA site is particularly rich in user interactions. These interactions can be represented by Figure 1(a) , where a particular question has a number of answers associated with it, represented by an edge from the question to each of the answer. We also include vertices representing authors of question or answers. An edge from a user to a question means that the user asked the question, and an edge from an answer to a user means that the answer was posted by this user. In the example, a user U 1 asks a question Q 1 , while users U 4 , U 5 and U 6 answers this question. In order to observe the relation between users more clearly and directly, we summarize the relations between users as a graph shown in Figure 1 (b). This graph contains vertices representing the users and omits the actual questions and answers that connect the users. Question-answer relation:",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 106,
"text": "Figure 1(a)",
"ref_id": null
},
{
"start": 710,
"end": 718,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis on Social Network",
"sec_num": "3"
},
{
"text": "As shown in Fig- ure 2(a), U 4 answers U 1 's question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis on Social Network",
"sec_num": "3"
},
{
"text": "We define that U 4 and U 1 have Question-answer relation. Furthermore, Question-answer relation can be divided into two disjoint sets: best-answer relation and non-best-answer relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis on Social Network",
"sec_num": "3"
},
{
"text": "Best-answer relation: U 1 selects U 5 's answer as the best answer. We define that U 1 and U 5 have best-answer relation. The solid lines in Figure 2 (b) express the best-answer relation.",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 149,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis on Social Network",
"sec_num": "3"
},
{
"text": "Non-best-answer relation: U 1 does not select U 4 's answer as the best answer. We define that U 1 and U 4 have non-best-answer relation. The dashed lines in Figure 2 (c) express the non-best-answer relation.",
"cite_spans": [],
"ref_spans": [
{
"start": 158,
"end": 166,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis on Social Network",
"sec_num": "3"
},
{
"text": "From analyzing data crawled from CQA site, we present the following property about best-answer relation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Best-answer Consistency Property",
"sec_num": "3.1"
},
{
"text": "Best-answer consistency property: If U i selects U j 's answer as the best answer, the classes of users U i and U j should be similar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Best-answer Consistency Property",
"sec_num": "3.1"
},
{
"text": "We explain this property as follows: consider that a legitimate user is unlikely to select a spammer's answer as the best answer due to its low quality, while a legitimate user is unlikely to answer a spammer's question, so the possibility of a spammer selecting a legitimate user's answer will also be small. This means that two users linked via best-answer relation are more likely to share similar property than two random users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Best-answer Consistency Property",
"sec_num": "3.1"
},
{
"text": "Different from the general spammers, some spammers generate many fake best answers to obtain higher status in the community. We refer them as best answer spammers. In order to generate fake best answers, a spammer creates multiple user accounts first. Then, it uses some of the accounts to ask questions, and others to provide answers. Such spammers may post low quality answers to their own questions, and select those as the best by themselves. They may generate lots of fake best answers, which may highly impact the user experience.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristics of Best Answer Spammer",
"sec_num": "3.2"
},
{
"text": "Furthermore, when the spammer's intention is just advertising, we can easily identify signs of its activity: repeated phone numbers or URLs and then ignore them. However, when the spammer's intention is to obtain higher reputation within the community, the spam content may lack obvious patterns. Fortunately, there are still some clues that may help identify best answer spammers. Two characteristics are described as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristics of Best Answer Spammer",
"sec_num": "3.2"
},
{
"text": "High best answer rate: Best answer rate is the ratio of answers selected as the best answer among the total answers. This kind of spammers have an incredible high best answer rate, compared to normal users. Specifically, in a possible best answer spammer pair, sometimes only one user has an incredible high best answer rate. Because normally one responses for asking and another for answering. So we calculate the best answer rate BR(i, j) for a user pair (u i , u j ) based on the maximum of their best answer rates:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristics of Best Answer Spammer",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "BR(i, j) = M ax(BR(i), BR(j))",
"eq_num": "(1)"
}
],
"section": "Characteristics of Best Answer Spammer",
"sec_num": "3.2"
},
{
"text": "Where BR(i) is the best answer rate of u i . Time margin score: To be efficient, best answer spammers tend to answer their own ques-tion quickly. We consider the time margin score T ime(i, j) between a question posted and answered for u i and u j as an evidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristics of Best Answer Spammer",
"sec_num": "3.2"
},
{
"text": "T ime(i, j) = { 1, if T imeM argin(i, j) < \u03b5 0, otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristics of Best Answer Spammer",
"sec_num": "3.2"
},
{
"text": "(2) where T imeM argin(i, j) is the real time margin between u i asks a question and u j answers this question and \u03b5 = 30 minutes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristics of Best Answer Spammer",
"sec_num": "3.2"
},
{
"text": "The best answer spammer score s(i, j) for a user pair (u i , u j ) can be calculated as the combination of these two scores:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristics of Best Answer Spammer",
"sec_num": "3.2"
},
{
"text": "s(i, j) = \u00b5BR(i, j) + (1 \u2212 \u00b5)T ime(i, j) (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristics of Best Answer Spammer",
"sec_num": "3.2"
},
{
"text": "\u00b5 is trade-off of two scores, here we simply set \u00b5 = 0.5. The value of s(i, j) is between 0 to 1. The higher s(i, j) is, the more likely u i and u j is a pair of the best answer spammers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristics of Best Answer Spammer",
"sec_num": "3.2"
},
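As a concrete illustration of Equations 1-3, the following is a minimal sketch (not the authors' code) of how the pair score s(i, j) could be computed. The data structures and example values are hypothetical; the 30-minute threshold and µ = 0.5 follow the paper.

```python
# Minimal sketch of the best-answer-spammer pair score (Equations 1-3).
# The dictionaries below are hypothetical stand-ins for crawled user data.
best_answer_rate = {"u1": 0.95, "u2": 0.10}    # BR(i): best answers / total answers
time_margin_minutes = {("u1", "u2"): 5.0}      # minutes between question and answer

EPSILON = 30.0  # threshold from the paper: 30 minutes
MU = 0.5        # trade-off between the two scores

def time_score(i, j):
    """Time(i, j) = 1 if the question was answered within EPSILON minutes (Eq. 2)."""
    margin = time_margin_minutes.get((i, j), float("inf"))
    return 1.0 if margin < EPSILON else 0.0

def pair_score(i, j):
    """s(i, j) = mu * BR(i, j) + (1 - mu) * Time(i, j)   (Eqs. 1 and 3)."""
    br = max(best_answer_rate[i], best_answer_rate[j])  # BR(i, j), Eq. 1
    return MU * br + (1 - MU) * time_score(i, j)

print(pair_score("u1", "u2"))  # 0.975: a likely best-answer-spammer pair
```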
{
"text": "In this section, the framework of our proposed approach is presented. First, the problem is formally defined. Next, we build a baseline supervised predictor that makes use of a variety of textual features, and then the consistency property and best answer spammer characteristics are incorporated by adding regularization to the textual predictor, last we discuss how to effectively optimize it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spammer Detection on CQA Sites",
"sec_num": "4"
},
{
"text": "On CQA sites, there are three distinct types of entities:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "4.1"
},
{
"text": "users U = {u 1 , ...u l+u }, answers A = {a 1 , ...a M }, and questions Q = {q 1 , ...q N }. The set of users U contains both U L = {u 1 , ...u l } of l labeled users and U U = {u l+1 , ...u l+u } of u un- labeled users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "4.1"
},
{
"text": "We model the social network for U as a directed graph G = (U, E) with adjacency matrix A, where A ij = 1 if there is a link or edge from u i to u j and zero otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "4.1"
},
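For concreteness, here is a minimal sketch of assembling the adjacency matrix A of the best-answer graph; the user list and best-answer pairs are hypothetical stand-ins for crawled data.

```python
import numpy as np

users = ["u1", "u2", "u3", "u4"]
best_answer_pairs = [("u1", "u2"), ("u3", "u2")]  # (asker, answerer whose answer was chosen)

index = {u: k for k, u in enumerate(users)}
A = np.zeros((len(users), len(users)))
for asker, answerer in best_answer_pairs:
    # Following Section 4.3: A_ij = 1 if u_j selects u_i's answer as the best answer.
    A[index[answerer], index[asker]] = 1.0
```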
{
"text": "Given the input data {U L , U U , G, Q, A}, we want to learn a predictor c for a user",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "4.1"
},
{
"text": "u i . c(u i )\u2212 > {spammer, legitimate user} (4) Legitimacy score y i (0 \u2264 y i \u2264 1,i =1,2,...n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "4.1"
},
{
"text": "is computed for all the users. The lower y i is, the more likely u i is a spammer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "4.1"
},
{
"text": "In this subsection, we build a baseline predictor based on textual features in a supervised fashion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text-based Spammer Prediction",
"sec_num": "4.2"
},
{
"text": "We regard the legitimacy scores as generated by combining textual features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text-based Spammer Prediction",
"sec_num": "4.2"
},
{
"text": "We consider the following textual features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text-based Spammer Prediction",
"sec_num": "4.2"
},
{
"text": "\u2022 The Length of answers: The length may to some extent indicate the quality of the answer. The average length of answers is calculated as a feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text-based Spammer Prediction",
"sec_num": "4.2"
},
{
"text": "\u2022 The ratio of Ads words in answers: Advertising of products is the main goal of a kind of spammers and they repeat some advertisement words in their answers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text-based Spammer Prediction",
"sec_num": "4.2"
},
{
"text": "\u2022 The ratio of Ads words in questions: Some spammers will refer some Ads in questions in order to get attention from more users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text-based Spammer Prediction",
"sec_num": "4.2"
},
{
"text": "\u2022 The number of received answers: The number of received answers can indicate the quality of the question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text-based Spammer Prediction",
"sec_num": "4.2"
},
{
"text": "\u2022 Best answer rate: Best answer rate can show the quality of their answers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text-based Spammer Prediction",
"sec_num": "4.2"
},
{
"text": "\u2022 The number of answers: It can indicate the authority of a user.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text-based Spammer Prediction",
"sec_num": "4.2"
},
{
"text": "\u2022 Relevance of question and answer: We measure the average content similarity over a pair of question and answer which is computed using the standard cosine similarity over the bag-of-words vector representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text-based Spammer Prediction",
"sec_num": "4.2"
},
{
"text": "\u2022 Duplication of answers: The Jaccard similarity of answers are applied to indicate the duplication of answers .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text-based Spammer Prediction",
"sec_num": "4.2"
},
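The sketch below illustrates how the feature vector x_i of a user could be extracted. It is a minimal illustration under assumed data structures: the ad-word lexicon, the record layout, and the precomputed question-answer cosine are hypothetical, not the authors' implementation.

```python
import re

AD_WORDS = {"discount", "buy", "www"}  # hypothetical ad-word lexicon

def ratio_ad_words(texts):
    """Fraction of tokens that are ad words."""
    words = [w for t in texts for w in re.findall(r"\w+", t.lower())]
    return sum(w in AD_WORDS for w in words) / max(len(words), 1)

def jaccard(a, b):
    """Jaccard similarity of two answers' word sets (duplication signal)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(len(sa | sb), 1)

def user_features(user):
    answers, questions = user["answers"], user["questions"]
    return [
        sum(len(a) for a in answers) / max(len(answers), 1),  # average answer length
        ratio_ad_words(answers),                              # ad-word ratio in answers
        ratio_ad_words(questions),                            # ad-word ratio in questions
        user["received_answers"] / max(len(questions), 1),    # answers received per question
        user["best_answers"] / max(len(answers), 1),          # best answer rate
        len(answers),                                         # number of answers
        user["avg_qa_cosine"],                                # Q-A relevance (precomputed cosine)
        max((jaccard(answers[p], answers[q])                  # duplication of answers
             for p in range(len(answers))
             for q in range(p + 1, len(answers))), default=0.0),
    ]
```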
{
"text": "With these features, suppose there are in total k features for each user u i , denoted as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text-based Spammer Prediction",
"sec_num": "4.2"
},
{
"text": "x i . Then X = (x 1 , x 2 , ...x n )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text-based Spammer Prediction",
"sec_num": "4.2"
},
{
"text": "is the k-by-n feature matrix of all users. Based on these features, we define the legitimacy score of each user as follows,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text-based Spammer Prediction",
"sec_num": "4.2"
},
{
"text": "y i = w T x i (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text-based Spammer Prediction",
"sec_num": "4.2"
},
{
"text": "where w is a k-dimensional weight vector. Suppose we have legitimate/spammer labels t i in the training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text-based Spammer Prediction",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t i = { 1, u i is labeled as legitimate user 0, u i is labeled as spammer",
"eq_num": "(6)"
}
],
"section": "Text-based Spammer Prediction",
"sec_num": "4.2"
},
{
"text": "We will then define the loss term as follows,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text-based Spammer Prediction",
"sec_num": "4.2"
},
{
"text": "\u2126(w) = 1 l l \u2211 i=1 (w T x i \u2212 t i ) 2 + \u03b1w T w (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text-based Spammer Prediction",
"sec_num": "4.2"
},
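A minimal NumPy sketch of this loss term, assuming the k-by-l matrix of labeled feature vectors and the label vector are already built:

```python
import numpy as np

def supervised_loss(w, X_l, t, alpha):
    """Omega(w) = (1/l) * sum_i (w^T x_i - t_i)^2 + alpha * w^T w   (Eq. 7)."""
    residual = X_l.T @ w - t   # w^T x_i - t_i for each of the l labeled users
    return residual @ residual / len(t) + alpha * (w @ w)
```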
{
"text": "Once we have learned the weight vector w, we can apply it to any user feature vector and predict the class of unlabeled users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text-based Spammer Prediction",
"sec_num": "4.2"
},
{
"text": "In Section 4.2, each user is considered as a standalone item. In this subsection, we exploit social information to improve CQA spammer detection. In Section 3.1, the consistency property has been analyzed that users connected via bestanswer relation are more similar in property. So the property is enforced by adding a regularization term into the optimization model. The regularization is acted in a collection data set, including a small amount of labeled data(l users) and a large amount of unlabeled data(u users). Then the regularization term is formulated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization for Consistency Property",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "REG 1 (U ) = l+u \u2211 i,j A ij (y i \u2212 y j ) 2",
"eq_num": "(8)"
}
],
"section": "Regularization for Consistency Property",
"sec_num": "4.3"
},
{
"text": "Minimizing the regularization constraint will force users who have best-answer relation belong to the same class. We formulate this as graph regularization. The graph adjacency matrix A is defined as A ij = 1 if u j selects u i 's answer as the best answer, and zero otherwise. Then, Equation 8 becomes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization for Consistency Property",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "REG 1 (w) = l+u \u2211 i,j A ij (w T x i \u2212 w T x j ) 2",
"eq_num": "(9)"
}
],
"section": "Regularization for Consistency Property",
"sec_num": "4.3"
},
{
"text": "With this regularization, then the objective function Equation 7 becomes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization for Consistency Property",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2126 1 (w) = 1 l l \u2211 i=1 (w T x i \u2212 t i ) 2 + \u03b1w T w +\u03b2 l+u \u2211 i,j A ij (w T x i \u2212 w T x j ) 2",
"eq_num": "(10)"
}
],
"section": "Regularization for Consistency Property",
"sec_num": "4.3"
},
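Continuing the sketch, Equation 10 adds the consistency regularizer to the supervised loss; this is an illustrative reimplementation under the same assumptions, not the authors' code.

```python
import numpy as np

def objective_with_consistency(w, X, t, A, l, alpha, beta):
    """Omega_1(w): supervised loss plus the best-answer consistency term (Eq. 10)."""
    y = X.T @ w                          # legitimacy scores of all l+u users
    loss = np.sum((y[:l] - t) ** 2) / l + alpha * (w @ w)
    i, j = np.nonzero(A)                 # edges of the best-answer graph (A_ij = 1)
    reg1 = np.sum((y[i] - y[j]) ** 2)    # REG_1(w), Eq. 9
    return loss + beta * reg1
```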
{
"text": "In this subsection, we focus on best answer spammers. Since they cannot be easily detected by only textual features(Equation 7), we introduce an additional penalty score b i to each user u i which indicates the possibility of becoming a best answer spammer. With the penalty score b i , Equation 5 can be redefined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization for Best Answer Spammer",
"sec_num": "4.4"
},
{
"text": "y i = w T x i \u2212 b i (11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization for Best Answer Spammer",
"sec_num": "4.4"
},
{
"text": "where b i is a non-negative score. In order to obtain b i , characteristics of best answer spammers are incorporated by adding graph regularization to the optimization problem. The regularization is also acted in a collection data set. Two kinds of regularization are presented as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization for Best Answer Spammer",
"sec_num": "4.4"
},
{
"text": "Penalty for Best Answer Spammers in Pairs As described in Section 3.2, the score s(i, j) indicates the possibility of u i and u j becoming a pair of best answer spammers(Equation 3). We expect u i and u j , who create the spam together, should share this possibility together, as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization for Best Answer Spammer",
"sec_num": "4.4"
},
{
"text": "b i + b j = e \u00d7 s(i, j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization for Best Answer Spammer",
"sec_num": "4.4"
},
{
"text": ", where e is a penalty factor, we empirically set it to 0.5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization for Best Answer Spammer",
"sec_num": "4.4"
},
{
"text": "Then we can also formulate this as graph regularization as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization for Best Answer Spammer",
"sec_num": "4.4"
},
{
"text": "REG 2 (b) = l+u \u2211 i<j A ij (b i + b j \u2212 e \u00d7 s(i, j)) 2 (12)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization for Best Answer Spammer",
"sec_num": "4.4"
},
{
"text": "Penalty Assignment for Individual User After introducing a penalty score to the user pair (u i , u j ) , we have to decide how they share this penalty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization for Best Answer Spammer",
"sec_num": "4.4"
},
{
"text": "Penalty is assigned to u i and u j similarly. This can be also formulated as graph regularization as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization for Best Answer Spammer",
"sec_num": "4.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "REG 3 (b) = l+u \u2211 i<j A ij (b i \u2212 b j ) 2",
"eq_num": "(13)"
}
],
"section": "Regularization for Best Answer Spammer",
"sec_num": "4.4"
},
{
"text": "With the regularization for best answer spammer, the objective function becomes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization for Best Answer Spammer",
"sec_num": "4.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2126 3 (w, b) = 1 l l \u2211 i=1 (w T x i \u2212 b i \u2212 t i ) 2 + \u03b1w T w +\u03b2 l+u \u2211 i,j A ij ((w T x i \u2212 b i ) \u2212 (w T x j \u2212 b j )) 2 +\u03b3 l+u \u2211 i<j A ij (b i + b j \u2212 e \u00d7 s(i, j)) 2 +\u03b4 l+u \u2211 i<j A ij (b i \u2212 b j ) 2",
"eq_num": "(14)"
}
],
"section": "Regularization for Best Answer Spammer",
"sec_num": "4.4"
},
{
"text": "By considering all the components of the objective function introduced in the previous subsection, we can obtain the optimization problem. Our goal is to minimize the objective function to get optimal parameters vector w * and penalty vector b. For solving the optimization problem, we apply a kind of limited-memory Quasi-Newton(LBFGS) (Liu and Nocedal, 1989) . After obtaining the optimal parameter vector w * and b, we can use the following scoring function y i = w * T x i \u2212 b i to calculate scores for unlabeled users. Users with low scores will be regarded as spammers.",
"cite_spans": [
{
"start": 337,
"end": 360,
"text": "(Liu and Nocedal, 1989)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization Problem",
"sec_num": "4.5"
},
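Putting the pieces together, below is a minimal end-to-end sketch of minimizing Equation 14 with SciPy's L-BFGS-B, a stand-in for the limited-memory quasi-Newton solver the paper cites. The toy data, dimensions, and edge set are hypothetical; α = 0.0005, β = 0.5, γ = δ = 1 and e = 0.5 follow Section 5.2, and the bounds encode the non-negativity of b.

```python
import numpy as np
from scipy.optimize import minimize

def omega3(params, X, t, A, S, l, alpha, beta, gamma, delta, e=0.5):
    """Full objective of Eq. 14 over the stacked vector params = [w; b]."""
    k = X.shape[0]
    w, b = params[:k], params[k:]
    y = X.T @ w - b                                    # y_i = w^T x_i - b_i (Eq. 11)
    obj = np.sum((y[:l] - t) ** 2) / l + alpha * (w @ w)
    i, j = np.nonzero(A)
    obj += beta * np.sum((y[i] - y[j]) ** 2)           # consistency term (Eq. 9)
    iu, ju = i[i < j], j[i < j]                        # each edge counted once (i < j)
    obj += gamma * np.sum((b[iu] + b[ju] - e * S[iu, ju]) ** 2)  # pair penalty (Eq. 12)
    obj += delta * np.sum((b[iu] - b[ju]) ** 2)        # penalty sharing (Eq. 13)
    return obj

# Hypothetical toy instance: 5 users, 3 features, the first 2 users labeled.
rng = np.random.default_rng(0)
X = rng.random((3, 5))
t = np.array([1.0, 0.0])
l = 2
A = np.zeros((5, 5)); A[0, 2] = A[2, 0] = 1.0          # one best-answer pair
S = np.zeros((5, 5)); S[0, 2] = S[2, 0] = 0.9          # its pair score s(i, j)

x0 = np.concatenate([rng.random(3), np.zeros(5)])      # initial w random in [0, 1), b = 0
bounds = [(None, None)] * 3 + [(0.0, None)] * 5        # b_i is non-negative
res = minimize(omega3, x0, args=(X, t, A, S, l, 0.0005, 0.5, 1.0, 1.0),
               method="L-BFGS-B", bounds=bounds)
w_star, b_star = res.x[:3], res.x[3:]
scores = X.T @ w_star - b_star                         # low score => likely spammer
```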
{
"text": "In this section, the experimental evaluation of our approach is presented. Firstly, we introduce the details of our data sets. Then the prediction performance of our proposed approach is compared with other methods. Finally, we test the contribution of the loss term and each regularization term on these real data sets and conduct some further analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "In order to evaluate our proposed approach to detect CQA spammers from the CQA site, we need a training/test collection of users, classified into the target categories. However, to the best of our knowledge, no such collection is currently available, thus requiring us to build one. We consider a CQA user is a user if he has posted at least one question or one answer. Moreover, we define spammer as a user who intends to create one spam. Examples of spams are: (1) an advertisement of a product or web site. (2) Completely unrelated to the subject of question. A user that is not a spammer is considered legitimate. Then we will explain the strategy of crawling data from a CQA site, Baidu Zhidao, one of the most popular CQA site in China. We randomly select 50 seed users covering different topics, including sports, entertainment, medicine and technology. The crawler follows links of question asked and question answered, gathering information on different attributes of users, including content of all responded questions and answers. The crawler ran for one week, gathering 29,257 users and 299,815 Q&A pairs. From the collection data, we randomly select a training set of 1000 users for learning process and a test set of 698 users for evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collections",
"sec_num": "5.1"
},
{
"text": "Three annotators were asked to label the users as spammers or legitimate users in both training and test set. All of the judges are Chinese and have used Baidu Zhidao frequently. The annotators judge the property of a user comprehensively based on the content information (quality of their answers, i.e. advertising and duplication of answers) and social information (interaction with other possible-spammers). The Cohen's Kappa coefficient is around 0.85, showing fair to good agreement. And our test collection contains 698 users, including 525 legitimate users and 173 spammers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collections",
"sec_num": "5.1"
},
{
"text": "To measure the effectiveness of our proposed method, we use the standard metrics such as precision, recall, the F1 measure. Precision is the ratio of correctly predicted users among the total predicted users by system. Recall(R) is the ratio of correctly predicted users among the actual users manually assigned. F1 is a measure that trades off precision versus recall. F1 measure of the spammer class is 2P R/(P + R).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics and Settings",
"sec_num": "5.2"
},
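For clarity, the F1 computation used in the evaluation, with hypothetical counts:

```python
def f1_spammer(tp, fp, fn):
    """F1 of the spammer class: 2PR / (P + R), with P and R as defined above."""
    p = tp / (tp + fp)   # precision
    r = tp / (tp + fn)   # recall
    return 2 * p * r / (p + r)

# Hypothetical counts: 150 spammers caught, 20 false alarms, 23 spammers missed.
print(f1_spammer(150, 20, 23))  # ~0.87
```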
{
"text": "We fix the parameter \u03b1 in optimization method to 0.0005 which gives the best performance for the textual predictor and simply set the coefficients \u03b2 = 0.5 \u03b3 = \u03b4 = 1 in the objective function. The problem of parameter sensitivity will be tested in Section 5.6. In the optimization process, initial value of w i is set to a random value range from 0 to 1 and initial value of b i is set to 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics and Settings",
"sec_num": "5.2"
},
{
"text": "Since there has been little work on QA spam detection, we implement four state-of-the-art methods for comparison, where TrustRank and An-tiTrustRank are selected to represent link-based model, while Decision Tree and SVM are two content-based classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Other Methods",
"sec_num": "5.3"
},
{
"text": "\u2022 Our approach: Optimization with regularization terms that Similarity with best-answer relation, penalty for Best answer spammer. (Equation 14)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Other Methods",
"sec_num": "5.3"
},
{
"text": "\u2022 TrustRank: TrustRank is a well-known link-based method in Web spam detection, which is totally based on the Web link graph(Z. Gyongyi et al., 2004 ).",
"cite_spans": [
{
"start": 128,
"end": 148,
"text": "Gyongyi et al., 2004",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Other Methods",
"sec_num": "5.3"
},
{
"text": "\u2022 AntiTrustRank: AntiTrustRank is another well-known link-based method, which assumes that a web page pointing to spam pages is likely to be spam (Krishnan and Raj, 2006) .",
"cite_spans": [
{
"start": 146,
"end": 170,
"text": "(Krishnan and Raj, 2006)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Other Methods",
"sec_num": "5.3"
},
{
"text": "\u2022 Decision Tree: Castillo et al. applied a base classifier, decision tree, for spam detection, the features include content-based and linkbased features(C. Castillo et al., 2007) .",
"cite_spans": [
{
"start": 156,
"end": 178,
"text": "Castillo et al., 2007)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Other Methods",
"sec_num": "5.3"
},
{
"text": "\u2022 SVM: We applied another state-of-the-art classifier SVM (Cortes and Vapnik, 1995) . First, taking the advantages of both contentbased model and link-based model, our optimization approach outperforms baselines under all metrics. This indicates the robustness and effectiveness of our approach.",
"cite_spans": [
{
"start": 58,
"end": 83,
"text": "(Cortes and Vapnik, 1995)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Other Methods",
"sec_num": "5.3"
},
{
"text": "The second observation is link-based models(TrustRank and AntiTrustRank) cannot perform well. The explanations are as follows. (1)Linkbased models rely solely on hyperlinks, without considering content-based features. However, as described in section 4.2, the content can provide a strong hint for detecting spammers. (2)A technical requirement of link-based model is that the link graph must be strongly connected, which may be the case in Web, but it is not the case in QA user question-answer graph. We measured on our collection dataset and found that the graph density(defined as D =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Other Methods",
"sec_num": "5.3"
},
{
"text": "|V |(|V |\u22121) for a graph with vertices V and edges E) of user question-answer graph is only 10 \u22124 . The small connectivity limits the performance of link-based model. This indicates that link-based models cannot be directly applied to CQA spammer detection. Considering that our proposed approach can integrate contentbased features and link-based features effectively, we regard our approach as very complementary to the state-of-the-art link-based methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2|E|",
"sec_num": null
},
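A quick back-of-envelope check of the density formula; the edge count below is illustrative, since only the user count (29,257) and the density of roughly 10^{-4} are reported.

```python
def graph_density(num_vertices, num_edges):
    """D = 2|E| / (|V| * (|V| - 1)) for an undirected graph."""
    return 2 * num_edges / (num_vertices * (num_vertices - 1))

# With 29,257 crawled users, a density of 1e-4 corresponds to ~42,800 edges.
print(graph_density(29257, 42800))  # ~1.0e-4
```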
{
"text": "Another observation is that the content-based classifiers underperform our approach. And SVM performs slightly better than Decision Tree. This shows the advantages of our proposed regularization in section 4. Regularization for consistency can propagate the labeled information among users, and regularization for best answer spammers help to identify the best answer spammers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2|E|",
"sec_num": null
},
{
"text": "In this subsection, we validate the contribution of our proposed loss term and regularization terms by the performance of real spammer detection task. And Table 2 lists the results of each method for comparison. We consider the following methods.",
"cite_spans": [],
"ref_spans": [
{
"start": 155,
"end": 162,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Contribution of Loss and Regularization",
"sec_num": "5.4"
},
{
"text": "BL: Optimization using only content-based features. Equation 7REG:Sim: Optimization with one regularization term that Similarity with best-answer relation. From the results we have the following observations: (1) Our content-based classifier BL performs well, due to the well-formed supervised learning model and reasonable features. (2) The performance of REG:Sim improves over BL, especially in the Precision measure because the social information is useful. (3) REG:Sim+BAS can significantly improve over BL especially in Recall measure. Because after adding penalty to best answer spammer, some best answer spammers can be detected successful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contribution of Loss and Regularization",
"sec_num": "5.4"
},
{
"text": "In this subsection, we test the robustness of the features described in Section 4.2. To measure the discrimination power between spammers and legitimate users of each proposed attribute, we generate a Receiver Operating Characteristics (ROC)curve. ROC curves plot false positive rate on the X axis and true positive rate on the Y axis. The closer the ROC curve is to the upper left corner, the higher the overall accuracy is. Samples with the lowest scores (10%,20%...100%) for each attribute are labeled as spammers respectively. The (ROC) curve are shown in Figure 3 . Figure 3 shows the discrimination power of each content feature we described in Section 4.2. The first observation is that all of the content features are discriminative. The feature of Ads words in questions is the most powerful. Because few legitimate users will repeat Ads words in questions, so this feature can help to identify spammers more easily. Note that the feature of the best answer rate do not perform well. Because some best answer spammers also have high best answer rate.",
"cite_spans": [],
"ref_spans": [
{
"start": 560,
"end": 568,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 571,
"end": 579,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Contribution of Content-based Features",
"sec_num": "5.5"
},
{
"text": "Our optimization approach have four parameters \u03b1, \u03b2, \u03b3, \u03b4 to set: the tradeoff weight for each regularization term. The value of the regulariza-tion weight controls our importance in the regularizer: a higher value results in a higher penalty when violating the corresponding regularization. So we mainly evaluate the sensitivity of our model with parameters by fixing all the other parameters and let one of {\u03b1, \u03b2, \u03b3, \u03b4} varies. Figure 4 shows the prediction performance in F1 measure varying each parameter. As we observed over a large range of parameters, our approach (REG:Sim+BAS) achieves significantly better performance than BL method. It indicates that the parameters selection will not critically affect the performance of our optimization approach.",
"cite_spans": [],
"ref_spans": [
{
"start": 430,
"end": 438,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parameter Sensitivity",
"sec_num": "5.6"
},
{
"text": "In this paper, we first studied social networks on CQA sites. We found that spammers are usually connected to other spammers via the best-answer relation. We also studied the \"best answer spammers\" on CQA sites, which cannot be easily detected for lack of identifiable textual patterns. Our proposed model incorporated the link-based information by adding regularization constraints to the textual predictor. Experimental results demonstrated that our method is more effective for spammer detection compared to other state-of-the-art methods. Besides obtaining better performance, we have also analyzed the CQA social networks, which gives us insight on the model design.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "http://answers.yahoo.com 2 http://zhidao.baidu.com 3 http://www.facebook.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://yanswersblog.com/index.php/archives/2010/05/03/1billion-answers-served",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors wish to thank the anonymous reviewers for their helpful comments. This work was partially funded by National Natural Science Foundation of China (61003092, 61073069), National Major Science and Technology Special Project of China (2014ZX03006005), Shanghai Municipal Science and Technology Commission (No.12511504500) and \"Chen Guang\" project supported by Shanghai Municipal Education Commission and Shanghai Education Development Foundation(11CG05).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Spamrank-fully automatic link spam detection",
"authors": [
{
"first": "Andras",
"middle": [
"A."
],
"last": "Benczur",
"suffix": ""
},
{
"first": "Karoly",
"middle": [],
"last": "Csalogany",
"suffix": ""
},
{
"first": "Tamas",
"middle": [],
"last": "Sarlos",
"suffix": ""
},
{
"first": "Mate",
"middle": [],
"last": "Uher",
"suffix": ""
}
],
"year": 2005,
"venue": "AIRWeb'05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andras A. Benczur, Karoly Csalogany, Tamas Sarlos, and Mate Uher. 2005. Spamrank-fully automatic link spam detection. In AIRWeb'05.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Detecting spammers and content promoters in online video social networks",
"authors": [
{
"first": "Fabricio",
"middle": [],
"last": "Benevenuto",
"suffix": ""
},
{
"first": "Tiago",
"middle": [],
"last": "Rodrigues",
"suffix": ""
},
{
"first": "Virgilio",
"middle": [],
"last": "Almeida",
"suffix": ""
},
{
"first": "Jussara",
"middle": [],
"last": "Almeida",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Goncalves",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceeding of SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabricio Benevenuto, Tiago Rodrigues, Virgilio Almei- da, Jussara Almeida, and Marcos Goncalves. 2009. Detecting spammers and content promoters in online video social networks. In Proceeding of SIGIR.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A framework for unsupervised spam detection in social networking sites",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Bosma",
"suffix": ""
},
{
"first": "Edgar",
"middle": [],
"last": "Meij",
"suffix": ""
},
{
"first": "Wouter",
"middle": [],
"last": "Weerkamp",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of ECIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maarten Bosma, Edgar Meij, and Wouter Weerkamp. 2012. A framework for unsupervised spam detec- tion in social networking sites. In Proceedings of ECIR.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Know your neighbors: Web spam detection using the web topoloty",
"authors": [
{
"first": "C",
"middle": [],
"last": "Castillo",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Donato",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gionis",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Murdock",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Silvestri",
"suffix": ""
}
],
"year": 2007,
"venue": "Int'l ACM SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C.Castillo, D.Donato, A.Gionis, V.Murdock, and F.Silvestri. 2007. Know your neighbors: Web s- pam detection using the web topoloty. In Int'l ACM SIGIR.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Let web spammers expose themselves",
"authors": [
{
"first": "Zhicong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Congkai",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yanbing",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2011,
"venue": "WSDM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhicong Cheng, Bin Gao, Congkai Sun, Yanbing Jiang, and Tie-Yan Liu. 2011. Let web spammers expose themselves. In WSDM.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Supportvector networks",
"authors": [
{
"first": "Corinna",
"middle": [],
"last": "Cortes",
"suffix": ""
},
{
"first": "Vlandimir",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1995,
"venue": "Machine Learning",
"volume": "20",
"issue": "",
"pages": "273--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corinna Cortes and Vlandimir Vapnik. 1995. Support- vector networks. Machine Learning, 20:273-297.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Using visual features for anti-spam filtering",
"authors": [
{
"first": "C",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2005,
"venue": "IEEE Int'l Conference on Image Processing(ICIP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C.Wu, K.Cheng, Q.Zhu, and Y.Wu. 2005. Using vi- sual features for anti-spam filtering. In IEEE Int'l Conference on Image Processing(ICIP).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Spam,damn spam, and statistics: Using statistical analysis to locate spam web pages",
"authors": [
{
"first": "D",
"middle": [],
"last": "Fetterly",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Manasse",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Najork",
"suffix": ""
}
],
"year": 2004,
"venue": "Int'l Workshop on the Web and Databases(WebDB)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D.Fetterly, M.Manasse, and M.Najork. 2004. S- pam,damn spam, and statistics: Using statistical analysis to locate spam web pages. In Int'l Work- shop on the Web and Databases(WebDB).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Link spam alliances",
"authors": [
{
"first": "Zoltn",
"middle": [],
"last": "Gyngyi",
"suffix": ""
},
{
"first": "Hector",
"middle": [],
"last": "Garcia-Molina",
"suffix": ""
}
],
"year": 2005,
"venue": "VLDB",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zoltn Gyngyi and Hector Garcia-Molina. 2005. Link spam alliances. In VLDB.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Web spam taxonomy",
"authors": [
{
"first": "Zoltan",
"middle": [],
"last": "Gyongyi",
"suffix": ""
},
{
"first": "Hector",
"middle": [
"Garcia"
],
"last": "Molina",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zoltan Gyongyi and Hector Garcia Molina. 2004. Web spam taxonomy. Technical report, Stanford Digital Library Technologies Project.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Link spam detection based on mass estimation",
"authors": [
{
"first": "Zoltan",
"middle": [],
"last": "Gyongyi",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Berkhin",
"suffix": ""
},
{
"first": "Hector",
"middle": [],
"last": "Garcia-Molina",
"suffix": ""
},
{
"first": "Jan",
"middle": [
"O."
],
"last": "Pedersen",
"suffix": ""
}
],
"year": 2006,
"venue": "VLDB",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zoltan Gyongyi, PavelBerkhin, Hetor Garcia-Molina, and Jan O. Pedersen. 2006. Link spam detection based on mass estimation. In VLDB.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Web spam detection with anti-trust rank",
"authors": [
{
"first": "Vijay",
"middle": [],
"last": "Krishnan",
"suffix": ""
},
{
"first": "Rashmi",
"middle": [],
"last": "Raj",
"suffix": ""
}
],
"year": 2006,
"venue": "ACM SIGIR workshop on adversarial information retrieval on the Web",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vijay Krishnan and Rashmi Raj. 2006. Web spam de- tection with anti-trust rank. In ACM SIGIR work- shop on adversarial information retrieval on the We- b.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Uncovering social spammers: Social honeypots + machine learning",
"authors": [
{
"first": "Kyumin",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Caverlee",
"suffix": ""
},
{
"first": "Steve",
"middle": [
"Webb"
],
"last": "",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceeding of SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyumin Lee, James Caverlee, and Steve Webb. 2010. Uncovering social spammers: Social honeypots + machine learning. In Proceeding of SIGIR.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Workload models of spam and legitimate e-mails",
"authors": [
{
"first": "L",
"middle": [],
"last": "Gomes",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Almeida",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Almeida",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Meira",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L.Gomes, J.Almeida, V.Almeida, and W.Meira. 2007. Workload models of spam and legitimate e-mails. In Performance Evaluation.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Detecting product review spammers using rating behaviors",
"authors": [
{
"first": "Ee-Peng",
"middle": [],
"last": "Lim",
"suffix": ""
},
{
"first": "Viet-An",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Nitin",
"middle": [],
"last": "Jindal",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hady",
"middle": [
"W."
],
"last": "Lauw",
"suffix": ""
}
],
"year": 2010,
"venue": "CIKM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ee-Peng Lim, Viet-An Nguyen, Nitin Jindal, Bing Liu, and Hady W. Lauw. 2010. Detecting product review spammers using rating behaviors. In CIKM.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "On the limited memory bfgs method for large scale optimization",
"authors": [
{
"first": "Dong",
"middle": [
"C"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "Nocedal",
"suffix": ""
}
],
"year": 1989,
"venue": "Mathematical Programming",
"volume": "45",
"issue": "",
"pages": "503--528",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dong C. Liu and Jorge Nocedal. 1989. On the limited memory bfgs method for large scale optimization. Mathematical Programming, 45:503-528.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Opinion spam and analysis",
"authors": [
{
"first": "N",
"middle": [],
"last": "Jindal",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2008,
"venue": "WSDM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N.Jindal and B.Liu. 2008. Opinion spam and analysis. In WSDM.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Detecting spam web pages through content analysis",
"authors": [
{
"first": "Alexandros",
"middle": [],
"last": "Ntoulas",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Najork",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Manasse",
"suffix": ""
},
{
"first": "Dennis",
"middle": [],
"last": "Fetterly",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of WWW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandros Ntoulas, Marc Najork, Mark Manasse, and Dennis Fetterly. 2006. Detecting spam web pages through content analysis. In Proceedings of WWW.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Finding deceptive opinion spam by any stretch of the imagination",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"T"
],
"last": "Hancock",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle ott, Yejin Choi, Claire Cardie, and Jeffrey T.Hancock. 2011. Finding deceptive opinion spam by any stretch of the imagination. In ACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Fighting spam on social web sites: A survey of approaches and future challenges",
"authors": [
{
"first": "P",
"middle": [],
"last": "Heymann",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Koutrika",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Garcia-Molina",
"suffix": ""
}
],
"year": 2007,
"venue": "IEEE Internet Computing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P.Heymann, G.Koutrika, and H.Garcia-Molina. 2007. Fighting spam on social web sites: A survey of ap- proaches and future challenges. In IEEE Internet Computing.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Confucius and its intelligent disciples: integrating social with search",
"authors": [
{
"first": "Xiance",
"middle": [],
"last": "Si",
"suffix": ""
},
{
"first": "Edward",
"middle": [
"Y"
],
"last": "Chang",
"suffix": ""
},
{
"first": "Zoltan",
"middle": [],
"last": "Gyongyi",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceeding of VLDB",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiance Si, Edward Y. Chang, Zoltan Gyongyi, and Maosong Sun. 2010. Confucius and its intelligen- t disciples: integrating social with search. In Pro- ceeding of VLDB.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Don't follow me: Twitter spam detection",
"authors": [
{
"first": "Alex",
"middle": [
"Hai"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of 5th International Conference on Security and Cryptography (SECRYPT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Hai Wang. 2010. Don't follow me: Twitter spam detection. In Proceedings of 5th International Con- ference on Security and Cryptography (SECRYPT).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Detecting splogs via temporal dynamics using self-similarilty analysis",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Sundaram",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Chi",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tatemura",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Tseng",
"suffix": ""
}
],
"year": 2008,
"venue": "In ACM Transactions on the Web(TWeb)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y.Liu, H.Sundaram, Y.Chi, J.Tatemura, and B.Tseng. 2008. Detecting splogs via temporal dynamics using self-similarilty analysis. In ACM Transactions on the Web(TWeb).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Spamming botnet: Signatures and characteristics",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Achan",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Panigrahy",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Hulten",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Osipkov",
"suffix": ""
}
],
"year": 2008,
"venue": "ACM SIGCOMM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y.Xie, F.Yu, K.Achan R.Panigrahy, G.Hulten, and I.Osipkov. 2008. Spamming botnet: Signatures and characteristics. In ACM SIGCOMM.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Combating web spam with trustrank",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Gyongyi",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Garcia-Molina",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Pedersen",
"suffix": ""
}
],
"year": 2004,
"venue": "Int'l Conference on Very Large Data Bases(VLDB)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Z.Gyongyi, H.Garcia-Molina, and J.pedersen. 2004. Combating web spam with trustrank. In Int'l Con- ference on Very Large Data Bases(VLDB).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A spamicity approach to web spam detection",
"authors": [
{
"first": "Bin",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Pei",
"suffix": ""
},
{
"first": "Zhaohui",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2008,
"venue": "SDM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bin Zhou, Jian Pei, and ZhaoHui Tang. 2008. A spam- icity approach to web spam detection. In SDM.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Discovering spammers in social networks",
"authors": [
{
"first": "Yin",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Erheng",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Nanthan",
"middle": [
"N"
],
"last": "Liu",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yin Zhu, Xiao Wang, Erheng Zhong, Nanthan N. Liu, He Li, and Qiang Yang. 2012. Discovering spam- mers in social networks. In Proceedings of AAAI.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "(a) Graph with users, questions, and answers in CQA; (b) Summary graph of users in C-QA U1 User graph with different relations in C-QA (a) Question-answer relation; (b) Best-answer relation; (c) Non-best-answer relation Three kinds of major relations among users on CQA sites are defined as follows:",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Equation 10) REG:Sim+BAS: Optimization with all regularization terms that Similarity with best-answer relation, penalty for Best Answer Spammer.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Figure 4: Parameter Sensitivity",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Content features comparison",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF1": {
"content": "<table/>",
"html": null,
"num": null,
"text": "Performance of our optimization methods with different regularization for comparison",
"type_str": "table"
}
}
}
}