{
"paper_id": "W01-0506",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:00:37.785875Z"
},
"title": "Stacking classifiers for anti-spam filtering of e-mail",
"authors": [
{
"first": "Georgios",
"middle": [],
"last": "Sakkis",
"suffix": "",
"affiliation": {
"laboratory": "Software and Knowledge Engineering Laboratory, Institute of Informatics and Telecommunications",
"institution": "National Centre for Scientific Research \"Demokritos\"",
"location": {
"addrLine": "GR-153 10 Ag. Paraskevi",
"settlement": "Athens",
"country": "Greece"
}
},
"email": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": "",
"affiliation": {
"laboratory": "Software and Knowledge Engineering Laboratory, Institute of Informatics and Telecommunications",
"institution": "National Centre for Scientific Research \"Demokritos\"",
"location": {
"addrLine": "GR-153 10 Ag. Paraskevi",
"settlement": "Athens",
"country": "Greece"
}
},
"email": ""
},
{
"first": "Georgios",
"middle": [],
"last": "Paliouras",
"suffix": "",
"affiliation": {
"laboratory": "Software and Knowledge Engineering Laboratory, Institute of Informatics and Telecommunications",
"institution": "National Centre for Scientific Research \"Demokritos\"",
"location": {
"addrLine": "GR-153 10 Ag. Paraskevi",
"settlement": "Athens",
"country": "Greece"
}
},
"email": "paliourg@iit.demokritos.gr"
},
{
"first": "Vangelis",
"middle": [],
"last": "Karkaletsis",
"suffix": "",
"affiliation": {
"laboratory": "Software and Knowledge Engineering Laboratory, Institute of Informatics and Telecommunications",
"institution": "National Centre for Scientific Research \"Demokritos\"",
"location": {
"addrLine": "GR-153 10 Ag. Paraskevi",
"settlement": "Athens",
"country": "Greece"
}
},
"email": "vangelis@iit.demokritos.gr"
},
{
"first": "Constantine",
"middle": [
"D"
],
"last": "Spyropoulos",
"suffix": "",
"affiliation": {
"laboratory": "Software and Knowledge Engineering Laboratory, Institute of Informatics and Telecommunications",
"institution": "National Centre for Scientific Research \"Demokritos\"",
"location": {
"addrLine": "GR-153 10 Ag. Paraskevi",
"settlement": "Athens",
"country": "Greece"
}
},
"email": ""
},
{
"first": "Panagiotis",
"middle": [],
"last": "Stamatopoulos",
"suffix": "",
"affiliation": {
"laboratory": "Software and Knowledge Engineering Laboratory, Institute of Informatics and Telecommunications",
"institution": "National Centre for Scientific Research \"Demokritos\"",
"location": {
"addrLine": "GR-153 10 Ag. Paraskevi",
"settlement": "Athens",
"country": "Greece"
}
},
"email": "t.stamatopoulos@di.uoa.gr"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We evaluate empirically a scheme for combining classifiers, known as stacked generalization, in the context of anti-spam filtering, a novel cost-sensitive application of text categorization. Unsolicited commercial email, or \"spam\", floods mailboxes, causing frustration, wasting bandwidth, and exposing minors to unsuitable content. Using a public corpus, we show that stacking can improve the efficiency of automatically induced anti-spam filters, and that such filters can be used in real-life applications.",
"pdf_parse": {
"paper_id": "W01-0506",
"_pdf_hash": "",
"abstract": [
{
"text": "We evaluate empirically a scheme for combining classifiers, known as stacked generalization, in the context of anti-spam filtering, a novel cost-sensitive application of text categorization. Unsolicited commercial email, or \"spam\", floods mailboxes, causing frustration, wasting bandwidth, and exposing minors to unsuitable content. Using a public corpus, we show that stacking can improve the efficiency of automatically induced anti-spam filters, and that such filters can be used in real-life applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper presents an empirical evaluation of stacked generalization, a scheme for combining automatically induced classifiers, in the context of anti-spam filtering, a novel cost-sensitive application of text categorization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "The increasing popularity and low cost of email have intrigued direct marketers to flood the mailboxes of thousands of users with unsolicited messages, advertising anything, from vacations to get-rich schemes. These messages, known as spam or more formally Unsolicited Commercial E-mail, are extremely annoying, as they clutter mailboxes, prolong dial-up connections, and often expose minors to unsuitable content (Cranor & Lamacchia, 1998) .",
"cite_spans": [
{
"start": 414,
"end": 440,
"text": "(Cranor & Lamacchia, 1998)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Legal and simplistic technical countermeasures, like blacklists and keyword-based filters, have had a very limited effect so far. 1 The success of machine learning techniques in text categorization (Sebastiani, 2001) has recently led to alternative, learning-based approaches (Sahami, et al. 1998; Pantel & Lin, 1998; Drucker, et al. 1999) . A classifier capable of distinguishing between spam and non-spam, hereafter legitimate, messages is induced from a manually categorized learning collection of messages, and is then used to identify incoming spam e-mail. Initial results have been promising, and experiments are becoming more systematic, by exploiting recently introduced benchmark corpora, and cost-sensitive evaluation measures (Gomez Hidalgo, et al. 2000; Androutsopoulos, et al. 2000a, b, c) .",
"cite_spans": [
{
"start": 130,
"end": 131,
"text": "1",
"ref_id": null
},
{
"start": 198,
"end": 216,
"text": "(Sebastiani, 2001)",
"ref_id": "BIBREF18"
},
{
"start": 276,
"end": 297,
"text": "(Sahami, et al. 1998;",
"ref_id": "BIBREF16"
},
{
"start": 298,
"end": 317,
"text": "Pantel & Lin, 1998;",
"ref_id": "BIBREF14"
},
{
"start": 318,
"end": 339,
"text": "Drucker, et al. 1999)",
"ref_id": "BIBREF8"
},
{
"start": 737,
"end": 765,
"text": "(Gomez Hidalgo, et al. 2000;",
"ref_id": null
},
{
"start": 766,
"end": 802,
"text": "Androutsopoulos, et al. 2000a, b, c)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Stacked generalization (Wolpert, 1992) , or stacking, is an approach for constructing classifier ensembles. A classifier ensemble, or committee, is a set of classifiers whose individual decisions are combined in some way to classify new instances (Dietterich, 1997) . Stacking combines multiple classifiers to induce a higher-level classifier with improved performance. The latter can be thought of as the president of a committee with the ground-level classifiers as members. Each unseen incoming message is first given to the members; the president then decides on the category of the message by considering the opinions of the members and the message itself. Ground-level classifiers often make different classification errors. Hence, a president that has successfully learned when to trust each of the members can improve overall performance.",
"cite_spans": [
{
"start": 23,
"end": 38,
"text": "(Wolpert, 1992)",
"ref_id": "BIBREF19"
},
{
"start": 247,
"end": 265,
"text": "(Dietterich, 1997)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "We have experimented with two ground-level classifiers for which results on a public benchmark corpus are available: a Na\u00efve Bayes classifier (Androutsopoulos, et al. 2000a, c) and a memory-based classifier (Androutsopoulos, et al. 2000b; Sakkis, et al. 2001) . Using a third, memory-based classifier as president, we investigated two versions of stacking and two different cost-sensitive scenarios. Overall, our results indicate that stacking improves the performance of the ground-level classifiers, and that the performance of the resulting anti-spam filter is acceptable for real-life applications.",
"cite_spans": [
{
"start": 141,
"end": 175,
"text": "(Androutsopoulos, et al. 2000a, c)",
"ref_id": null
},
{
"start": 206,
"end": 237,
"text": "(Androutsopoulos, et al. 2000b;",
"ref_id": "BIBREF3"
},
{
"start": 238,
"end": 258,
"text": "Sakkis, et al. 2001)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Section 1 below presents the benchmark corpus and the preprocessing of the messages; section 2 introduces cost-sensitive evaluation measures; section 3 provides details on the stacking approaches that were explored; section 4 discusses the learning algorithms that were employed and the motivation for selecting them; section 5 presents our experimental results followed by conclusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Text categorization has benefited from public benchmark corpora. Producing such corpora for anti-spam filtering is not straightforward, since user mailboxes cannot be made public without considering privacy issues. A useful public approximation of a user's mailbox, however, can be constructed by mixing spam messages with messages extracted from spam-free public archives of mailing lists. The corpus that we used, Ling-Spam, follows this approach (Androutsopoulos, et al. 2000a, b; Sakkis, et al. 2001) . It is a mixture of spam messages and messages sent via the Linguist, a moderated list about the science and profession of linguistics. The corpus consists of 2412 Linguist messages and 481 spam messages.",
"cite_spans": [
{
"start": 449,
"end": 483,
"text": "(Androutsopoulos, et al. 2000a, b;",
"ref_id": null
},
{
"start": 484,
"end": 504,
"text": "Sakkis, et al. 2001)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark corpus and preprocessing",
"sec_num": null
},
{
"text": "Spam messages constitute 16.6% of Ling-Spam, close to the rates reported by Cranor and LaMacchia (1998) , and Sahami et al. (1998) .",
"cite_spans": [
{
"start": 76,
"end": 103,
"text": "Cranor and LaMacchia (1998)",
"ref_id": "BIBREF5"
},
{
"start": 110,
"end": 130,
"text": "Sahami et al. (1998)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark corpus and preprocessing",
"sec_num": null
},
{
"text": "Although the Linguist messages are more topic-specific than most users' e-mail, they are less standardized than one might expect. For example, they contain job postings, software availability announcements and even flame-like responses. Moreover, recent experiments with an encoded user mailbox and a Na\u00efve Bayes (NB) classifier (Androutsopoulos, et al. 2000c) yielded results similar to those obtained with Ling-Spam (Androutsopoulos, et al. 2000a) . Therefore, experimentation with Ling-Spam can provide useful indicative results, at least in a preliminary stage. Furthermore, experiments with Ling-Spam can be seen as studies of anti-spam filtering of open unmoderated lists.",
"cite_spans": [
{
"start": 328,
"end": 359,
"text": "(Androutsopoulos, et al. 2000c)",
"ref_id": "BIBREF4"
},
{
"start": 417,
"end": 448,
"text": "(Androutsopoulos, et al. 2000a)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark corpus and preprocessing",
"sec_num": null
},
{
"text": "Each message of Ling-Spam was converted into a vector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark corpus and preprocessing",
"sec_num": null
},
{
"text": "x = \u27e8x_1, x_2, x_3, \u2026, x_n\u27e9, where x_1, \u2026, x_n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark corpus and preprocessing",
"sec_num": null
},
{
"text": "are the values of attributes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark corpus and preprocessing",
"sec_num": null
},
{
"text": "X_1, \u2026, X_n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark corpus and preprocessing",
"sec_num": null
},
{
"text": ". Each attribute shows if a particular word (e.g. \"adult\") occurs in the message. All attributes are binary:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark corpus and preprocessing",
"sec_num": null
},
{
"text": "X_i = 1 if the word is present; otherwise X_i = 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark corpus and preprocessing",
"sec_num": null
},
{
"text": "To avoid treating forms of the same word as different attributes, a lemmatizer was applied, converting each word to its base form.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark corpus and preprocessing",
"sec_num": null
},
{
"text": "To reduce the dimensionality, attribute selection was performed. First, words occurring in fewer than 4 messages were discarded. Then, the Information Gain (IG) of each candidate attribute X was computed:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark corpus and preprocessing",
"sec_num": null
},
{
"text": "IG(X, C) = \u2211_{x \u2208 {0,1}, c \u2208 {spam, legit}} P(x, c) \u22c5 log [ P(x, c) / (P(x) \u22c5 P(c)) ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark corpus and preprocessing",
"sec_num": null
},
{
"text": "The attributes with the m highest IG-scores were selected, with m corresponding to the best configurations of the ground classifiers that have been reported for Ling-Spam (Androutsopoulos, et al. 2000a; Sakkis, et al. 2001) ; see Section 4.",
"cite_spans": [
{
"start": 171,
"end": 202,
"text": "(Androutsopoulos, et al. 2000a;",
"ref_id": "BIBREF1"
},
{
"start": 203,
"end": 223,
"text": "Sakkis, et al. 2001)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark corpus and preprocessing",
"sec_num": null
},
{
"text": "Blocking a legitimate message is generally a more severe error than accepting a spam message. Let L \u2192 S and S \u2192 L denote the two error types, respectively, and let us assume that L \u2192 S is \u03bb times as costly as S \u2192 L.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation measures",
"sec_num": "2"
},
{
"text": "Previous research has considered three cost scenarios, where \u03bb = 1, 9, or 999 (Androutsopoulos, et al. 2000a, b, c; Sakkis, et al. 2001) . In the scenario where \u03bb = 999, blocked messages are deleted immediately. L \u2192 S is taken to be 999 times as costly as S \u2192 L , since most users would consider losing a legitimate message unacceptable. In the scenario where \u03bb = 9, blocked messages are returned to their senders with a request to resend them to an unfiltered address. In this case, L \u2192 S is penalized more than S \u2192 L , to account for the fact that recovering from a blocked legitimate message is more costly (counting the sender's extra work) than recovering from a spam message that passed the filter (deleting it manually). In the third scenario, where \u03bb = 1, blocked messages are simply flagged as possibly spam. Hence, L \u2192 S is no more costly than S \u2192 L . Previous experiments indicate that the Na\u00efve Bayes ground-classifier is unstable when \u03bb = 999 (Androutsopoulos, et al. 2000a , respectively, the criterion above achieves optimal results (Duda & Hart, 1973) .",
"cite_spans": [
{
"start": 78,
"end": 115,
"text": "(Androutsopoulos, et al. 2000a, b, c;",
"ref_id": null
},
{
"start": 116,
"end": 136,
"text": "Sakkis, et al. 2001)",
"ref_id": "BIBREF17"
},
{
"start": 956,
"end": 986,
"text": "(Androutsopoulos, et al. 2000a",
"ref_id": "BIBREF1"
},
{
"start": 1048,
"end": 1067,
"text": "(Duda & Hart, 1973)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation measures",
"sec_num": "2"
},
{
"text": "A message x is classified as spam if W_S(x) / W_L(x) > \u03bb, where W_S(x) and W_L(x) are the filter's confidence that x is spam or legitimate, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation measures",
"sec_num": "2"
},
{
"text": "To measure the performance of a filter, weighted accuracy (WAcc) and its complementary weighted error rate (WErr = 1 -WAcc) are used (Androutsopoulos, et al. 2000a, b, c; Sakkis, et al. 2001 ",
"cite_spans": [
{
"start": 133,
"end": 170,
"text": "(Androutsopoulos, et al. 2000a, b, c;",
"ref_id": null
},
{
"start": 171,
"end": 190,
"text": "Sakkis, et al. 2001",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation measures",
"sec_num": "2"
},
{
"text": "): WAcc = (\u03bb \u22c5 N_{L\u2192L} + N_{S\u2192S}) / (\u03bb \u22c5 N_L + N_S), where N_{Y\u2192Z} is the number of messages in category Y that the filter classified as Z, N_L = N_{L\u2192L} + N_{L\u2192S}, and N_S = N_{S\u2192L} + N_{S\u2192S}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation measures",
"sec_num": "2"
},
{
"text": "That is, when a legitimate message is blocked, this counts as \u03bb errors; and when it passes the filter, this counts as \u03bb successes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation measures",
"sec_num": "2"
},
{
"text": "We consider the case where no filter is present as our baseline: legitimate messages are never blocked, and spam messages always pass. The weighted accuracy of the baseline is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation measures",
"sec_num": "2"
},
{
"text": "WAcc_b = (\u03bb \u22c5 N_L) / (\u03bb \u22c5 N_L + N_S)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation measures",
"sec_num": "2"
},
{
"text": "The total cost ratio (TCR) compares the performance of a filter to the baseline:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation measures",
"sec_num": "2"
},
{
"text": "TCR = WErr_b / WErr = N_S / (\u03bb \u22c5 N_{L\u2192S} + N_{S\u2192L})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation measures",
"sec_num": "2"
},
{
"text": "Greater TCR values indicate better performance. For TCR < 1, not using the filter is better. Our evaluation measures also include spam recall (SR) and spam precision (SP):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation measures",
"sec_num": "2"
},
{
"text": "SR = N_{S\u2192S} / (N_{S\u2192S} + N_{S\u2192L}), SP = N_{S\u2192S} / (N_{S\u2192S} + N_{L\u2192S})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation measures",
"sec_num": "2"
},
{
"text": "SR measures the percentage of spam messages that the filter blocks (intuitively, its effectiveness), while SP measures how many blocked messages are indeed spam (its safety). Despite their intuitiveness, comparing different filter configurations using SR and SP is difficult: each configuration yields a pair of SR and SP results; and without a single combining measure, like TCR, that incorporates the notion of cost, it is difficult to decide which pair is better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation measures",
"sec_num": "2"
},
{
"text": "In all the experiments, stratified 10-fold cross-validation was used. That is, Ling-Spam was partitioned into 10 equally populated parts, maintaining the original spam-legitimate ratio. Each experiment was repeated 10 times, each time reserving a different part S_j (j = 1, \u2026, 10) for testing, and using the remaining 9 parts as the training set L_j.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation measures",
"sec_num": "2"
},
{
"text": "In the first version of stacking that we explored (Wolpert, 1992) , which we call cross-validation stacking, the training set of the president was prepared using a second-level 3-fold cross-validation. Each training set L_j was further partitioned into three equally populated parts, and the training set of the president was prepared in three steps. At each step, a different part presidents in a 10-fold experiment, while in the first version there are only 10. In each case, WAcc is averaged over the presidents, and TCR is reported as WErr_b over the average WErr.",
"cite_spans": [
{
"start": 50,
"end": 65,
"text": "(Wolpert, 1992)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stacking",
"sec_num": "3"
},
{
"text": "LS_i (i = 1, 2, 3) of L_j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stacking",
"sec_num": "3"
},
{
"text": "Holdout stacking is likely to be less effective than cross-validation stacking, since its classifiers are trained on smaller sets. Nonetheless, it requires fewer computations, because the members are not retrained. Furthermore, during classification the president consults the same members that were used to prepare its training set. In contrast, in crossvalidation stacking the president is tested using members that have received more training than those that prepared its training set. Hence, the model that the president has acquired, which shows when to trust each member, may not apply to the members that the president consults when classifying incoming messages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stacking",
"sec_num": "3"
},
{
"text": "As already mentioned, we used a Na\u00efve Bayes (NB) and a memory-based learner as members of the committee (Mitchell 1997; Aha, et al. 1991) . For the latter, we used TiMBL, an implementation of the k-Nearest Neighbor algorithm (Daelemans, et al. 2000) . With NB, the degree of confidence",
"cite_spans": [
{
"start": 104,
"end": 119,
"text": "(Mitchell 1997;",
"ref_id": "BIBREF13"
},
{
"start": 120,
"end": 137,
"text": "Aha, et al. 1991)",
"ref_id": "BIBREF0"
},
{
"start": 225,
"end": 249,
"text": "(Daelemans, et al. 2000)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inducers employed",
"sec_num": "4"
},
{
"text": "W_S(x) that x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducers employed",
"sec_num": "4"
},
{
"text": "is spam is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducers employed",
"sec_num": "4"
},
{
"text": "W_S^{NB}(x) = P(spam | x) = P(spam) \u22c5 \u220f_{i=1..m} P(x_i | spam) / \u2211_{k \u2208 {spam, legit}} P(k) \u22c5 \u220f_{i=1..m} P(x_i | k). NB assumes that X_1, \u2026, X_m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducers employed",
"sec_num": "4"
},
{
"text": "are conditionally independent given the category (Duda & Hart, 1973) .",
"cite_spans": [
{
"start": 49,
"end": 68,
"text": "(Duda & Hart, 1973)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inducers employed",
"sec_num": "4"
},
{
"text": "With k-NN, a distance-weighted method is used, with a voting function analogous to the inverted cube of distance (Dudani 1976). This formula weighs the contribution of each neighbor by its distance from the message to be classified, and the result is scaled to [0,1]. The distance is computed by an attribute-weighted function (Wettschereck, et al. 1995), employing Information Gain (IG):",
"cite_spans": [
{
"start": 113,
"end": 125,
"text": "(Dudani 1976",
"ref_id": "BIBREF10"
},
{
"start": 325,
"end": 352,
"text": "(Wettschereck, et al. 1995)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inducers employed",
"sec_num": "4"
},
{
"text": "W_S^{kNN}(x) = [ \u2211_{i=1..k} \u03b4(spam, C(x_i)) / d(x, x_i)^3 ] / [ \u2211_{i=1..k} 1 / d(x, x_i)^3 ], where C(x_i) is the category of neighbor x_i, and d(x_i, x_j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducers employed",
"sec_num": "4"
},
{
"text": "d(x_i, x_j) \u2261 \u2211_{t=1..n} IG_t \u22c5 \u03b4(x_t^i, x_t^j), where x_i = \u27e8x_1^i, \u2026, x_m^i\u27e9 and x_j = \u27e8x_1^j, \u2026, x_m^j\u27e9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducers employed",
"sec_num": "4"
},
{
"text": ", and IG_t is the IG score of X_t (Section 1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducers employed",
"sec_num": "4"
},
{
"text": "In Tables 1 and 2 , we reproduce the best performing configurations of the two learners on Ling-Spam (Androutsopoulos, et al. 2000b; Sakkis, et al. 2001) . These configurations were used as members of the committee.",
"cite_spans": [
{
"start": 101,
"end": 132,
"text": "(Androutsopoulos, et al. 2000b;",
"ref_id": "BIBREF3"
},
{
"start": 133,
"end": 153,
"text": "Sakkis, et al. 2001)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 3,
"end": 17,
"text": "Tables 1 and 2",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Inducers employed",
"sec_num": "4"
},
{
"text": "The same memory-based learner was used as the president. However, we experimented with several configurations, varying the neighborhood size (k) from 1 to 10, and providing the president with the m best word attributes, as in Section 1, with m ranging from 50 to 700 by 50. The same attribute- and distance-weighting schemes were used for the president, as with the ground-level memory-based learner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducers employed",
"sec_num": "4"
},
{
"text": "Our motivation for combining NB with k-NN emerged from preliminary results indicating that the two ground-level learners make rather uncorrelated errors. Table 3 shows the average percentages of messages where only one, or both ground-level classifiers fail, per cost scenario (\u03bb) and message category. The figures are for the configurations of Tables 1 and 2. It can be seen that the common errors are always fewer than the cases where only one of the two classifiers fails. Hence, there is much room for improved accuracy, if a president can learn to select the correct member.",
"cite_spans": [],
"ref_spans": [
{
"start": 154,
"end": 161,
"text": "Table 3",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "All",
"sec_num": null
},
{
"text": "Tables 4 and 5 summarize the performance of the best configurations of the president in our experiments, for each cost scenario. Comparing the TCR scores in these tables with the corresponding scores of Tables 1 and 2 shows that stacking improves the performance of the overall filter. Of the two stacking versions, cross-validation stacking is slightly better than holdout stacking. It should also be noted that stacking was beneficial for most of the configurations of the president that we tested, i.e. most sub-optimal presidents outperformed the best configurations of the members. This is encouraging, since the optimum configuration is often hard to determine a priori, and may vary from one user to another. There was one interesting exception to the positive impact of stacking. The 1-NN and 2-NN (k = 1, 2) presidents were substantially worse than the other k-NN presidents, often performing worse than the ground-level classifiers. We witnessed this behavior in both cost scenarios, and with most values of m (number of attributes). In a \"postmortem\" analysis, we ascertained that most messages misclassified by 1-NN and 2-NN, but not the other presidents, are legitimate, with their nearest neighbor being spam. Therefore, the additional errors of 1-NN and 2-NN, compared to the other presidents, are of the L \u2192 S type. Interestingly, in most of those cases, both members of the committee classify the instance correctly, as legitimate. This is an indication that, for small values of the parameter k, the two additional features, i.e., the members' confidence",
"cite_spans": [],
"ref_spans": [
{
"start": 203,
"end": 217,
"text": "Tables 1 and 2",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "5"
},
{
"text": "W_S^1(x) and W_S^2(x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "5"
},
{
"text": ", do not enhance but distort the representation of instances. As a result, the close neighborhood of the unclassified instance contains spam rather than legitimate messages. This behavior of the memory-based classifier is also noted in (Sakkis, et al. 2001) . The suggested solution there was to use a larger value of k, combined with a strong distance-weighting function, such as the one presented in Section 4.",
"cite_spans": [
{
"start": 228,
"end": 249,
"text": "(Sakkis, et al. 2001)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "5"
},
{
"text": "In this paper we adopted a stacked generalization approach to anti-spam filtering, and evaluated its performance. The configuration that we examined combined a memory-based and a Na\u00efve Bayes classifier in a two-member committee, in which another memory-based classifier presided. The classifiers that we chose as members of the committee have been evaluated individually on the same data as in our evaluation, i.e. the Ling-Spam corpus. The results of these earlier studies were used as a basis for comparing the performance of our method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": null
},
{
"text": "Our experiments, using two different approaches to stacking and two different misclassification cost scenarios, show that stacking consistently improves the performance of anti-spam filtering. This is explained by the fact that the two members of the committee disagree in their misclassification errors more often than they agree. Thus, the president is able to improve the overall performance of the filter, by choosing the right member's decision when they disagree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": null
},
{
"text": "The results presented here motivate further work in the same direction. In particular, we are interested in combining more classifiers, such as decision trees (Quinlan, 1993) and support vector machines (Drucker, et al. 1999) , within the stacking framework. A larger variety of classifiers is expected to lead the president to more informed decisions, resulting in further improvement of the filter's performance. Furthermore, we would like to evaluate other classifiers in the role of the president. Finally, it would be interesting to compare the performance of the stacked generalization approach to other multi-classifier methods, such as boosting (Schapire & Singer, 2000) .",
"cite_spans": [
{
"start": 159,
"end": 174,
"text": "(Quinlan, 1993)",
"ref_id": "BIBREF15"
},
{
"start": 203,
"end": 225,
"text": "(Drucker, et al. 1999)",
"ref_id": "BIBREF8"
},
{
"start": 653,
"end": 678,
"text": "(Schapire & Singer, 2000)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Instance-Based Learning Algorithms",
"authors": [
{
"first": "W",
"middle": [
"D"
],
"last": "Aha",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Kibler",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "M",
"middle": [
"K"
],
"last": "",
"suffix": ""
}
],
"year": 1991,
"venue": "Machine Learning",
"volume": "6",
"issue": "",
"pages": "37--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aha, W. D., Kibler D., and Albert, M.K., (1991) Instance-Based Learning Algorithms. \"Machine Learning\", Vol. 6, pp. 37-66.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An evaluation of na\u00efve Bayesian anti-spam filtering",
"authors": [
{
"first": "I",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Koutsias",
"suffix": ""
},
{
"first": "K",
"middle": [
"V"
],
"last": "Chandrinos",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Paliouras",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Spyropoulos",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Workshop on Machine Learning in the New Information Age",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Androutsopoulos, I., Koutsias, J., Chandrinos, K.V., Paliouras, G., and Spyropoulos, C.D. (2000a) \"An evaluation of na\u00efve Bayesian anti-spam filtering\". In Proceedings of the Workshop on Machine Learning in the New Information Age, 11th",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "European Conference on Machine Learning",
"authors": [],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "9--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "European Conference on Machine Learning (ECML 2000), Barcelona, Spain, pp. 9-17.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning to filter spam e-mail: a comparison of a na\u00efve Bayesian and a memorybased approach",
"authors": [
{
"first": "I",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Paliouras",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Karkaletsis",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Sakkis",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Spyropoulos",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Stamatopoulos",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Workshop on Machine Learning and Textual Information Access",
"volume": "",
"issue": "",
"pages": "1--3",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Androutsopoulos, I., Paliouras, G., Karkaletsis, V., Sakkis, G., Spyropoulos, C.D., and Stamatopoulos, P. (2000b). \"Learning to filter spam e-mail: a comparison of a na\u00efve Bayesian and a memory-based approach\". In Proceedings of the Workshop on Machine Learning and Textual Information Access, PKDD 2000, Lyon, France, pp. 1-3.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An experimental comparison of na\u00efve Bayesian and keyword-based anti-spam filtering with encrypted personal e-mail messages",
"authors": [
{
"first": "I",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Koutsias",
"suffix": ""
},
{
"first": "K",
"middle": [
"V"
],
"last": "Chandrinos",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Spyropoulos",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of SIGIR 2000",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Androutsopoulos, I., Koutsias, J., Chandrinos, K.V., and Spyropoulos, C.D. (2000c) \"An experimental comparison of na\u00efve Bayesian and keyword-based anti-spam filtering with encrypted personal e-mail messages\". In Proceedings of SIGIR 2000, Athens, Greece, pp. 160-167.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Spam!",
"authors": [
{
"first": "L",
"middle": [
"F"
],
"last": "Cranor",
"suffix": ""
},
{
"first": "B",
"middle": [
"A"
],
"last": "Lamacchia",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "41",
"issue": "",
"pages": "74--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cranor, L.F., and LaMacchia, B.A. (1998). \"Spam!\", Communications of ACM, 41(8):74-83.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "TiMBL: Tilburg Memory Based Learner, version 3.0, Reference Guide. ILK, Computational Linguistics",
"authors": [
{
"first": "W",
"middle": [],
"last": "Daelemans",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Zavrel",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Van Der Sloot",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Van Den",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bosch",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daelemans, W., Zavrel, J., van der Sloot, K., and van den Bosch, A. (2000) TiMBL: Tilburg Memory Based Learner, version 3.0, Reference Guide. ILK, Computational Linguistics, Tilburg University. http://ilk.kub.nl/~ilk/papers.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Machine Learning Research: Four Current Directions",
"authors": [
{
"first": "G",
"middle": [
"T"
],
"last": "Dietterich",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "18",
"issue": "",
"pages": "97--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dietterich, T. G. (1997). \"Machine Learning Research: Four Current Directions\". AI Magazine 18(4):97-136.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Support Vector Machines for Spam Categorization",
"authors": [
{
"first": "H",
"middle": [
"D"
],
"last": "Drucker",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1999,
"venue": "IEEE Transactions On Neural Networks",
"volume": "",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Drucker, H. D., Wu, D., and Vapnik, V. (1999). \"Support Vector Machines for Spam Categorization\". IEEE Transactions on Neural Networks, 10(5).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bayes decision theory",
"authors": [
{
"first": "R",
"middle": [
"O"
],
"last": "Duda",
"suffix": ""
},
{
"first": "P",
"middle": [
"E"
],
"last": "Hart",
"suffix": ""
}
],
"year": 1973,
"venue": "Pattern Classification and Scene Analysis",
"volume": "",
"issue": "",
"pages": "10--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duda, R.O., and Hart, P.E. (1973). \"Bayes decision theory\". Chapter 2 in Pattern Classification and Scene Analysis, pp. 10-43, John Wiley.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The distance-weighted knearest neighbor rule",
"authors": [
{
"first": "A",
"middle": [
"S"
],
"last": "Dudani",
"suffix": ""
}
],
"year": 1976,
"venue": "IEEE Transactions on Systems, Man and Cybernetics",
"volume": "6",
"issue": "4",
"pages": "325--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dudani, A. S. (1976). \"The distance-weighted k-nearest neighbor rule\". IEEE Transactions on Systems, Man and Cybernetics, 6(4):325-327.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Combining text and heuristics for cost-sensitive spam filtering",
"authors": [
{
"first": "G\u00f3mez",
"middle": [],
"last": "Hidalgo",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Ma\u00f1a L\u00f3pez",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Puertas",
"middle": [],
"last": "Sanz",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 4 th Computational Natural Language Learning Workshop",
"volume": "",
"issue": "",
"pages": "99--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G\u00f3mez Hidalgo, J.M., Ma\u00f1a L\u00f3pez, M., and Puertas Sanz, E. (2000). \"Combining text and heuristics for cost-sensitive spam filtering\". In Proceedings of the 4th Computational Natural Language Learning Workshop, CoNLL-2000, Lisbon, Portugal, pp. 99-102.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A study of cross-validation and bootstrap for accuracy estimation and model selection",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kohavi",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 12 th International Joint Conference on Artificial Intelligence (IJCAI-1995)",
"volume": "",
"issue": "",
"pages": "1137--1143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kohavi, R. (1995). \"A study of cross-validation and bootstrap for accuracy estimation and model selection\". In Proceedings of the 12th International Joint Conference on Artificial Intelligence (IJCAI-1995), Morgan Kaufmann, pp. 1137-1143.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Machine Learning",
"authors": [
{
"first": "T",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell, T.M. (1997). Machine Learning. McGraw-Hill.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "SpamCop: a spam classification and organization program",
"authors": [
{
"first": "P",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1998,
"venue": "Learning for Text Categorization -Papers from the AAAI Workshop",
"volume": "",
"issue": "",
"pages": "95--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pantel, P., and Lin, D. (1998). \"SpamCop: a spam classification and organization program\". In Learning for Text Categorization - Papers from the AAAI Workshop, pp. 95-98, Madison, Wisconsin. AAAI Technical Report WS-98-05.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "C4.5: Programs for Machine Learning",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Quinlan",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quinlan, J.R. (1993). C4.5: Programs for Machine Learning, Morgan Kaufmann, San Mateo, California.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A Bayesian approach to filtering junk e-mail",
"authors": [
{
"first": "M",
"middle": [],
"last": "Sahami",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Dumais",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Heckerman",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Horvitz",
"suffix": ""
}
],
"year": 1998,
"venue": "Learning for Text Categorization -Papers from the AAAI Workshop",
"volume": "",
"issue": "",
"pages": "55--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sahami, M., Dumais, S., Heckerman, D., and Horvitz, E. (1998). \"A Bayesian approach to filtering junk e-mail\". In Learning for Text Categorization - Papers from the AAAI Workshop, pp. 55-62, Madison, Wisconsin. AAAI Technical Report WS-98-05.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A memory-based approach to anti-spam filtering",
"authors": [
{
"first": "G",
"middle": [],
"last": "Sakkis",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Paliouras",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Karkaletsis",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Spyropoulos",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Stamatopoulos",
"suffix": ""
},
{
"first": "R",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2000,
"venue": "BoosTexter: a boosting-based system for text categorization",
"volume": "39",
"issue": "",
"pages": "135--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sakkis, G., Androutsopoulos, I., Paliouras, G., Karkaletsis, V., Spyropoulos, C.D., and Stamatopoulos, P. (2001) \"A memory-based approach to anti-spam filtering\". NCSR \"Demokritos\" Technical Report, Athens, Greece. Schapire, R.E., and Singer, Y. (2000). \"BoosTexter: a boosting-based system for text categorization\". Machine Learning, 39(2/3):135-168.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A Review and Comparative Evaluation of Feature Weighting Methods for Lazy Learning Algorithms",
"authors": [
{
"first": "F",
"middle": [],
"last": "Sebastiani",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Wettschereck",
"suffix": ""
},
{
"first": "W",
"middle": [
"D"
],
"last": "Aha",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mohri",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastiani, F. (2001). Machine Learning in Automated Text Categorization. Revised version of Technical Report IEI-B4-31-1999, Istituto di Elaborazione dell'Informazione, Consiglio Nazionale delle Ricerche, Pisa, Italy. Wettschereck, D., Aha, W. D., and Mohri, T. (1995). A Review and Comparative Evaluation of Feature Weighting Methods for Lazy Learning Algorithms. Technical Report AIC-95-012, Naval Research Laboratory, Navy Center for Applied Research in Artificial Intelligence, Washington, D.C.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Stacked Generalization",
"authors": [
{
"first": "D",
"middle": [],
"last": "Wolpert",
"suffix": ""
}
],
"year": 1992,
"venue": "Neural Networks",
"volume": "5",
"issue": "2",
"pages": "241--260",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wolpert, D. (1992). \"Stacked Generalization\". Neural Networks, 5(2):241-260.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"type_str": "table",
"num": null,
"html": null,
"content": "LL_i of the other two parts. Each vector x = (x_1, ..., x_m) of LS_i was enhanced with the members' confidences W_S1(x) and W_S2(x) that x is spam, yielding an enhanced LS_i' with vectors x' = (x_1, ..., x_m, W_S1(x), W_S2(x)). At the end of the 3-fold cross-validation, the president was trained on L_j' = LS_1' \u222a LS_2' \u222a LS_3'. It was then tested on S_j, after retraining the members on the entire L_j and enhancing the vectors of S_j with the predictions of the members. The second stacking version that we explored, dubbed holdout stacking, is similar to Kohavi's (1995) holdout accuracy estimation. It differs from the first version in two ways: the members are not retrained on the entire L_j; and each partitioning of L_j into LL_i and LS_i leads to a different president, trained on LS_i', which is then tested on the enhanced S_j. Hence, there are 3\u00d710",
"text": "was reserved, and the members were trained on the union"
},
"TABREF6": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Best configurations of k-NN per usage scenario and the corresponding performance."
},
"TABREF7": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Best configurations of NB per usage scenario and the corresponding performance."
},
"TABREF8": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Best configurations of cross-validation stacking per usage scenario and the corresponding performance."
},
"TABREF9": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Best configurations of holdout stacking per usage scenario and the corresponding performance."
},
"TABREF10": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Analysis of the common errors of the best configurations of NB and k-NN per scenario (\u03bb) and message class."
}
}
}
}