label (class label, 2 classes) | group (string, 5 classes) | text (string, 3 to 205k chars)
---|---|---|
1 (ham)
| easy_ham | "[Skip Montanaro]\n> I'm listed as a developer on SF and have the spambayes CVS module checked\n> out using my SF username, but I'm unable to write to the repository. CVS\n> complains:\n>\n> % cvs add unheader.py\n> cvs [server aborted]: \"add\" requires write access to the repository\n>\n> Any thoughts?\n\nNot really. Try again? About half the developers on the spambayes project\nwere missing some permission or other, so I ran thru all of them and checked\nevery damned box and clicked on every damn dropdown list I could find. As\nfar as SF is concerned, you're all sitting on God's Right Hand now, so if it\nstill doesn't work I suggest you upgrade to Win98 <wink>.\n\n" |
1 (ham)
| easy_ham | "\n Tim> About half the developers on the spambayes project were missing\n Tim> some permission or other, so I ran thru all of them and checked\n Tim> every damned box and clicked on every damn dropdown list I could\n Tim> find. As far as SF is concerned, you're all sitting on God's Right\n Tim> Hand now, so if it still doesn't work I suggest you upgrade to\n Tim> Win98 <wink>.\n\nTime to upgrade I guess. :-(\n\n % cvs add unheader.py \n cvs [server aborted]: \"add\" requires write access to the repository\n\nI'll try a checkout into a new directory...\n\nS\n" |
1 (ham)
| easy_ham | "\n >> ... choose something with a different prefix than \"X-Spam-\" so that\n >> people don't confuse it with SpamAssassin ...\n\n Neale> How about X-Spambayes-Disposition (or X-Hammie-Disposition if\n Neale> there'll be other classifier front-ends)?\n\nI kinda like \"hammie\". Your front end was there first, so I suspect it will\nrule the front end roost.\n\nS\n\n" |
1 (ham)
| easy_ham | "\n Skip> I'll try a checkout into a new directory...\n\nWhich didn't help. At the very least I think that means it's time for\nbed...\n\nSkip\n\n" |
1 (ham)
| easy_ham | "\n >> Accordingly, I wrote unheader.py, which is mostly a ripoff of\n >> something someone else posted to python-dev or c.l.py within the last\n >> week or so to strip out SA-generated headers.\n\n Tim> Unless I've grown senile tonight, you got it from Anthony to begin\n Tim> with. Please check it in to project, and add a short blurb to\n Tim> README.txt!\n\nRaring to go, once I can write to CVS...\n\nS\n" |
1 (ham)
| easy_ham | "> > Maybe. I batch messages using fetchmail (don't ask why), and adding\n> > .4 seconds per message for a batch of 50 (not untypical) feels like a\n> > real wait to me...\n> \n> Yeesh. Sounds like what you need is something to kick up once and score\n> an entire mailbox.\n> \n> Wait a second... So *that's* why you wanted -u.\n> \n> If you can spare the memory, you might get better performance in this\n> case using the pickle store, since it only has to go to disk once (but\n> boy, does it ever go to disk!) I can't think of anything obvious to\n> speed things up once it's all loaded into memory, though. That's\n> profiler territory, and profiling is exactly the kind of optimization\n> I just said I wasn't going to do :)\n\nWe could have a server mode (someone described this as an SA option).\n\n--Guido van Rossum (home page: http://www.python.org/~guido/)\n" |
1 (ham)
| easy_ham | ">>>>> \"TP\" == Tim Peters <tim.one@comcast.net> writes:\n\n TP> [Jeremy Hylton[\n >> The total collections are 1100 messages. I trained with 1100/5\n >> messages.\n\n TP> I'm reading this now as that you trained on about 220 spam and\n TP> about 220 ham. That's less than 10% of the sizes of the\n TP> training sets I've been using. Please try an experiment: train\n TP> on 550 of each, and test once against the other 550 of each.\n\nThis helps a lot! Here are results with the stock tokenizer:\n\nTraining on <mbox: /home/jeremy/Mail/INBOX 0> & <mbox: /home/jeremy/Mail/spam 8>\n ... 644 hams & 557 spams\n 0.000 10.413\n 1.398 6.104\n 1.398 5.027\nTraining on <mbox: /home/jeremy/Mail/INBOX 0> & <mbox: /home/jeremy/Mail/spam 0>\n ... 644 hams & 557 spams\n 0.000 8.259\n 1.242 2.873\n 1.242 5.745\nTraining on <mbox: /home/jeremy/Mail/INBOX 2> & <mbox: /home/jeremy/Mail/spam 3>\n ... 644 hams & 557 spams\n 1.398 5.206\n 1.398 4.488\n 0.000 9.336\nTraining on <mbox: /home/jeremy/Mail/INBOX 2> & <mbox: /home/jeremy/Mail/spam 0>\n ... 644 hams & 557 spams\n 1.553 5.206\n 1.553 5.027\n 0.000 9.874\ntotal false pos 139 5.39596273292\ntotal false neg 970 43.5368043088\n\nAnd results from the tokenizer that looks at all headers except Date,\nReceived, and X-From_:\n\nTraining on <mbox: /home/jeremy/Mail/INBOX 0> & <mbox: /home/jeremy/Mail/spam 8>\n ... 644 hams & 557 spams\n 0.000 7.540\n 0.932 4.847\n 0.932 3.232\nTraining on <mbox: /home/jeremy/Mail/INBOX 0> & <mbox: /home/jeremy/Mail/spam 0>\n ... 644 hams & 557 spams\n 0.000 7.181\n 0.621 2.873\n 0.621 4.847\nTraining on <mbox: /home/jeremy/Mail/INBOX 2> & <mbox: /home/jeremy/Mail/spam 3>\n ... 644 hams & 557 spams\n 1.087 4.129\n 1.087 3.052\n 0.000 6.822\nTraining on <mbox: /home/jeremy/Mail/INBOX 2> & <mbox: /home/jeremy/Mail/spam 0>\n ... 644 hams & 557 spams\n 0.776 3.411\n 0.776 3.411\n 0.000 6.463\ntotal false pos 97 3.76552795031\ntotal false neg 738 33.1238779174\n\nIs it safe to conclude that avoiding any cleverness with headers is a\ngood thing?\n\n TP> Do that a few times making a random split each time (it won't be\n TP> long until you discover why directories of individual files are\n TP> a lot easier to work -- e.g., random.shuffle() makes this kind\n TP> of thing trivial for me).\n\nYou haven't looked at mboxtest.py, have you? I'm using\nrandom.shuffle(), too. You don't need to have many files to make an\narbitrary selection of messages from an mbox.\n\nI'll report some more results when they're done.\n\nJeremy\n\n\n" |
1 (ham)
| easy_ham | "> TP> I'm reading this now as that you trained on about 220 spam and\n> TP> about 220 ham. That's less than 10% of the sizes of the\n> TP> training sets I've been using. Please try an experiment: train\n> TP> on 550 of each, and test once against the other 550 of each.\n\n[Jeremy]\n> This helps a lot!\n\nPossibly. I checked in a change to classifier.py overnight (getting rid of\nMINCOUNT) that gave a major improvement in the f-n rate too, independent of\ntokenization.\n\n> Here are results with the stock tokenizer:\n\nUnsure what \"stock tokenizer\" means to you. For example, it might mean\ntokenizer.tokenize, or mboxtest.MyTokenizer.tokenize.\n\n> Training on <mbox: /home/jeremy/Mail/INBOX 0> & <mbox:\n> /home/jeremy/Mail/spam 8>\n> ... 644 hams & 557 spams\n> 0.000 10.413\n> 1.398 6.104\n> 1.398 5.027\n> Training on <mbox: /home/jeremy/Mail/INBOX 0> & <mbox:\n> /home/jeremy/Mail/spam 0>\n> ... 644 hams & 557 spams\n> 0.000 8.259\n> 1.242 2.873\n> 1.242 5.745\n> Training on <mbox: /home/jeremy/Mail/INBOX 2> & <mbox:\n> /home/jeremy/Mail/spam 3>\n> ... 644 hams & 557 spams\n> 1.398 5.206\n> 1.398 4.488\n> 0.000 9.336\n> Training on <mbox: /home/jeremy/Mail/INBOX 2> & <mbox:\n> /home/jeremy/Mail/spam 0>\n> ... 644 hams & 557 spams\n> 1.553 5.206\n> 1.553 5.027\n> 0.000 9.874\n> total false pos 139 5.39596273292\n> total false neg 970 43.5368043088\n\nNote that those rates remain much higher than I got using just 220 of ham\nand 220 of spam. That remains A Mystery.\n\n> And results from the tokenizer that looks at all headers except Date,\n> Received, and X-From_:\n\nUnsure what that means too. For example, \"looks at\" might mean you enabled\nAnthony's count-them gimmick, and/or that you're tokenizing them yourself,\nand/or ...?\n\n> Training on <mbox: /home/jeremy/Mail/INBOX 0> & <mbox:\n> /home/jeremy/Mail/spam 8>\n> ... 644 hams & 557 spams\n> 0.000 7.540\n> 0.932 4.847\n> 0.932 3.232\n> Training on <mbox: /home/jeremy/Mail/INBOX 0> & <mbox:\n> /home/jeremy/Mail/spam 0>\n> ... 644 hams & 557 spams\n> 0.000 7.181\n> 0.621 2.873\n> 0.621 4.847\n> Training on <mbox: /home/jeremy/Mail/INBOX 2> & <mbox:\n> /home/jeremy/Mail/spam 3>\n> ... 644 hams & 557 spams\n> 1.087 4.129\n> 1.087 3.052\n> 0.000 6.822\n> Training on <mbox: /home/jeremy/Mail/INBOX 2> & <mbox:\n> /home/jeremy/Mail/spam 0>\n> ... 644 hams & 557 spams\n> 0.776 3.411\n> 0.776 3.411\n> 0.000 6.463\n> total false pos 97 3.76552795031\n> total false neg 738 33.1238779174\n>\n> Is it safe to conclude that avoiding any cleverness with headers is a\n> good thing?\n\nSince I don't know what you did, exactly, I can't guess. What you seemed to\nshow is that you did *something* clever with headers and that doing so\nhelped (the \"after\" numbers are better than the \"before\" numbers, right?).\nAssuming that what you did was override what's now\ntokenizer.Tokenizer.tokenize_headers with some other routine, and didn't\ncall the base Tokenizer.tokenize_headers at all, then you're missing\ncarefully tested treatment of just a few header fields, but adding many\ndozens of other header fields. There's no question that adding more header\nfields should help; tokenizer.Tokenizer.tokenize_headers doesn't do so only\nbecause my testing corpora are such that I can't add more headers without\ngetting major benefits for bogus reasons.\n\nApart from all that, you said you're skipping Received. By several\naccounts, that may be the most valuable of all the header fields. 
I'm\n(meaning tokenizer.Tokenizer.tokenize_headers) skipping them too for the\nreason explained above. Offline a week or two ago, Neil Schemenauer\nreported good results from this scheme:\n\n ip_re = re.compile(r'(\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3})')\n\n for header in msg.get_all(\"received\", ()):\n for ip in ip_re.findall(header):\n parts = ip.split(\".\")\n for n in range(1, 5):\n yield 'received:' + '.'.join(parts[:n])\n\nThis makes a lot of sense to me; I just checked it in, but left it disabled\nfor now.\n\n" |
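The Received-header scheme Neil reported in the message above is nearly self-contained already. Here is the same generator with the parsing glue filled in so it can run standalone; only the imports and the driver comment are additions.

```python
import re
from email import message_from_string

ip_re = re.compile(r'(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})')

def received_tokens(msg):
    """Yield coarse-to-fine tokens for every IP address found in the
    Received headers, e.g. 'received:64', 'received:64.12', ..."""
    for header in msg.get_all("received", ()):
        for ip in ip_re.findall(header):
            parts = ip.split(".")
            for n in range(1, 5):
                yield 'received:' + '.'.join(parts[:n])

# msg = message_from_string(raw_message_text)
# print(list(received_tokens(msg)))
```

Yielding every prefix of the address lets the classifier learn both whole relays and entire netblocks, whichever turns out to be the stronger clue.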
1 (ham)
| easy_ham | "Here's clarification of why I did:\n\nFirst test results using tokenizer.Tokenizer.tokenize_headers()\nunmodified. \n\nTraining on 644 hams & 557 spams\n 0.000 10.413\n 1.398 6.104\n 1.398 5.027\nTraining on 644 hams & 557 spams\n 0.000 8.259\n 1.242 2.873\n 1.242 5.745\nTraining on 644 hams & 557 spams\n 1.398 5.206\n 1.398 4.488\n 0.000 9.336\nTraining on 644 hams & 557 spams\n 1.553 5.206\n 1.553 5.027\n 0.000 9.874\ntotal false pos 139 5.39596273292\ntotal false neg 970 43.5368043088\n\nSecond test results using mboxtest.MyTokenizer.tokenize_headers().\nThis uses all headers except Received, Data, and X-From_.\n\nTraining on 644 hams & 557 spams\n 0.000 7.540\n 0.932 4.847\n 0.932 3.232\nTraining on 644 hams & 557 spams\n 0.000 7.181\n 0.621 2.873\n 0.621 4.847\nTraining on 644 hams & 557 spams\n 1.087 4.129\n 1.087 3.052\n 0.000 6.822\nTraining on 644 hams & 557 spams\n 0.776 3.411\n 0.776 3.411\n 0.000 6.463\ntotal false pos 97 3.76552795031\ntotal false neg 738 33.1238779174\n\nJeremy\n\n" |
1 (ham)
| easy_ham | "> [Guido]\n> > Perhaps more useful would be if Tim could check in the pickle(s?)\n> > generated by one of his training runs, so that others can see how\n> > Tim's training data performs against their own corpora.\n\n[Tim]\n> I did that yesterday, but seems like nobody bit.\n\nI downloaded and played with it a bit, but had no time to do anything\nsystematic. It correctly recognized a spam that slipped through SA.\nBut it also identified as spam everything in my inbox that had any\nMIME structure or HTML parts, and several messages in my saved 'zope\ngeeks' list that happened to be using MIME and/or HTML.\n\nSo I guess I'll have to retrain it (yes, you told me so :-).\n\n> Just in case <wink>, I\n> uploaded a new version just now. Since MINCOUNT went away, UNKNOWN_SPAMPROB\n> is much less likely, and there's almost nothing that can be pruned away (so\n> the file is about 5x larger now).\n> \n> http://sf.net/project/showfiles.php?group_id=61702\n\nI'll try this when I have time.\n\n--Guido van Rossum (home page: http://www.python.org/~guido/)\n" |
1 (ham)
| easy_ham | "Before we get too far down this road, what do people think of creating a\nspambayes package containing classifier and tokenizer? This is just to\nminimize clutter in site-packages.\n\nSkip\n" |
1 (ham)
| easy_ham | "> Before we get too far down this road, what do people think of creating a\n> spambayes package containing classifier and tokenizer? This is just to\n> minimize clutter in site-packages.\n\nToo early IMO (if you mean to leave the various other tools out of\nit). If and when we package this, perhaps we should use Barry's trick\nfrom the email package for making the package itself the toplevel dir\nof the distribution (rather than requiring an extra directory level\njust so the package can be a subdir of the distro).\n\n--Guido van Rossum (home page: http://www.python.org/~guido/)\n\n" |
1 (ham)
| easy_ham | "[Guido, on the classifier pickle on SF]\n> I downloaded and played with it a bit, but had no time to do anything\n> systematic.\n\nCool!\n\n> It correctly recognized a spam that slipped through SA.\n\nDitto.\n\n> But it also identified as spam everything in my inbox that had any\n> MIME structure or HTML parts, and several messages in my saved 'zope\n> geeks' list that happened to be using MIME and/or HTML.\n\nDo you know why? The strangest implied claim there is that it hates MIME\nindependent of HTML. For example, the spamprob of 'content-type:text/plain'\nin that pickle is under 0.21. 'content-type:multipart/alternative' gets\n0.93, but that's not a killer clue, and one bit of good content will more\nthan cancel it out.\n\nWRT hating HTML, possibilities include:\n\n1. It really had to do with something other than MIME/HTML.\n\n2. These are pure HTML (not multipart/alternative with a text/plain part),\n so that the tags aren't getting stripped. The pickled classifier\n despises all hints of HTML due to its c.l.py heritage.\n\n3. These are multipart/alternative with a text/plain part, but the\n latter doesn't contain the same text as the text/html part (for\n example, as Anthony reported, perhaps the text/plain part just\n says something like \"This is an HMTL message.\").\n\nIf it's #2, it would be easy to add an optional bool argument to tokenize()\nmeaning \"even if it is pure HTML, strip the tags anyway\". In fact, I'd like\nto do that and default it to True. The extreme hatred of HTML on tech lists\nstrikes me as, umm, extreme <wink>.\n\n> So I guess I'll have to retrain it (yes, you told me so :-).\n\nThat would be a different experiment. I'm certainly curious to see whether\nJeremy's much-worse-than-mine error rates are typical or aberrant.\n\n" |
1 (ham)
| easy_ham | "> > But it also identified as spam everything in my inbox that had any\n> > MIME structure or HTML parts, and several messages in my saved 'zope\n> > geeks' list that happened to be using MIME and/or HTML.\n> \n> Do you know why? The strangest implied claim there is that it hates MIME\n> independent of HTML. For example, the spamprob of 'content-type:text/plain'\n> in that pickle is under 0.21. 'content-type:multipart/alternative' gets\n> 0.93, but that's not a killer clue, and one bit of good content will more\n> than cancel it out.\n\nI reran the experiment (with the new SpamHam1.pik, but it doesn't seem\nto make a difference). Here are the clues for the two spams in my\ninbox (in hammie.py's output format, which sorts the clues by\nprobability; the first two numbers are the message number and overall\nprobability; then line-folded):\n\n 66 1.00 S 'facility': 0.01; 'speaker': 0.01; 'stretch': 0.01;\n 'thursday': 0.01; 'young,': 0.01; 'mistakes': 0.12; 'growth':\n 0.85; '>content-type:text/plain': 0.85; 'please': 0.85; 'capital':\n 0.92; 'series': 0.92; 'subject:Don': 0.94; 'companies': 0.96;\n '>content-type:text/html': 0.96; 'fee': 0.96; 'money': 0.96;\n '8:00am': 0.99; '9:00am': 0.99; '>content-type:image/gif': 0.99;\n '>content-type:multipart/alternative': 0.99; 'attend': 0.99;\n 'companies,': 0.99; 'content-type/type:multipart/alternative':\n 0.99; 'content-type:multipart/related': 0.99; 'economy': 0.99;\n 'economy\"': 0.99\n\nThis has 6 content-types as spam clues, only one of which is related\nto HTML, despite there being an HTML alternative (and 12 other spam\nclues, vs. only 6 ham clues). This was an announcement of a public\nevent by our building owners, with a text part that was the same as\nthe HTML (AFAICT). Its language may be spammish, but the content-type\nclues didn't help. (BTW, it makes me wonder about the wisdom of\nkeeping punctuation -- 'economy' and 'economy\"' to me don't seem to\ndeserve two be counted as clues.)\n\n 76 1.00 S '(near': 0.01; 'alexandria': 0.01; 'conn': 0.01;\n 'from:adam': 0.01; 'from:email addr:panix': 0.01; 'poked': 0.01;\n 'thorugh': 0.01; 'though': 0.03; \"i'm\": 0.03; 'reflect': 0.05;\n \"i've\": 0.06; 'wednesday': 0.07; 'content-disposition:inline':\n 0.10; 'contacting': 0.93; 'sold': 0.96; 'financially': 0.98;\n 'prices': 0.98; 'rates': 0.99; 'discount.': 0.99; 'hotel': 0.99;\n 'hotels': 0.99; 'hotels.': 0.99; 'nights,': 0.99; 'plaza': 0.99;\n 'rates,': 0.99; 'rates.': 0.99; 'rooms': 0.99; 'season': 0.99;\n 'stations': 0.99; 'subject:Hotel': 0.99\n\nHere is the full message (Received: headers stripped), with apologies\nto Ziggy and David:\n\n\"\"\"\nDate: Fri, 06 Sep 2002 17:17:13 -0400\nFrom: Adam Turoff <ziggy@panix.com>\nSubject: Hotel information\nTo: guido@python.org, davida@activestate.com\nMessage-id: <20020906211713.GK7451@panix.com>\nMIME-version: 1.0\nContent-type: text/plain; charset=us-ascii\nContent-disposition: inline\nUser-Agent: Mutt/1.4i\n\nI've been looking into hotels. I poked around expedia for availability\nfrom March 26 to 29 (4 nights, wednesday thorugh saturday). \n\nI've also started contacting hotels for group rates; some of the group\nrates are no better than the regular rates, and they require signing a\ncontract with a minimum number of rooms sold (with someone financially\nresponsible for unbooked rooms). 
Most hotels are less than responsive...\n\n\tRadission - Barcelo Hotel (DuPont Circle)\n\t$125/night, $99/weekend\n\n\tState Plaza hotel (Foggy Bottom; near GWU)\n\t$119/night, $99/weekend\n\n\tHilton Silver Spring (Near Metro, in suburban MD)\n\t$99/hight, $74/weekend\n\n\tWindsor Park Hotel\n\tConn Ave, between DuPont Circle/Woodley Park Metro stations\n\t$95/night; needs a car\n\n\tEcono Lodge Alexandria (Near Metro, in suburban VA)\n\t$95/night\n\nThis is a hand picked list; I ignored anything over $125/night, even\nthough there are some really well situated hotels nearby at higher rates.\nAlso, I'm not sure how much these prices reflect an expedia-only\ndiscount. I can't vouch for any of these hotels, either.\n\nI also found out that the down season for DC Hotels are mid-june through\nmid-september, and mid-november through mid-january.\n\nZ.\n\"\"\"\n\nThis one has no MIME structure nor HTML! It even has a\nContent-disposition which is counted as a non-spam clue. It got\nf-p'ed because of the many hospitality-related and money-related\nterms. I'm surprised $125/night and similar aren't clues too. (And\nagain, several spam clues are duplicated with different variations:\n'hotel', 'hotels', 'hotels.', 'subject:Hotel', 'rates,', 'rates.'.\n\n> WRT hating HTML, possibilities include:\n> \n> 1. It really had to do with something other than MIME/HTML.\n> \n> 2. These are pure HTML (not multipart/alternative with a text/plain part),\n> so that the tags aren't getting stripped. The pickled classifier\n> despises all hints of HTML due to its c.l.py heritage.\n> \n> 3. These are multipart/alternative with a text/plain part, but the\n> latter doesn't contain the same text as the text/html part (for\n> example, as Anthony reported, perhaps the text/plain part just\n> says something like \"This is an HMTL message.\").\n> \n> If it's #2, it would be easy to add an optional bool argument to tokenize()\n> meaning \"even if it is pure HTML, strip the tags anyway\". In fact, I'd like\n> to do that and default it to True. The extreme hatred of HTML on tech lists\n> strikes me as, umm, extreme <wink>.\n\nI also looked in more detail at some f-p's in my geeks traffic. The\nfirst one's a doozie (that's the term, right? :-). It has lots of\nHTML clues that are apparently ignored. It was a multipart/mixed with\ntwo parts: a brief text/plain part containing one or two sentences, a\nmondo weird URL:\n\nhttp://x60.deja.com/[ST_rn=ps]/getdoc.xp?AN=687715863&CONTEXT=973121507.1408827441&hitnum=23\n\nand some employer-generated spammish boilerplate; the second part was\nthe HTML taken directly from the above URL. 
Clues:\n\n 43 1.00 S '\"main\"': 0.01; '(later': 0.01; '(lots': 0.01; '--paul':\n 0.01; '1995-2000': 0.01; 'adopt': 0.01; 'apps': 0.01; 'commands':\n 0.01; 'deja.com': 0.01; 'dejanews,': 0.01; 'discipline': 0.01;\n 'duct': 0.01; 'email addr:digicool': 0.01; 'email name:paul':\n 0.01; 'everitt': 0.01; 'exist,': 0.01; 'forwards': 0.01;\n 'framework': 0.01; 'from:email addr:digicool': 0.01; 'from:email\n name:<paul': 0.01; 'from:paul': 0.01; 'height': 0.01;\n 'hodge-podge': 0.01; 'http0:deja': 0.01; 'http0:zope': 0.01;\n 'http1:[st_rn': 0.01; 'http1:comp': 0.01; 'http1:getdoc': 0.01;\n 'http1:ps]': 0.01; 'http>1:22': 0.01; 'http>1:24': 0.01;\n 'http>1:57': 0.01; 'http>1:an': 0.01; 'http>1:author': 0.01;\n 'http>1:fmt': 0.01; 'http>1:getdoc': 0.01; 'http>1:pr': 0.01;\n 'http>1:products': 0.01; 'http>1:query': 0.01; 'http>1:search':\n 0.01; 'http>1:viewthread': 0.01; 'http>1:xp': 0.01; 'http>1:zope':\n 0.01; 'inventing': 0.01; 'jsp': 0.01; 'jsp.': 0.01; 'logic': 0.01;\n 'maps': 0.01; 'neo': 0.01; 'newsgroup,': 0.01; 'object': 0.01;\n 'popup': 0.01; 'probable': 0.01; 'query': 0.01; 'query,': 0.01;\n 'resizes': 0.01; 'servlet': 0.01; 'skip:? 20': 0.01; 'stems':\n 0.01; 'subject:JSP': 0.01; 'sucks!': 0.01; 'templating': 0.01;\n 'tempted': 0.01; 'url.': 0.01; 'usenet': 0.01; 'usenet,': 0.01;\n 'wrote': 0.01; 'x-mailer:mozilla 4.74 [en] (windows nt 5.0; u)':\n 0.01; 'zope': 0.01; '#000000;': 0.99; '#cc0000;': 0.99;\n '#ff3300;': 0.99; '#ff6600;': 0.99; '#ffffff;': 0.99; '©':\n 0.99; '>': 0.99; ' ': 0.99; '"no': 0.99;\n '.med': 0.99; '.small': 0.99; '0pt;': 0.99; '0px;': 0.99; '10px;':\n 0.99; '11pt;': 0.99; '12px;': 0.99; '18pt;': 0.99; '18px;': 0.99;\n '1pt;': 0.99; '2px;': 0.99; '640;': 0.99; '8pt;': 0.99; '<!--':\n 0.99; '</b>': 0.99; '</body>': 0.99; '</head>': 0.99; '</html>':\n 0.99; '</script>': 0.99; '</select>': 0.99; '</span>': 0.99;\n '</style>': 0.99; '</table>': 0.99; '</td>': 0.99; '</td></tr>':\n 0.99; '</tr>': 0.99; '</tr><tr': 0.99; '<b><a': 0.99; '<base':\n 0.99; '<body': 0.99; '<br>': 0.99; '<br> ': 0.99; '<br><a':\n 0.99; '<br><span': 0.99; '<font': 0.99; '<form': 0.99; '<head>':\n 0.99; '<html>': 0.99; '<img': 0.99; '<input': 0.99; '<meta': 0.99;\n '<option': 0.99; '<p>': 0.99; '<p>a': 0.99; '<script>': 0.99;\n '<select': 0.99; '<span': 0.99; '<style>': 0.99; '<table': 0.99;\n '<td': 0.99; '<td>': 0.99; '<td></td>': 0.99; '<td><img': 0.99;\n '<tr': 0.99; '<tr>': 0.99; '<tr><td': 0.99; '<tr><td><img': 0.99;\n 'absolute;': 0.99; 'align=\"left\"': 0.99; 'align=center': 0.99;\n 'align=left': 0.99; 'align=middle': 0.99; 'align=right': 0.99;\n 'align=right>': 0.99; 'alt=\"\"': 0.99; 'bold;': 0.99; 'border=0':\n 0.99; 'border=0>': 0.99; 'color:': 0.99; 'colspan=2': 0.99;\n 'colspan=2>': 0.99; 'colspan=4': 0.99; 'face=\"arial\"': 0.99;\n 'font-family:': 0.99; 'font-size:': 0.99; 'font-weight:': 0.99;\n 'footer': 0.99; 'for<br>': 0.99; 'fucking<br>': 0.99;\n 'height=\"1\"': 0.99; 'height=\"16\"': 0.99; 'height=1': 0.99;\n 'height=12': 0.99; 'height=125': 0.99; 'height=17': 0.99;\n 'height=18': 0.99; 'height=21': 0.99; 'height=4': 0.99;\n 'height=57': 0.99; 'height=60': 0.99; 'height=8': 0.99;\n 'hspace=0': 0.99; 'http0:g': 0.99; 'http0:web2': 0.99; 'http1:0':\n 0.99; 'http1:ads': 0.99; 'http1:d': 0.99; 'http1:page': 0.99;\n 'http1:site': 0.99; 'http>1:article': 0.99; 'http>1:back': 0.99;\n 'http>1:com': 0.99; 'http>1:d': 0.99; 'http>1:gif': 0.99;\n 'http>1:go': 0.99; 'http>1:group': 0.99; 'http>1:http': 0.99;\n 'http>1:post': 0.99; 'http>1:ps': 0.99; 'http>1:site': 0.99;\n 'http>1:st': 
0.99; 'http>1:title': 0.99; 'http>1:yahoo': 0.99;\n 'inc.</a>': 0.99; 'jobs!': 0.99; 'normal;': 0.99; 'nowrap': 0.99;\n 'nowrap>': 0.99; 'nowrap><font': 0.99; 'padding:': 0.99;\n 'rowspan=2': 0.99; 'rowspan=3': 0.99; 'servlets,': 0.99;\n 'size=15': 0.99; 'size=35': 0.99; 'skip:< 10': 0.99; 'skip:b 60':\n 0.99; 'skip:h 110': 0.99; 'skip:h 170': 0.99; 'skip:h 200': 0.99;\n 'skip:h 240': 0.99; 'skip:h 250': 0.99; 'skip:h 290': 0.99;\n 'skip:v 40': 0.99; 'solid;': 0.99; 'text=#000000': 0.99; 'to<br>':\n 0.99; 'type=\"image\"': 0.99; 'type=\"text\"': 0.99; 'type=hidden':\n 0.99; 'type=image': 0.99; 'type=radio': 0.99; 'type=submit': 0.99;\n 'type=text': 0.99; 'valign=top': 0.99; 'valign=top>': 0.99;\n 'value=\"\">': 0.99; 'visibility:': 0.99; 'width:': 0.99;\n 'width=\"33\"': 0.99; 'width=1': 0.99; 'width=100%': 0.99;\n 'width=100%>': 0.99; 'width=12': 0.99; 'width=125': 0.99;\n 'width=130': 0.99; 'width=137': 0.99; 'width=2': 0.99; 'width=20':\n 0.99; 'width=25': 0.99; 'width=4': 0.99; 'width=468': 0.99;\n 'width=6': 0.99; 'width=72': 0.99; 'works<br>': 0.99\n\nThe second f-p had the same structure (and sender :-); the third f-p\nhad the same structure and a different sender. Ditto the fifth, sixth. (Not posting clues for\nbrevity.)\n\nThe fourth was different: plaintext with one very short sentence and a\nURL. Clues:\n\n 300 1.00 S 'from:email addr:digicool': 0.01; 'http1:news': 0.24;\n 'from:email addr:com>': 0.32; 'from:tres': 0.50; 'http>1:1114digi':\n 0.50; 'proto:http': 0.50; 'subject:Geeks': 0.50; 'x-mailer:mozilla\n 4.75 [en] (x11; u; linux 2.2.14-5.0smp i686)': 0.50; 'take': 0.54;\n 'bool:noorg': 0.61; 'http0:com': 0.66; 'skip:h 50': 0.83;\n 'http>1:htm': 0.90; 'subject:Software': 0.96; 'http>1:business':\n 0.99; 'http>1:local': 0.99; 'subject:firm': 0.99; 'us:': 0.99\n\nThe seventh was similar.\n\nI scanned a bunch more until I got bored, and most of them were either\nof the first form (brief text with URL followed by quoted HTML from\nwebsite) or the second (brief text with one or more URLs).\n\nIt's up to you to decide what to call this, but I think these are none\nof your #1, #2 or #3 (they're close to #3, but all are multipart/mixed\nrather than multipart/alternative).\n\n> > So I guess I'll have to retrain it (yes, you told me so :-).\n> \n> That would be a different experiment. I'm certainly curious to see whether\n> Jeremy's much-worse-than-mine error rates are typical or aberrant.\n\nIt's possible that the corpus you've trained on is more homogeneous\nthan you thought.\n\n--Guido van Rossum (home page: http://www.python.org/~guido/)\n" |
1 (ham)
| easy_ham | "URL: http://www.newsisfree.com/click/-2,8655706,215/\nDate: 2002-10-08T03:31:00+01:00\n\nBBC reporter Donal MacIntyre wins high profile libel case against police.\n\n\n" |
1 (ham)
| easy_ham | "[Guido]\n> I *meant* to say that they were 0.99 clues cancelled out by 0.01\n> clues. But that's wrong too! It looks I haven't grokked this part of\n> your code yet; this one has way more than 16 clues, and it seems the\n> classifier basically ended up counting way more 0.99 than 0.01 clues,\n> and no others made it into the list. I thought it was looking for\n> clues with values in between; apparently it found none that weren't\n> exactly 0.5?\n\nThere's a brief discussion of this before the definition of\nMAX_DISCRIMINATORS. All clues with prob MIN_SPAMPROB and MAX_SPAMPROB are\nsaved in min and max lists, and all other clues are fed into the nbest heap.\nThen the shorter of the min and max lists cancels out the same number of\nclues in the longer list. Whatever remains of the longer list (if anything)\nis then fed into the nbest heap too, but no more than MAX_DISCRIMINATORS of\nthem. In no case do more than MAX_DISCRIMINATORS clues enter into the final\nprobability calculation, but all of the min and max lists go into the list\nof clues (else you'd have no clue that massive cancellation was occurring;\nand massive cancellation may yet turn out to be a hook to signal that manual\nreview is needed). In your specific case, the excess of clues in the longer\nMAX_SPAMPROB list pushed everything else out of the nbest heap, and that's\nwhy you didn't see anything other than 0.01 and 0.99.\n\nBefore adding these special lists, the outcome when faced with many 0.01 and\n0.99 clues was too often a coin toss: whichever flavor just happened to\nappear MAX_DISCRIMINATORS//2 + 1 times first determined the final outcome.\n\n>> That sure sets the record for longest list of cancelling extreme clues!\n\n> This happened to be the longest one, but there were quite a few\n> similar ones.\n\nI just beat it <wink>: a tokenization scheme that folds case, and ignores\npunctuation, and strips a trailing 's' from words, and saves both word\nbigrams and word unigrams, turned up a low-probability very long spam with a\nlist of 410 0.01 clues and 125 0.99 clues! Yikes.\n\n> I wonder if there's anything we can learn from looking at the clues and\nthe\n> HTML. It was heavily marked-up HTML, with ads in the sidebar, but the\nbody\n> text was a serious discussion of \"OO and soft coding\" with lots of highly\n> technical words as clues (including Zope and ZEO).\n\nNo matter how often it says Zope, it gets only one 0.01 clue from doing so.\nDitto for ZEO. In contrast, HTML markup has many unique \"words\" that get\n0.99. BTW, this is a clear case where the assumption of\nconditionally-independent word probabilities is utterly bogus -- e.g., the\nprobability that \"<body>\" appears in a message is highly correlated with the\nprob of \"<br>\" appearing. By treating them as independent, naive Bayes\ngrossly misjudges the probability that both appear, and the only thing you\nget in return is something that can actually be computed <wink>.\n\nRead the \"What about HTML?\" section in tokenizer.py. From the very start,\nI've been investigating what would work best for the mailing lists hosted at\npython.org, and HTML decorations have so far been too strong a clue to\njustify ignoring it in that specific context. 
I haven't done anything\ngeared toward personal email, including the case of non-mailing-list email\nthat happens to go through python.org.\n\nI'd prefer to strip HTML tags from everything, but last time I tried that it\nstill had bad effects on the error rates in my corpora (the full test\nresults with and without HTML tag stripping is included in the \"What about\nHTML?\" comment block). But as the comment block also says,\n\n# XXX So, if another way is found to slash the f-n rate, the decision here\n# XXX not to strip HTML from HTML-only msgs should be revisited.\n\nand we've since done several things that gave significant f-n rate\nreductions. I should test that again now.\n\n> Are there any minable-but-unmined header lines in your corpus left?\n\nAlmost all of them -- apart from MIME decorations that appear in both\nheaders and bodies (like Content-Type), the *only* header lines the base\ntokenizer looks at now are Subject, From, X-Mailer, and Organization.\n\n> Or do we have to start with a different corpus before we can make\n> progress there?\n\nI would need different data, yes. My ham is too polluted with Mailman\nheader decorations (which I may or may not be able to clean out, but fudging\nthe data is a Mortal Sin and I haven't changed a byte so far), and my spam\ntoo polluted with header clues about the fellow who collected it. In\nparticular I have to skip To and Received headers now, and I suspect they're\ngoing to be very valuable in real life (for example, I don't even catch\n\"undisclosed recipients\" in the To header now!).\n\n> ...\n> No, sorry. These were all of the following structure:\n>\n> multipart/mixed\n> text/plain (brief text plus URL(s))\n> text/html (long HTML copied from website)\n\nAh! That explains why the HTML tags didn't get stripped. I'd again offer\nto add an optional argument to tokenize() so that they'd get stripped here\ntoo, but if it gets glossed over a third time that would feel too much like\na loss <wink>.\n\n>> This seems confused: Jeremy didn't use my trained classifier pickle,\n>> he trained his own classifier from scratch on his own corpora.\n>> ...\n\n> I think it's still corpus size.\n\nI reported on tests I ran with random samples of 220 spams and 220 hams from\nmy corpus (that means training on sets of those sizes as well as predicting\non sets of those sizes), and while that did harm the error rates, the error\nrates I saw were still much better than Jeremy reported when using 500 of\neach.\n\n\nAh, a full test run just finished, on the\n\n tokenization scheme that folds case, and ignores punctuation, and strips\na\n trailing 's' from words, and saves both word bigrams and word unigrams\n\nThis is the code:\n\n # Tokenize everything in the body.\n lastw = ''\n for w in word_re.findall(text):\n n = len(w)\n # Make sure this range matches in tokenize_word().\n if 3 <= n <= 12:\n if w[-1] == 's':\n w = w[:-1]\n yield w\n if lastw:\n yield lastw + w\n lastw = w + ' '\n\n elif n >= 3:\n lastw = ''\n for t in tokenize_word(w):\n yield t\n\nwhere\n\n word_re = re.compile(r\"[\\w$\\-\\x80-\\xff]+\")\n\nThis at least doubled the process size over what's done now. 
It helped the\nf-n rate significantly, but probably hurt the f-p rate (the f-p rate is too\nlow with only 4000 hams per run to be confident about changes of such small\n*absolute* magnitude -- 0.025% is a single message in the f-p table):\n\nfalse positive percentages\n 0.000 0.000 tied\n 0.000 0.075 lost +(was 0)\n 0.050 0.125 lost +150.00%\n 0.025 0.000 won -100.00%\n 0.075 0.025 won -66.67%\n 0.000 0.050 lost +(was 0)\n 0.100 0.175 lost +75.00%\n 0.050 0.050 tied\n 0.025 0.050 lost +100.00%\n 0.025 0.000 won -100.00%\n 0.050 0.125 lost +150.00%\n 0.050 0.025 won -50.00%\n 0.050 0.050 tied\n 0.000 0.025 lost +(was 0)\n 0.000 0.025 lost +(was 0)\n 0.075 0.050 won -33.33%\n 0.025 0.050 lost +100.00%\n 0.000 0.000 tied\n 0.025 0.100 lost +300.00%\n 0.050 0.150 lost +200.00%\n\nwon 5 times\ntied 4 times\nlost 11 times\n\ntotal unique fp went from 13 to 21\n\nfalse negative percentages\n 0.327 0.218 won -33.33%\n 0.400 0.218 won -45.50%\n 0.327 0.218 won -33.33%\n 0.691 0.691 tied\n 0.545 0.327 won -40.00%\n 0.291 0.218 won -25.09%\n 0.218 0.291 lost +33.49%\n 0.654 0.473 won -27.68%\n 0.364 0.327 won -10.16%\n 0.291 0.182 won -37.46%\n 0.327 0.254 won -22.32%\n 0.691 0.509 won -26.34%\n 0.582 0.473 won -18.73%\n 0.291 0.255 won -12.37%\n 0.364 0.218 won -40.11%\n 0.436 0.327 won -25.00%\n 0.436 0.473 lost +8.49%\n 0.218 0.218 tied\n 0.291 0.255 won -12.37%\n 0.254 0.364 lost +43.31%\n\nwon 15 times\ntied 2 times\nlost 3 times\n\ntotal unique fn went from 106 to 94\n\n" |
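The min/max cancellation Tim describes at the top of this message is easier to see in code. The sketch below is an illustrative reconstruction, not classifier.py itself; MAX_DISCRIMINATORS is set to an arbitrary value here and the heap handling is simplified.

```python
import heapq

MIN_SPAMPROB = 0.01
MAX_SPAMPROB = 0.99
MAX_DISCRIMINATORS = 16   # illustrative; classifier.py defines the real value

def select_clues(probs):
    """Extreme clues cancel pairwise; whatever remains of the longer
    extreme list plus all middling clues then compete for at most
    MAX_DISCRIMINATORS slots, strongest (farthest from 0.5) first."""
    mins = [p for p in probs if p <= MIN_SPAMPROB]
    maxs = [p for p in probs if p >= MAX_SPAMPROB]
    middle = [p for p in probs if MIN_SPAMPROB < p < MAX_SPAMPROB]
    ncancel = min(len(mins), len(maxs))           # pairwise cancellation
    survivors = mins[ncancel:] + maxs[ncancel:]   # at most one list is non-empty
    return heapq.nlargest(MAX_DISCRIMINATORS, middle + survivors,
                          key=lambda p: abs(p - 0.5))
```

Feed this a glut of 0.99 clues and a handful of 0.01 clues and the surviving extremes crowd every middling clue out of the heap, which is why Guido's clue lists showed nothing but 0.01 and 0.99.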
1 (ham)
| easy_ham | "[Tim]\n> ...\n> I'd prefer to strip HTML tags from everything, but last time I\n> tried that it still had bad effects on the error rates in my\n> corpora (the full test results with and without HTML tag stripping\n> is included in the \"What about HTML?\" comment block). But as the\n> comment block also says,\n>\n> # XXX So, if another way is found to slash the f-n rate, the decision here\n> # XXX not to strip HTML from HTML-only msgs should be revisited.\n>\n> and we've since done several things that gave significant f-n rate\n> reductions. I should test that again now.\n\nI did so. Alas, stripping HTML tags from all text still hurts the f-n rate\nin my test data:\n\nfalse positive percentages\n 0.000 0.000 tied\n 0.000 0.000 tied\n 0.050 0.075 lost +50.00%\n 0.025 0.025 tied\n 0.075 0.025 won -66.67%\n 0.000 0.000 tied\n 0.100 0.100 tied\n 0.050 0.075 lost +50.00%\n 0.025 0.025 tied\n 0.025 0.000 won -100.00%\n 0.050 0.075 lost +50.00%\n 0.050 0.050 tied\n 0.050 0.025 won -50.00%\n 0.000 0.000 tied\n 0.000 0.000 tied\n 0.075 0.075 tied\n 0.025 0.025 tied\n 0.000 0.000 tied\n 0.025 0.025 tied\n 0.050 0.050 tied\n\nwon 3 times\ntied 14 times\nlost 3 times\n\ntotal unique fp went from 13 to 11\n\nfalse negative percentages\n 0.327 0.400 lost +22.32%\n 0.400 0.400 tied\n 0.327 0.473 lost +44.65%\n 0.691 0.654 won -5.35%\n 0.545 0.473 won -13.21%\n 0.291 0.364 lost +25.09%\n 0.218 0.291 lost +33.49%\n 0.654 0.654 tied\n 0.364 0.473 lost +29.95%\n 0.291 0.327 lost +12.37%\n 0.327 0.291 won -11.01%\n 0.691 0.654 won -5.35%\n 0.582 0.655 lost +12.54%\n 0.291 0.400 lost +37.46%\n 0.364 0.436 lost +19.78%\n 0.436 0.582 lost +33.49%\n 0.436 0.364 won -16.51%\n 0.218 0.291 lost +33.49%\n 0.291 0.400 lost +37.46%\n 0.254 0.327 lost +28.74%\n\nwon 5 times\ntied 2 times\nlost 13 times\n\ntotal unique fn went from 106 to 122\n\nLast time I tried this (see tokenizer.py comments), the f-n rate after\nstripping tags ranged from 0.982% to 1.781%, with a median of about 1.34%,\nso we've made tons of progress on the f-n rate since then. But the mere\npresence of HTML tags still remains a significant clue for c.l.py traffic,\nso I'm left with the same comment:\n\n> # XXX So, if another way is found to slash the f-n rate, the decision here\n> # XXX not to strip HTML from HTML-only msgs should be revisited.\n\nIf we want to take the focus of this away from c.l.py traffic, I can't say\nwhat effect HTML stripping would have (I don't have suitable test data to\nmeasure that on).\n\n" |
1 (ham)
| easy_ham | "\n >> Before we get too far down this road, what do people think of\n >> creating a spambayes package containing classifier and tokenizer?\n >> This is just to minimize clutter in site-packages.\n\n Guido> Too early IMO (if you mean to leave the various other tools out\n Guido> of it).\n\nWell, I mentioned classifier and tokenize only because I thought they were\nthe only importable modules. The rest represent script-level code, right?\n\n Guido> If and when we package this, perhaps we should use Barry's trick\n Guido> from the email package for making the package itself the toplevel\n Guido> dir of the distribution (rather than requiring an extra directory\n Guido> level just so the package can be a subdir of the distro).\n\nThat would be perfect. I tried in the naive way last night, but wound up\nwith all .py files in the package, which wasn't my intent.\n\nSkip\n\n\n" |
1 (ham)
| easy_ham | "These results are from timtest.py. I've got three sets of spam and ham\nwith about 500 messages in each set. Here's what happens when I enable\nmy latest \"received\" header code:\n\n false positive percentages\n 0.187 0.187 tied\n 0.749 0.562 won -24.97%\n 0.780 0.585 won -25.00%\n\n won 2 times\n tied 1 times\n lost 0 times\n\n total unique fp went from 19 to 17\n\n false negative percentages\n 2.072 1.318 won -36.39%\n 2.448 1.318 won -46.16%\n 0.574 0.765 lost +33.28%\n\n won 2 times\n tied 0 times\n lost 1 times\n\n total unique fn went from 43 to 28\n\nAnthony's header counting code does not seem to help.\n\n Neil\n" |
1 (ham)
| easy_ham | "[Neil Schemenauer]\n> These results are from timtest.py. I've got three sets of spam and ham\n> with about 500 messages in each set. Here's what happens when I enable\n> my latest \"received\" header code:\n\nIf you've still got the summary files, please cvs up and try running cmp.py\nagain -- in the process of generalizing cmp.py, you managed to make it skip\nhalf the lines <wink>. That is, if you've got N sets, you *should* get\nN**2-N pairs for each error rate. You have 3 sets, so you should get 6\npairs of f-n rates and 6 pairs of f-p rates.\n\n> false positive percentages\n> 0.187 0.187 tied\n> 0.749 0.562 won -24.97%\n> 0.780 0.585 won -25.00%\n>\n> won 2 times\n> tied 1 times\n> lost 0 times\n>\n> total unique fp went from 19 to 17\n>\n> false negative percentages\n> 2.072 1.318 won -36.39%\n> 2.448 1.318 won -46.16%\n> 0.574 0.765 lost +33.28%\n>\n> won 2 times\n> tied 0 times\n> lost 1 times\n>\n> total unique fn went from 43 to 28\n\nLooks promising! Getting 6 lines of output for each block would give a\nclearer picture, of course.\n\n> Anthony's header counting code does not seem to help.\n\nIt helps my test data too much <wink/sigh>.\n\n" |
1 (ham)
| easy_ham | "Neil trained a classifier using 3 sets with about 500 ham and spam in each.\nWe're missing half his test run results due to a cmp.py bug (since fixed);\nthe \"before custom fiddling\" figures on the 3 reported runs were:\n\n false positive percentages\n 0.187\n 0.749\n 0.780\n total unique fp 19\n\n false negative percentages\n 2.072\n 2.448\n 0.574\n total unique fn 43\n\nThe \"total unique\" figures counts all 6 runs; it's just the individual-run\nfp and fn percentages we're missing for 3 runs.\n\nJeremy reported these \"before custom fiddling\" figures on 4 sets with about\n600 ham and spam in each:\n\n false positive percentages\n 0.000\n 1.398\n 1.398\n 0.000\n 1.242\n 1.242\n 1.398\n 1.398\n 0.000\n 1.553\n 1.553\n 0.000\n total unique fp 139\n\n false negative percentages\n 10.413\n 6.104\n 5.027\n 8.259\n 2.873\n 5.745\n 5.206\n 4.488\n 9.336\n 5.206\n 5.027\n 9.874\n total unique fn 970\n\nSo things are clearly working much better for Neil. Both reported\nsignificant improvements in both f-n and f-p rates by folding in more header\nlines. Neal added Received analysis to the base tokenizer's header\nanalysis, and Jeremy skipped the base tokenizer's header analysis completely\nbut added base-subject-line-like but case-folded tokenization for almost all\nheader lines (excepting only Received, Data, X-From_, and, I *suspect*, all\nthose starting with 'x-vm').\n\nWhen I try 5 random pairs of 500-ham + 500-spam subsets in my test data, I\nsee:\n\n false positive percentages\n 0.000\n 0.000\n 0.200\n 0.000\n 0.200\n 0.000\n 0.200\n 0.000\n 0.000\n 0.200\n 0.400\n 0.000\n 0.200\n 0.000\n 0.200\n 0.400\n 0.000\n 0.400\n 0.200\n 0.600\n total unique fp 10\n\n false negative percentages\n 0.800\n 0.400\n 0.200\n 0.600\n 1.000\n 0.000\n 0.600\n 1.200\n 1.200\n 0.800\n 0.400\n 0.800\n 1.800\n 0.800\n 0.400\n 1.000\n 1.000\n 0.400\n 0.000\n 0.600\n total unique fn 36\n\nThis is much closer to what Neil saw, but still looks better. Another run\non a disjoint 5 random pairs looked much the same; total unique fp rose to\n12 and fn fell to 27; on a third run with another set of disjoint 5 random\npairs, likewise, with fp 12 and fn 40. So I'm pretty confident that it's\nnot going to matter which random subsets of 500 I take from my data.\n\nIt's hard to conclude anything given Jeremy's much worse results. If they\nwere in line with Neil's results, I'd suspect that I've over-tuned the\nalgorithm to statistical quirks in my corpora.\n\n" |
1 (ham)
| easy_ham | "[Tim]\n> One effect of getting rid of MINCOUNT is that it latches on more\n> strongly to rare clues now, and those can be unique to the corpus\n> trained on (e.g., one trained ham says \"gryndlplyx!\", and a followup\n> new ham quotes it).\n\nThis may be a systematic bias in the testing procedure: in real life, msgs\ncome ordered in time. Say there's a thread that spans N messages on c.l.py.\nIn our testing setup, we'll train on a random sampling throughout its whole\nlifetime, and test likewise. New ham \"in the middle\" of this thread gets\nbenefit from that we trained on msgs that appeared both before and *after*\nit in real life. It's quite plausible that the f-p rate would rise without\nthis effect; in real life, at any given time some number of ham threads will\njust be starting their lives, and if they're at all unusual the trained data\nwill know little to nothing about them.\n\n" |
1 (ham)
| easy_ham | "[Neale Pickett]\n> ...\n> If you can spare the memory, you might get better performance in this\n> case using the pickle store, since it only has to go to disk once (but\n> boy, does it ever go to disk!) I can't think of anything obvious to\n> speed things up once it's all loaded into memory, though.\n\nOn my box the current system scores about 50 msgs per second (starting in\nmemory, of course). While that can be a drag while waiting for one of my\nfull test runs to complete (one of those scores a message more than 120,000\ntimes, and trains more than 30,000 times), I've got no urge to do any speed\noptimizations -- if I were using this for my own email, I'd never notice the\ndrag. Guido will bitch like hell about waiting an extra second for his\n50-msg batches to score, but he's the boss so he bitches about everything\n<wink>.\n\n> That's profiler territory, and profiling is exactly the kind of\n> optimization I just said I wasn't going to do :)\n\nI haven't profiled yet, but *suspect* there aren't any egregious hot spots.\n5-gram'ing of long words with high-bit characters is likely overly expensive\n*when* it happens, but it doesn't happen that often, and as an approach to\nnon-English languages it sucks anyway (i.e., there's no point speeding\nsomething that ought to be replaced entirely).\n\n" |
1 (ham)
| easy_ham | "URL: http://www.newsisfree.com/click/-2,8655710,215/\nDate: 2002-10-08T03:30:56+01:00\n\n*World latest: *Hundreds of Palestinians vent their anger as dozens of Israeli \ntanks withdrew after a gruelling three-hour raid on the Gaza strip.\n\n\n" |
1 (ham)
| easy_ham | "[Guido]\n> There seem to be two \"drivers\" for the classifier now: Neale Pickett's\n> hammie.py, and the original GBayes.py. According to the README.txt,\n> GBayes.py hasn't been kept up to date.\n\nIt seemed that way to me when I ripped the classifier out of it -- I don't\nthink anyone has touched it after.\n\n> Is there anything in there that isn't covered by hammie.py?\n\nSomeone else will have to answer that (I don't use GBayes or hammie, at\nleast not yet).\n\n> About the only useful feature of GBayes.py that hammie.py doesn't (yet)\n> copy is -u, which calculates spamness for an entire mailbox. This\n> feature can easily be copied into hammie.py.\n\nThat's been done now, right?\n\n> (GBayes.py also has a large collection of tokenizers; but timtoken.py\n> rules, so I'm not sure how interesting that is now.)\n\nThose tokenizers are truly trivial to rewrite from scratch if they're\ninteresting. The tiny spam/ham collections in GBayes are also worthless\nnow. The \"self test\" feature didn't do anything except print its results;\nTester.py since became doctest'ed and verifies that some basic machinery\nactually delivers what it's supposed to deliver.\n\n> Therefore I propose to nuke GBayes.py, after adding a -u feature.\n\n+1 here.\n\n> Anyone against?\n\n" |
1 (ham)
| easy_ham | "Just curious if subject line capitalization can be used as an indicator.\n\nEither the percentage of characters that are caps..\n\nOr, percentage starting with a capital letter (if number of words > xx)\n\n\n\nBrad Clements, bkc@murkworks.com (315)268-1000\nhttp://www.murkworks.com (315)268-9812 Fax\nAOL-IM: BKClements\n\n" |
1 (ham)
| easy_ham | "Tim Peters wrote:\n> If you've still got the summary files, please cvs up and try running cmp.py\n> again -- in the process of generalizing cmp.py, you managed to make it skip\n> half the lines <wink>.\n\nWoops. I didn't have the summary files so I regenerated them using a\nslightly different set of data. Here are the results of enabling the\n\"received\" header processing:\n\n false positive percentages\n 0.707 0.530 won -25.04%\n 0.873 0.524 won -39.98%\n 0.301 0.301 tied\n 1.047 1.047 tied\n 0.602 0.452 won -24.92%\n 0.353 0.177 won -49.86%\n\n won 4 times\n tied 2 times\n lost 0 times\n\n total unique fp went from 17 to 14 won -17.65%\n\n false negative percentages\n 2.167 1.238 won -42.87%\n 0.969 0.969 tied\n 1.887 1.372 won -27.29%\n 1.616 1.292 won -20.05%\n 1.029 0.858 won -16.62%\n 1.548 1.548 tied\n\n won 4 times\n tied 2 times\n lost 0 times\n\n total unique fn went from 50 to 38 won -24.00%\n\nMy test set is different than Tim's in that all the email was received\nby the same account. Also, my set contains email sent to me, not to\nmailing lists (I use a different addresses for mailing lists). If\npeople cook up more ideas I will be happy to test them.\n\n Neil\n" |
1 (ham)
| easy_ham | "[Tim]\n> ...\n> On my box the current system scores about 50 msgs per second (starting\n> in memory, of course).\n\nThat was a guess. Bothering to get a clock out, it was more like 80 per\nsecond. See? A 60% speedup without changing a thing <wink>.\n\n" |
1 (ham)
| easy_ham | "[Neil Schemenauer]\n> Woops. I didn't have the summary files so I regenerated them using a\n> slightly different set of data. Here are the results of enabling the\n> \"received\" header processing:\n>\n> false positive percentages\n> 0.707 0.530 won -25.04%\n> 0.873 0.524 won -39.98%\n> 0.301 0.301 tied\n> 1.047 1.047 tied\n> 0.602 0.452 won -24.92%\n> 0.353 0.177 won -49.86%\n>\n> won 4 times\n> tied 2 times\n> lost 0 times\n>\n> total unique fp went from 17 to 14 won -17.65%\n>\n> false negative percentages\n> 2.167 1.238 won -42.87%\n> 0.969 0.969 tied\n> 1.887 1.372 won -27.29%\n> 1.616 1.292 won -20.05%\n> 1.029 0.858 won -16.62%\n> 1.548 1.548 tied\n>\n> won 4 times\n> tied 2 times\n> lost 0 times\n>\n> total unique fn went from 50 to 38 won -24.00%\n>\n> My test set is different than Tim's in that all the email was received\n> by the same account. Also, my set contains email sent to me, not to\n> mailing lists (I use a different addresses for mailing lists).\n\nEnabling the Received headers works even better for me <wink>; here's the\nf-n section from a quick run on 500-element subsets:\n\n 0.600 0.200 won -66.67%\n 0.200 0.200 tied\n 0.200 0.000 won -100.00%\n 0.800 0.400 won -50.00%\n 0.400 0.200 won -50.00%\n 0.400 0.000 won -100.00%\n 0.200 0.000 won -100.00%\n 1.000 0.400 won -60.00%\n 0.800 0.200 won -75.00%\n 1.200 0.600 won -50.00%\n 0.400 0.200 won -50.00%\n 2.000 0.800 won -60.00%\n 0.400 0.400 tied\n 1.200 0.600 won -50.00%\n 0.400 0.000 won -100.00%\n 2.000 1.000 won -50.00%\n 0.400 0.000 won -100.00%\n 0.800 0.000 won -100.00%\n 0.000 0.200 lost +(was 0)\n 0.400 0.000 won -100.00%\n\nwon 17 times\ntied 2 times\nlost 1 times\n\ntotal unique fn went from 38 to 15 won -60.53%\n\nA huge improvement, but for wrong reasons ... except not entirely! The most\npowerful discriminator in the whole database on one training set became:\n\n 'received:unknown' 881 0.99\n\nThat's got nothing to do with BruceG, right?\n\n 'received:bfsmedia.com'\n\nwas also a strong spam indicator across all training sets. I'm jealous.\n\n> If people cook up more ideas I will be happy to test them.\n\nNeil, are using your own tokenizer now, or the tokenizer.Tokenizer.tokenize\ngenerator? Whichever, someone who's not afraid of their headers should try\nadding mboxtest.MyTokenizer.tokenize_headers into the mix, once in lieu of\ntokenizer.Tokenizer.tokenize_headers(), and again in addition to it. Jeremy\nreported on just the former.\n\n" |
1 (ham)
| easy_ham | "> I'd prefer to strip HTML tags from everything, but last time I tried\n> that it still had bad effects on the error rates in my corpora\n\nYour corpora are biased in this respect though -- newsgroups have a\nstrong social taboo on posting HTML, but in many people's personal\ninboxes it is quite abundant.\n\nGetting a good ham corpus may prove to be a bigger hurdle than I\nthough! My own saved mail doesn't reflect what I receive, since I\nsave and throw away selectively (much more so than in the past :-).\n\n> > multipart/mixed\n> > text/plain (brief text plus URL(s))\n> > text/html (long HTML copied from website)\n> \n> Ah! That explains why the HTML tags didn't get stripped. I'd again\n> offer to add an optional argument to tokenize() so that they'd get\n> stripped here too, but if it gets glossed over a third time that\n> would feel too much like a loss <wink>.\n\nI'll bite. Sounds like a good idea to strip the HTML in this case;\nI'd like to see how this improves the f-p rate on this corpus.\n\n--Guido van Rossum (home page: http://www.python.org/~guido/)\n" |
1 (ham)
| easy_ham | "[Tim]\n>> I'd prefer to strip HTML tags from everything, but last time I tried\n>> that it still had bad effects on the error rates in my corpora\n\n[Guido]\n> Your corpora are biased in this respect though -- newsgroups have a\n> strong social taboo on posting HTML, but in many people's personal\n> inboxes it is quite abundant.\n\nWe're in violent agreement there: the comments in tokenizer.py say that as\nstrongly as possible, and I've repeated it endlessly here too. But so long\nas I was the only one doing serious testing, it was a dubious idea to make\nthe code maximally clumsy for me to use on the c.l.py task <wink>.\n\n> Getting a good ham corpus may prove to be a bigger hurdle than I\n> though! My own saved mail doesn't reflect what I receive, since I\n> save and throw away selectively (much more so than in the past :-).\n\nYup, the system picks up on *everything* in the tokens. Graham's proposed\n\"delete as ham\" and \"delete as spam\" keys would probably work very well for\nmotivated geeks. But Paul Svensson has pointed out here that they probably\nwouldn't work nearly so well for real people.\n\n>> Ah! That explains why the HTML tags didn't get stripped. I'd again\n>> offer to add an optional argument to tokenize() so that they'd get\n>> stripped here too, but if it gets glossed over a third time that\n>> would feel too much like a loss <wink>.\n\n> I'll bite. Sounds like a good idea to strip the HTML in this case;\n> I'd like to see how this improves the f-p rate on this corpus.\n\nI'll soon check in this change:\n\n def tokenize_body(self, msg, retain_pure_html_tags=False):\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n \"\"\"Generate a stream of tokens from an email Message.\n\n If a multipart/alternative section has both text/plain and text/html\n sections, the text/html section is ignored. This may not be a good\n idea (e.g., the sections may have different content).\n\n HTML tags are always stripped from text/plain sections.\n\n By default, HTML tags are also stripped from text/html sections.\n However, doing so hurts the false negative rate on Tim's\n comp.lang.python tests (where HTML-only messages are almost never\n legitimate traffic). If optional argument retain_pure_html_tags\n is specified and True, HTML tags are retained in text/html sections.\n \"\"\"\n\nYou should do a cvs up and establish a new baseline first, as I checked in a\npure-win change in the wee hours that cut the fp and fn rates in my tests.\n\n" |
1 (ham)
| easy_ham | "URL: http://www.newsisfree.com/click/-2,8655712,215/\nDate: 2002-10-08T03:30:54+01:00\n\n*Medicine and health: *The discovery of the myriad little deaths which lead to \nlife brought the most coveted prize in world medicine to two Britons and an \nAmerican yesterday.\n\n\n" |
1 (ham)
| easy_ham | "\n >> If and when we package this, perhaps we should use Barry's trick ...\n\n Greg> It's not a *trick*! It just requires this\n\n Greg> package_dir = {'spambayes': '.'}\n\n Greg> in the setup script.\n\nThat has the nasty side effect of placing all .py files in the package.\nWhat about obvious executable scripts (like timtest or hammie)? How can I\nkeep them out of the package?\n\nSkip\n\n" |
1 (ham)
| easy_ham | "[Skip Montanaro]\n> That has the nasty side effect of placing all .py files in the package.\n> What about obvious executable scripts (like timtest or hammie)? How can I\n> keep them out of the package?\n\nPut them in a scripts folder?\n\n// m\n\n-\n\n" |
1 (ham)
| easy_ham | "\nBecause I get mail through several different email addresses, I frequently\nget duplicates (or triplicates or more-plicates) of various spam messages.\nIn saving spam for later analysis I haven't always been careful to avoid\nsaving such duplicates.\n\nI wrote a script some time ago to try an minimize the duplicates I see by\ncalculating a loose checksum, but I still have some duplicates. Should I\ndelete the duplicates before training or not? Would people be interested in\nthe script? I'd be happy to extricate it from my local modules and check it\ninto CVS.\n\nSkip\n\n" |
1 (ham)
| easy_ham | ">>>>> \"TP\" == Tim Peters <tim.one@comcast.net> writes:\n\n >> First test results using tokenizer.Tokenizer.tokenize_headers()\n >> unmodified. ... Second test results using\n >> mboxtest.MyTokenizer.tokenize_headers(). This uses all headers\n >> except Received, Data, and X-From_. ...\n\n TP> Try the latter again, but call the base tokenize_headers() too.\n\nSorry. I haven't found the time to try any more test runs. Perhaps\nlater today.\n\nJeremy\n\n" |
1 (ham)
| easy_ham | "\n Guido> Why would we care about installing a few extra files, as long as\n Guido> they're inside a package?\n\nI guess you needn't worry about that. It just doesn't seem \"clean\" to me.\n\nS\n\n" |
1 (ham)
| easy_ham | " >> I wrote a script some time ago to try an minimize the duplicates I\n >> see by calculating a loose checksum, but I still have some\n >> duplicates. Should I delete the duplicates before training or not?\n\n Tim> People just can't stop thinking <wink>. The classifier should work\n Tim> best when trained on a wholly random spattering of real life. If\n Tim> real life contains duplicates, then that's what the classifier\n Tim> should see.\n\nA bit more detail. I get destined for many addresses: skip@pobox.com,\nskip@calendar.com, concerts@musi-cal.com, webmaster@mojam.com, etc. I\noriginally wrote (a slightly different version of) the loosecksum.py script\nI'm about to check in to avoid manually scanning all those presumed spams\nwhich are really identical. Once a message was identified as spam, what I\nrefer to as a loose checksum was computed to try and avoid saving the same\nspam multiple times for later review.\n\n >> Would people be interested in the script? I'd be happy to extricate\n >> it from my local modules and check it into CVS.\n\n Tim> Sure! I think it's relevant, but maybe for another purpose. Paul\n Tim> Svensson is thinking harder about real people <wink> than the rest\n Tim> of us, and he may be able to get use out of approaches that\n Tim> identify closely related spam. For example, some amount of spam is\n Tim> going to end up in the ham training data in real life use, and any\n Tim> sort of similarity score to a piece of known spam may be an aid in\n Tim> finding and purging it.\n\nI'll check it in. Let me know if you find it useful.\n\nSkip\n\n" |
1ham
| easy_ham | "On 09 September 2002, Tim Peters said:\n> > Would people be interested in the script? I'd be happy to extricate\n> > it from my local modules and check it into CVS.\n> \n> Sure! I think it's relevant, but maybe for another purpose. Paul Svensson\n> is thinking harder about real people <wink> than the rest of us, and he may\n> be able to get use out of approaches that identify closely related spam.\n> For example, some amount of spam is going to end up in the ham training data\n> in real life use, and any sort of similarity score to a piece of known spam\n> may be an aid in finding and purging it.\n\nOTOH, look into DCC (Distributed Checksum Clearinghouse,\nhttp://www.rhyolite.com/anti-spam/dcc/), which uses fuzzy checksums.\nIt's quite likely that DCC's checksumming scheme is better than\nsomething any of us would throw together for personal use (no offense,\nSkip!). But I have no personal experience of it.\n\n Greg\n-- \nGreg Ward <gward@python.net> http://www.gerg.ca/\nIf it can't be expressed in figures, it is not science--it is opinion.\n" |
1ham
| easy_ham | "[followups to spambayes@python.org please, unless you're specifically\n concerned about some particular bit of email policy for python.org]\n\nOK, after much fiddling with and tweaking of /etc/exim/exim4.conf and\n/etc/exim/local_scan.py on mail.python.org, I am fairly confident that\nI can start harvesting all incoming email at a moment's notice. For the\nrecord, here's how it all works:\n\n * exim4.conf works almost exactly the same as before if the file\n /etc/exim/harvest does not exist. That is, any \"junk mail\n condition\" that can be detected by Exim ACLs (access control lists)\n is handled entirely in exim4.conf: the message is rejected before it\n ever gets to local_scan.py. This covers such diverse cases as\n \"message from known spammer\" (reject after every RCPT TO command),\n \"no message-id header\", and \"8-bit chars in subject\" (both rejected\n after the message headers/body are read).\n\n The main things I have changed in the absence of /etc/exim/harvest\n are:\n - don't check for 8-bit chars in \"From\" header -- the vast\n majority of hits for this test were bounces from some\n Asian ISP; the remaining hits should be handled by SpamAssassin\n - do header sender verification (ie. ensure that there's a\n verifiable email address in at least one of \"From\", \"Reply-to\",\n and \"Sender\") as late as possible, because it requires DNS\n lookups which can be slow (and can also make messages that\n should have been rejected merely be deferred, if those DNS\n lookups timeout)\n\n * if /etc/exim/harvest exists, then the behaviour of all of those\n ACLs in exim4.conf suddenly changes: instead of rejecting recipients\n or messages, they add an X-reject header to the message. This\n header is purely for internal use; it records the name of the folder\n to which the rejected message should be saved, and also gives the\n SMTP error message which should ultimately be used to reject\n the message.\n\n Thus, those messages will now be seen by local_scan.py, which now\n looks for the X-reject header. If found, it uses the folder name\n specified there to save the message, and then rejects it with the\n SMTP error message also given in X-reject. (Currently X-reject is\n retained in saved messages.)\n\n If a message was not tagged with X-reject, then local_scan.py\n runs the usual virus and spam checks. (Namely, my homebrew\n scan for attachments with filenames that look like Windows\n executables, and a run through SpamAssassin.) The logic is\n basically this:\n if virus:\n folder = \"virus\"\n else:\n run through SpamAssassin\n if score >= 10.0:\n folder = \"rejected-spam\"\n elif score >= 5.0:\n folder = \"caught-spam\"\n\n Finally, local_scan.py writes the message to the designated folder.\n By far the biggest folder will be \"accepted\" -- the server handles\n 2000-5000 incoming messages per day, of which maybe 100-500 are junk\n mail. (Oops, just realized I haven't written the code that actually\n saves the message -- d'ohh! Also haven't written anything to\n discriminate personal email, which I must do. Sigh.)\n\n * finally, the big catch: waiting until after you've read the message\n headers and body to actually reject the message is problematic,\n because certain broken MTAs (including those used by some spammers)\n don't consider a 5xx after DATA as a permanent error, but keep\n retrying. D'ohh. This is a minor annoyance currently, where a fair\n amount of stuff is rejected at RCPT TO time. 
But in harvest mode,\n *everything* (with the exception of people probing for open relays)\n will be rejected at DATA time. So I have cooked up something called\n the ASBL, or automated sender blacklist. This is just a Berkeley DB\n file that maps (sender_ip, sender_address) to an expiry time. When\n local_scan() rejects a message from (sender_ip, sender_address) --\n for whatever reason, including finding an X-reject header added by\n an ACL in exim4.conf -- it adds a record to the ASBL, with an expiry\n time 3 hours in the future. Meanwhile, there's an ACL in exim4.conf\n that checks for records in the ASBL; if there's a record for the\n current (sender_ip, sender_address) that hasn't expired yet, we\n reject all recipients without ever looking at the message headers or\n body.\n\n The downside of this from the point-of-view of corpus collection is\n that if some jerk is busily spamming *@python.org, one SMTP\n connection per address, we will most likely only get one copy. This\n is a win if you're just thinking about reducing server load and\n bandwidth, but I'm not sure if it's helpful for training spam\n detectors. Tim?\n\nHappy harvesting --\n\n Greg\n-- \nGreg Ward <gward@python.net> http://www.gerg.ca/\nBudget's in the red? Let's tax religion!\n -- Dead Kennedys\n" |
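The message-dispatch rules above reduce to a few thresholds; here they are as one self-contained function. The function and argument names are invented for illustration -- the real checks live in Greg's local_scan.py, which isn't shown.

    # Hypothetical rendering of the folder-selection logic described above.
    def choose_folder(x_reject_folder, is_virus, sa_score):
        if x_reject_folder is not None:
            # tagged by an Exim ACL while /etc/exim/harvest exists
            return x_reject_folder
        if is_virus:
            return 'virus'
        if sa_score >= 10.0:
            return 'rejected-spam'    # rejected at SMTP time
        if sa_score >= 5.0:
            return 'caught-spam'      # tagged but delivered
        return 'accepted'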
1ham
| easy_ham | "We've not only reduced the f-p and f-n rates in my test runs, we've also\nmade the score distributions substantially sharper. This is bad news for\nGreg, because the non-existent \"middle ground\" is becoming even less\nexistent <wink>:\n\nHam distribution for all runs:\n* = 1333 items\n 0.00 79975 ************************************************************\n 2.50 1 *\n 5.00 0\n 7.50 0\n 10.00 2 *\n 12.50 1 *\n 15.00 0\n 17.50 0\n 20.00 0\n 22.50 1 *\n 25.00 0\n 27.50 0\n 30.00 0\n 32.50 0\n 35.00 0\n 37.50 1 *\n 40.00 0\n 42.50 0\n 45.00 0\n 47.50 0\n 50.00 0\n 52.50 0\n 55.00 0\n 57.50 0\n 60.00 1 *\n 62.50 0\n 65.00 1 *\n 67.50 0\n 70.00 0\n 72.50 0\n 75.00 0\n 77.50 0\n 80.00 0\n 82.50 0\n 85.00 0\n 87.50 0\n 90.00 0\n 92.50 0\n 95.00 0\n 97.50 17 *\n\nSpam distribution for all runs:\n* = 914 items\n 0.00 118 *\n 2.50 7 *\n 5.00 0\n 7.50 2 *\n 10.00 1 *\n 12.50 1 *\n 15.00 3 *\n 17.50 1 *\n 20.00 1 *\n 22.50 1 *\n 25.00 0\n 27.50 0\n 30.00 4 *\n 32.50 3 *\n 35.00 4 *\n 37.50 2 *\n 40.00 0\n 42.50 1 *\n 45.00 1 *\n 47.50 0\n 50.00 2 *\n 52.50 3 *\n 55.00 1 *\n 57.50 2 *\n 60.00 0\n 62.50 1 *\n 65.00 1 *\n 67.50 10 *\n 70.00 2 *\n 72.50 1 *\n 75.00 2 *\n 77.50 1 *\n 80.00 0\n 82.50 0\n 85.00 1 *\n 87.50 4 *\n 90.00 2 *\n 92.50 5 *\n 95.00 14 *\n 97.50 54806 ************************************************************\n\nAs usual for me, this is an aggregate of 20 runs, each both training and\npredicting on 4000 c.l.py ham + ~2750 BruceG spam.\n\nOnly 25 ham scores out of 80,000 are above 0.025 now (and, yes, the\n\"Nigerian scam\"-quoting msg is still counted as ham -- I haven't taken\nanything out of the ham corpus since remving the \"If AOL were a car\" spam),\nthe f-p rate wouldn't have changed at all if the spamprob cutoff were\ndropped from 0.90 to 0.675, dropping the cutoff to 0.40 would have added\nonly 2 false positives, and dropping it to 0.15 would have added only\nanother 2 more!\n\nIt's spooky.\n\n" |
1ham
| easy_ham | "\n>>> Tim Peters wrote\n> We've not only reduced the f-p and f-n rates in my test runs, we've also\n> made the score distributions substantially sharper. This is bad news for\n> Greg, because the non-existent \"middle ground\" is becoming even less\n> existent <wink>:\n\nWell, I've finally got around to pulling down the SF code. Starting\nwith it, and absolutely zero local modifications, I see the following:\n\nHam distribution for all runs:\n* = 589 items\n 0.00 35292 ************************************************************\n 2.50 36 *\n 5.00 21 *\n 7.50 12 *\n 10.00 6 *\n 12.50 9 *\n 15.00 6 *\n 17.50 3 *\n 20.00 8 *\n 22.50 5 *\n 25.00 3 *\n 27.50 18 *\n 30.00 9 *\n 32.50 1 *\n 35.00 4 *\n 37.50 3 *\n 40.00 0 \n 42.50 3 *\n 45.00 3 *\n 47.50 4 *\n 50.00 9 *\n 52.50 5 *\n 55.00 5 *\n 57.50 3 *\n 60.00 4 *\n 62.50 2 *\n 65.00 2 *\n 67.50 6 *\n 70.00 1 *\n 72.50 3 *\n 75.00 2 *\n 77.50 4 *\n 80.00 3 *\n 82.50 3 *\n 85.00 6 *\n 87.50 8 *\n 90.00 4 *\n 92.50 8 *\n 95.00 15 *\n 97.50 441 *\n\nSpam distribution for all runs:\n* = 504 items\n 0.00 393 *\n 2.50 17 *\n 5.00 18 *\n 7.50 12 *\n 10.00 4 *\n 12.50 6 *\n 15.00 11 *\n 17.50 10 *\n 20.00 10 *\n 22.50 5 *\n 25.00 3 *\n 27.50 19 *\n 30.00 8 *\n 32.50 2 *\n 35.00 0 \n 37.50 1 *\n 40.00 5 *\n 42.50 5 *\n 45.00 7 *\n 47.50 2 *\n 50.00 5 *\n 52.50 1 *\n 55.00 9 *\n 57.50 11 *\n 60.00 6 *\n 62.50 4 *\n 65.00 3 *\n 67.50 5 *\n 70.00 7 *\n 72.50 9 *\n 75.00 2 *\n 77.50 13 *\n 80.00 3 *\n 82.50 7 *\n 85.00 15 *\n 87.50 16 *\n 90.00 11 *\n 92.50 16 *\n 95.00 45 *\n 97.50 30226 ************************************************************\n\n\nMy next (current) task is to complete the corpus I've got - it's currently\ngot ~ 9000 ham, 7800 spam, and about 9200 currently unsorted. I'm tossing \nup using either hammie or spamassassin to do the initial sort - previously\nI've used various forms of 'grep' for keywords and a little gui thing to \npop a message up and let me say 'spam/ham', but that's just getting too, too\ntedious.\n\nI can't make it available en masse, but I will look at finding some of\nthe more 'interesting' uglies. One thing I've seen (consider this \n'anecdotal' for now) is that the 'skip' tokens end up in a _lot_ of the \nf-ps.\n\nAnthony\n" |
1ham
| easy_ham | "[Skip Montanaro]\n> After my latest cvs up, timtest fails with\n>\n> Traceback (most recent call last):\n> File \"/home/skip/src/spambayes/timtest.py\", line 294, in ?\n> drive(nsets)\n> File \"/home/skip/src/spambayes/timtest.py\", line 264, in drive\n> d = Driver()\n> File \"/home/skip/src/spambayes/timtest.py\", line 152, in __init__\n> self.global_ham_hist = Hist(options.nbuckets)\n> AttributeError: 'OptionsClass' object has no attribute 'nbuckets'\n>\n> I'm running it as\n>\n> timtest -n5 > Data/timtest.out\n>\n> from my ~/Mail directory (not from my ~/src/spambayes directory). If I\n> create a symlink to ~/src/spambayes/bayes.ini it works once again, but\n> shouldn't there be an nbuckets attribute with a default value already?\n\nI never used ConfigParser before, but I read that its read() method silently\nignores files that don't exist. If 'bayes.ini' isn't found, *none* of the\noptions will be defined. Since you want to run this from a directory other\nthan my spambayes directory, it's up to you to check in changes to make that\npossible <wink>.\n\n" |
1ham
| easy_ham | "[Tim]\n> I never used ConfigParser before, but I read that its read() \n> method silently ignores files that don't exist. If 'bayes.ini'\n> isn't found, *none* of the options will be defined. ...\n\nNote that I since got rid of bayes.ini (it's embedded in Options.py \nnow), so search-path issues won't burn you here anymore. The intended \nway to customize the tokenizer and testers is via creating your own \nbayescustomize.ini. You'll get burned by search-path issues wrt that \ninstead now <0.7 wink>.\n\n" |
1ham
| easy_ham | "[Anthony Baxter]\n> Well, I've finally got around to pulling down the SF code. Starting\n> with it, and absolutely zero local modifications, I see the following:\n\nHow many runs is this summarizing? For each, how many ham&spam were in the\ntraining set? How many in the prediction sets? What were the error rates\n(run rates.py over your output file)?\n\nThe effect of set sizes on accuracy rates isn't known. I've informally\nreported some results from just a few controlled experiments on that.\nJeremy reported improved accuracy by doubling the training set size, but\nthat wasn't a controlled experiment (things besides just training set size\nchanged between \"before\" and \"after\").\n\n> Ham distribution for all runs:\n> * = 589 items\n> 0.00 35292 ************************************************************\n> 2.50 36 *\n> 5.00 21 *\n> 7.50 12 *\n> 10.00 6 *\n> ...\n> 90.00 4 *\n> 92.50 8 *\n> 95.00 15 *\n> 97.50 441 *\n>\n> Spam distribution for all runs:\n> * = 504 items\n> 0.00 393 *\n> 2.50 17 *\n> 5.00 18 *\n> 7.50 12 *\n> 10.00 4 *\n> ...\n> 90.00 11 *\n> 92.50 16 *\n> 95.00 45 *\n> 97.50 30226 ************************************************************\n>\n>\n> My next (current) task is to complete the corpus I've got - it's currently\n> got ~ 9000 ham, 7800 spam, and about 9200 currently unsorted. I'm tossing\n> up using either hammie or spamassassin to do the initial sort -\npreviously\n> I've used various forms of 'grep' for keywords and a little gui thing to\n> pop a message up and let me say 'spam/ham', but that's just getting too,\ntoo\n> tedious.\n\nYup, tagging data is mondo tedious, and mistakes hurt.\n\nI expect hammie will do a much better job on this already than hand\ngrepping. Be sure to stare at the false positives and get the spam out of\nthere.\n\n> I can't make it available en masse, but I will look at finding some of\n> the more 'interesting' uglies. One thing I've seen (consider this\n> 'anecdotal' for now) is that the 'skip' tokens end up in a _lot_ of the\n> f-ps.\n\nWith probabilities favoring ham or spam? A skip token is produced in lieu\nof \"word\" more than 12 chars long and without any high-bit characters. It's\npossible that they helped me because raw HTML produces lots of these.\nHowever, if you're running current CVS, Tokenizer/retain_pure_html_tags\ndefaults to False now, so HTML decorations should vanish before body\ntokenization.\n\n" |
1ham
| easy_ham | "I've been running hammie on all my incoming messages, and I noticed that\nmultipart/alternative messages are totally hosed: they have no content,\njust the MIME boundaries. For instance, the following message:\n\n------------------------------8<------------------------------\nFrom: somebody <someone@somewhere.org>\nTo: neale@woozle.org\nSubject: Booga\nContent-type: multipart/alternative; boundary=\"snot\"\n\nThis is a multi-part message in MIME format.\n\n--snot\nContent-type: text/plain; charset=iso-8859-1\nContent-transfer-encoding: 7BIT\n\nHi there.\n--snot\nContent-type: text/html; charset=iso-8859-1\nContent-transfer-encoding: 7BIT\n\n<pre>Hi there.</pre>\n--snot--\n------------------------------8<------------------------------\n\nComes out like this:\n\n------------------------------8<------------------------------\nFrom: somebody <someone@somewhere.org>\nTo: neale@woozle.org\nSubject: Booga\nContent-type: multipart/alternative; boundary=\"snot\"\nX-Hammie-Disposition: No; 0.74; [unrelated gar removed]\n\nThis is a multi-part message in MIME format.\n\n--snot\n\n--snot--\n------------------------------8<------------------------------\n\nI'm using \"Python 2.3a0 (#1, Sep 9 2002, 22:56:24)\".\n\nI've fixed it with the following patch to Tim's tokenizer, but I have to\nadmit that I'm baffled as to why it works. Maybe there's some subtle\ninteraction between generators and lists that I can't understand. Or\nsomething. Being as I'm baffled, I don't imagine any theory I come up\nwith will be anywhere close to reality.\n\nIn any case, be advised that (at least for me) hammie will eat\nmultipart/alternative messages until this patch is applied. The patch\nseems rather bogus though, so I'm not checking it in, in the hope that\nthere's a better fix I just wasn't capable of discovering :)\n\n------------------------------8<------------------------------\nIndex: tokenizer.py\n===================================================================\nRCS file: /cvsroot/spambayes/spambayes/tokenizer.py,v\nretrieving revision 1.15\ndiff -u -r1.15 tokenizer.py\n--- tokenizer.py\t10 Sep 2002 18:15:49 -0000\t1.15\n+++ tokenizer.py\t11 Sep 2002 05:01:16 -0000\n@@ -1,3 +1,4 @@\n+#! /usr/bin/env python\n \"\"\"Module to tokenize email messages for spam filtering.\"\"\"\n \n import email\n@@ -507,7 +508,8 @@\n htmlpart = textpart = None\n stack = part.get_payload()\n while stack:\n- subpart = stack.pop()\n+ subpart = stack[0]\n+ stack = stack[1:]\n ctype = subpart.get_content_type()\n if ctype == 'text/plain':\n textpart = subpart\n------------------------------8<------------------------------\n\n\n" |
1ham
| easy_ham | "So then, Neale Pickett <neale@woozle.org> is all like:\n\n> Maybe there's some subtle interaction between generators and lists\n> that I can't understand. Or something. Being as I'm baffled, I don't\n> imagine any theory I come up with will be anywhere close to reality.\n\nAnd then, just as I was about to fall asleep, I figured it out. The\ntokenizer now has an extra [:], and all is well. I feel like a real\nchawbacon for not realizing this earlier. :*)\n\nBlaming it on staying up past bedtime,\n\nNeale\n" |
1ham
| easy_ham | "On 10 September 2002, Tim Peters said:\n> Why would spam be likely to end up with two instances of Return-Path\n> in the headers?\n\nPossibly another qmail-ism from Bruce Guenter's spam collection. Or\nmaybe Anthony's right about spammers being stupid and blindly copying\nheaders. (Well, of course he's right about spammers being stupid; it's\njust this particular aspect of stupidity that's open to question.)\n\n Greg\n-- \nGreg Ward <gward@python.net> http://www.gerg.ca/\nThink honk if you're a telepath.\n" |
1ham
| easy_ham | "URL: http://www.newsisfree.com/click/-2,8655714,215/\nDate: 2002-10-08T03:30:52+01:00\n\n*Business: *The long-awaited recovery in Britain's manufacturing sector ground \nto a halt in August, despite a sharp rise in car production, official figures \nshowed yesterday.\n\n\n" |
1ham
| easy_ham | "\n\n> How were these msgs broken up into the 5 sets? Set4 in particular is giving\n> the other sets severe problems, and Set5 blows the f-n rate on everything\n> it's predicting -- when the rates across runs within a training set vary by\n> as much as a factor of 25, it suggests there was systematic bias in the way\n> the sets were chosen. For example, perhaps they were broken into sets by\n> arrival time. If that's what you did, you should go back and break them\n> into sets randomly instead. If you did partition them randomly, the wild\n> variance across runs is mondo mysterious.\n\nThey weren't partitioned in any particular scheme - I think I'll write a\nreshuffler and move them all around, just in case (fwiw, I'm using MH \nstyle folders with numbered files - means you can just use MH tools to \nmanipulate the sets.)\n\n\n> For whatever reason, there appear to be few of those in BruceG's spam\n> collection. I added code to strip uuencoded sections, and pump out uuencode\n> summary tokens instead. I'll check it in. It didn't make a significant\n> difference on my usual test run (a single spam in my Set4 is now judged as\n> ham by the other 4 sets; nothing else changed). It does shrink the database\n> size here by a few percent. Let us know whether it helps you!\n\nI'll give it a go.\n\n\n-- \nAnthony Baxter <anthony@interlink.com.au> \nIt's never too late to have a happy childhood.\n\n" |
1ham
| easy_ham | "[Neale Pickett]\n> And then, just as I was about to fall asleep, I figured it out. The\n> tokenizer now has an extra [:], and all is well. I feel like a real\n> chawbacon for not realizing this earlier. :*)\n\nGood eye, Neale! Thanks for the fix.\n\n> Blaming it on staying up past bedtime,\n\nBlame it on Barry. I do <wink>.\n" |
1ham
| easy_ham | "[Tim]\n>> Why would spam be likely to end up with two instances of Return-Path\n>> in the headers?\n\n[Greg Ward]\n> Possibly another qmail-ism from Bruce Guenter's spam collection.\n\nDoesn't seem *likely*, as it appeared in about 900 of about 14,000 spams.\nIt could be specific to one of his bait addresses, though -- don't know. A\nnice thing about a statistical inferencer is that you really don't have to\nknow why a thing works, just whether it works <wink>.\n\n> Or maybe Anthony's right about spammers being stupid and blindly copying\n> headers. (Well, of course he's right about spammers being stupid; it's\n> just this particular aspect of stupidity that's open to question.)\n\nI'm going to blow it off -- it's just another instance of being pointlessly\nbaffled by a mixed corpus half of which I don't know enough about.\n\n" |
1ham
| easy_ham | "\n Anthony> They weren't partitioned in any particular scheme - I think\n Anthony> I'll write a reshuffler and move them all around, ...\n\nHmmm. How about you create empty Data/Ham/Set[12345], stuff all your\nfiles into a Data/Ham/reservoir folder, then run the rebal.py script to\nrandomly parcel messages out to the various real directories?\n\nI suspect you can pull the same stunt for your Data/Spam stuff.\n\nSkip\n" |
1ham
| easy_ham | "[Skip]\n> Hmmm. How about you create empty Data/Ham/Set[12345], stuff all your\n> files into a Data/Ham/reservoir folder, then run the rebal.py script to\n> randomly parcel messages out to the various real directories?\n\nI'm afraid rebal is quadratic-time in the # of msgs it shuffles around --\nsince it was only intended to move a few files around, it's dead simple.\n\nAn easy thing is to start the same way: move all the files into a single\ndirectory. Then do random.shuffle() on an os.listdir() of that directory.\nThen it's trivial to split the result into N slices, and move the files into\nN other directories accordingly.\n\n> I suspect you can pull the same stunt for your Data/Spam stuff.\n\nYup!\n\n" |
1ham
| easy_ham | "\n> They weren't partitioned in any particular scheme - I think I'll write a\n> reshuffler and move them all around, just in case (fwiw, I'm using MH \n> style folders with numbered files - means you can just use MH tools to \n> manipulate the sets.)\n\nFreak show. Obviously there _was_ some sort of patterns to the data:\n\nTraining on Data/Ham/Set1 & Data/Spam/Set1 ... 1798 hams & 1546 spams\n 0.779 0.582\n 0.834 0.840\n 0.945 0.452\n 0.667 1.164\nTraining on Data/Ham/Set2 & Data/Spam/Set2 ... 1798 hams & 1547 spams\n 1.112 0.776\n 0.834 0.969\n 0.779 0.646\n 0.667 1.100\nTraining on Data/Ham/Set3 & Data/Spam/Set3 ... 1798 hams & 1548 spams\n 1.168 0.582\n 1.001 0.646\n 0.834 0.582\n 0.667 0.453\nTraining on Data/Ham/Set4 & Data/Spam/Set4 ... 1798 hams & 1547 spams\n 0.779 0.712\n 0.779 0.582\n 0.556 0.840\n 0.779 0.970\nTraining on Data/Ham/Set5 & Data/Spam/Set5 ... 1798 hams & 1546 spams\n 0.612 0.517\n 0.779 0.517\n 0.723 0.711\n 0.667 0.582\ntotal false pos 144 1.60177975528\ntotal false neg 101 1.30592190328\n\n(before the shuffle, I was seeing:\ntotal false pos 273 3.03501945525\ntotal false neg 367 4.74282760403\n)\n\nFor sake of comparision, here's what I see for partitioned into 2 sets:\n\nTraining on Data/Ham/Set1 & Data/Spam/Set1 ... 4492 hams & 3872 spams\n 0.490 0.776\nTraining on Data/Ham/Set2 & Data/Spam/Set2 ... 4493 hams & 3868 spams\n 0.401 0.491\ntotal false pos 40 0.445186421814\ntotal false neg 49 0.633074935401\n\nmore later...\n\nAnthony\n" |
1ham
| easy_ham | "use Perl Daily Headline Mailer\n\nDamian Conway Publishes Exegesis 5\n posted by hfb on Friday August 23, @14:06 (perl6)\n http://use.perl.org/article.pl?sid=02/08/23/187226\n\n2nd Open Source CMS Conference\n posted by ziggy on Friday August 23, @18:30 (events)\n http://use.perl.org/article.pl?sid=02/08/23/1837242\n\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n" |
1ham
| easy_ham | "\nForwarded-by: Rob Windsor <windsor@warthog.com>\nForwarded-by: \"David Dietz\" <kansas1@mynewroads.com>\n\nThe latest proposal to drive the Taliban and Al Qaeda out of the\nMountains of Afghanistan is to send in the ASF (Alabama Special\nForces) Billy Bob, Bubba, Boo, Scooter, Cooter and Junior are being\nsent in with the following information about the Taliban:\n\n1. There is no limit.\n2. The season opened last weekend.\n3. They taste just like chicken.\n4. They hate beer, pickup trucks, country music, and Jesus.\n5. Some are queer.\n6. They don't like barbecue.\n\nAnd most importantly...\n\n7. They were responsible for Dale Earnhardt's death.\n\nWe estimate it should be over in just about two days.\n\n\n" |
1ham
| easy_ham | "\nForwarded-by: Rob Windsor <windsor@warthog.com>\nForwarded-by: \"Dave Bruce\" <dbruce@wwd.net>\nForwarded by: Gary Williams <garyaw1990@aol.com>\n\nA Mother had 3 virgin daughters. They were all getting married within a \nshort time period. Because Mom was a bit worried about how their sex \nlife would get started, she made them all promise to send a postcard \nfrom the honeymoon with a few words on how marital sex felt. \n\nThe first girl sent a card from Hawaii two days after the wedding. The \ncard said nothing but \"Maxwell House\". Mom was puzzled at first, but \nthen went to the kitchen and got out the Nescafe jar. It said: \"Good till \nthe last drop.\" Mom blushed, but was pleased for her daughter. \n\nThe second girl sent the card from Vermont a week after the wedding, \nand the card read: \"Benson & Hedges\". Mom now knew to go straight to \nher husband's cigarettes, and she read from the Benson & Hedges pack: \n\"Extra Long, King Size\". She was again slightly embarrassed but still \nhappy for her daughter. \n\nThe third girl left for her honeymoon in the Caribbean. Mom waited for \na week, nothing. Another week went by and still nothing. Then after a \nwhole month, a card finally arrived. Written on it with shaky \nhandwriting were the words: \"British Airways\". Mom took out her latest \nHarper's Bazaar magazine, flipped through the pages fearing the worst, \nand finally found the ad for the airline. The ad said: \"Three times a \nday, seven days a week, both ways.\" \n\nMom fainted. \n\n\n" |
1ham
| easy_ham | "use Perl Daily Newsletter\n\nIn this issue:\n * This Week on perl5-porters (19-25 August 2002)\n * Slashdot Taking Questions to Ask Larry Wall\n\n+--------------------------------------------------------------------+\n| This Week on perl5-porters (19-25 August 2002) |\n| posted by rafael on Monday August 26, @07:42 (summaries) |\n| http://use.perl.org/article.pl?sid=02/08/26/1154225 |\n+--------------------------------------------------------------------+\n\nI guess those thunderstorms came. And how they came. From an even wetter\nthan normal country on the shores of the North Sea, comes this weeks\nperl5-porters summary.\n\nThis story continues at:\n http://use.perl.org/article.pl?sid=02/08/26/1154225\n\nDiscuss this story at:\n http://use.perl.org/comments.pl?sid=02/08/26/1154225\n\n\n+--------------------------------------------------------------------+\n| Slashdot Taking Questions to Ask Larry Wall |\n| posted by pudge on Monday August 26, @14:41 (perl6) |\n| http://use.perl.org/article.pl?sid=02/08/26/1845224 |\n+--------------------------------------------------------------------+\n\nPlease, if you feel inclined, make sure some [0]reasonable questions are\nasked of him.\n\nDiscuss this story at:\n http://use.perl.org/comments.pl?sid=02/08/26/1845224\n\nLinks:\n 0. http://interviews.slashdot.org/article.pl?sid=02/08/25/236217&tid=145\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n" |
1ham
| easy_ham | "use Perl Daily Headline Mailer\n\nThis Week on perl5-porters (19-25 August 2002)\n posted by rafael on Monday August 26, @07:42 (summaries)\n http://use.perl.org/article.pl?sid=02/08/26/1154225\n\nSlashdot Taking Questions to Ask Larry Wall\n posted by pudge on Monday August 26, @14:41 (perl6)\n http://use.perl.org/article.pl?sid=02/08/26/1845224\n\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n" |
1ham
| easy_ham | "\nForwarded-by: Per Hammer <perh@inrs.co.uk>\n\nhttp://news.bbc.co.uk/1/hi/world/asia-pacific/2218715.stm\n\nBrothel duty for Australian MP\n\nA conservative Member of Parliament in Australia is set to \nspend the day as a \"slave\" at one of Western Australia's most \nnotorious brothels. \n\nLiberal Party member Barry Haase was \"won\" in a charity auction \nafter the madam of Langtree's brothel in the mining town of \nKalgoorlie made the highest offer for his services for a day. \n\n[...]\n\n\"I hope he will leave with an informed decision on what \nAustralian brothels are all about and it will help him in his \npolitical career to make informed decisions that he might not \nhave been able to make before,\" Ms Kenworthy said. \n\nMr Haase, a member of Prime Minister John Howard's party \nseemed relaxed about the prospect of working in a brothel. \n\n\"You can't be half-hearted about fundraising for significant \ncharities and I think I'm big enough to play the game,\" he said. \n\n\n" |
1ham
| easy_ham | "use Perl Daily Headline Mailer\n\n.NET and Perl, Working Together\n posted by pudge on Tuesday August 27, @09:17 (links)\n http://use.perl.org/article.pl?sid=02/08/27/1317253\n\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n" |
1ham
| easy_ham | "use Perl Daily Headline Mailer\n\nTwo OSCON Lightning Talks Online\n posted by gnat on Friday August 30, @18:37 (news)\n http://use.perl.org/article.pl?sid=02/08/30/2238234\n\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n" |
1ham
| easy_ham | "use Perl Daily Newsletter\n\nIn this issue:\n * Two OSCON Lightning Talks Online\n\n+--------------------------------------------------------------------+\n| Two OSCON Lightning Talks Online |\n| posted by gnat on Friday August 30, @18:37 (news) |\n| http://use.perl.org/article.pl?sid=02/08/30/2238234 |\n+--------------------------------------------------------------------+\n\n[0]gnat writes \"The first two OSCON 2002 lightning talks are available\nfrom [1]the perl.org website. They are Dan Brian on \"What Sucks and What\nRocks\" (in [2]QuickTime and [3]mp3), and Brian Ingerson on \"Your Own\nPersonal Hashbang\" (in [4]QuickTime and [5]mp3). Enjoy!\"\n\nThis story continues at:\n http://use.perl.org/article.pl?sid=02/08/30/2238234\n\nDiscuss this story at:\n http://use.perl.org/comments.pl?sid=02/08/30/2238234\n\nLinks:\n 0. mailto:gnat@frii.com\n 1. http://www.perl.org/tpc/2002/\n 2. http://www.perl.org/tpc/2002/movies/lt-1/\n 3. http://www.perl.org/tpc/2002/audio/lt-1/\n 4. http://www.perl.org/tpc/2002/movies/lt-2/\n 5. http://www.perl.org/tpc/2002/audio/lt-2/\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n" |
1ham
| easy_ham | "URL: http://www.newsisfree.com/click/-1,8643939,1440/\nDate: Not supplied\n\nWorld chess champion Vladimir Kramnik takes the lead over the computer Deep \nFritz, after the machine makes a peculiar mistake\n\n\n" |
1ham
| easy_ham | "use Perl Daily Headline Mailer\n\nPerl Ports Page\n posted by hfb on Saturday August 31, @13:40 (cpan)\n http://use.perl.org/article.pl?sid=02/08/31/1744247\n\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n" |
1ham
| easy_ham | "use Perl Daily Newsletter\n\nIn this issue:\n * This Week on perl5-porters (26 August / 1st September 2002)\n * The Perl Review, v0 i5\n\n+--------------------------------------------------------------------+\n| This Week on perl5-porters (26 August / 1st September 2002) |\n| posted by rafael on Monday September 02, @03:47 (summaries) |\n| http://use.perl.org/article.pl?sid=02/09/02/0755208 |\n+--------------------------------------------------------------------+\n\nThis week, we're back to our regularly scheduled p5p report, straight\nfrom my keyboard's mouth. Many thanks to Elizabeth Mattjisen who provided\nthe two previous reports, while I was away from p5p and from whatever\nmight evocate more or less a computer.\n\nThis story continues at:\n http://use.perl.org/article.pl?sid=02/09/02/0755208\n\nDiscuss this story at:\n http://use.perl.org/comments.pl?sid=02/09/02/0755208\n\n\n+--------------------------------------------------------------------+\n| The Perl Review, v0 i5 |\n| posted by ziggy on Monday September 02, @14:20 (news) |\n| http://use.perl.org/article.pl?sid=02/09/02/1823229 |\n+--------------------------------------------------------------------+\n\n[0]brian_d_foy writes \"The latest issue of The Perl Review is ready:\n[1]http://www.theperlreview.com\n\n * Extreme Mowing -- Andy Lester\n\n * Perl Assembly Language -- Phil Crow\n\n * What Perl Programmers Should Know About Java -- Beth Linker\n\n * Filehandle Ties -- Robby Walker\n\n * The Iterator Design Pattern -- brian d foy\n Enjoy!\"\n\nDiscuss this story at:\n http://use.perl.org/comments.pl?sid=02/09/02/1823229\n\nLinks:\n 0. http://www.theperlreview.com\n 1. http://www.theperlreview.com/\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n" |
1ham
| easy_ham | "use Perl Daily Headline Mailer\n\nThis Week on perl5-porters (26 August / 1st September 2002)\n posted by rafael on Monday September 02, @03:47 (summaries)\n http://use.perl.org/article.pl?sid=02/09/02/0755208\n\nThe Perl Review, v0 i5\n posted by ziggy on Monday September 02, @14:20 (news)\n http://use.perl.org/article.pl?sid=02/09/02/1823229\n\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n" |
1ham
| easy_ham | "use Perl Daily Headline Mailer\n\nPerl CMS Systems\n posted by ziggy on Tuesday September 03, @05:00 (tools)\n http://use.perl.org/article.pl?sid=02/09/02/1827239\n\n1998 Perl Conference CD Online\n posted by gnat on Tuesday September 03, @19:34 (news)\n http://use.perl.org/article.pl?sid=02/09/03/2334251\n\nBricolage 1.4.0 Escapes!\n posted by chip on Tuesday September 03, @19:57 (tools)\n http://use.perl.org/article.pl?sid=02/09/04/002204\n\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n" |
1ham
| easy_ham | "use Perl Daily Newsletter\n\nIn this issue:\n * Perl CMS Systems\n * 1998 Perl Conference CD Online\n * Bricolage 1.4.0 Escapes!\n\n+--------------------------------------------------------------------+\n| Perl CMS Systems |\n| posted by ziggy on Tuesday September 03, @05:00 (tools) |\n| http://use.perl.org/article.pl?sid=02/09/02/1827239 |\n+--------------------------------------------------------------------+\n\nKLB writes \"[0]Krow, one of the authors of [1]Slash, has written up a\n[2]review on [3]Linux.com of two other Perl CMS systems, the E2 and LJ\nengines. Makes for interesting reading.\"\n\nDiscuss this story at:\n http://use.perl.org/comments.pl?sid=02/09/02/1827239\n\nLinks:\n 0. http://krow.net/~\n 1. http://slashcode.com/\n 2. http://newsforge.com/article.pl?sid=02/08/28/0013255&mode=thread&tid=49\n 3. http://linux.com/\n\n\n+--------------------------------------------------------------------+\n| 1998 Perl Conference CD Online |\n| posted by gnat on Tuesday September 03, @19:34 (news) |\n| http://use.perl.org/article.pl?sid=02/09/03/2334251 |\n+--------------------------------------------------------------------+\n\n[0]gnat writes \"[1]The 1998 Perl Conference CD is online on perl.org.\nEnjoy the blast from the past (was [2]this Damian's first public\nappearance?)\" (thanks to Daniel Berger for packratting the CD!)\n\nDiscuss this story at:\n http://use.perl.org/comments.pl?sid=02/09/03/2334251\n\nLinks:\n 0. mailto:gnat@oreilly.com\n 1. http://www.perl.org/tpc/1998/\n 2. http://www.perl.org/tpc/1998/User_Applications/Declarative%20Command-line%20Inter/\n\n\n+--------------------------------------------------------------------+\n| Bricolage 1.4.0 Escapes! |\n| posted by chip on Tuesday September 03, @19:57 (tools) |\n| http://use.perl.org/article.pl?sid=02/09/04/002204 |\n+--------------------------------------------------------------------+\n\n[0]Theory writes \"Bricolage 1.4.0 has finally escaped the shackles of its\nCVS repository! ... Bricolage is a full-featured, enterprise-class\ncontent management and publishing system. It offers a browser-based\ninterface for ease-of use, a full-fledged templating system with complete\nprogramming language support for flexibility, and many other features\n(see below). It operates in an Apache/mod_perl environment, and uses the\nPostgreSQL RDBMS for its repository.\"\n\nThis story continues at:\n http://use.perl.org/article.pl?sid=02/09/04/002204\n\nDiscuss this story at:\n http://use.perl.org/comments.pl?sid=02/09/04/002204\n\nLinks:\n 0. http://bricolage.cc/\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n" |
1ham
| easy_ham | "use Perl Daily Newsletter\n\nIn this issue:\n * Perl \"Meetup\"\n\n+--------------------------------------------------------------------+\n| Perl \"Meetup\" |\n| posted by ziggy on Thursday September 05, @19:12 (news) |\n| http://use.perl.org/article.pl?sid=02/09/05/2316234 |\n+--------------------------------------------------------------------+\n\n[0]davorg writes \"The people at [1]Meetup have set up a [2]Perl Meetup.\nThe first one takes place on September 19th. I'll probably go along to\nthe one in London to see what happens, but I'd be very interested in\nhearing any opinions on what this achieves that the existing Perl Mongers\ngroups don't.\"\n\nDiscuss this story at:\n http://use.perl.org/comments.pl?sid=02/09/05/2316234\n\nLinks:\n 0. mailto:dave@dave.org.uk\n 1. http://www.meetup.com/\n 2. http://perl.meetup.com/\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n" |
1ham
| easy_ham | "\nForwarded-by: Sven Guckes <guckes@math.fu-berlin.de>\n\nhttp://news.com.au/common/story_page/0,4057,5037762%255E13762,00.html\n\nPort San Luis, California\nSeptember 05, 2002\n\nA WHALE which suddenly breached and crashed into the bow of a\nfishing boat killed a restaurant owner on board.\n\nJerry Tibbs, 51, of Bakersfield, California, was aboard his boat,\nThe BBQ, when the whale hit and tossed him into the sea five miles\noff Port San Luis.\n\nThree other fishermen stayed aboard the damaged boat, which was\ntowed to shore by the US Coast Guard.\n\nTibbs and his three friends were just ending a day fishing for\nalbacore when the accident occurred, authorities said.\n\nTibbs' body was found after a search lasting more than 18 hours\nCoast Guard officials said it was the first time they could recall\nan accident caused by a whale hitting a boat.\n\n\n" |
1ham
| easy_ham | "use Perl Daily Headline Mailer\n\nThis Week on perl5-porters (2-8 September 2002)\n posted by rafael on Monday September 09, @07:33 (summaries)\n http://use.perl.org/article.pl?sid=02/09/09/1147243\n\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n\n" |
1ham
| easy_ham | "use Perl Daily Newsletter\n\nIn this issue:\n * This Week on perl5-porters (2-8 September 2002)\n\n+--------------------------------------------------------------------+\n| This Week on perl5-porters (2-8 September 2002) |\n| posted by rafael on Monday September 09, @07:33 (summaries) |\n| http://use.perl.org/article.pl?sid=02/09/09/1147243 |\n+--------------------------------------------------------------------+\n\nAs September begins, the perl5-porters, ignoring the changing weather,\ncontinue to work. This week, some small things, and a few bigger ones,\nare selected in the report. Read below.\n\nThis story continues at:\n http://use.perl.org/article.pl?sid=02/09/09/1147243\n\nDiscuss this story at:\n http://use.perl.org/comments.pl?sid=02/09/09/1147243\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n\n" |
1ham
| easy_ham | "use Perl Daily Headline Mailer\n\nDynDNS.org Offers Free DNS To Perl Sites\n posted by KM on Tuesday September 10, @08:23 (news)\n http://use.perl.org/article.pl?sid=02/09/10/1225228\n\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n\n" |
1ham
| easy_ham | "use Perl Daily Newsletter\n\nIn this issue:\n * DynDNS.org Offers Free DNS To Perl Sites\n\n+--------------------------------------------------------------------+\n| DynDNS.org Offers Free DNS To Perl Sites |\n| posted by KM on Tuesday September 10, @08:23 (news) |\n| http://use.perl.org/article.pl?sid=02/09/10/1225228 |\n+--------------------------------------------------------------------+\n\n[0]krellis writes \"[0]DynDNS.org today [1]announced that it will provide\nfree premium DNS services (primary and secondary DNS hosting) to any\ndomains involved in the Perl Community. Read the press release for full\ndetails, [2]create an account, and [3]request credit under the Perl DNS\noffer! Never lose traffic to your Perl site due to failed DNS again!\"\nSweet. Thanks.\n\nDiscuss this story at:\n http://use.perl.org/comments.pl?sid=02/09/10/1225228\n\nLinks:\n 0. http://www.dyndns.org/\n 1. http://www.dyndns.org/news/2002/perl-dns.php\n 2. https://members.dyndns.org/policy.shtml\n 3. https://members.dyndns.org/nic/perl\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n\n" |
1ham
| easy_ham | "use Perl Daily Headline Mailer\n\nThe Perl Journal Returns Online\n posted by pudge on Wednesday September 11, @21:59 (links)\n http://use.perl.org/article.pl?sid=02/09/12/026254\n\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n\n" |
1ham
| easy_ham | "use Perl Daily Newsletter\n\nIn this issue:\n * The Perl Journal Returns Online\n\n+--------------------------------------------------------------------+\n| The Perl Journal Returns Online |\n| posted by pudge on Wednesday September 11, @21:59 (links) |\n| http://use.perl.org/article.pl?sid=02/09/12/026254 |\n+--------------------------------------------------------------------+\n\nCMP, owners of [0]The Perl Journal, have brought the journal back, in the\nform of an online monthly magazine, in PDF form. The subscription rate is\n$12 a year. They need 3,000 subscriptions to move forward (no word if\nexisting subscriptions will be honored, or included in the 3,000).\n[0]Read the site for more details.\n\nI think some of the more interesting notes are that it will include \"a\nhealthy dose of opinion\", as well a broadening of coverage including\nlanguages other than Perl (will this mean a name change?) and platforms\nother than Unix (I'd always thought one of TPJ's strengths was that it\ncovered a wide variety of platforms).\n\nDiscuss this story at:\n http://use.perl.org/comments.pl?sid=02/09/12/026254\n\nLinks:\n 0. http://www.tpj.com/\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n\n" |
1ham
| easy_ham | "use Perl Daily Headline Mailer\n\n\"Perl 6: Right Here, Right Now\" slides ava\n posted by gnat on Friday September 13, @12:01 (news)\n http://use.perl.org/article.pl?sid=02/09/13/162209\n\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n\n" |
1ham
| easy_ham | "use Perl Daily Newsletter\n\nIn this issue:\n * \"Perl 6: Right Here, Right Now\" slides ava\n\n+--------------------------------------------------------------------+\n| \"Perl 6: Right Here, Right Now\" slides ava |\n| posted by gnat on Friday September 13, @12:01 (news) |\n| http://use.perl.org/article.pl?sid=02/09/13/162209 |\n+--------------------------------------------------------------------+\n\n[0]gnat writes \"The wonderful Leon Brocard has released the slides from\nhis lightning talk to the London perlmongers, [1]Perl 6: Right Here,\nRight Now, showing the current perl6 compiler in action.\"\n\nDiscuss this story at:\n http://use.perl.org/comments.pl?sid=02/09/13/162209\n\nLinks:\n 0. mailto:gnat@oreilly.com\n 1. http://astray.com/perl6_now/\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n\n" |
1ham
| easy_ham | "use Perl Daily Headline Mailer\n\nNew Perl Mongers Web Site\n posted by KM on Monday September 16, @08:41 (groups)\n http://use.perl.org/article.pl?sid=02/09/16/1243234\n\nJava vs. Perl\n posted by pudge on Monday September 16, @11:15 (java)\n http://use.perl.org/article.pl?sid=02/09/16/1448246\n\nThis Week on perl5-porters (9-15 September 2002)\n posted by rafael on Monday September 16, @16:17 (summaries)\n http://use.perl.org/article.pl?sid=02/09/16/2026255\n\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n\n" |
1ham
| easy_ham | "use Perl Daily Newsletter\n\nIn this issue:\n * New Perl Mongers Web Site\n * Java vs. Perl\n * This Week on perl5-porters (9-15 September 2002)\n\n+--------------------------------------------------------------------+\n| New Perl Mongers Web Site |\n| posted by KM on Monday September 16, @08:41 (groups) |\n| http://use.perl.org/article.pl?sid=02/09/16/1243234 |\n+--------------------------------------------------------------------+\n\n[0]davorg writes \"Leon Brocard has been working hard to update the\n[1]Perl Mongers web site. We're still going thru the process of cleaning\nup the data about the Perl Monger groups, so if you see something that\nisn't quite right then please [2]let us know.\"\n\nDiscuss this story at:\n http://use.perl.org/comments.pl?sid=02/09/16/1243234\n\nLinks:\n 0. mailto:dave@dave.org.uk\n 1. http://www.pm.org/\n 2. mailto:user_groups@pm.org\n\n\n+--------------------------------------------------------------------+\n| Java vs. Perl |\n| posted by pudge on Monday September 16, @11:15 (java) |\n| http://use.perl.org/article.pl?sid=02/09/16/1448246 |\n+--------------------------------------------------------------------+\n\nIt seems the older Perl gets, the more willing people are to believe that\nit sucks, without any reasonable facts. [0]davorg writes \"You may have\nseen the article [1]Can Java technology beat Perl on its home turf with\npattern matching in large files? that there has been some debate about on\nboth #perl and comp.lang.perl.misc today. One of the biggest criticisms\nof the article was that the author hasn't published the Perl code that he\nis comparing his Java with.\"\n\nThis story continues at:\n http://use.perl.org/article.pl?sid=02/09/16/1448246\n\nDiscuss this story at:\n http://use.perl.org/comments.pl?sid=02/09/16/1448246\n\nLinks:\n 0. mailto:dave@dave.org.uk\n 1. http://developer.java.sun.com/developer/qow/archive/184/index.jsp\n\n\n+--------------------------------------------------------------------+\n| This Week on perl5-porters (9-15 September 2002) |\n| posted by rafael on Monday September 16, @16:17 (summaries) |\n| http://use.perl.org/article.pl?sid=02/09/16/2026255 |\n+--------------------------------------------------------------------+\n\nThis was not a very busy week, with people packing for YAPC::Europe, and\nall that... Nevertheless, the smoke tests were running, the bug reports\nwere flying, and an appropriate amount of patches were sent. Read about\nprintf formats, serialized tied thingies, built-in leak testing, syntax\noddities, et alii.\n\nThis story continues at:\n http://use.perl.org/article.pl?sid=02/09/16/2026255\n\nDiscuss this story at:\n http://use.perl.org/comments.pl?sid=02/09/16/2026255\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n\n" |
1ham
| easy_ham | "URL: http://www.newsisfree.com/click/-1,8639022,1440/\nDate: Not supplied\n\nThe medicine prize goes to research that revealed how cell suicide sculpts the \nbody and - when disrupted - causes disease\n\n\n" |
1ham
| easy_ham | "\nForwarded-by: Nev Dull <nev@sleepycat.com>\nForwarded-by: newsletter@tvspy.com\nExcerpted: ShopTalk - September 13, 2002\n\n\"I'm a tad furry, so animal rights issues come into play.\"\n\tRobin Williams, telling Entertainment Weekly why he won't do\n\tnude scenes in movies.\n\n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nJohnny U: Johnny Unitas was the National Football League's most valuable\nplayer twice - and he led Baltimore to victory in \"Super Bowl Five.\"\nFor those of you younger than 30: this WAS modern football. The game\nwas played on artificial turf. (Richard Burkard/\nhttp://www.Laughline.com)\n\nAnnouncement: How telling is it that the death of Johnny Unitas was\nannounced by the Baltimore Ravens - and not the Colts, who now play in\nIndianapolis? When the Colt owners moved out of Baltimore years ago,\nthey apparently left all the history books behind. (Burkard)\n\nDick Disappears: Vice President Dick Cheney remains at an undisclosed\nlocation. The move is for security reasons. The Bush Administration\nis trying to keep him at a safe distance from would-be subpoenas. (Alan\nRay - http://www.araycomedy.com)\n\nNoelle Nabbed: Jeb Bush's daughter Noelle is in trouble with the law\nagain. When she was a child, her dad would read her favorite bedtime\nstory. Goldilocks and the Three Strikes. (Ray)\n\n\n" |
1ham
| easy_ham | "use Perl Daily Headline Mailer\n\nSubscribe to The Perl Review\n posted by pudge on Tuesday September 17, @08:00 (links)\n http://use.perl.org/article.pl?sid=02/09/17/121210\n\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n\n" |
1ham
| easy_ham | "use Perl Daily Newsletter\n\nIn this issue:\n * Subscribe to The Perl Review\n\n+--------------------------------------------------------------------+\n| Subscribe to The Perl Review |\n| posted by pudge on Tuesday September 17, @08:00 (links) |\n| http://use.perl.org/article.pl?sid=02/09/17/121210 |\n+--------------------------------------------------------------------+\n\n[0]barryp writes \"You can now pledge a subscription to [1]The Perl Review.\nThe plan is to produce four print magazines per year. Cost: $12/year\n(US); $20/year (international). Let's all make this happen by signing\nup!\" The web site says that they'll attempt to go print if they get\nenough subscription pledges.\n\nDiscuss this story at:\n http://use.perl.org/comments.pl?sid=02/09/17/121210\n\nLinks:\n 0. mailto:paul.barry@itcarlow.ie\n 1. http://www.theperlreview.com/\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n\n" |
1ham
| easy_ham | "\nForwarded-by: William Knowles <wk@c4i.org>\n\nhttp://www.thesun.co.uk/article/0,,2-2002430339,00.html\n\nBy PAUL CROSBIE\nSep 16, 2002\n\nVIRGIN'S latest airliner is being revamped after randy passengers\ndiscovered a tiny cabin was just the place to join the Mile High Club.\n\nThe £130million Airbus 340-600 is fitted with a 5ft x 4ft mother and\nbaby room with a plastic table meant for changing nappies.\n\nBut couples keep wrecking it by sneaking in for a quick bonk.\n\nVirgin has replaced the table several times even though the plane only\ncame into service a few weeks ago.\n\nIt is named Claudia Nine after sexy model Claudia Schiffer, 32, who\nlaunched it in July.\n\nNow Virgin bosses have asked Airbus to build a stronger table.\n\nAt first, German engineers responsible for the jet's interior were\nbaffled by the problem.\n\nThe table is designed to take the weight of a mum and baby.\n\nOne Airbus worker said: \"We couldn't work it out. Then the penny\ndropped. It didn't occur to the Germans that this might happen. It\ncaused great amusement.\"\n\nThe firm say the cost of strengthening the tables will be about £200.\n\nA Virgin spokeswoman said: \"Those determined to join the Mile High\nClub will do so despite the lack of comforts.\n\n\"We don't mind couples having a good time but this is not something\nthat we would encourage because of air regulations.\"\n\nThe new Airbus is the world's longest airliner, with teasing slogan\n\"Mine is bigger than yours\". Virgin is using it on flights to the Far\nEast and the US.\n\n\n" |
1ham
| easy_ham | "\nForwarded-by: Flower\n\nDid you know that you can tell from the skin whether a person is\nsexually active or not?\n\n1. Sex is a beauty treatment. Scientific tests find that when woman\n make love they produce more of the hormone estrogen, which makes\n hair shiny and skin smooth.\n\n2. Gentle, relaxed lovemaking reduces your chances of suffering\n dermatitis, skin rashes and blemishes. The sweat produced cleanses\n the pores and makes your skin glow.\n\n3. Lovemaking can burn up those calories you piled on during that\n romantic dinner.\n\n4. Sex is one of the safest sports you can take up. It stretches\n and tones up just about every muscle in the body. It's more enjoyable\n than swimming 20 laps, and you don't need special sneakers!\n\n5. Sex is an instant cure for mild depression. It releases the body\n endorphin into the bloodstream, producing a sense of euphoria and\n leaving you with a feeling of well-being.\n\n6. The more sex you have, the more you will be offered. The sexually\n active body gives off greater quantities of chemicals called\n pheromones. These subtle sex perfumes drive the opposite sex crazy!\n\n7. Sex is the safest tranquilizer in the world. IT IS 10 TIMES MORE\n EFFECTIVE THAN VALIUM.\n\n8. Kissing each day will keep the dentist away. Kissing encourages\n saliva to wash food from the teeth and lowers the level of the acid\n that causes decay, preventing plaque build-up.\n\n9. Sex actually relieves headaches. A lovemaking session can release\n the tension that restricts blood vessels in the brain.\n\n10. A lot of lovemaking can unblock a stuffy nose. Sex is a national\n antihistamine. It can help combat asthma and hay fever.\n\nENJOY SEX!\n\n\n" |
1ham
| easy_ham | "use Perl Daily Headline Mailer\n\nHow much does Perl, PHP, Java, or Lisp suck?\n posted by pudge on Wednesday September 18, @08:08 (links)\n http://use.perl.org/article.pl?sid=02/09/17/189201\n\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n\n" |
1ham
| easy_ham | "use Perl Daily Newsletter\n\nIn this issue:\n * How much does Perl, PHP, Java, or Lisp suck?\n\n+--------------------------------------------------------------------+\n| How much does Perl, PHP, Java, or Lisp suck? |\n| posted by pudge on Wednesday September 18, @08:08 (links) |\n| http://use.perl.org/article.pl?sid=02/09/17/189201 |\n+--------------------------------------------------------------------+\n\n[0]brian_d_foy writes \"A long time ago Don Marti started the OS\nSucks-Rules-O-Meter, and Jon Orwant wrote his own Sucks-Rules-O-Meter for\ncomputer languages. Recently Dan Brian improved on that with a little bit\nof natural language processing. Now [1]The Perl Review makes pretty\npictures of it all. Based on searches of AltaVista and Google, we found\nthat not a lot of people think PHP or Lisp sucks, a lot think C++ and\nJava suck, and they put Perl is somewhere in the middle. Does Perl suck\nmore than it use to suck, or has PHP just shot way ahead?\"\n\nDiscuss this story at:\n http://use.perl.org/comments.pl?sid=02/09/17/189201\n\nLinks:\n 0. http://www.theperlreview.com\n 1. http://www.theperlreview.com/at_a_glance.html\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n\n" |
1ham
| easy_ham | "use Perl Daily Headline Mailer\n\nPerlQT 3 Released\n posted by ziggy on Thursday September 19, @10:41 (tools)\n http://use.perl.org/article.pl?sid=02/09/19/1443213\n\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n\n" |
1ham
| easy_ham | "On Mon, 7 Oct 2002, Thomas Vander Stichele wrote:\n\n> Hi,\n> \n> In my build scripts, I have problems with some of the kernel packages.\n> \n> For kernel-sources, I get :\n> \n> Package kernel-source is a virtual package provided by:\n> kernel-source#2.4.18-3 2.4.18-3\n> kernel-source#2.4.18-3 2.4.18-3\n> \n> on running apt-get install kernel-source\n> \n> Now, first of all, this doesn't really tell me what the two options are ;)\n> Second, is there some way I can tell apt-get to install either ? This is \n> done from automatic build scripts so I'd like it to proceed anyway.\n\nThat's just apt's way of telling you the package is in \"AllowDuplicated\", \nmeaning multiple versions of the package can be installed at the same \ntime. Yes the output is a bit strange.. especially when there's only one \nversion available.\n\n'apt-get install kernel-source#2.4.18-3' will install it...\n\n-- \n\t- Panu -\n\n\n_______________________________________________\nRPM-List mailing list <RPM-List@freshrpms.net>\nhttp://lists.freshrpms.net/mailman/listinfo/rpm-list\n\n\n" |
1ham
| easy_ham | "use Perl Daily Newsletter\n\nIn this issue:\n * PerlQT 3 Released\n\n+--------------------------------------------------------------------+\n| PerlQT 3 Released |\n| posted by ziggy on Thursday September 19, @10:41 (tools) |\n| http://use.perl.org/article.pl?sid=02/09/19/1443213 |\n+--------------------------------------------------------------------+\n\n[0]Dom2 writes \"As seen on the dot, a new version of [1]PerlQT is out!\nApparently sporting a perl version of [2]uic for developing ui's from\nXML. A [3]tutorial is available for those wanting to know more details.\"\n\nDiscuss this story at:\n http://use.perl.org/comments.pl?sid=02/09/19/1443213\n\nLinks:\n 0. mailto:dom@happygiraffe.net\n 1. http://perlqt.infonium.com/\n 2. http://doc.trolltech.com/3.0/uic.html\n 3. http://perlqt.infonium.com/dist/current/doc/index.html\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n\n" |
1ham
| easy_ham | "use Perl Daily Headline Mailer\n\nYAPC 2003 Call For Venues\n posted by KM on Sunday September 22, @09:08 (news)\n http://use.perl.org/article.pl?sid=02/09/22/1312225\n\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n\n" |
1ham
| easy_ham | "use Perl Daily Newsletter\n\nIn this issue:\n * YAPC 2003 Call For Venues\n\n+--------------------------------------------------------------------+\n| YAPC 2003 Call For Venues |\n| posted by KM on Sunday September 22, @09:08 (news) |\n| http://use.perl.org/article.pl?sid=02/09/22/1312225 |\n+--------------------------------------------------------------------+\n\n[0]Yet Another Society/[1]The Perl Foundation is making a call for venues\nfor the 2003 YAPC::America. Don't forget to check the [2]venue\nrequirements, the YAPC::Venue module on CPAN, and talk to the organizers\nof YAPCs past. The Perl Foundation aims to announce the venue in November\n2002.\n\nDiscuss this story at:\n http://use.perl.org/comments.pl?sid=02/09/22/1312225\n\nLinks:\n 0. http://yetanother.org\n 1. http://perlfoundation.org\n 2. http://www.yapc.org/venue-reqs.txt\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n\n" |
1ham
| easy_ham | "use Perl Daily Headline Mailer\n\nThis Week on perl5-porters (16-22 September 2002)\n posted by rafael on Monday September 23, @07:58 (summaries)\n http://use.perl.org/article.pl?sid=02/09/23/125230\n\nThe Great Perl Monger Cull Of 2002\n posted by ziggy on Monday September 23, @16:38 (news)\n http://use.perl.org/article.pl?sid=02/09/23/2041201\n\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n\n" |
1ham
| easy_ham | "use Perl Daily Newsletter\n\nIn this issue:\n * This Week on perl5-porters (16-22 September 2002)\n * The Great Perl Monger Cull Of 2002\n\n+--------------------------------------------------------------------+\n| This Week on perl5-porters (16-22 September 2002) |\n| posted by rafael on Monday September 23, @07:58 (summaries) |\n| http://use.perl.org/article.pl?sid=02/09/23/125230 |\n+--------------------------------------------------------------------+\n\nThat's on a week like this that you realize that lots of porters are\nEuropean (and managed to free themselves for YAPC::Europe.) Or were they,\non the contrary, too busy in the big blue room ? On the other hand, the\nnumber of bug reports stayed at its habitual average level.\n\nThis story continues at:\n http://use.perl.org/article.pl?sid=02/09/23/125230\n\nDiscuss this story at:\n http://use.perl.org/comments.pl?sid=02/09/23/125230\n\n\n+--------------------------------------------------------------------+\n| The Great Perl Monger Cull Of 2002 |\n| posted by ziggy on Monday September 23, @16:38 (news) |\n| http://use.perl.org/article.pl?sid=02/09/23/2041201 |\n+--------------------------------------------------------------------+\n\n[0]davorg writes \"If you take a look at [1]list of local groups on the\n[2]Perl Mongers web site, you'll see that it's just got a good deal\nshorter. Over the last month or so, I've been making strenuous efforts to\ncontact all of the groups we had listed to see which ones were still\nactive. What you see is the result of this exercise. Almost half of the\ngroups have been removed because they haven't responded to my emails.\n\nIf your local group still exists but is no longer listed, then that means\nthat I don't have an update to date contact for your group. Please [3]let\nme know if that's the case.\"\n\nDiscuss this story at:\n http://use.perl.org/comments.pl?sid=02/09/23/2041201\n\nLinks:\n 0. mailto:dave@dave.org.uk\n 1. http://www.pm.org/groups/\n 2. http://www.pm.org/\n 3. mailto:user_groups@pm.org\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n\n" |
1ham
| easy_ham | "use Perl Daily Headline Mailer\n\nUsing Web Services with Perl and AppleScript\n posted by pudge on Wednesday September 25, @08:12 (links)\n http://use.perl.org/article.pl?sid=02/09/25/129231\n\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n\n" |
1ham
| easy_ham | "use Perl Daily Newsletter\n\nIn this issue:\n * Using Web Services with Perl and AppleScript\n\n+--------------------------------------------------------------------+\n| Using Web Services with Perl and AppleScript |\n| posted by pudge on Wednesday September 25, @08:12 (links) |\n| http://use.perl.org/article.pl?sid=02/09/25/129231 |\n+--------------------------------------------------------------------+\n\n[0]jonasbn writes \"An article on [1]Perl, AppleScript, and Web Services\nby Randal L. Schwartz has been published on O'ReillyNet.\" See AppleScript\nand Perl work together!\n\nDiscuss this story at:\n http://use.perl.org/comments.pl?sid=02/09/25/129231\n\nLinks:\n 0. mailto:jonasbnNO@SPAMe-diot.dk\n 1. http://www.oreillynet.com/pub/a/javascript/synd/2002/09/24/applescript_perl.html\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n\n" |
1ham
| easy_ham | "use Perl Daily Headline Mailer\n\nThis Week on perl5-porters (23-29 September 2002)\n posted by rafael on Monday September 30, @17:26 (summaries)\n http://use.perl.org/article.pl?sid=02/09/30/2151221\n\n\n\n\nCopyright 1997-2002 pudge. All rights reserved.\n\n\n======================================================================\n\nYou have received this message because you subscribed to it\non use Perl. To stop receiving this and other\nmessages from use Perl, or to add more messages\nor change your preferences, please go to your user page.\n\n\thttp://use.perl.org/my/messages/\n\nYou can log in and change your preferences from there.\n\n\n" |