[Spambayes] understanding high false negative rate

> TP> I'm reading this now as that you trained on about 220 spam and
> TP> about 220 ham.  That's less than 10% of the sizes of the
> TP> training sets I've been using.  Please try an experiment: train
> TP> on 550 of each, and test once against the other 550 of each.

[Jeremy]
> This helps a lot!

Possibly.  I checked in a change to classifier.py overnight (getting
rid of MINCOUNT) that gave a major improvement in the f-n rate too,
independent of tokenization.

> Here are results with the stock tokenizer:

Unsure what "stock tokenizer" means to you.  For example, it might mean
tokenizer.tokenize, or mboxtest.MyTokenizer.tokenize.

> Training on <mbox: /home/jeremy/Mail/INBOX 0> & <mbox: /home/jeremy/Mail/spam 8>
> ... 644 hams & 557 spams
>     0.000 10.413
>     1.398  6.104
>     1.398  5.027
>
> Training on <mbox: /home/jeremy/Mail/INBOX 0> & <mbox: /home/jeremy/Mail/spam 0>
> ... 644 hams & 557 spams
>     0.000  8.259
>     1.242  2.873
>     1.242  5.745
>
> Training on <mbox: /home/jeremy/Mail/INBOX 2> & <mbox: /home/jeremy/Mail/spam 3>
> ... 644 hams & 557 spams
>     1.398  5.206
>     1.398  4.488
>     0.000  9.336
>
> Training on <mbox: /home/jeremy/Mail/INBOX 2> & <mbox: /home/jeremy/Mail/spam 0>
> ... 644 hams & 557 spams
>     1.553  5.206
>     1.553  5.027
>     0.000  9.874
>
> total false pos 139 5.39596273292
> total false neg 970 43.5368043088

Note that those rates remain much higher than I got using just 220 of
ham and 220 of spam.  That remains A Mystery.

> And results from the tokenizer that looks at all headers except Date,
> Received, and X-From_:

Unsure what that means too.  For example, "looks at" might mean you
enabled Anthony's count-them gimmick, and/or that you're tokenizing
them yourself, and/or ...?

> Training on <mbox: /home/jeremy/Mail/INBOX 0> & <mbox: /home/jeremy/Mail/spam 8>
> ... 644 hams & 557 spams
>     0.000  7.540
>     0.932  4.847
>     0.932  3.232
>
> Training on <mbox: /home/jeremy/Mail/INBOX 0> & <mbox: /home/jeremy/Mail/spam 0>
> ... 644 hams & 557 spams
>     0.000  7.181
>     0.621  2.873
>     0.621  4.847
>
> Training on <mbox: /home/jeremy/Mail/INBOX 2> & <mbox: /home/jeremy/Mail/spam 3>
> ... 644 hams & 557 spams
>     1.087  4.129
>     1.087  3.052
>     0.000  6.822
>
> Training on <mbox: /home/jeremy/Mail/INBOX 2> & <mbox: /home/jeremy/Mail/spam 0>
> ... 644 hams & 557 spams
>     0.776  3.411
>     0.776  3.411
>     0.000  6.463
>
> total false pos 97 3.76552795031
> total false neg 738 33.1238779174
>
> Is it safe to conclude that avoiding any cleverness with headers is a
> good thing?

Since I don't know what you did, exactly, I can't guess.  What you
seemed to show is that you did *something* clever with headers and that
doing so helped (the "after" numbers are better than the "before"
numbers, right?).

Assuming that what you did was override what's now
tokenizer.Tokenizer.tokenize_headers with some other routine, and
didn't call the base Tokenizer.tokenize_headers at all, then you're
missing carefully tested treatment of just a few header fields, but
adding many dozens of other header fields.  There's no question that
adding more header fields should help; tokenizer.Tokenizer.tokenize_headers
doesn't do so only because my testing corpora are such that I can't add
more headers without getting major benefits for bogus reasons.

Apart from all that, you said you're skipping Received.  By several
accounts, that may be the most valuable of all the header fields.  I'm
(meaning tokenizer.Tokenizer.tokenize_headers) skipping them too for
the reason explained above.
Offline a week or two ago, Neil Schemenauer reported good results from
this scheme:

    ip_re = re.compile(r'(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})')

    for header in msg.get_all("received", ()):
        for ip in ip_re.findall(header):
            parts = ip.split(".")
            for n in range(1, 5):
                yield 'received:' + '.'.join(parts[:n])

This makes a lot of sense to me; I just checked it in, but left it
disabled for now.
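Wrapped up so it can be run stand-alone (the wrapper function and the
sample header below are illustrative; only the inner loop is Neil's):

    import re
    from email.message import Message

    ip_re = re.compile(r'(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})')

    def received_ip_tokens(msg):
        # One token per IP prefix, so whole netblocks as well as
        # exact hosts can become clues.
        for header in msg.get_all("received", ()):
            for ip in ip_re.findall(header):
                parts = ip.split(".")
                for n in range(1, 5):
                    yield 'received:' + '.'.join(parts[:n])

    msg = Message()
    msg["Received"] = "from unknown (HELO x) (64.12.96.7) by mail.example.com"
    print(list(received_ip_tokens(msg)))
    # ['received:64', 'received:64.12', 'received:64.12.96',
    #  'received:64.12.96.7']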
[Spambayes] test sets?

[Guido]
> Perhaps more useful would be if Tim could check in the pickle(s?)
> generated by one of his training runs, so that others can see how
> Tim's training data performs against their own corpora.

I did that yesterday, but seems like nobody bit.  Just in case <wink>,
I uploaded a new version just now.  Since MINCOUNT went away,
UNKNOWN_SPAMPROB is much less likely, and there's almost nothing that
can be pruned away (so the file is about 5x larger now).

    http://sf.net/project/showfiles.php?group_id=61702

> This could also be the starting point for a self-contained distribution
> (you've got to start with *something*, and training with python-list data
> seems just as good as anything else).

The only way to know anything here is for someone to try it.
[Spambayes] understanding high false negative rate

Here's clarification of what I did:

First test results using tokenizer.Tokenizer.tokenize_headers()
unmodified.

    Training on 644 hams & 557 spams
        0.000 10.413
        1.398  6.104
        1.398  5.027

    Training on 644 hams & 557 spams
        0.000  8.259
        1.242  2.873
        1.242  5.745

    Training on 644 hams & 557 spams
        1.398  5.206
        1.398  4.488
        0.000  9.336

    Training on 644 hams & 557 spams
        1.553  5.206
        1.553  5.027
        0.000  9.874

    total false pos 139 5.39596273292
    total false neg 970 43.5368043088

Second test results using mboxtest.MyTokenizer.tokenize_headers().
This uses all headers except Received, Date, and X-From_.

    Training on 644 hams & 557 spams
        0.000  7.540
        0.932  4.847
        0.932  3.232

    Training on 644 hams & 557 spams
        0.000  7.181
        0.621  2.873
        0.621  4.847

    Training on 644 hams & 557 spams
        1.087  4.129
        1.087  3.052
        0.000  6.822

    Training on 644 hams & 557 spams
        0.776  3.411
        0.776  3.411
        0.000  6.463

    total false pos 97 3.76552795031
    total false neg 738 33.1238779174

Jeremy
[Spambayes] test sets?

> [Guido]
> > Perhaps more useful would be if Tim could check in the pickle(s?)
> > generated by one of his training runs, so that others can see how
> > Tim's training data performs against their own corpora.

[Tim]
> I did that yesterday, but seems like nobody bit.

I downloaded and played with it a bit, but had no time to do anything
systematic.  It correctly recognized a spam that slipped through SA.
But it also identified as spam everything in my inbox that had any MIME
structure or HTML parts, and several messages in my saved 'zope geeks'
list that happened to be using MIME and/or HTML.  So I guess I'll have
to retrain it (yes, you told me so :-).

> Just in case <wink>, I uploaded a new version just now.  Since
> MINCOUNT went away, UNKNOWN_SPAMPROB is much less likely, and there's
> almost nothing that can be pruned away (so the file is about 5x
> larger now).
>
>     http://sf.net/project/showfiles.php?group_id=61702

I'll try this when I have time.

--Guido van Rossum (home page: http://www.python.org/~guido/)
[Spambayes] spambayes package?

Before we get too far down this road, what do people think of creating
a spambayes package containing classifier and tokenizer?  This is just
to minimize clutter in site-packages.

Skip
[Spambayes] understanding high false negative rate

[Jeremy Hylton]
> Here's clarification of what I did:

That's pretty much what I had guessed.  Thanks!  One more to try:

> First test results using tokenizer.Tokenizer.tokenize_headers()
> unmodified.
> ...
> Second test results using mboxtest.MyTokenizer.tokenize_headers().
> This uses all headers except Received, Date, and X-From_.
> ...

Try the latter again, but call the base tokenize_headers() too.
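In outline, the experiment being requested looks like this (the
extra-header loop is a guess at what mboxtest.MyTokenizer does; the
point is the chained call to the base class):

    from tokenizer import Tokenizer

    SKIP = ('received', 'date', 'x-from_')

    class MyTokenizer(Tokenizer):
        def tokenize_headers(self, msg):
            # Keep the carefully tested base treatment of Subject,
            # From, X-Mailer, and Organization ...
            for tok in Tokenizer.tokenize_headers(self, msg):
                yield tok
            # ... and also mine the remaining header lines.
            for name, value in msg.items():
                name = name.lower()
                if name in SKIP:
                    continue
                for word in value.lower().split():
                    yield '%s:%s' % (name, word)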
[Spambayes] spambayes package?

> Before we get too far down this road, what do people think of creating a
> spambayes package containing classifier and tokenizer?  This is just to
> minimize clutter in site-packages.

Too early IMO (if you mean to leave the various other tools out of it).

If and when we package this, perhaps we should use Barry's trick from
the email package for making the package itself the toplevel dir of the
distribution (rather than requiring an extra directory level just so
the package can be a subdir of the distro).

--Guido van Rossum (home page: http://www.python.org/~guido/)
[Spambayes] test sets?

[Guido, on the classifier pickle on SF]
> I downloaded and played with it a bit, but had no time to do anything
> systematic.

Cool!

> It correctly recognized a spam that slipped through SA.

Ditto.

> But it also identified as spam everything in my inbox that had any
> MIME structure or HTML parts, and several messages in my saved 'zope
> geeks' list that happened to be using MIME and/or HTML.

Do you know why?  The strangest implied claim there is that it hates
MIME independent of HTML.  For example, the spamprob of
'content-type:text/plain' in that pickle is under 0.21.
'content-type:multipart/alternative' gets 0.93, but that's not a killer
clue, and one bit of good content will more than cancel it out.

WRT hating HTML, possibilities include:

1. It really had to do with something other than MIME/HTML.

2. These are pure HTML (not multipart/alternative with a text/plain
   part), so that the tags aren't getting stripped.  The pickled
   classifier despises all hints of HTML due to its c.l.py heritage.

3. These are multipart/alternative with a text/plain part, but the
   latter doesn't contain the same text as the text/html part (for
   example, as Anthony reported, perhaps the text/plain part just says
   something like "This is an HTML message.").

If it's #2, it would be easy to add an optional bool argument to
tokenize() meaning "even if it is pure HTML, strip the tags anyway".
In fact, I'd like to do that and default it to True.  The extreme
hatred of HTML on tech lists strikes me as, umm, extreme <wink>.

> So I guess I'll have to retrain it (yes, you told me so :-).

That would be a different experiment.  I'm certainly curious to see
whether Jeremy's much-worse-than-mine error rates are typical or
aberrant.
[Spambayes] test sets?

> > But it also identified as spam everything in my inbox that had any
> > MIME structure or HTML parts, and several messages in my saved 'zope
> > geeks' list that happened to be using MIME and/or HTML.
>
> Do you know why?  The strangest implied claim there is that it hates MIME
> independent of HTML.  For example, the spamprob of 'content-type:text/plain'
> in that pickle is under 0.21.  'content-type:multipart/alternative' gets
> 0.93, but that's not a killer clue, and one bit of good content will more
> than cancel it out.

I reran the experiment (with the new SpamHam1.pik, but it doesn't seem
to make a difference).  Here are the clues for the two spams in my
inbox (in hammie.py's output format, which sorts the clues by
probability; the first two numbers are the message number and overall
probability; then line-folded):

66 1.00 S 'facility': 0.01; 'speaker': 0.01; 'stretch': 0.01;
'thursday': 0.01; 'young,': 0.01; 'mistakes': 0.12; 'growth': 0.85;
'>content-type:text/plain': 0.85; 'please': 0.85; 'capital': 0.92;
'series': 0.92; 'subject:Don': 0.94; 'companies': 0.96;
'>content-type:text/html': 0.96; 'fee': 0.96; 'money': 0.96;
'8:00am': 0.99; '9:00am': 0.99; '>content-type:image/gif': 0.99;
'>content-type:multipart/alternative': 0.99; 'attend': 0.99;
'companies,': 0.99; 'content-type/type:multipart/alternative': 0.99;
'content-type:multipart/related': 0.99; 'economy': 0.99; 'economy"': 0.99

This has 6 content-types as spam clues, only one of which is related to
HTML, despite there being an HTML alternative (and 12 other spam clues,
vs. only 6 ham clues).  This was an announcement of a public event by
our building owners, with a text part that was the same as the HTML
(AFAICT).  Its language may be spammish, but the content-type clues
didn't help.  (BTW, it makes me wonder about the wisdom of keeping
punctuation -- 'economy' and 'economy"' to me don't seem to deserve to
be counted as two clues.)

76 1.00 S '(near': 0.01; 'alexandria': 0.01; 'conn': 0.01;
'from:adam': 0.01; 'from:email addr:panix': 0.01; 'poked': 0.01;
'thorugh': 0.01; 'though': 0.03; "i'm": 0.03; 'reflect': 0.05;
"i've": 0.06; 'wednesday': 0.07; 'content-disposition:inline': 0.10;
'contacting': 0.93; 'sold': 0.96; 'financially': 0.98; 'prices': 0.98;
'rates': 0.99; 'discount.': 0.99; 'hotel': 0.99; 'hotels': 0.99;
'hotels.': 0.99; 'nights,': 0.99; 'plaza': 0.99; 'rates,': 0.99;
'rates.': 0.99; 'rooms': 0.99; 'season': 0.99; 'stations': 0.99;
'subject:Hotel': 0.99

Here is the full message (Received: headers stripped), with apologies
to Ziggy and David:

"""
Date: Fri, 06 Sep 2002 17:17:13 -0400
From: Adam Turoff <ziggy@panix.com>
Subject: Hotel information
To: guido@python.org, davida@activestate.com
Message-id: <20020906211713.GK7451@panix.com>
MIME-version: 1.0
Content-type: text/plain; charset=us-ascii
Content-disposition: inline
User-Agent: Mutt/1.4i

I've been looking into hotels.  I poked around expedia for availability
from March 26 to 29 (4 nights, wednesday thorugh saturday).

I've also started contacting hotels for group rates; some of the group
rates are no better than the regular rates, and they require signing a
contract with a minimum number of rooms sold (with someone financially
responsible for unbooked rooms).  Most hotels are less than
responsive...
Radission - Barcelo Hotel (DuPont Circle)
    $125/night, $99/weekend

State Plaza hotel (Foggy Bottom; near GWU)
    $119/night, $99/weekend

Hilton Silver Spring (Near Metro, in suburban MD)
    $99/hight, $74/weekend

Windsor Park Hotel
    Conn Ave, between DuPont Circle/Woodley Park Metro stations
    $95/night; needs a car

Econo Lodge Alexandria (Near Metro, in suburban VA)
    $95/night

This is a hand picked list; I ignored anything over $125/night, even
though there are some really well situated hotels nearby at higher
rates.  Also, I'm not sure how much these prices reflect an
expedia-only discount.  I can't vouch for any of these hotels, either.

I also found out that the down season for DC Hotels are mid-june
through mid-september, and mid-november through mid-january.

Z.
"""

This one has no MIME structure nor HTML!  It even has a
Content-disposition which is counted as a non-spam clue.  It got f-p'ed
because of the many hospitality-related and money-related terms.  I'm
surprised $125/night and similar aren't clues too.  (And again, several
spam clues are duplicated with different variations: 'hotel', 'hotels',
'hotels.', 'subject:Hotel', 'rates,', 'rates.'.)

> WRT hating HTML, possibilities include:
>
> 1. It really had to do with something other than MIME/HTML.
>
> 2. These are pure HTML (not multipart/alternative with a text/plain part),
>    so that the tags aren't getting stripped.  The pickled classifier
>    despises all hints of HTML due to its c.l.py heritage.
>
> 3. These are multipart/alternative with a text/plain part, but the
>    latter doesn't contain the same text as the text/html part (for
>    example, as Anthony reported, perhaps the text/plain part just
>    says something like "This is an HTML message.").
>
> If it's #2, it would be easy to add an optional bool argument to tokenize()
> meaning "even if it is pure HTML, strip the tags anyway".  In fact, I'd like
> to do that and default it to True.  The extreme hatred of HTML on tech lists
> strikes me as, umm, extreme <wink>.

I also looked in more detail at some f-p's in my geeks traffic.  The
first one's a doozie (that's the term, right? :-).  It has lots of HTML
clues that are apparently ignored.  It was a multipart/mixed with two
parts: a brief text/plain part containing one or two sentences, a mondo
weird URL:

http://x60.deja.com/[ST_rn=ps]/getdoc.xp?AN=687715863&CONTEXT=973121507.1408827441&hitnum=23

and some employer-generated spammish boilerplate; the second part was
the HTML taken directly from the above URL.
Clues: 43 1.00 S '"main"': 0.01; '(later': 0.01; '(lots': 0.01; '--paul': 0.01; '1995-2000': 0.01; 'adopt': 0.01; 'apps': 0.01; 'commands': 0.01; 'deja.com': 0.01; 'dejanews,': 0.01; 'discipline': 0.01; 'duct': 0.01; 'email addr:digicool': 0.01; 'email name:paul': 0.01; 'everitt': 0.01; 'exist,': 0.01; 'forwards': 0.01; 'framework': 0.01; 'from:email addr:digicool': 0.01; 'from:email name:<paul': 0.01; 'from:paul': 0.01; 'height': 0.01; 'hodge-podge': 0.01; 'http0:deja': 0.01; 'http0:zope': 0.01; 'http1:[st_rn': 0.01; 'http1:comp': 0.01; 'http1:getdoc': 0.01; 'http1:ps]': 0.01; 'http>1:22': 0.01; 'http>1:24': 0.01; 'http>1:57': 0.01; 'http>1:an': 0.01; 'http>1:author': 0.01; 'http>1:fmt': 0.01; 'http>1:getdoc': 0.01; 'http>1:pr': 0.01; 'http>1:products': 0.01; 'http>1:query': 0.01; 'http>1:search': 0.01; 'http>1:viewthread': 0.01; 'http>1:xp': 0.01; 'http>1:zope': 0.01; 'inventing': 0.01; 'jsp': 0.01; 'jsp.': 0.01; 'logic': 0.01; 'maps': 0.01; 'neo': 0.01; 'newsgroup,': 0.01; 'object': 0.01; 'popup': 0.01; 'probable': 0.01; 'query': 0.01; 'query,': 0.01; 'resizes': 0.01; 'servlet': 0.01; 'skip:? 20': 0.01; 'stems': 0.01; 'subject:JSP': 0.01; 'sucks!': 0.01; 'templating': 0.01; 'tempted': 0.01; 'url.': 0.01; 'usenet': 0.01; 'usenet,': 0.01; 'wrote': 0.01; 'x-mailer:mozilla 4.74 [en] (windows nt 5.0; u)': 0.01; 'zope': 0.01; '#000000;': 0.99; '#cc0000;': 0.99; '#ff3300;': 0.99; '#ff6600;': 0.99; '#ffffff;': 0.99; '&copy;': 0.99; '&gt;': 0.99; '&nbsp;&nbsp;': 0.99; '&quot;no': 0.99; '.med': 0.99; '.small': 0.99; '0pt;': 0.99; '0px;': 0.99; '10px;': 0.99; '11pt;': 0.99; '12px;': 0.99; '18pt;': 0.99; '18px;': 0.99; '1pt;': 0.99; '2px;': 0.99; '640;': 0.99; '8pt;': 0.99; '<!--': 0.99; '</b>': 0.99; '</body>': 0.99; '</head>': 0.99; '</html>': 0.99; '</script>': 0.99; '</select>': 0.99; '</span>': 0.99; '</style>': 0.99; '</table>': 0.99; '</td>': 0.99; '</td></tr>': 0.99; '</tr>': 0.99; '</tr><tr': 0.99; '<b><a': 0.99; '<base': 0.99; '<body': 0.99; '<br>': 0.99; '<br>&nbsp;': 0.99; '<br><a': 0.99; '<br><span': 0.99; '<font': 0.99; '<form': 0.99; '<head>': 0.99; '<html>': 0.99; '<img': 0.99; '<input': 0.99; '<meta': 0.99; '<option': 0.99; '<p>': 0.99; '<p>a': 0.99; '<script>': 0.99; '<select': 0.99; '<span': 0.99; '<style>': 0.99; '<table': 0.99; '<td': 0.99; '<td>': 0.99; '<td></td>': 0.99; '<td><img': 0.99; '<tr': 0.99; '<tr>': 0.99; '<tr><td': 0.99; '<tr><td><img': 0.99; 'absolute;': 0.99; 'align="left"': 0.99; 'align=center': 0.99; 'align=left': 0.99; 'align=middle': 0.99; 'align=right': 0.99; 'align=right>': 0.99; 'alt=""': 0.99; 'bold;': 0.99; 'border=0': 0.99; 'border=0>': 0.99; 'color:': 0.99; 'colspan=2': 0.99; 'colspan=2>': 0.99; 'colspan=4': 0.99; 'face="arial"': 0.99; 'font-family:': 0.99; 'font-size:': 0.99; 'font-weight:': 0.99; 'footer': 0.99; 'for<br>': 0.99; 'fucking<br>': 0.99; 'height="1"': 0.99; 'height="16"': 0.99; 'height=1': 0.99; 'height=12': 0.99; 'height=125': 0.99; 'height=17': 0.99; 'height=18': 0.99; 'height=21': 0.99; 'height=4': 0.99; 'height=57': 0.99; 'height=60': 0.99; 'height=8': 0.99; 'hspace=0': 0.99; 'http0:g': 0.99; 'http0:web2': 0.99; 'http1:0': 0.99; 'http1:ads': 0.99; 'http1:d': 0.99; 'http1:page': 0.99; 'http1:site': 0.99; 'http>1:article': 0.99; 'http>1:back': 0.99; 'http>1:com': 0.99; 'http>1:d': 0.99; 'http>1:gif': 0.99; 'http>1:go': 0.99; 'http>1:group': 0.99; 'http>1:http': 0.99; 'http>1:post': 0.99; 'http>1:ps': 0.99; 'http>1:site': 0.99; 'http>1:st': 0.99; 'http>1:title': 0.99; 'http>1:yahoo': 0.99; 'inc.</a>': 0.99; 'jobs!': 0.99; 'normal;': 
0.99; 'nowrap': 0.99; 'nowrap>': 0.99; 'nowrap><font': 0.99;
'padding:': 0.99; 'rowspan=2': 0.99; 'rowspan=3': 0.99; 'servlets,':
0.99; 'size=15': 0.99; 'size=35': 0.99; 'skip:< 10': 0.99;
'skip:b 60': 0.99; 'skip:h 110': 0.99; 'skip:h 170': 0.99;
'skip:h 200': 0.99; 'skip:h 240': 0.99; 'skip:h 250': 0.99;
'skip:h 290': 0.99; 'skip:v 40': 0.99; 'solid;': 0.99;
'text=#000000': 0.99; 'to<br>': 0.99; 'type="image"': 0.99;
'type="text"': 0.99; 'type=hidden': 0.99; 'type=image': 0.99;
'type=radio': 0.99; 'type=submit': 0.99; 'type=text': 0.99;
'valign=top': 0.99; 'valign=top>': 0.99; 'value="">': 0.99;
'visibility:': 0.99; 'width:': 0.99; 'width="33"': 0.99; 'width=1':
0.99; 'width=100%': 0.99; 'width=100%>': 0.99; 'width=12': 0.99;
'width=125': 0.99; 'width=130': 0.99; 'width=137': 0.99; 'width=2':
0.99; 'width=20': 0.99; 'width=25': 0.99; 'width=4': 0.99;
'width=468': 0.99; 'width=6': 0.99; 'width=72': 0.99;
'works<br>': 0.99

The second f-p had the same structure (and sender :-); the third f-p
had the same structure and a different sender.  Ditto the fifth, sixth.
(Not posting clues for brevity.)  The fourth was different: plaintext
with one very short sentence and a URL.  Clues:

300 1.00 S 'from:email addr:digicool': 0.01; 'http1:news': 0.24;
'from:email addr:com>': 0.32; 'from:tres': 0.50; 'http>1:1114digi':
0.50; 'proto:http': 0.50; 'subject:Geeks': 0.50; 'x-mailer:mozilla
4.75 [en] (x11; u; linux 2.2.14-5.0smp i686)': 0.50; 'take': 0.54;
'bool:noorg': 0.61; 'http0:com': 0.66; 'skip:h 50': 0.83;
'http>1:htm': 0.90; 'subject:Software': 0.96; 'http>1:business': 0.99;
'http>1:local': 0.99; 'subject:firm': 0.99; 'us:': 0.99

The seventh was similar.  I scanned a bunch more until I got bored, and
most of them were either of the first form (brief text with URL
followed by quoted HTML from website) or the second (brief text with
one or more URLs).  It's up to you to decide what to call this, but I
think these are none of your #1, #2 or #3 (they're close to #3, but all
are multipart/mixed rather than multipart/alternative).

> > So I guess I'll have to retrain it (yes, you told me so :-).
>
> That would be a different experiment.  I'm certainly curious to see whether
> Jeremy's much-worse-than-mine error rates are typical or aberrant.

It's possible that the corpus you've trained on is more homogeneous
than you thought.

--Guido van Rossum (home page: http://www.python.org/~guido/)
[Spambayes] test sets?

> > I also looked in more detail at some f-p's in my geeks traffic.  The
> > first one's a doozie (that's the term, right? :-).  It has lots of
> > HTML clues that are apparently ignored.
>
> ??  The clues below are *loaded* with snippets unique to HTML (like '<br>').

I *meant* to say that they were 0.99 clues cancelled out by 0.01 clues.
But that's wrong too!  It looks like I haven't grokked this part of
your code yet; this one has way more than 16 clues, and it seems the
classifier basically ended up counting way more 0.99 than 0.01 clues,
and no others made it into the list.  I thought it was looking for
clues with values in between; apparently it found none that weren't
exactly 0.5?

> That sure sets the record for longest list of cancelling extreme clues!

This happened to be the longest one, but there were quite a few similar
ones.  I wonder if there's anything we can learn from looking at the
clues and the HTML.  It was heavily marked-up HTML, with ads in the
sidebar, but the body text was a serious discussion of "OO and soft
coding" with lots of highly technical words as clues (including Zope
and ZEO).

> That there are *any* 0.50 clues in here means the scheme ran out of
> anything interesting to look at.  Adding in more header lines should
> cure that.

Are there any minable-but-unmined header lines in your corpus left?  Or
do we have to start with a different corpus before we can make progress
there?

> > The seventh was similar.
> >
> > I scanned a bunch more until I got bored, and most of them were either
> > of the first form (brief text with URL followed by quoted HTML from
> > website)
>
> If those were text/plain, the HTML tags should have been stripped.  I'm
> still confused about this part.

No, sorry.  These were all of the following structure:

    multipart/mixed
        text/plain (brief text plus URL(s))
        text/html (long HTML copied from website)

I guess you get this when you click on "mail this page" in some
browsers.

> That HTML tags aren't getting stripped remains the biggest mystery to me.

Still?

> This seems confused: Jeremy didn't use my trained classifier pickle,
> he trained his own classifier from scratch on his own corpora.
> That's an entirely different kind of experiment from the one you're
> trying (indeed, you're the only one so far to report results from
> trying my pickle on their own email, and I never expected *that* to
> work well; it's a much bigger mystery to me why Jeremy got such
> relatively worse results from training his own -- and he's the only
> one so far to report results from *that* experiment).

I think it's still corpus size.

--Guido van Rossum (home page: http://www.python.org/~guido/)
[Spambayes] test sets?

[Guido]
> I *meant* to say that they were 0.99 clues cancelled out by 0.01
> clues.  But that's wrong too!  It looks like I haven't grokked this
> part of your code yet; this one has way more than 16 clues, and it
> seems the classifier basically ended up counting way more 0.99 than
> 0.01 clues, and no others made it into the list.  I thought it was
> looking for clues with values in between; apparently it found none
> that weren't exactly 0.5?

There's a brief discussion of this before the definition of
MAX_DISCRIMINATORS.  All clues with prob MIN_SPAMPROB and MAX_SPAMPROB
are saved in min and max lists, and all other clues are fed into the
nbest heap.  Then the shorter of the min and max lists cancels out the
same number of clues in the longer list.  Whatever remains of the
longer list (if anything) is then fed into the nbest heap too, but no
more than MAX_DISCRIMINATORS of them.  In no case do more than
MAX_DISCRIMINATORS clues enter into the final probability calculation,
but all of the min and max lists go into the list of clues (else you'd
have no clue that massive cancellation was occurring; and massive
cancellation may yet turn out to be a hook to signal that manual review
is needed).

In your specific case, the excess of clues in the longer MAX_SPAMPROB
list pushed everything else out of the nbest heap, and that's why you
didn't see anything other than 0.01 and 0.99.  Before adding these
special lists, the outcome when faced with many 0.01 and 0.99 clues was
too often a coin toss: whichever flavor just happened to appear
MAX_DISCRIMINATORS//2 + 1 times first determined the final outcome.

>> That sure sets the record for longest list of cancelling extreme clues!

> This happened to be the longest one, but there were quite a few
> similar ones.

I just beat it <wink>: a tokenization scheme that folds case, and
ignores punctuation, and strips a trailing 's' from words, and saves
both word bigrams and word unigrams, turned up a low-probability very
long spam with a list of 410 0.01 clues and 125 0.99 clues!  Yikes.

> I wonder if there's anything we can learn from looking at the clues and the
> HTML.  It was heavily marked-up HTML, with ads in the sidebar, but the body
> text was a serious discussion of "OO and soft coding" with lots of highly
> technical words as clues (including Zope and ZEO).

No matter how often it says Zope, it gets only one 0.01 clue from doing
so.  Ditto for ZEO.  In contrast, HTML markup has many unique "words"
that get 0.99.  BTW, this is a clear case where the assumption of
conditionally-independent word probabilities is utterly bogus -- e.g.,
the probability that "<body>" appears in a message is highly correlated
with the prob of "<br>" appearing.  By treating them as independent,
naive Bayes grossly misjudges the probability that both appear, and the
only thing you get in return is something that can actually be computed
<wink>.

Read the "What about HTML?" section in tokenizer.py.  From the very
start, I've been investigating what would work best for the mailing
lists hosted at python.org, and HTML decorations have so far been too
strong a clue to justify ignoring it in that specific context.  I
haven't done anything geared toward personal email, including the case
of non-mailing-list email that happens to go through python.org.  I'd
prefer to strip HTML tags from everything, but last time I tried that
it still had bad effects on the error rates in my corpora (the full
test results with and without HTML tag stripping are included in the
"What about HTML?" comment block).  But as the comment block also says,

    # XXX So, if another way is found to slash the f-n rate, the decision here
    # XXX not to strip HTML from HTML-only msgs should be revisited.

and we've since done several things that gave significant f-n rate
reductions.  I should test that again now.

> Are there any minable-but-unmined header lines in your corpus left?

Almost all of them -- apart from MIME decorations that appear in both
headers and bodies (like Content-Type), the *only* header lines the
base tokenizer looks at now are Subject, From, X-Mailer, and
Organization.

> Or do we have to start with a different corpus before we can make
> progress there?

I would need different data, yes.  My ham is too polluted with Mailman
header decorations (which I may or may not be able to clean out, but
fudging the data is a Mortal Sin and I haven't changed a byte so far),
and my spam too polluted with header clues about the fellow who
collected it.  In particular I have to skip To and Received headers
now, and I suspect they're going to be very valuable in real life (for
example, I don't even catch "undisclosed recipients" in the To header
now!).

> ...
> No, sorry.  These were all of the following structure:
>
>     multipart/mixed
>         text/plain (brief text plus URL(s))
>         text/html (long HTML copied from website)

Ah!  That explains why the HTML tags didn't get stripped.  I'd again
offer to add an optional argument to tokenize() so that they'd get
stripped here too, but if it gets glossed over a third time that would
feel too much like a loss <wink>.

>> This seems confused: Jeremy didn't use my trained classifier pickle,
>> he trained his own classifier from scratch on his own corpora.
>> ...

> I think it's still corpus size.

I reported on tests I ran with random samples of 220 spams and 220 hams
from my corpus (that means training on sets of those sizes as well as
predicting on sets of those sizes), and while that did harm the error
rates, the error rates I saw were still much better than Jeremy
reported when using 500 of each.

Ah, a full test run just finished, on the tokenization scheme that

    folds case, and
    ignores punctuation, and
    strips a trailing 's' from words, and
    saves both word bigrams and word unigrams

This is the code:

    # Tokenize everything in the body.
    lastw = ''
    for w in word_re.findall(text):
        n = len(w)
        # Make sure this range matches in tokenize_word().
        if 3 <= n <= 12:
            if w[-1] == 's':
                w = w[:-1]
            yield w
            if lastw:
                yield lastw + w
            lastw = w + ' '
        elif n >= 3:
            lastw = ''
            for t in tokenize_word(w):
                yield t

where

    word_re = re.compile(r"[\w$\-\x80-\xff]+")

This at least doubled the process size over what's done now.  It helped
the f-n rate significantly, but probably hurt the f-p rate (the f-p
rate is too low with only 4000 hams per run to be confident about
changes of such small *absolute* magnitude -- 0.025% is a single
message in the f-p table):

false positive percentages
    0.000  0.000  tied
    0.000  0.075  lost   +(was 0)
    0.050  0.125  lost  +150.00%
    0.025  0.000  won   -100.00%
    0.075  0.025  won    -66.67%
    0.000  0.050  lost   +(was 0)
    0.100  0.175  lost   +75.00%
    0.050  0.050  tied
    0.025  0.050  lost  +100.00%
    0.025  0.000  won   -100.00%
    0.050  0.125  lost  +150.00%
    0.050  0.025  won    -50.00%
    0.050  0.050  tied
    0.000  0.025  lost   +(was 0)
    0.000  0.025  lost   +(was 0)
    0.075  0.050  won    -33.33%
    0.025  0.050  lost  +100.00%
    0.000  0.000  tied
    0.025  0.100  lost  +300.00%
    0.050  0.150  lost  +200.00%

won   5 times
tied  4 times
lost 11 times

total unique fp went from 13 to 21

false negative percentages
    0.327  0.218  won    -33.33%
    0.400  0.218  won    -45.50%
    0.327  0.218  won    -33.33%
    0.691  0.691  tied
    0.545  0.327  won    -40.00%
    0.291  0.218  won    -25.09%
    0.218  0.291  lost   +33.49%
    0.654  0.473  won    -27.68%
    0.364  0.327  won    -10.16%
    0.291  0.182  won    -37.46%
    0.327  0.254  won    -22.32%
    0.691  0.509  won    -26.34%
    0.582  0.473  won    -18.73%
    0.291  0.255  won    -12.37%
    0.364  0.218  won    -40.11%
    0.436  0.327  won    -25.00%
    0.436  0.473  lost    +8.49%
    0.218  0.218  tied
    0.291  0.255  won    -12.37%
    0.254  0.364  lost   +43.31%

won  15 times
tied  2 times
lost  3 times

total unique fn went from 106 to 94
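To make the cancellation mechanics concrete, here is a stripped-down
sketch of the selection scheme described above (constant values and the
function itself are illustrative; classifier.py is the authority):

    import heapq

    MIN_SPAMPROB = 0.01
    MAX_SPAMPROB = 0.99
    UNKNOWN_SPAMPROB = 0.5
    MAX_DISCRIMINATORS = 16

    def select_clues(clues):
        # clues: list of (word, spamprob) pairs for every token seen.
        mins = [c for c in clues if c[1] == MIN_SPAMPROB]
        maxs = [c for c in clues if c[1] == MAX_SPAMPROB]
        rest = [c for c in clues if MIN_SPAMPROB < c[1] < MAX_SPAMPROB]

        # The shorter extreme list cancels an equal number of clues in
        # the longer one; only the excess competes for the nbest heap.
        excess = abs(len(maxs) - len(mins))
        longer = maxs if len(maxs) > len(mins) else mins
        candidates = rest + longer[:excess]

        # At most MAX_DISCRIMINATORS clues, the ones farthest from 0.5,
        # enter the final probability computation ...
        used = heapq.nlargest(MAX_DISCRIMINATORS, candidates,
                              key=lambda c: abs(c[1] - UNKNOWN_SPAMPROB))
        # ... but every extreme clue is reported, so massive
        # cancellation remains visible in the clue list.
        reported = mins + maxs + [c for c in used
                                  if MIN_SPAMPROB < c[1] < MAX_SPAMPROB]
        return used, reported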
[Spambayes] test sets?

[Tim]
> ...
> I'd prefer to strip HTML tags from everything, but last time I
> tried that it still had bad effects on the error rates in my
> corpora (the full test results with and without HTML tag stripping
> are included in the "What about HTML?" comment block).  But as the
> comment block also says,
>
>     # XXX So, if another way is found to slash the f-n rate, the decision here
>     # XXX not to strip HTML from HTML-only msgs should be revisited.
>
> and we've since done several things that gave significant f-n rate
> reductions.  I should test that again now.

I did so.  Alas, stripping HTML tags from all text still hurts the f-n
rate in my test data:

false positive percentages
    0.000  0.000  tied
    0.000  0.000  tied
    0.050  0.075  lost   +50.00%
    0.025  0.025  tied
    0.075  0.025  won    -66.67%
    0.000  0.000  tied
    0.100  0.100  tied
    0.050  0.075  lost   +50.00%
    0.025  0.025  tied
    0.025  0.000  won   -100.00%
    0.050  0.075  lost   +50.00%
    0.050  0.050  tied
    0.050  0.025  won    -50.00%
    0.000  0.000  tied
    0.000  0.000  tied
    0.075  0.075  tied
    0.025  0.025  tied
    0.000  0.000  tied
    0.025  0.025  tied
    0.050  0.050  tied

won   3 times
tied 14 times
lost  3 times

total unique fp went from 13 to 11

false negative percentages
    0.327  0.400  lost   +22.32%
    0.400  0.400  tied
    0.327  0.473  lost   +44.65%
    0.691  0.654  won     -5.35%
    0.545  0.473  won    -13.21%
    0.291  0.364  lost   +25.09%
    0.218  0.291  lost   +33.49%
    0.654  0.654  tied
    0.364  0.473  lost   +29.95%
    0.291  0.327  lost   +12.37%
    0.327  0.291  won    -11.01%
    0.691  0.654  won     -5.35%
    0.582  0.655  lost   +12.54%
    0.291  0.400  lost   +37.46%
    0.364  0.436  lost   +19.78%
    0.436  0.582  lost   +33.49%
    0.436  0.364  won    -16.51%
    0.218  0.291  lost   +33.49%
    0.291  0.400  lost   +37.46%
    0.254  0.327  lost   +28.74%

won   5 times
tied  2 times
lost 13 times

total unique fn went from 106 to 122

Last time I tried this (see tokenizer.py comments), the f-n rate after
stripping tags ranged from 0.982% to 1.781%, with a median of about
1.34%, so we've made tons of progress on the f-n rate since then.  But
the mere presence of HTML tags still remains a significant clue for
c.l.py traffic, so I'm left with the same comment:

>     # XXX So, if another way is found to slash the f-n rate, the decision here
>     # XXX not to strip HTML from HTML-only msgs should be revisited.

If we want to take the focus of this away from c.l.py traffic, I can't
say what effect HTML stripping would have (I don't have suitable test
data to measure that on).
[Spambayes] spambayes package?

>> Before we get too far down this road, what do people think of
>> creating a spambayes package containing classifier and tokenizer?
>> This is just to minimize clutter in site-packages.

Guido> Too early IMO (if you mean to leave the various other tools out
Guido> of it).

Well, I mentioned classifier and tokenize only because I thought they
were the only importable modules.  The rest represent script-level
code, right?

Guido> If and when we package this, perhaps we should use Barry's trick
Guido> from the email package for making the package itself the toplevel
Guido> dir of the distribution (rather than requiring an extra directory
Guido> level just so the package can be a subdir of the distro).

That would be perfect.  I tried in the naive way last night, but wound
up with all .py files in the package, which wasn't my intent.

Skip
[Spambayes] testing results

These results are from timtest.py.  I've got three sets of spam and ham
with about 500 messages in each set.  Here's what happens when I enable
my latest "received" header code:

false positive percentages
    0.187  0.187  tied
    0.749  0.562  won    -24.97%
    0.780  0.585  won    -25.00%

won   2 times
tied  1 times
lost  0 times

total unique fp went from 19 to 17

false negative percentages
    2.072  1.318  won    -36.39%
    2.448  1.318  won    -46.16%
    0.574  0.765  lost   +33.28%

won   2 times
tied  0 times
lost  1 times

total unique fn went from 43 to 28

Anthony's header counting code does not seem to help.

Neil
[Spambayes] testing results

[Neil Schemenauer]
> These results are from timtest.py.  I've got three sets of spam and ham
> with about 500 messages in each set.  Here's what happens when I enable
> my latest "received" header code:

If you've still got the summary files, please cvs up and try running
cmp.py again -- in the process of generalizing cmp.py, you managed to
make it skip half the lines <wink>.  That is, if you've got N sets, you
*should* get N**2-N pairs for each error rate.  You have 3 sets, so you
should get 6 pairs of f-n rates and 6 pairs of f-p rates.

> false positive percentages
>     0.187  0.187  tied
>     0.749  0.562  won    -24.97%
>     0.780  0.585  won    -25.00%
>
> won   2 times
> tied  1 times
> lost  0 times
>
> total unique fp went from 19 to 17
>
> false negative percentages
>     2.072  1.318  won    -36.39%
>     2.448  1.318  won    -46.16%
>     0.574  0.765  lost   +33.28%
>
> won   2 times
> tied  0 times
> lost  1 times
>
> total unique fn went from 43 to 28

Looks promising!  Getting 6 lines of output for each block would give a
clearer picture, of course.

> Anthony's header counting code does not seem to help.

It helps my test data too much <wink/sigh>.
[Spambayes] testing results

Neil trained a classifier using 3 sets with about 500 ham and spam in
each.  We're missing half his test run results due to a cmp.py bug
(since fixed); the "before custom fiddling" figures on the 3 reported
runs were:

false positive percentages
    0.187
    0.749
    0.780

total unique fp 19

false negative percentages
    2.072
    2.448
    0.574

total unique fn 43

The "total unique" figures count all 6 runs; it's just the
individual-run fp and fn percentages we're missing for 3 runs.

Jeremy reported these "before custom fiddling" figures on 4 sets with
about 600 ham and spam in each:

false positive percentages
    0.000  1.398  1.398
    0.000  1.242  1.242
    1.398  1.398  0.000
    1.553  1.553  0.000

total unique fp 139

false negative percentages
    10.413  6.104  5.027
     8.259  2.873  5.745
     5.206  4.488  9.336
     5.206  5.027  9.874

total unique fn 970

So things are clearly working much better for Neil.  Both reported
significant improvements in both f-n and f-p rates by folding in more
header lines.  Neil added Received analysis to the base tokenizer's
header analysis, and Jeremy skipped the base tokenizer's header
analysis completely but added base-subject-line-like but case-folded
tokenization for almost all header lines (excepting only Received,
Date, X-From_, and, I *suspect*, all those starting with 'x-vm').

When I try 5 random pairs of 500-ham + 500-spam subsets in my test
data, I see:

false positive percentages
    0.000  0.000  0.200  0.000
    0.200  0.000  0.200  0.000
    0.000  0.200  0.400  0.000
    0.200  0.000  0.200  0.400
    0.000  0.400  0.200  0.600

total unique fp 10

false negative percentages
    0.800  0.400  0.200  0.600
    1.000  0.000  0.600  1.200
    1.200  0.800  0.400  0.800
    1.800  0.800  0.400  1.000
    1.000  0.400  0.000  0.600

total unique fn 36

This is much closer to what Neil saw, but still looks better.  Another
run on a disjoint 5 random pairs looked much the same; total unique fp
rose to 12 and fn fell to 27; on a third run with another set of
disjoint 5 random pairs, likewise, with fp 12 and fn 40.  So I'm pretty
confident that it's not going to matter which random subsets of 500 I
take from my data.

It's hard to conclude anything given Jeremy's much worse results.  If
they were in line with Neil's results, I'd suspect that I've over-tuned
the algorithm to statistical quirks in my corpora.
[Spambayes] test sets?

[Tim]
> One effect of getting rid of MINCOUNT is that it latches on more
> strongly to rare clues now, and those can be unique to the corpus
> trained on (e.g., one trained ham says "gryndlplyx!", and a followup
> new ham quotes it).

This may be a systematic bias in the testing procedure: in real life,
msgs come ordered in time.  Say there's a thread that spans N messages
on c.l.py.  In our testing setup, we'll train on a random sampling
throughout its whole lifetime, and test likewise.  New ham "in the
middle" of this thread benefits from the fact that we trained on msgs
that appeared both before and *after* it in real life.

It's quite plausible that the f-p rate would rise without this effect;
in real life, at any given time some number of ham threads will just be
starting their lives, and if they're at all unusual the trained data
will know little to nothing about them.
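If this effect matters, one way to measure it (a hypothetical harness,
not something in the test driver today) is to compare a random split
against a strictly chronological one:

    def chronological_split(dated_msgs, train_fraction=0.5):
        # dated_msgs: list of (arrival_time, message) pairs.
        # Unlike a random split, training data here never includes
        # messages that arrived *after* the ones being scored, which
        # mimics real-life deployment.
        ordered = [m for _, m in sorted(dated_msgs, key=lambda tm: tm[0])]
        cut = int(len(ordered) * train_fraction)
        return ordered[:cut], ordered[cut:]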
[Spambayes] test sets?

[Greg Ward]
> Case of headers is definitely helpful.  SpamAssassin has a rule for it
> -- if you have headers like "DATE" or "SUBJECT", you get a few more
> points.

Across my data, all-caps DATE, SUBJECT, TO, etc indeed appear only in
the spam collections.  OTOH, they don't appear often -- less than 1% of
spam messages have at least one of these all-cap header lines.  But
when I'm fighting what are now sub-1% f-n rates, even rare clues can
help!
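A tokenizer mod to capture that clue could be as small as this (the
token name and the function are hypothetical, not checked-in code):

    def allcaps_header_tokens(msg):
        # Emit one token per all-caps header field name; "DATE"- or
        # "SUBJECT"-style headers are rare but spam-heavy.
        for name in msg.keys():
            if len(name) > 1 and name.isupper():
                yield 'allcaps-header:' + name.lower()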
[Spambayes] Ditching WordInfo

[Neale Pickett]
> ...
> If you can spare the memory, you might get better performance in this
> case using the pickle store, since it only has to go to disk once (but
> boy, does it ever go to disk!)  I can't think of anything obvious to
> speed things up once it's all loaded into memory, though.

On my box the current system scores about 50 msgs per second (starting
in memory, of course).  While that can be a drag while waiting for one
of my full test runs to complete (one of those scores a message more
than 120,000 times, and trains more than 30,000 times), I've got no
urge to do any speed optimizations -- if I were using this for my own
email, I'd never notice the drag.  Guido will bitch like hell about
waiting an extra second for his 50-msg batches to score, but he's the
boss so he bitches about everything <wink>.

> That's profiler territory, and profiling is exactly the kind of
> optimization I just said I wasn't going to do :)

I haven't profiled yet, but *suspect* there aren't any egregious hot
spots.  5-gram'ing of long words with high-bit characters is likely
overly expensive *when* it happens, but it doesn't happen that often,
and as an approach to non-English languages it sucks anyway (i.e.,
there's no point speeding something that ought to be replaced
entirely).
[Spambayes] hammie.py vs. GBayes.py

[Guido]
> There seem to be two "drivers" for the classifier now: Neale Pickett's
> hammie.py, and the original GBayes.py.  According to the README.txt,
> GBayes.py hasn't been kept up to date.

It seemed that way to me when I ripped the classifier out of it -- I
don't think anyone has touched it since.

> Is there anything in there that isn't covered by hammie.py?

Someone else will have to answer that (I don't use GBayes or hammie, at
least not yet).

> About the only useful feature of GBayes.py that hammie.py doesn't (yet)
> copy is -u, which calculates spamness for an entire mailbox.  This
> feature can easily be copied into hammie.py.

That's been done now, right?

> (GBayes.py also has a large collection of tokenizers; but timtoken.py
> rules, so I'm not sure how interesting that is now.)

Those tokenizers are truly trivial to rewrite from scratch if they're
interesting.  The tiny spam/ham collections in GBayes are also
worthless now.  The "self test" feature didn't do anything except print
its results; Tester.py since became doctest'ed and verifies that some
basic machinery actually delivers what it's supposed to deliver.

> Therefore I propose to nuke GBayes.py, after adding a -u feature.

+1 here.

> Anyone against?
[Spambayes] All Cap or Cap Word Subjects

Just curious if subject line capitalization can be used as an
indicator.

Either the percentage of characters that are caps..

Or, percentage starting with a capital letter (if number of words > xx)

Brad Clements, bkc@murkworks.com    (315)268-1000
http://www.murkworks.com            (315)268-9812 Fax
AOL-IM: BKClements
[Spambayes] All Cap or Cap Word Subjects

[Brad Clements]
> Just curious if subject line capitalization can be used as an indicator.
>
> Either the percentage of characters that are caps..
>
> Or, percentage starting with a capital letter (if number of words > xx)

Supply a mod to tokenizer.py and I'll test it (eventually <wink>).
Note that the tokenizer already *preserves* case in subject-line words,
because experiment showed that this was better than folding case away
in this specific context (but experiment also showed-- against my
expectations --that preserving case everywhere didn't make a
significant difference to either error rate -- the subject line is a
special case for this).
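In the spirit of "supply a mod", a sketch of what such a subject-caps
feature might look like (the token name and thresholds are made up;
testing would have to decide them):

    def subject_caps_tokens(subject, min_words=5):
        # Bucket the fraction of capitalized words to the nearest 10%
        # so the classifier sees a small, trainable set of tokens
        # rather than a unique ratio per message.
        words = subject.split()
        if len(words) < min_words:
            return
        n_caps = sum(1 for w in words if w[:1].isupper())
        pct = 10 * int(round(10.0 * n_caps / len(words)))
        yield 'subjectcaps:%d' % pct

    # e.g. list(subject_caps_tokens("BUY CHEAP PILLS NOW FAST"))
    # -> ['subjectcaps:100']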
[Spambayes] testing results

Tim Peters wrote:
> If you've still got the summary files, please cvs up and try running cmp.py
> again -- in the process of generalizing cmp.py, you managed to make it skip
> half the lines <wink>.

Woops.  I didn't have the summary files so I regenerated them using a
slightly different set of data.  Here are the results of enabling the
"received" header processing:

false positive percentages
    0.707  0.530  won    -25.04%
    0.873  0.524  won    -39.98%
    0.301  0.301  tied
    1.047  1.047  tied
    0.602  0.452  won    -24.92%
    0.353  0.177  won    -49.86%

won   4 times
tied  2 times
lost  0 times

total unique fp went from 17 to 14 won -17.65%

false negative percentages
    2.167  1.238  won    -42.87%
    0.969  0.969  tied
    1.887  1.372  won    -27.29%
    1.616  1.292  won    -20.05%
    1.029  0.858  won    -16.62%
    1.548  1.548  tied

won   4 times
tied  2 times
lost  0 times

total unique fn went from 50 to 38 won -24.00%

My test set is different than Tim's in that all the email was received
by the same account.  Also, my set contains email sent to me, not to
mailing lists (I use different addresses for mailing lists).  If people
cook up more ideas I will be happy to test them.

Neil
[Spambayes] Ditching WordInfo

[Tim]
> ...
> On my box the current system scores about 50 msgs per second (starting
> in memory, of course).

That was a guess.  Bothering to get a clock out, it was more like 80
per second.  See?  A 60% speedup without changing a thing <wink>.
[Spambayes] testing results

[Neil Schemenauer]
> Woops.  I didn't have the summary files so I regenerated them using a
> slightly different set of data.  Here are the results of enabling the
> "received" header processing:
>
> false positive percentages
>     0.707  0.530  won    -25.04%
>     0.873  0.524  won    -39.98%
>     0.301  0.301  tied
>     1.047  1.047  tied
>     0.602  0.452  won    -24.92%
>     0.353  0.177  won    -49.86%
>
> won   4 times
> tied  2 times
> lost  0 times
>
> total unique fp went from 17 to 14 won -17.65%
>
> false negative percentages
>     2.167  1.238  won    -42.87%
>     0.969  0.969  tied
>     1.887  1.372  won    -27.29%
>     1.616  1.292  won    -20.05%
>     1.029  0.858  won    -16.62%
>     1.548  1.548  tied
>
> won   4 times
> tied  2 times
> lost  0 times
>
> total unique fn went from 50 to 38 won -24.00%
>
> My test set is different than Tim's in that all the email was received
> by the same account.  Also, my set contains email sent to me, not to
> mailing lists (I use different addresses for mailing lists).

Enabling the Received headers works even better for me <wink>; here's
the f-n section from a quick run on 500-element subsets:

    0.600  0.200  won    -66.67%
    0.200  0.200  tied
    0.200  0.000  won   -100.00%
    0.800  0.400  won    -50.00%
    0.400  0.200  won    -50.00%
    0.400  0.000  won   -100.00%
    0.200  0.000  won   -100.00%
    1.000  0.400  won    -60.00%
    0.800  0.200  won    -75.00%
    1.200  0.600  won    -50.00%
    0.400  0.200  won    -50.00%
    2.000  0.800  won    -60.00%
    0.400  0.400  tied
    1.200  0.600  won    -50.00%
    0.400  0.000  won   -100.00%
    2.000  1.000  won    -50.00%
    0.400  0.000  won   -100.00%
    0.800  0.000  won   -100.00%
    0.000  0.200  lost   +(was 0)
    0.400  0.000  won   -100.00%

won  17 times
tied  2 times
lost  1 times

total unique fn went from 38 to 15 won -60.53%

A huge improvement, but for wrong reasons ... except not entirely!  The
most powerful discriminator in the whole database on one training set
became:

    'received:unknown'  881  0.99

That's got nothing to do with BruceG, right?  'received:bfsmedia.com'
was also a strong spam indicator across all training sets.  I'm
jealous.

> If people cook up more ideas I will be happy to test them.

Neil, are you using your own tokenizer now, or the
tokenizer.Tokenizer.tokenize generator?  Whichever, someone who's not
afraid of their headers should try adding
mboxtest.MyTokenizer.tokenize_headers into the mix, once in lieu of
tokenizer.Tokenizer.tokenize_headers(), and again in addition to it.
Jeremy reported on just the former.
[Spambayes] spambayes package?

On 07 September 2002, Guido van Rossum said:
> If and when we package this, perhaps we should use Barry's trick
> from the email package for making the package itself the toplevel dir
> of the distribution (rather than requiring an extra directory level
> just so the package can be a subdir of the distro).

It's not a *trick*!  It just requires this

    package_dir = {'spambayes': '.'}

in the setup script.  harrumph!  "trick" indeed...

Greg
--
Greg Ward <gward@python.net>                         http://www.gerg.ca/
A committee is a life form with six or more legs and no brain.
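For concreteness, the whole setup script for that layout stays tiny
(the version number and metadata here are invented):

    from distutils.core import setup

    setup(
        name="spambayes",
        version="0.1",
        # Barry's arrangement: the distribution's top-level directory
        # *is* the spambayes package, so no extra nesting is needed.
        package_dir={'spambayes': '.'},
        packages=['spambayes'],
    )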
[Spambayes] test sets?

> I'd prefer to strip HTML tags from everything, but last time I tried
> that it still had bad effects on the error rates in my corpora

Your corpora are biased in this respect though -- newsgroups have a
strong social taboo on posting HTML, but in many people's personal
inboxes it is quite abundant.  Getting a good ham corpus may prove to
be a bigger hurdle than I thought!  My own saved mail doesn't reflect
what I receive, since I save and throw away selectively (much more so
than in the past :-).

> > multipart/mixed
> >     text/plain (brief text plus URL(s))
> >     text/html (long HTML copied from website)
>
> Ah!  That explains why the HTML tags didn't get stripped.  I'd again
> offer to add an optional argument to tokenize() so that they'd get
> stripped here too, but if it gets glossed over a third time that
> would feel too much like a loss <wink>.

I'll bite.  Sounds like a good idea to strip the HTML in this case; I'd
like to see how this improves the f-p rate on this corpus.

--Guido van Rossum (home page: http://www.python.org/~guido/)
[Spambayes] test sets?

[Tim]
>> I'd prefer to strip HTML tags from everything, but last time I tried
>> that it still had bad effects on the error rates in my corpora

[Guido]
> Your corpora are biased in this respect though -- newsgroups have a
> strong social taboo on posting HTML, but in many people's personal
> inboxes it is quite abundant.

We're in violent agreement there: the comments in tokenizer.py say that
as strongly as possible, and I've repeated it endlessly here too.  But
so long as I was the only one doing serious testing, it was a dubious
idea to make the code maximally clumsy for me to use on the c.l.py task
<wink>.

> Getting a good ham corpus may prove to be a bigger hurdle than I
> thought!  My own saved mail doesn't reflect what I receive, since I
> save and throw away selectively (much more so than in the past :-).

Yup, the system picks up on *everything* in the tokens.  Graham's
proposed "delete as ham" and "delete as spam" keys would probably work
very well for motivated geeks.  But Paul Svensson has pointed out here
that they probably wouldn't work nearly so well for real people.

>> Ah!  That explains why the HTML tags didn't get stripped.  I'd again
>> offer to add an optional argument to tokenize() so that they'd get
>> stripped here too, but if it gets glossed over a third time that
>> would feel too much like a loss <wink>.

> I'll bite.  Sounds like a good idea to strip the HTML in this case;
> I'd like to see how this improves the f-p rate on this corpus.

I'll soon check in this change:

    def tokenize_body(self, msg, retain_pure_html_tags=False):
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
        """Generate a stream of tokens from an email Message.

        If a multipart/alternative section has both text/plain and
        text/html sections, the text/html section is ignored.  This
        may not be a good idea (e.g., the sections may have different
        content).

        HTML tags are always stripped from text/plain sections.

        By default, HTML tags are also stripped from text/html
        sections.  However, doing so hurts the false negative rate on
        Tim's comp.lang.python tests (where HTML-only messages are
        almost never legitimate traffic).  If optional argument
        retain_pure_html_tags is specified and True, HTML tags are
        retained in text/html sections.
        """

You should do a cvs up and establish a new baseline first, as I checked
in a pure-win change in the wee hours that cut the fp and fn rates in
my tests.
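The effect of the flag can be illustrated with a crude stand-alone
version (the regexp and function here are toys for demonstration; the
real tokenizer's HTML handling is more careful):

    import re

    html_tag_re = re.compile(r'<[^>]*>')

    def words_from_html(text, retain_pure_html_tags=False):
        # With the default, tags vanish before word extraction; with
        # retain_pure_html_tags=True, '<br>'-ish tokens survive as
        # spam clues.
        if not retain_pure_html_tags:
            text = html_tag_re.sub(' ', text)
        return re.findall(r"[\w$\-]+", text)

    print(words_from_html("<html><body>low rates</body></html>"))
    # ['low', 'rates']
    print(words_from_html("<br>", retain_pure_html_tags=True))
    # ['br']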
[Spambayes] spambayes package?

>> If and when we package this, perhaps we should use Barry's trick ...

Greg> It's not a *trick*!  It just requires this
Greg>
Greg>     package_dir = {'spambayes': '.'}
Greg>
Greg> in the setup script.

That has the nasty side effect of placing all .py files in the package.
What about obvious executable scripts (like timtest or hammie)?  How
can I keep them out of the package?

Skip
[Spambayes] spambayes package?

> That has the nasty side effect of placing all .py files in the
> package.  What about obvious executable scripts (like timtest or
> hammie)?  How can I keep them out of the package?

Why would we care about installing a few extra files, as long as
they're inside a package?

--Guido van Rossum (home page: http://www.python.org/~guido/)
[Spambayes] spambayes package?

[Skip Montanaro]
> That has the nasty side effect of placing all .py files in the package.
> What about obvious executable scripts (like timtest or hammie)?  How can I
> keep them out of the package?

Put them in a scripts folder?

// m -
[Spambayes] deleting "duplicate" spam before training? good idea or

Because I get mail through several different email addresses, I
frequently get duplicates (or triplicates or more-plicates) of various
spam messages.  In saving spam for later analysis I haven't always been
careful to avoid saving such duplicates.  I wrote a script some time
ago to try and minimize the duplicates I see by calculating a loose
checksum, but I still have some duplicates.

Should I delete the duplicates before training or not?

Would people be interested in the script?  I'd be happy to extricate it
from my local modules and check it into CVS.

Skip
[Spambayes] understanding high false negative rate

>>>>> "TP" == Tim Peters <tim.one@comcast.net> writes:

>> First test results using tokenizer.Tokenizer.tokenize_headers()
>> unmodified.  ...  Second test results using
>> mboxtest.MyTokenizer.tokenize_headers().  This uses all headers
>> except Received, Date, and X-From_.  ...

TP> Try the latter again, but call the base tokenize_headers() too.

Sorry.  I haven't found the time to try any more test runs.  Perhaps
later today.

Jeremy
[Spambayes] deleting "duplicate" spam before training? good idea

[Skip Montanaro]
> Because I get mail through several different email addresses, I
> frequently get duplicates (or triplicates or more-plicates) of
> various spam messages.  In saving spam for later analysis I haven't
> always been careful to avoid saving such duplicates.
>
> I wrote a script some time ago to try and minimize the duplicates I see
> by calculating a loose checksum, but I still have some duplicates.
> Should I delete the duplicates before training or not?

People just can't stop thinking <wink>.  The classifier should work
best when trained on a wholly random spattering of real life.  If real
life contains duplicates, then that's what the classifier should see.

> Would people be interested in the script?  I'd be happy to extricate
> it from my local modules and check it into CVS.

Sure!  I think it's relevant, but maybe for another purpose.  Paul
Svensson is thinking harder about real people <wink> than the rest of
us, and he may be able to get use out of approaches that identify
closely related spam.  For example, some amount of spam is going to end
up in the ham training data in real life use, and any sort of
similarity score to a piece of known spam may be an aid in finding and
purging it.
[Spambayes] spambayes package?

Guido> Why would we care about installing a few extra files, as long as
Guido> they're inside a package?

I guess you needn't worry about that.  It just doesn't seem "clean" to
me.

S
[Spambayes] deleting "duplicate" spam before training? good idea

>> I wrote a script some time ago to try and minimize the duplicates I
>> see by calculating a loose checksum, but I still have some
>> duplicates.  Should I delete the duplicates before training or not?

Tim> People just can't stop thinking <wink>.  The classifier should work
Tim> best when trained on a wholly random spattering of real life.  If
Tim> real life contains duplicates, then that's what the classifier
Tim> should see.

A bit more detail.  I get mail destined for many addresses:
skip@pobox.com, skip@calendar.com, concerts@musi-cal.com,
webmaster@mojam.com, etc.  I originally wrote (a slightly different
version of) the loosecksum.py script I'm about to check in to avoid
manually scanning all those presumed spams which are really identical.
Once a message was identified as spam, what I refer to as a loose
checksum was computed to try and avoid saving the same spam multiple
times for later review.

>> Would people be interested in the script?  I'd be happy to extricate
>> it from my local modules and check it into CVS.

Tim> Sure!  I think it's relevant, but maybe for another purpose.  Paul
Tim> Svensson is thinking harder about real people <wink> than the rest
Tim> of us, and he may be able to get use out of approaches that
Tim> identify closely related spam.  For example, some amount of spam is
Tim> going to end up in the ham training data in real life use, and any
Tim> sort of similarity score to a piece of known spam may be an aid in
Tim> finding and purging it.

I'll check it in.  Let me know if you find it useful.

Skip
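The flavor of such a loose checksum can be as simple as normalizing the
body before hashing (this sketch is a guess at the approach;
loosecksum.py's actual algorithm may differ):

    import hashlib
    import re

    def loose_checksum(body):
        # Collapse case and whitespace so near-identical copies of the
        # same spam, received at different addresses, hash alike.
        normalized = re.sub(r'\s+', ' ', body.lower()).strip()
        return hashlib.md5(normalized.encode('utf-8')).hexdigest()

    assert loose_checksum("Buy  NOW!\n") == loose_checksum("buy now!")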
0
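For concreteness, here is a minimal sketch of the loose-checksum idea:
hash a normalized body so that per-recipient headers and trivial
whitespace or case mutations don't defeat duplicate detection. The
normalization rules below are assumptions for illustration, not
necessarily what Skip's loosecksum.py actually does:

    # Sketch only: the real loosecksum.py may normalize differently.
    import email
    import hashlib  # a 2002-era script would have used the md5 module

    def loose_checksum(text):
        """Return a checksum stable across near-identical copies of a spam."""
        msg = email.message_from_string(text)
        chunks = []
        for part in msg.walk():
            payload = part.get_payload(decode=True)
            if payload is not None:      # multipart containers return None
                chunks.append(payload)
        body = b"".join(chunks)
        # Collapse case and whitespace so trivial mutations don't change it.
        normalized = b"".join(body.lower().split())
        return hashlib.md5(normalized).hexdigest()

Duplicates then collapse to the same hex digest, so saving a message
only when its checksum is unseen weeds out the more-plicates.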
[Spambayes] deleting "duplicate" spam before training? good idea On 09 September 2002, Tim Peters said: > > Would people be interested in the script? I'd be happy to extricate > > it from my local modules and check it into CVS. > > Sure! I think it's relevant, but maybe for another purpose. Paul Svensson > is thinking harder about real people <wink> than the rest of us, and he may > be able to get use out of approaches that identify closely related spam. > For example, some amount of spam is going to end up in the ham training data > in real life use, and any sort of similarity score to a piece of known spam > may be an aid in finding and purging it. OTOH, look into DCC (Distributed Checksum Clearinghouse, http://www.rhyolite.com/anti-spam/dcc/), which uses fuzzy checksums. It's quite likely that DCC's checksumming scheme is better than something any of us would throw together for personal use (no offense, Skip!). But I have no personal experience of it. Greg -- Greg Ward <gward@python.net> http://www.gerg.ca/ If it can't be expressed in figures, it is not science--it is opinion.
0
[Spambayes] deleting "duplicate" spam before training? good idea Greg> OTOH, look into DCC (Distributed Checksum Clearinghouse, Greg> http://www.rhyolite.com/anti-spam/dcc/), which uses fuzzy Greg> checksums. It's quite likely that DCC's checksumming scheme is Greg> better than something any of us would throw together for personal Greg> use (no offense, Skip!). None taken. I wrote my little script before I was aware DCC existed. Even now, it seems like overkill for my use. Skip
0
[Spambayes] python.org email harvesting ready to roll

[followups to spambayes@python.org please, unless you're specifically
concerned about some particular bit of email policy for python.org]

OK, after much fiddling with and tweaking of /etc/exim/exim4.conf and
/etc/exim/local_scan.py on mail.python.org, I am fairly confident that I
can start harvesting all incoming email at a moment's notice. For the
record, here's how it all works:

* exim4.conf works almost exactly the same as before if the file
  /etc/exim/harvest does not exist. That is, any "junk mail condition"
  that can be detected by Exim ACLs (access control lists) is handled
  entirely in exim4.conf: the message is rejected before it ever gets to
  local_scan.py. This covers such diverse cases as "message from known
  spammer" (reject after every RCPT TO command), "no message-id header",
  and "8-bit chars in subject" (both rejected after the message
  headers/body are read).

  The main things I have changed in the absence of /etc/exim/harvest are:

  - don't check for 8-bit chars in "From" header -- the vast majority of
    hits for this test were bounces from some Asian ISP; the remaining
    hits should be handled by SpamAssassin

  - do header sender verification (i.e. ensure that there's a verifiable
    email address in at least one of "From", "Reply-to", and "Sender") as
    late as possible, because it requires DNS lookups which can be slow
    (and can also make messages that should have been rejected merely be
    deferred, if those DNS lookups time out)

* if /etc/exim/harvest exists, then the behaviour of all of those ACLs
  in exim4.conf suddenly changes: instead of rejecting recipients or
  messages, they add an X-reject header to the message. This header is
  purely for internal use; it records the name of the folder to which
  the rejected message should be saved, and also gives the SMTP error
  message which should ultimately be used to reject the message.

  Thus, those messages will now be seen by local_scan.py, which now
  looks for the X-reject header. If found, it uses the folder name
  specified there to save the message, and then rejects it with the SMTP
  error message also given in X-reject. (Currently X-reject is retained
  in saved messages.)

  If a message was not tagged with X-reject, then local_scan.py runs the
  usual virus and spam checks. (Namely, my homebrew scan for
  attachments with filenames that look like Windows executables, and a
  run through SpamAssassin.) The logic is basically this:

      if virus:
          folder = "virus"
      else:
          run through SpamAssassin
          if score >= 10.0:
              folder = "rejected-spam"
          elif score >= 5.0:
              folder = "caught-spam"

  Finally, local_scan.py writes the message to the designated folder.
  By far the biggest folder will be "accepted" -- the server handles
  2000-5000 incoming messages per day, of which maybe 100-500 are junk
  mail. (Oops, just realized I haven't written the code that actually
  saves the message -- d'ohh! Also haven't written anything to
  discriminate personal email, which I must do. Sigh.)

* finally, the big catch: waiting until after you've read the message
  headers and body to actually reject the message is problematic,
  because certain broken MTAs (including those used by some spammers)
  don't consider a 5xx after DATA as a permanent error, but keep
  retrying. D'ohh. This is a minor annoyance currently, where a fair
  amount of stuff is rejected at RCPT TO time. But in harvest mode,
  *everything* (with the exception of people probing for open relays)
  will be rejected at DATA time.

So I have cooked up something called the ASBL, or automated sender
blacklist. This is just a Berkeley DB file that maps (sender_ip,
sender_address) to an expiry time. When local_scan() rejects a message
from (sender_ip, sender_address) -- for whatever reason, including
finding an X-reject header added by an ACL in exim4.conf -- it adds a
record to the ASBL, with an expiry time 3 hours in the future.
Meanwhile, there's an ACL in exim4.conf that checks for records in the
ASBL; if there's a record for the current (sender_ip, sender_address)
that hasn't expired yet, we reject all recipients without ever looking
at the message headers or body.

The downside of this from the point-of-view of corpus collection is that
if some jerk is busily spamming *@python.org, one SMTP connection per
address, we will most likely only get one copy. This is a win if you're
just thinking about reducing server load and bandwidth, but I'm not sure
if it's helpful for training spam detectors. Tim?

Happy harvesting --

Greg
--
Greg Ward <gward@python.net>                         http://www.gerg.ca/
Budget's in the red? Let's tax religion!
    -- Dead Kennedys
0
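A hedged sketch of the ASBL bookkeeping as described above -- the real
local_scan.py isn't shown here, so the file location, key format, and
function names are invented for illustration:

    # Sketch of the ASBL: a dbm file mapping (sender_ip, sender_address)
    # to an expiry time. Path and key format are assumptions.
    import dbm    # the real thing is Berkeley DB; anydbm/bsddb in 2002
    import time

    ASBL_PATH = "/var/spool/exim/asbl"   # hypothetical location
    TTL = 3 * 60 * 60                    # records expire 3 hours out

    def _key(sender_ip, sender_address):
        return ("%s|%s" % (sender_ip, sender_address.lower())).encode()

    def asbl_add(sender_ip, sender_address):
        """Called whenever local_scan() rejects a message."""
        db = dbm.open(ASBL_PATH, "c")
        db[_key(sender_ip, sender_address)] = str(time.time() + TTL).encode()
        db.close()

    def asbl_blocked(sender_ip, sender_address):
        """Checked from an exim ACL, before the message body is read."""
        db = dbm.open(ASBL_PATH, "c")
        try:
            key = _key(sender_ip, sender_address)
            return key in db and float(db[key]) > time.time()
        finally:
            db.close()

Because the ACL consults asbl_blocked() at RCPT TO time, a retry-happy
MTA only gets to waste one DATA transaction per 3-hour window.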
[Spambayes] Current histograms

We've not only reduced the f-p and f-n rates in my test runs, we've also
made the score distributions substantially sharper. This is bad news
for Greg, because the non-existent "middle ground" is becoming even less
existent <wink>:

Ham distribution for all runs:
* = 1333 items
 0.00 79975 ************************************************************
 2.50     1 *
 5.00     0
 7.50     0
10.00     2 *
12.50     1 *
15.00     0
17.50     0
20.00     0
22.50     1 *
25.00     0
27.50     0
30.00     0
32.50     0
35.00     0
37.50     1 *
40.00     0
42.50     0
45.00     0
47.50     0
50.00     0
52.50     0
55.00     0
57.50     0
60.00     1 *
62.50     0
65.00     1 *
67.50     0
70.00     0
72.50     0
75.00     0
77.50     0
80.00     0
82.50     0
85.00     0
87.50     0
90.00     0
92.50     0
95.00     0
97.50    17 *

Spam distribution for all runs:
* = 914 items
 0.00   118 *
 2.50     7 *
 5.00     0
 7.50     2 *
10.00     1 *
12.50     1 *
15.00     3 *
17.50     1 *
20.00     1 *
22.50     1 *
25.00     0
27.50     0
30.00     4 *
32.50     3 *
35.00     4 *
37.50     2 *
40.00     0
42.50     1 *
45.00     1 *
47.50     0
50.00     2 *
52.50     3 *
55.00     1 *
57.50     2 *
60.00     0
62.50     1 *
65.00     1 *
67.50    10 *
70.00     2 *
72.50     1 *
75.00     2 *
77.50     1 *
80.00     0
82.50     0
85.00     1 *
87.50     4 *
90.00     2 *
92.50     5 *
95.00    14 *
97.50 54806 ************************************************************

As usual for me, this is an aggregate of 20 runs, each both training and
predicting on 4000 c.l.py ham + ~2750 BruceG spam. Only 25 ham scores
out of 80,000 are above 0.025 now (and, yes, the "Nigerian scam"-quoting
msg is still counted as ham -- I haven't taken anything out of the ham
corpus since removing the "If AOL were a car" spam), the f-p rate
wouldn't have changed at all if the spamprob cutoff were dropped from
0.90 to 0.675, dropping the cutoff to 0.40 would have added only 2 false
positives, and dropping it to 0.15 would have added only another 2 more!
It's spooky.
0
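For readers wondering how these tables come about: scores fall into
fixed-width buckets (2.5 points wide here, i.e. 40 buckets), one star
per scaled count. A minimal sketch of the idea, not the actual Hist
class in the test driver:

    # Minimal histogram sketch (the real Hist class lives in timtest.py).
    def print_histogram(scores, nbuckets=40, max_stars=60):
        buckets = [0] * nbuckets
        for s in scores:                  # each s is a spamprob in [0.0, 1.0]
            i = min(int(s * nbuckets), nbuckets - 1)
            buckets[i] += 1
        # scale so the fullest bucket prints at most max_stars stars
        per_star = max(1, (max(buckets) + max_stars - 1) // max_stars)
        print("* = %d items" % per_star)
        for i, n in enumerate(buckets):
            stars = "*" * ((n + per_star - 1) // per_star)  # round up
            print("%5.2f %6d %s" % (100.0 * i / nbuckets, n, stars))

The round-up is why a bucket holding a single message still shows one
star in the reports above.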
[Spambayes] Current histograms

>>> Tim Peters wrote
> We've not only reduced the f-p and f-n rates in my test runs, we've also
> made the score distributions substantially sharper. This is bad news for
> Greg, because the non-existent "middle ground" is becoming even less
> existent <wink>:

Well, I've finally got around to pulling down the SF code. Starting
with it, and absolutely zero local modifications, I see the following:

Ham distribution for all runs:
* = 589 items
 0.00 35292 ************************************************************
 2.50    36 *
 5.00    21 *
 7.50    12 *
10.00     6 *
12.50     9 *
15.00     6 *
17.50     3 *
20.00     8 *
22.50     5 *
25.00     3 *
27.50    18 *
30.00     9 *
32.50     1 *
35.00     4 *
37.50     3 *
40.00     0
42.50     3 *
45.00     3 *
47.50     4 *
50.00     9 *
52.50     5 *
55.00     5 *
57.50     3 *
60.00     4 *
62.50     2 *
65.00     2 *
67.50     6 *
70.00     1 *
72.50     3 *
75.00     2 *
77.50     4 *
80.00     3 *
82.50     3 *
85.00     6 *
87.50     8 *
90.00     4 *
92.50     8 *
95.00    15 *
97.50   441 *

Spam distribution for all runs:
* = 504 items
 0.00   393 *
 2.50    17 *
 5.00    18 *
 7.50    12 *
10.00     4 *
12.50     6 *
15.00    11 *
17.50    10 *
20.00    10 *
22.50     5 *
25.00     3 *
27.50    19 *
30.00     8 *
32.50     2 *
35.00     0
37.50     1 *
40.00     5 *
42.50     5 *
45.00     7 *
47.50     2 *
50.00     5 *
52.50     1 *
55.00     9 *
57.50    11 *
60.00     6 *
62.50     4 *
65.00     3 *
67.50     5 *
70.00     7 *
72.50     9 *
75.00     2 *
77.50    13 *
80.00     3 *
82.50     7 *
85.00    15 *
87.50    16 *
90.00    11 *
92.50    16 *
95.00    45 *
97.50 30226 ************************************************************

My next (current) task is to complete the corpus I've got - it's
currently got ~ 9000 ham, 7800 spam, and about 9200 currently unsorted.
I'm tossing up using either hammie or spamassassin to do the initial
sort - previously I've used various forms of 'grep' for keywords and a
little gui thing to pop a message up and let me say 'spam/ham', but
that's just getting too, too tedious.

I can't make it available en masse, but I will look at finding some of
the more 'interesting' uglies. One thing I've seen (consider this
'anecdotal' for now) is that the 'skip' tokens end up in a _lot_ of the
f-ps.

Anthony
0
[Spambayes] timtest broke?

After my latest cvs up, timtest fails with

Traceback (most recent call last):
  File "/home/skip/src/spambayes/timtest.py", line 294, in ?
    drive(nsets)
  File "/home/skip/src/spambayes/timtest.py", line 264, in drive
    d = Driver()
  File "/home/skip/src/spambayes/timtest.py", line 152, in __init__
    self.global_ham_hist = Hist(options.nbuckets)
AttributeError: 'OptionsClass' object has no attribute 'nbuckets'

I'm running it as

    timtest -n5 > Data/timtest.out

from my ~/Mail directory (not from my ~/src/spambayes directory). If I
create a symlink to ~/src/spambayes/bayes.ini it works once again, but
shouldn't there be an nbuckets attribute with a default value already?

Skip
0
[Spambayes] timtest broke?

[Skip Montanaro]
> After my latest cvs up, timtest fails with
>
> Traceback (most recent call last):
>   File "/home/skip/src/spambayes/timtest.py", line 294, in ?
>     drive(nsets)
>   File "/home/skip/src/spambayes/timtest.py", line 264, in drive
>     d = Driver()
>   File "/home/skip/src/spambayes/timtest.py", line 152, in __init__
>     self.global_ham_hist = Hist(options.nbuckets)
> AttributeError: 'OptionsClass' object has no attribute 'nbuckets'
>
> I'm running it as
>
>     timtest -n5 > Data/timtest.out
>
> from my ~/Mail directory (not from my ~/src/spambayes directory). If I
> create a symlink to ~/src/spambayes/bayes.ini it works once again, but
> shouldn't there be an nbuckets attribute with a default value already?

I never used ConfigParser before, but I read that its read() method
silently ignores files that don't exist. If 'bayes.ini' isn't found,
*none* of the options will be defined. Since you want to run this from
a directory other than my spambayes directory, it's up to you to check
in changes to make that possible <wink>.
0
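Tim's diagnosis is easy to demonstrate: ConfigParser.read() returns
quietly when a file is missing, so every option simply fails to exist.
A sketch of a louder variant -- the function name and error handling are
mine, not the project's:

    # read() ignores missing files silently; fail loudly instead.
    import os
    from configparser import ConfigParser  # ConfigParser module in 2002

    def read_options(path="bayes.ini"):
        if not os.path.exists(path):
            raise IOError("options file not found: %r" % path)
        cp = ConfigParser()
        cp.read(path)
        return cp

With a guard like this, running from the wrong directory produces an
immediate complaint about the ini file rather than a baffling
AttributeError much later.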
[Spambayes] timtest broke? [Tim] > I never used ConfigParser before, but I read that its read() > method silently ignores files that don't exist. If 'bayes.ini' > isn't found, *none* of the options will be defined. ... Note that I since got rid of bayes.ini (it's embedded in Options.py now), so search-path issues won't burn you here anymore. The intended way to customize the tokenizer and testers is via creating your own bayescustomize.ini. You'll get burned by search-path issues wrt that instead now <0.7 wink>.
0
[Spambayes] Current histograms [Anthony Baxter] > Well, I've finally got around to pulling down the SF code. Starting > with it, and absolutely zero local modifications, I see the following: How many runs is this summarizing? For each, how many ham&spam were in the training set? How many in the prediction sets? What were the error rates (run rates.py over your output file)? The effect of set sizes on accuracy rates isn't known. I've informally reported some results from just a few controlled experiments on that. Jeremy reported improved accuracy by doubling the training set size, but that wasn't a controlled experiment (things besides just training set size changed between "before" and "after"). > Ham distribution for all runs: > * = 589 items > 0.00 35292 ************************************************************ > 2.50 36 * > 5.00 21 * > 7.50 12 * > 10.00 6 * > ... > 90.00 4 * > 92.50 8 * > 95.00 15 * > 97.50 441 * > > Spam distribution for all runs: > * = 504 items > 0.00 393 * > 2.50 17 * > 5.00 18 * > 7.50 12 * > 10.00 4 * > ... > 90.00 11 * > 92.50 16 * > 95.00 45 * > 97.50 30226 ************************************************************ > > > My next (current) task is to complete the corpus I've got - it's currently > got ~ 9000 ham, 7800 spam, and about 9200 currently unsorted. I'm tossing > up using either hammie or spamassassin to do the initial sort - previously > I've used various forms of 'grep' for keywords and a little gui thing to > pop a message up and let me say 'spam/ham', but that's just getting too, too > tedious. Yup, tagging data is mondo tedious, and mistakes hurt. I expect hammie will do a much better job on this already than hand grepping. Be sure to stare at the false positives and get the spam out of there. > I can't make it available en masse, but I will look at finding some of > the more 'interesting' uglies. One thing I've seen (consider this > 'anecdotal' for now) is that the 'skip' tokens end up in a _lot_ of the > f-ps. With probabilities favoring ham or spam? A skip token is produced in lieu of "word" more than 12 chars long and without any high-bit characters. It's possible that they helped me because raw HTML produces lots of these. However, if you're running current CVS, Tokenizer/retain_pure_html_tags defaults to False now, so HTML decorations should vanish before body tokenization.
0
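As a concrete rendering of Tim's description of skip tokens -- a sketch,
since the real tokenizer's summary format may differ -- a long word with
no high-bit characters collapses into a summary token instead of being
used verbatim:

    # Sketch of the skip-token rule described above.
    def tokenize_word(word, maxlen=12):
        if len(word) <= maxlen or any(ord(c) >= 128 for c in word):
            yield word                  # short, or has high-bit chars
        else:
            # summarize: first char plus length rounded down to tens
            yield "skip:%s %d" % (word[0], len(word) // 10 * 10)

So a 27-character run of tildes and a 29-character run both become
"skip:~ 20", which is how different hams can "pick a different variant"
yet still collide on the same token.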
[Spambayes] XTreme Training [Tim] > ... > At the other extreme, training on half my ham&spam, and scoring aginst > the other half > ... > false positive rate: 0.0100% > false negative rate: 0.3636% > ... > Alas, all 4 of the 0.99 clues there are HTML-related. That begged to try it again but with Tokenize/retain_pure_html_tags false. The random halves getting trained on and scored against are different here, and I repaired the bug that dropped 1 ham and 1 spam on the floor, so this isn't exactly a 1-change difference between runs. Ham distribution for all runs: * = 167 items 0.00 9999 ************************************************************ 10.00 0 20.00 0 30.00 0 40.00 0 50.00 0 60.00 0 70.00 0 80.00 0 90.00 1 * Spam distribution for all runs: * = 115 items 0.00 21 * 10.00 0 20.00 0 30.00 1 * 40.00 0 50.00 0 60.00 1 * 70.00 0 80.00 1 * 90.00 6852 ************************************************************ false positive rate: 0.0100% false negative rate: 0.3490% Yay! That may mean that HTML tags aren't really needed in my test data provided it's trained on enough stuff. Curiously, the sole false positive here is the same as the sole false positive on the half&half run reported in the preceding msg (I assume the Nigerian scam "false positive" just happened to end up in the training data both times): ************************************************************************ Data/Ham/Set4/107687.txt prob = 0.999632042904 prob('python.') = 0.01 prob('alteration') = 0.01 prob('edinburgh') = 0.01 prob('subject:Python') = 0.01 prob('header:Errors-To:1') = 0.0216278 prob('thanks,') = 0.0319955 prob('help?') = 0.041806 prob('road,') = 0.0462364 prob('there,') = 0.0722794 prob('us.') = 0.906609 prob('our') = 0.919118 prob('company,') = 0.921852 prob('visit') = 0.930785 prob('sent.') = 0.939882 prob('e-mail') = 0.949765 prob('courses') = 0.954726 prob('received') = 0.955209 prob('analyst') = 0.960756 prob('investment') = 0.975139 prob('regulated') = 0.99 prob('e-mails') = 0.99 prob('mills') = 0.99 Received: from [195.171.5.71] (helo=node401.dmz.standardlife.com) by mail.python.org with esmtp (Exim 3.21 #1) id 15rDsu-00085k-00 for python-list@python.org; Wed, 10 Oct 2001 03:34:32 -0400 Received: from slukdcn4.internal.standardlife.com (slukdcn4.standardlife.com [10.3.2.72]) by node401.dmz.standardlife.com (Pro-8.9.3/Pro-8.9.3) with SMTP id IAA53660for <python-list@python.org>; Wed, 10 Oct 2001 08:34:00 +0100 Received: from sl079320 ([172.31.88.231]) by slukdcn4.internal.standardlife.com (Lotus SMTP MTA v4.6.6 (890.1 7-16-1999)) with SMTP id 80256AE1.00294B60; Wed, 10 Oct 2001 08:31:02 +0100 Message-ID: <007e01c1515d$bb255940$e7581fac@sl079320.internal.standardlife.com> From: "Vickie Mills" <vickie_mills@standardlife.com> To: <python-list@python.org> Subject: Training Courses in Python in UK Date: Wed, 10 Oct 2001 08:32:30 +0100 MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 4.72.3155.0 X-MimeOLE: Produced By Microsoft MimeOLE V4.72.3155.0 Sender: python-list-admin@python.org Errors-To: python-list-admin@python.org X-BeenThere: python-list@python.org X-Mailman-Version: 2.0.6 (101270) Precedence: bulk List-Help: <mailto:python-list-request@python.org?subject=help> List-Post: <mailto:python-list@python.org> List-Subscribe: <http://mail.python.org/mailman/listinfo/python-list>, <mailto:python-list-request@python.org?subject=subscribe> List-Id: General discussion list for the 
Python programming language <python-list.python.org> List-Unsubscribe: <http://mail.python.org/mailman/listinfo/python-list>, <mailto:python-list-request@python.org?subject=unsubscribe> List-Archive: <http://mail.python.org/pipermail/python-list/> Hi there, I am looking for you recommendations on training courses available in the UK on Python. Can you help? Thanks, Vickie Mills IS Training Analyst Tel: 0131 245 1127 Fax: 0131 245 1550 E-mail: vickie_mills@standardlife.com For more information on Standard Life, visit our website http://www.standardlife.com/ The Standard Life Assurance Company, Standard Life House, 30 Lothian Road, Edinburgh EH1 2DH, is registered in Scotland (No SZ4) and regulated by the Personal Investment Authority. Tel: 0131 225 2552 - calls may be recorded or monitored. This confidential e-mail is for the addressee only. If received in error, do not retain/copy/disclose it without our consent and please return it to us. We virus scan all e-mails but are not responsible for any damage caused by a virus or alteration by a third party after it is sent. ************************************************************************ The top 30 discriminators are more interesting now: 'income' 629 0.99 'http0:python' 643 0.01 'header:MiME-Version:1' 672 0.99 'http1:remove' 693 0.99 'content-type:text/html' 711 0.982345 'string' 714 0.01 'http>1:jpg' 776 0.99 'object' 813 0.01 'python,' 852 0.01 'python.' 882 0.01 'language' 883 0.01 '>>>' 907 0.01 'header:Return-Path:2' 907 0.99 'unsubscribe' 975 0.99 'header:Received:7' 1113 0.99 'def' 1142 0.01 'http>1:gif' 1168 0.99 'module' 1169 0.01 'import' 1332 0.01 'header:Received:8' 1342 0.99 'header:Errors-To:1' 1377 0.0216278 'header:In-Reply-To:1' 1402 0.01 'wrote' 1753 0.01 '&nbsp;' 2067 0.99 'subject:Python' 2140 0.01 'header:User-Agent:1' 2322 0.01 'header:X-Complaints-To:1' 4351 0.01 'wrote:' 4370 0.01 'python' 4972 0.01 'header:Organization:1' 6921 0.01 There are still two HTML clues remaining there ("&nbsp;" and "content-type:text/html"). Anthony's trick accounts for almost a third of these. "Python" appears in 5 of them ('http0:python' means that 'python' was found in the 1st field of an embedded http:// URL). Sticking a .gif or a .jpg in a URL both score as 0.99 spam clues. Note the damning pattern of capitalization in 'header:MiME-Version:1'! This counting is case-sensitive, and nobody ever would have guessed that MiME is more damning than SUBJECT or DATE. Why would spam be likely to end up with two instances of Return-Path in the headers?
0
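The clue names above ('http0:python', 'http1:remove', 'http>1:gif')
imply a URL-cracking scheme where the first two fields of an embedded
URL keep their position and everything deeper collapses into '>1'. A
sketch consistent with those names, though not necessarily the
tokenizer's actual code:

    # Sketch matching the clue names above: positional tokens for the
    # first two URL fields, 'http>1:' for anything deeper.
    import re

    url_re = re.compile(r"http://([^\s'\"<>]+)", re.IGNORECASE)

    def crack_urls(text):
        for guts in url_re.findall(text):
            fields = [f for f in re.split(r"[/?.=&]+", guts.lower()) if f]
            for i, field in enumerate(fields):
                prefix = "http%d" % i if i <= 1 else "http>1"
                yield "%s:%s" % (prefix, field)

Under this scheme a URL ending in logo.gif contributes 'http>1:gif' no
matter how deep the path, which is why .gif and .jpg can each act as a
single strong 0.99 clue.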
[Spambayes] stack.pop() ate my multipart message

I've been running hammie on all my incoming messages, and I noticed that
multipart/alternative messages are totally hosed: they have no content,
just the MIME boundaries. For instance, the following message:

------------------------------8<------------------------------
From: somebody <someone@somewhere.org>
To: neale@woozle.org
Subject: Booga
Content-type: multipart/alternative; boundary="snot"

This is a multi-part message in MIME format.
--snot
Content-type: text/plain; charset=iso-8859-1
Content-transfer-encoding: 7BIT

Hi there.
--snot
Content-type: text/html; charset=iso-8859-1
Content-transfer-encoding: 7BIT

<pre>Hi there.</pre>
--snot--
------------------------------8<------------------------------

Comes out like this:

------------------------------8<------------------------------
From: somebody <someone@somewhere.org>
To: neale@woozle.org
Subject: Booga
Content-type: multipart/alternative; boundary="snot"
X-Hammie-Disposition: No; 0.74; [unrelated gar removed]

This is a multi-part message in MIME format.
--snot
--snot--
------------------------------8<------------------------------

I'm using "Python 2.3a0 (#1, Sep 9 2002, 22:56:24)". I've fixed it with
the following patch to Tim's tokenizer, but I have to admit that I'm
baffled as to why it works. Maybe there's some subtle interaction
between generators and lists that I can't understand. Or something.
Being as I'm baffled, I don't imagine any theory I come up with will be
anywhere close to reality.

In any case, be advised that (at least for me) hammie will eat
multipart/alternative messages until this patch is applied. The patch
seems rather bogus though, so I'm not checking it in, in the hope that
there's a better fix I just wasn't capable of discovering :)

------------------------------8<------------------------------
Index: tokenizer.py
===================================================================
RCS file: /cvsroot/spambayes/spambayes/tokenizer.py,v
retrieving revision 1.15
diff -u -r1.15 tokenizer.py
--- tokenizer.py	10 Sep 2002 18:15:49 -0000	1.15
+++ tokenizer.py	11 Sep 2002 05:01:16 -0000
@@ -1,3 +1,4 @@
+#! /usr/bin/env python
 """Module to tokenize email messages for spam filtering."""
 
 import email
@@ -507,7 +508,8 @@
         htmlpart = textpart = None
         stack = part.get_payload()
         while stack:
-            subpart = stack.pop()
+            subpart = stack[0]
+            stack = stack[1:]
             ctype = subpart.get_content_type()
             if ctype == 'text/plain':
                 textpart = subpart
------------------------------8<------------------------------
0
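For what it's worth, the behavior can be reproduced without any email
machinery at all: get_payload() returns the message's own list of
subparts, so popping from it destructively empties the message itself.
A miniature demonstration of the aliasing at fault:

    # Miniature of the bug: 'stack' aliases the message's own list.
    payload = ["text/plain part", "text/html part"]  # as from get_payload()

    stack = payload            # alias, not a copy
    while stack:
        stack.pop()
    print(payload)             # [] -- the message just lost its subparts

    payload = ["text/plain part", "text/html part"]
    stack = payload[:]         # copy: the original survives
    while stack:
        stack.pop()
    print(payload)             # both parts still present

As the follow-up below notes, the one-character fix is that trailing
[:].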
[Spambayes] stack.pop() ate my multipart message So then, Neale Pickett <neale@woozle.org> is all like: > Maybe there's some subtle interaction between generators and lists > that I can't understand. Or something. Being as I'm baffled, I don't > imagine any theory I come up with will be anywhere close to reality. And then, just as I was about to fall asleep, I figured it out. The tokenizer now has an extra [:], and all is well. I feel like a real chawbacon for not realizing this earlier. :*) Blaming it on staying up past bedtime, Neale
0
[Spambayes] XTreme Training On 10 September 2002, Tim Peters said: > Why would spam be likely to end up with two instances of Return-Path > in the headers? Possibly another qmail-ism from Bruce Guenter's spam collection. Or maybe Anthony's right about spammers being stupid and blindly copying headers. (Well, of course he's right about spammers being stupid; it's just this particular aspect of stupidity that's open to question.) Greg -- Greg Ward <gward@python.net> http://www.gerg.ca/ Think honk if you're a telepath.
0
[Spambayes] Current histograms [Anthony Baxter] > 5 sets, each of 1800ham/1550spam, just ran the once (it matched all 5 to > each other...) > > rates.py sez: > > Training on Data/Ham/Set1 & Data/Spam/Set1 ... 1798 hams & 1548 spams > 0.445 0.388 > 0.445 0.323 > 2.108 4.072 > 0.556 1.097 > Training on Data/Ham/Set2 & Data/Spam/Set2 ... 1798 hams & 1546 spams > 2.113 0.517 > 1.335 0.194 > 3.106 5.365 > 2.113 2.903 > Training on Data/Ham/Set3 & Data/Spam/Set3 ... 1798 hams & 1547 spams > 2.447 0.646 > 0.945 0.388 > 2.884 3.426 > 2.058 1.097 > Training on Data/Ham/Set4 & Data/Spam/Set4 ... 1803 hams & 1547 spams > 1.057 2.584 > 0.723 1.682 > 0.890 1.164 > 0.445 0.452 > Training on Data/Ham/Set5 & Data/Spam/Set5 ... 1798 hams & 1550 spams > 0.779 4.328 > 0.501 3.299 > 0.667 3.361 > 0.388 4.977 > total false pos 273 3.03501945525 > total false neg 367 4.74282760403 How were these msgs broken up into the 5 sets? Set4 in particular is giving the other sets severe problems, and Set5 blows the f-n rate on everything it's predicting -- when the rates across runs within a training set vary by as much as a factor of 25, it suggests there was systematic bias in the way the sets were chosen. For example, perhaps they were broken into sets by arrival time. If that's what you did, you should go back and break them into sets randomly instead. If you did partition them randomly, the wild variance across runs is mondo mysterious. >> I expect hammie will do a much better job on this already than hand >> grepping. Be sure to stare at the false positives and get the >> spam out of there. > Yah, but there's a chicken-and-egg problem there - I want stuff that's > _known_ to be right to test this stuff, Then you have to look at every message by eyeball -- any scheme has non-zero error rates of both kinds. > so using the spambayes code to tell me whether it's spam is not > going to help. Trust me <wink> -- it helps a *lot*. I expect everyone who has done any testing here has discovered spam in their ham, and vice versa. Results improve as you improve the categorization. Once the gross mistakes are straightened out, it's much less tedious to scan the rest by eyeball. [on skip tokens] > Yep, it shows up in a lot of spam, but also in different forms in hams. > But the hams each manage to pick a different variant of > ~~~~~~~~~~~~~~~~~~~~~~ > or whatever - so they don't end up counteracting the various bits in the > spam. > > Looking further, a _lot_ of the bad skip rubbish is coming from > uuencoded viruses &c in the spam-set. For whatever reason, there appear to be few of those in BruceG's spam collection. I added code to strip uuencoded sections, and pump out uuencode summary tokens instead. I'll check it in. It didn't make a significant difference on my usual test run (a single spam in my Set4 is now judged as ham by the other 4 sets; nothing else changed). It does shrink the database size here by a few percent. Let us know whether it helps you! 
Before and after stripping uuencoded sections: false positive percentages 0.000 0.000 tied 0.000 0.000 tied 0.050 0.050 tied 0.000 0.000 tied 0.025 0.025 tied 0.000 0.000 tied 0.075 0.075 tied 0.025 0.025 tied 0.025 0.025 tied 0.000 0.000 tied 0.050 0.050 tied 0.000 0.000 tied 0.025 0.025 tied 0.000 0.000 tied 0.000 0.000 tied 0.050 0.050 tied 0.025 0.025 tied 0.000 0.000 tied 0.025 0.025 tied 0.050 0.050 tied won 0 times tied 20 times lost 0 times total unique fp went from 8 to 8 tied false negative percentages 0.255 0.255 tied 0.364 0.364 tied 0.254 0.291 lost +14.57% 0.509 0.509 tied 0.436 0.436 tied 0.218 0.218 tied 0.182 0.218 lost +19.78% 0.582 0.582 tied 0.327 0.327 tied 0.255 0.255 tied 0.254 0.291 lost +14.57% 0.582 0.582 tied 0.545 0.545 tied 0.255 0.255 tied 0.291 0.291 tied 0.400 0.400 tied 0.291 0.291 tied 0.218 0.218 tied 0.218 0.218 tied 0.145 0.182 lost +25.52% won 0 times tied 16 times lost 4 times total unique fn went from 89 to 90 lost +1.12%
0
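A hedged sketch of the uuencode handling Tim describes -- strip
"begin ... end" sections from the body and pump out a summary token per
section; the checked-in code's exact token format isn't shown here, so
the names below are illustrative:

    # Sketch only: drop uuencoded payloads, keep a summary token each.
    import re

    uu_re = re.compile(r"^begin\s+[0-7]{3}\s+(\S+)\n.*?^end\n",
                       re.MULTILINE | re.DOTALL)

    def strip_uuencode(body):
        tokens = []
        def summarize(match):
            fname = match.group(1)
            ext = fname.rsplit(".", 1)[-1].lower() if "." in fname else "none"
            tokens.append("uuencode:%s" % ext)
            return " "          # the payload vanishes from the body
        return uu_re.sub(summarize, body), tokens

The win is twofold: the classifier stops seeing a blizzard of
meaningless skip tokens from encoded lines, and the database stops
storing them, which matches the few-percent shrinkage Tim reports.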
[Spambayes] Current histograms > How were these msgs broken up into the 5 sets? Set4 in particular is giving > the other sets severe problems, and Set5 blows the f-n rate on everything > it's predicting -- when the rates across runs within a training set vary by > as much as a factor of 25, it suggests there was systematic bias in the way > the sets were chosen. For example, perhaps they were broken into sets by > arrival time. If that's what you did, you should go back and break them > into sets randomly instead. If you did partition them randomly, the wild > variance across runs is mondo mysterious. They weren't partitioned in any particular scheme - I think I'll write a reshuffler and move them all around, just in case (fwiw, I'm using MH style folders with numbered files - means you can just use MH tools to manipulate the sets.) > For whatever reason, there appear to be few of those in BruceG's spam > collection. I added code to strip uuencoded sections, and pump out uuencode > summary tokens instead. I'll check it in. It didn't make a significant > difference on my usual test run (a single spam in my Set4 is now judged as > ham by the other 4 sets; nothing else changed). It does shrink the database > size here by a few percent. Let us know whether it helps you! I'll give it a go. -- Anthony Baxter <anthony@interlink.com.au> It's never too late to have a happy childhood.
0
[Spambayes] stack.pop() ate my multipart message [Neale Pickett] > And then, just as I was about to fall asleep, I figured it out. The > tokenizer now has an extra [:], and all is well. I feel like a real > chawbacon for not realizing this earlier. :*) Good eye, Neale! Thanks for the fix. > Blaming it on staying up past bedtime, Blame it on Barry. I do <wink>.
0
[Spambayes] XTreme Training [Tim] >> Why would spam be likely to end up with two instances of Return-Path >> in the headers? [Greg Ward] > Possibly another qmail-ism from Bruce Guenter's spam collection. Doesn't seem *likely*, as it appeared in about 900 of about 14,000 spams. It could be specific to one of his bait addresses, though -- don't know. A nice thing about a statistical inferencer is that you really don't have to know why a thing works, just whether it works <wink>. > Or maybe Anthony's right about spammers being stupid and blindly copying > headers. (Well, of course he's right about spammers being stupid; it's > just this particular aspect of stupidity that's open to question.) I'm going to blow it off -- it's just another instance of being pointlessly baffled by a mixed corpus half of which I don't know enough about.
0
[Spambayes] Current histograms Anthony> They weren't partitioned in any particular scheme - I think Anthony> I'll write a reshuffler and move them all around, ... Hmmm. How about you create empty Data/Ham/Set[12345], stuff all your files into a Data/Ham/reservoir folder, then run the rebal.py script to randomly parcel messages out to the various real directories? I suspect you can pull the same stunt for your Data/Spam stuff. Skip
0
[Spambayes] Current histograms [Skip] > Hmmm. How about you create empty Data/Ham/Set[12345], stuff all your > files into a Data/Ham/reservoir folder, then run the rebal.py script to > randomly parcel messages out to the various real directories? I'm afraid rebal is quadratic-time in the # of msgs it shuffles around -- since it was only intended to move a few files around, it's dead simple. An easy thing is to start the same way: move all the files into a single directory. Then do random.shuffle() on an os.listdir() of that directory. Then it's trivial to split the result into N slices, and move the files into N other directories accordingly. > I suspect you can pull the same stunt for your Data/Spam stuff. Yup!
0
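Tim's recipe translates almost line for line into code. A sketch
(directory names hypothetical) that shuffles once and deals the result
into N sets in linear time, rather than rebal.py's quadratic-time
shuffling:

    # Linear-time reshuffle: one shuffle, then N slices.
    import os
    import random

    def reshuffle(srcdir, destdirs):
        names = os.listdir(srcdir)
        random.shuffle(names)
        n = len(destdirs)
        for i, dest in enumerate(destdirs):
            # slice i takes every n-th name; set sizes differ by at most 1
            for name in names[i::n]:
                os.rename(os.path.join(srcdir, name),
                          os.path.join(dest, name))

    # e.g. reshuffle("Data/Ham/all",
    #                ["Data/Ham/Set%d" % i for i in range(1, 6)])

After the shuffle, striding and contiguous slicing are equivalent in
distribution; striding just saves computing the slice boundaries.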
[Spambayes] Current histograms > They weren't partitioned in any particular scheme - I think I'll write a > reshuffler and move them all around, just in case (fwiw, I'm using MH > style folders with numbered files - means you can just use MH tools to > manipulate the sets.) Freak show. Obviously there _was_ some sort of patterns to the data: Training on Data/Ham/Set1 & Data/Spam/Set1 ... 1798 hams & 1546 spams 0.779 0.582 0.834 0.840 0.945 0.452 0.667 1.164 Training on Data/Ham/Set2 & Data/Spam/Set2 ... 1798 hams & 1547 spams 1.112 0.776 0.834 0.969 0.779 0.646 0.667 1.100 Training on Data/Ham/Set3 & Data/Spam/Set3 ... 1798 hams & 1548 spams 1.168 0.582 1.001 0.646 0.834 0.582 0.667 0.453 Training on Data/Ham/Set4 & Data/Spam/Set4 ... 1798 hams & 1547 spams 0.779 0.712 0.779 0.582 0.556 0.840 0.779 0.970 Training on Data/Ham/Set5 & Data/Spam/Set5 ... 1798 hams & 1546 spams 0.612 0.517 0.779 0.517 0.723 0.711 0.667 0.582 total false pos 144 1.60177975528 total false neg 101 1.30592190328 (before the shuffle, I was seeing: total false pos 273 3.03501945525 total false neg 367 4.74282760403 ) For sake of comparision, here's what I see for partitioned into 2 sets: Training on Data/Ham/Set1 & Data/Spam/Set1 ... 4492 hams & 3872 spams 0.490 0.776 Training on Data/Ham/Set2 & Data/Spam/Set2 ... 4493 hams & 3868 spams 0.401 0.491 total false pos 40 0.445186421814 total false neg 49 0.633074935401 more later... Anthony
0
[use Perl] Headlines for 2002-08-24 use Perl Daily Headline Mailer Damian Conway Publishes Exegesis 5 posted by hfb on Friday August 23, @14:06 (perl6) http://use.perl.org/article.pl?sid=02/08/23/187226 2nd Open Source CMS Conference posted by ziggy on Friday August 23, @18:30 (events) http://use.perl.org/article.pl?sid=02/08/23/1837242 Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Stories for 2002-08-24 use Perl Daily Newsletter In this issue: * Damian Conway Publishes Exegesis 5 * 2nd Open Source CMS Conference +--------------------------------------------------------------------+ | Damian Conway Publishes Exegesis 5 | | posted by hfb on Friday August 23, @14:06 (perl6) | | http://use.perl.org/article.pl?sid=02/08/23/187226 | +--------------------------------------------------------------------+ An anonymous coward writes "[0]/. has a [1]link to [2]Damian's [3]Exegesis 5" Discuss this story at: http://use.perl.org/comments.pl?sid=02/08/23/187226 Links: 0. http://www.slashdot.org/ 1. http://developers.slashdot.org/developers/02/08/23/1232230.shtml?tid=145 2. http://yetanother.org/damian/ 3. http://www.perl.com/lpt/a/2002/08/22/exegesis5.html +--------------------------------------------------------------------+ | 2nd Open Source CMS Conference | | posted by ziggy on Friday August 23, @18:30 (events) | | http://use.perl.org/article.pl?sid=02/08/23/1837242 | +--------------------------------------------------------------------+ [0]Gregor J. Rothfuss writes "There will be a second [1]Open Source CMS conference this fall in Berkeley. We will feature presentations and workshops from a wide range of CMS, and would definitely welcome some Perl-fu in there as well. Also of interest, our efforts to [2]build bridges across CMS. Participate in our [3]mailing list, or better yet, [4]show up :)" The [5]first Open Source CMS conference was held in Zurich this past March. Discuss this story at: http://use.perl.org/comments.pl?sid=02/08/23/1837242 Links: 0. mailto:gregor.rothfuss@oscom.org 1. http://www.oscom.org/conferences/berkeley2002/index.html 2. http://www.oscom.org/interop.html 3. http://www.oscom.org/mailing-lists.html 4. http://www.oscom.org/conferences/berkeley2002/registration_fees.html 5. http://www.oscom.org/conferences/zurich2002/agenda.html Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
How to end the war.... Forwarded-by: Rob Windsor <windsor@warthog.com> Forwarded-by: "David Dietz" <kansas1@mynewroads.com> The latest proposal to drive the Taliban and Al Qaeda out of the Mountains of Afghanistan is to send in the ASF (Alabama Special Forces) Billy Bob, Bubba, Boo, Scooter, Cooter and Junior are being sent in with the following information about the Taliban: 1. There is no limit. 2. The season opened last weekend. 3. They taste just like chicken. 4. They hate beer, pickup trucks, country music, and Jesus. 5. Some are queer. 6. They don't like barbecue. And most importantly... 7. They were responsible for Dale Earnhardt's death. We estimate it should be over in just about two days.
0
Family Targetted Advertising Forwarded-by: Rob Windsor <windsor@warthog.com> Forwarded-by: "Dave Bruce" <dbruce@wwd.net> Forwarded by: Gary Williams <garyaw1990@aol.com> A Mother had 3 virgin daughters. They were all getting married within a short time period. Because Mom was a bit worried about how their sex life would get started, she made them all promise to send a postcard from the honeymoon with a few words on how marital sex felt. The first girl sent a card from Hawaii two days after the wedding. The card said nothing but "Maxwell House". Mom was puzzled at first, but then went to the kitchen and got out the Nescafe jar. It said: "Good till the last drop." Mom blushed, but was pleased for her daughter. The second girl sent the card from Vermont a week after the wedding, and the card read: "Benson & Hedges". Mom now knew to go straight to her husband's cigarettes, and she read from the Benson & Hedges pack: "Extra Long, King Size". She was again slightly embarrassed but still happy for her daughter. The third girl left for her honeymoon in the Caribbean. Mom waited for a week, nothing. Another week went by and still nothing. Then after a whole month, a card finally arrived. Written on it with shaky handwriting were the words: "British Airways". Mom took out her latest Harper's Bazaar magazine, flipped through the pages fearing the worst, and finally found the ad for the airline. The ad said: "Three times a day, seven days a week, both ways." Mom fainted.
0
[use Perl] Stories for 2002-08-27 use Perl Daily Newsletter In this issue: * This Week on perl5-porters (19-25 August 2002) * Slashdot Taking Questions to Ask Larry Wall +--------------------------------------------------------------------+ | This Week on perl5-porters (19-25 August 2002) | | posted by rafael on Monday August 26, @07:42 (summaries) | | http://use.perl.org/article.pl?sid=02/08/26/1154225 | +--------------------------------------------------------------------+ I guess those thunderstorms came. And how they came. From an even wetter than normal country on the shores of the North Sea, comes this weeks perl5-porters summary. This story continues at: http://use.perl.org/article.pl?sid=02/08/26/1154225 Discuss this story at: http://use.perl.org/comments.pl?sid=02/08/26/1154225 +--------------------------------------------------------------------+ | Slashdot Taking Questions to Ask Larry Wall | | posted by pudge on Monday August 26, @14:41 (perl6) | | http://use.perl.org/article.pl?sid=02/08/26/1845224 | +--------------------------------------------------------------------+ Please, if you feel inclined, make sure some [0]reasonable questions are asked of him. Discuss this story at: http://use.perl.org/comments.pl?sid=02/08/26/1845224 Links: 0. http://interviews.slashdot.org/article.pl?sid=02/08/25/236217&tid=145 Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Headlines for 2002-08-27 use Perl Daily Headline Mailer This Week on perl5-porters (19-25 August 2002) posted by rafael on Monday August 26, @07:42 (summaries) http://use.perl.org/article.pl?sid=02/08/26/1154225 Slashdot Taking Questions to Ask Larry Wall posted by pudge on Monday August 26, @14:41 (perl6) http://use.perl.org/article.pl?sid=02/08/26/1845224 Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
The "other" kind of work-experience ... Forwarded-by: Per Hammer <perh@inrs.co.uk> http://news.bbc.co.uk/1/hi/world/asia-pacific/2218715.stm Brothel duty for Australian MP A conservative Member of Parliament in Australia is set to spend the day as a "slave" at one of Western Australia's most notorious brothels. Liberal Party member Barry Haase was "won" in a charity auction after the madam of Langtree's brothel in the mining town of Kalgoorlie made the highest offer for his services for a day. [...] "I hope he will leave with an informed decision on what Australian brothels are all about and it will help him in his political career to make informed decisions that he might not have been able to make before," Ms Kenworthy said. Mr Haase, a member of Prime Minister John Howard's party seemed relaxed about the prospect of working in a brothel. "You can't be half-hearted about fundraising for significant charities and I think I'm big enough to play the game," he said.
0
[use Perl] Headlines for 2002-08-28 use Perl Daily Headline Mailer .NET and Perl, Working Together posted by pudge on Tuesday August 27, @09:17 (links) http://use.perl.org/article.pl?sid=02/08/27/1317253 Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Stories for 2002-08-28 use Perl Daily Newsletter In this issue: * .NET and Perl, Working Together +--------------------------------------------------------------------+ | .NET and Perl, Working Together | | posted by pudge on Tuesday August 27, @09:17 (links) | | http://use.perl.org/article.pl?sid=02/08/27/1317253 | +--------------------------------------------------------------------+ [0]jonasbn writes "DevX has brought an article on the subject of [1]Perl and .NET and porting existing code. The teaser: Learn how CPAN Perl modules can be made automatically available to the .NET framework. The technique involves providing small PerlNET mediators between Perl and .NET and knowing when, where, and how to modify." Discuss this story at: http://use.perl.org/comments.pl?sid=02/08/27/1317253 Links: 0. mailto:jonasbn@io.dk 1. http://www.devx.com/dotnet/articles/ym81502/ym81502-1.asp Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Headlines for 2002-08-31 use Perl Daily Headline Mailer Two OSCON Lightning Talks Online posted by gnat on Friday August 30, @18:37 (news) http://use.perl.org/article.pl?sid=02/08/30/2238234 Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Stories for 2002-08-31 use Perl Daily Newsletter In this issue: * Two OSCON Lightning Talks Online +--------------------------------------------------------------------+ | Two OSCON Lightning Talks Online | | posted by gnat on Friday August 30, @18:37 (news) | | http://use.perl.org/article.pl?sid=02/08/30/2238234 | +--------------------------------------------------------------------+ [0]gnat writes "The first two OSCON 2002 lightning talks are available from [1]the perl.org website. They are Dan Brian on "What Sucks and What Rocks" (in [2]QuickTime and [3]mp3), and Brian Ingerson on "Your Own Personal Hashbang" (in [4]QuickTime and [5]mp3). Enjoy!" This story continues at: http://use.perl.org/article.pl?sid=02/08/30/2238234 Discuss this story at: http://use.perl.org/comments.pl?sid=02/08/30/2238234 Links: 0. mailto:gnat@frii.com 1. http://www.perl.org/tpc/2002/ 2. http://www.perl.org/tpc/2002/movies/lt-1/ 3. http://www.perl.org/tpc/2002/audio/lt-1/ 4. http://www.perl.org/tpc/2002/movies/lt-2/ 5. http://www.perl.org/tpc/2002/audio/lt-2/ Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Headlines for 2002-09-01 use Perl Daily Headline Mailer Perl Ports Page posted by hfb on Saturday August 31, @13:40 (cpan) http://use.perl.org/article.pl?sid=02/08/31/1744247 Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Stories for 2002-09-01 use Perl Daily Newsletter In this issue: * Perl Ports Page +--------------------------------------------------------------------+ | Perl Ports Page | | posted by hfb on Saturday August 31, @13:40 (cpan) | | http://use.perl.org/article.pl?sid=02/08/31/1744247 | +--------------------------------------------------------------------+ [0]jhi writes " One of the largely unknown services of CPAN is the [1]Ports page, which offers links to ready-packaged binary distributions of Perl for various platforms, and also other related links (to other UNIXy software, if the platform isn't, and to IDEs and editors). I would like to get feedback on the ports page either here or by sending email to cpan@perl.org. Any kind of feedback is welcome, but I will feel free to ignore any I don't like :-)" Discuss this story at: http://use.perl.org/comments.pl?sid=02/08/31/1744247 Links: 0. mailto:jhi@iki.fi 1. http://www.cpan.org/ports/ Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Stories for 2002-09-03 use Perl Daily Newsletter In this issue: * This Week on perl5-porters (26 August / 1st September 2002) * The Perl Review, v0 i5 +--------------------------------------------------------------------+ | This Week on perl5-porters (26 August / 1st September 2002) | | posted by rafael on Monday September 02, @03:47 (summaries) | | http://use.perl.org/article.pl?sid=02/09/02/0755208 | +--------------------------------------------------------------------+ This week, we're back to our regularly scheduled p5p report, straight from my keyboard's mouth. Many thanks to Elizabeth Mattjisen who provided the two previous reports, while I was away from p5p and from whatever might evocate more or less a computer. This story continues at: http://use.perl.org/article.pl?sid=02/09/02/0755208 Discuss this story at: http://use.perl.org/comments.pl?sid=02/09/02/0755208 +--------------------------------------------------------------------+ | The Perl Review, v0 i5 | | posted by ziggy on Monday September 02, @14:20 (news) | | http://use.perl.org/article.pl?sid=02/09/02/1823229 | +--------------------------------------------------------------------+ [0]brian_d_foy writes "The latest issue of The Perl Review is ready: [1]http://www.theperlreview.com * Extreme Mowing -- Andy Lester * Perl Assembly Language -- Phil Crow * What Perl Programmers Should Know About Java -- Beth Linker * Filehandle Ties -- Robby Walker * The Iterator Design Pattern -- brian d foy Enjoy!" Discuss this story at: http://use.perl.org/comments.pl?sid=02/09/02/1823229 Links: 0. http://www.theperlreview.com 1. http://www.theperlreview.com/ Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Headlines for 2002-09-03 use Perl Daily Headline Mailer This Week on perl5-porters (26 August / 1st September 2002) posted by rafael on Monday September 02, @03:47 (summaries) http://use.perl.org/article.pl?sid=02/09/02/0755208 The Perl Review, v0 i5 posted by ziggy on Monday September 02, @14:20 (news) http://use.perl.org/article.pl?sid=02/09/02/1823229 Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Headlines for 2002-09-04 use Perl Daily Headline Mailer Perl CMS Systems posted by ziggy on Tuesday September 03, @05:00 (tools) http://use.perl.org/article.pl?sid=02/09/02/1827239 1998 Perl Conference CD Online posted by gnat on Tuesday September 03, @19:34 (news) http://use.perl.org/article.pl?sid=02/09/03/2334251 Bricolage 1.4.0 Escapes! posted by chip on Tuesday September 03, @19:57 (tools) http://use.perl.org/article.pl?sid=02/09/04/002204 Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Stories for 2002-09-04 use Perl Daily Newsletter In this issue: * Perl CMS Systems * 1998 Perl Conference CD Online * Bricolage 1.4.0 Escapes! +--------------------------------------------------------------------+ | Perl CMS Systems | | posted by ziggy on Tuesday September 03, @05:00 (tools) | | http://use.perl.org/article.pl?sid=02/09/02/1827239 | +--------------------------------------------------------------------+ KLB writes "[0]Krow, one of the authors of [1]Slash, has written up a [2]review on [3]Linux.com of two other Perl CMS systems, the E2 and LJ engines. Makes for interesting reading." Discuss this story at: http://use.perl.org/comments.pl?sid=02/09/02/1827239 Links: 0. http://krow.net/~ 1. http://slashcode.com/ 2. http://newsforge.com/article.pl?sid=02/08/28/0013255&mode=thread&tid=49 3. http://linux.com/ +--------------------------------------------------------------------+ | 1998 Perl Conference CD Online | | posted by gnat on Tuesday September 03, @19:34 (news) | | http://use.perl.org/article.pl?sid=02/09/03/2334251 | +--------------------------------------------------------------------+ [0]gnat writes "[1]The 1998 Perl Conference CD is online on perl.org. Enjoy the blast from the past (was [2]this Damian's first public appearance?)" (thanks to Daniel Berger for packratting the CD!) Discuss this story at: http://use.perl.org/comments.pl?sid=02/09/03/2334251 Links: 0. mailto:gnat@oreilly.com 1. http://www.perl.org/tpc/1998/ 2. http://www.perl.org/tpc/1998/User_Applications/Declarative%20Command-line%20Inter/ +--------------------------------------------------------------------+ | Bricolage 1.4.0 Escapes! | | posted by chip on Tuesday September 03, @19:57 (tools) | | http://use.perl.org/article.pl?sid=02/09/04/002204 | +--------------------------------------------------------------------+ [0]Theory writes "Bricolage 1.4.0 has finally escaped the shackles of its CVS repository! ... Bricolage is a full-featured, enterprise-class content management and publishing system. It offers a browser-based interface for ease-of use, a full-fledged templating system with complete programming language support for flexibility, and many other features (see below). It operates in an Apache/mod_perl environment, and uses the PostgreSQL RDBMS for its repository." This story continues at: http://use.perl.org/article.pl?sid=02/09/04/002204 Discuss this story at: http://use.perl.org/comments.pl?sid=02/09/04/002204 Links: 0. http://bricolage.cc/ Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Headlines for 2002-09-06 use Perl Daily Headline Mailer Perl "Meetup" posted by ziggy on Thursday September 05, @19:12 (news) http://use.perl.org/article.pl?sid=02/09/05/2316234 Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Stories for 2002-09-06 use Perl Daily Newsletter In this issue: * Perl "Meetup" +--------------------------------------------------------------------+ | Perl "Meetup" | | posted by ziggy on Thursday September 05, @19:12 (news) | | http://use.perl.org/article.pl?sid=02/09/05/2316234 | +--------------------------------------------------------------------+ [0]davorg writes "The people at [1]Meetup have set up a [2]Perl Meetup. The first one takes place on September 19th. I'll probably go along to the one in London to see what happens, but I'd be very interested in hearing any opinions on what this achieves that the existing Perl Mongers groups don't." Discuss this story at: http://use.perl.org/comments.pl?sid=02/09/05/2316234 Links: 0. mailto:dave@dave.org.uk 1. http://www.meetup.com/ 2. http://perl.meetup.com/ Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
Man dies as whale lands on boat Forwarded-by: Sven Guckes <guckes@math.fu-berlin.de> http://news.com.au/common/story_page/0,4057,5037762%255E13762,00.html Port San Luis, California September 05, 2002 A WHALE which suddenly breached and crashed into the bow of a fishing boat killed a restaurant owner on board. Jerry Tibbs, 51, of Bakersfield, California, was aboard his boat, The BBQ, when the whale hit and tossed him into the sea five miles off Port San Luis. Three other fishermen stayed aboard the damaged boat, which was towed to shore by the US Coast Guard. Tibbs and his three friends were just ending a day fishing for albacore when the accident occurred, authorities said. Tibbs' body was found after a search lasting more than 18 hours. Coast Guard officials said it was the first time they could recall an accident caused by a whale hitting a boat.
0
[use Perl] Headlines for 2002-09-10 use Perl Daily Headline Mailer This Week on perl5-porters (2-8 September 2002) posted by rafael on Monday September 09, @07:33 (summaries) http://use.perl.org/article.pl?sid=02/09/09/1147243 Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Stories for 2002-09-10 use Perl Daily Newsletter In this issue: * This Week on perl5-porters (2-8 September 2002) +--------------------------------------------------------------------+ | This Week on perl5-porters (2-8 September 2002) | | posted by rafael on Monday September 09, @07:33 (summaries) | | http://use.perl.org/article.pl?sid=02/09/09/1147243 | +--------------------------------------------------------------------+ As September begins, the perl5-porters, ignoring the changing weather, continue to work. This week, some small things, and a few bigger ones, are selected in the report. Read below. This story continues at: http://use.perl.org/article.pl?sid=02/09/09/1147243 Discuss this story at: http://use.perl.org/comments.pl?sid=02/09/09/1147243 Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Headlines for 2002-09-11 use Perl Daily Headline Mailer DynDNS.org Offers Free DNS To Perl Sites posted by KM on Tuesday September 10, @08:23 (news) http://use.perl.org/article.pl?sid=02/09/10/1225228 Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Stories for 2002-09-11 use Perl Daily Newsletter In this issue: * DynDNS.org Offers Free DNS To Perl Sites +--------------------------------------------------------------------+ | DynDNS.org Offers Free DNS To Perl Sites | | posted by KM on Tuesday September 10, @08:23 (news) | | http://use.perl.org/article.pl?sid=02/09/10/1225228 | +--------------------------------------------------------------------+ [0]krellis writes "[0]DynDNS.org today [1]announced that it will provide free premium DNS services (primary and secondary DNS hosting) to any domains involved in the Perl Community. Read the press release for full details, [2]create an account, and [3]request credit under the Perl DNS offer! Never lose traffic to your Perl site due to failed DNS again!" Sweet. Thanks. Discuss this story at: http://use.perl.org/comments.pl?sid=02/09/10/1225228 Links: 0. http://www.dyndns.org/ 1. http://www.dyndns.org/news/2002/perl-dns.php 2. https://members.dyndns.org/policy.shtml 3. https://members.dyndns.org/nic/perl Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Headlines for 2002-09-13 use Perl Daily Headline Mailer The Perl Journal Returns Online posted by pudge on Wednesday September 11, @21:59 (links) http://use.perl.org/article.pl?sid=02/09/12/026254 Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Stories for 2002-09-13 use Perl Daily Newsletter In this issue: * The Perl Journal Returns Online +--------------------------------------------------------------------+ | The Perl Journal Returns Online | | posted by pudge on Wednesday September 11, @21:59 (links) | | http://use.perl.org/article.pl?sid=02/09/12/026254 | +--------------------------------------------------------------------+ CMP, owners of [0]The Perl Journal, have brought the journal back, in the form of an online monthly magazine, in PDF form. The subscription rate is $12 a year. They need 3,000 subscriptions to move forward (no word if existing subscriptions will be honored, or included in the 3,000). [0]Read the site for more details. I think some of the more interesting notes are that it will include "a healthy dose of opinion", as well as a broadening of coverage including languages other than Perl (will this mean a name change?) and platforms other than Unix (I'd always thought one of TPJ's strengths was that it covered a wide variety of platforms). Discuss this story at: http://use.perl.org/comments.pl?sid=02/09/12/026254 Links: 0. http://www.tpj.com/ Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Headlines for 2002-09-14 use Perl Daily Headline Mailer "Perl 6: Right Here, Right Now" slides ava posted by gnat on Friday September 13, @12:01 (news) http://use.perl.org/article.pl?sid=02/09/13/162209 Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Stories for 2002-09-14 use Perl Daily Newsletter In this issue: * "Perl 6: Right Here, Right Now" slides ava +--------------------------------------------------------------------+ | "Perl 6: Right Here, Right Now" slides ava | | posted by gnat on Friday September 13, @12:01 (news) | | http://use.perl.org/article.pl?sid=02/09/13/162209 | +--------------------------------------------------------------------+ [0]gnat writes "The wonderful Leon Brocard has released the slides from his lightning talk to the London perlmongers, [1]Perl 6: Right Here, Right Now, showing the current perl6 compiler in action." Discuss this story at: http://use.perl.org/comments.pl?sid=02/09/13/162209 Links: 0. mailto:gnat@oreilly.com 1. http://astray.com/perl6_now/ Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Headlines for 2002-09-17 use Perl Daily Headline Mailer New Perl Mongers Web Site posted by KM on Monday September 16, @08:41 (groups) http://use.perl.org/article.pl?sid=02/09/16/1243234 Java vs. Perl posted by pudge on Monday September 16, @11:15 (java) http://use.perl.org/article.pl?sid=02/09/16/1448246 This Week on perl5-porters (9-15 September 2002) posted by rafael on Monday September 16, @16:17 (summaries) http://use.perl.org/article.pl?sid=02/09/16/2026255 Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Stories for 2002-09-17 use Perl Daily Newsletter In this issue: * New Perl Mongers Web Site * Java vs. Perl * This Week on perl5-porters (9-15 September 2002) +--------------------------------------------------------------------+ | New Perl Mongers Web Site | | posted by KM on Monday September 16, @08:41 (groups) | | http://use.perl.org/article.pl?sid=02/09/16/1243234 | +--------------------------------------------------------------------+ [0]davorg writes "Leon Brocard has been working hard to update the [1]Perl Mongers web site. We're still going thru the process of cleaning up the data about the Perl Monger groups, so if you see something that isn't quite right then please [2]let us know." Discuss this story at: http://use.perl.org/comments.pl?sid=02/09/16/1243234 Links: 0. mailto:dave@dave.org.uk 1. http://www.pm.org/ 2. mailto:user_groups@pm.org +--------------------------------------------------------------------+ | Java vs. Perl | | posted by pudge on Monday September 16, @11:15 (java) | | http://use.perl.org/article.pl?sid=02/09/16/1448246 | +--------------------------------------------------------------------+ It seems the older Perl gets, the more willing people are to believe that it sucks, without any reasonable facts. [0]davorg writes "You may have seen the article [1]Can Java technology beat Perl on its home turf with pattern matching in large files? that there has been some debate about on both #perl and comp.lang.perl.misc today. One of the biggest criticisms of the article was that the author hasn't published the Perl code that he is comparing his Java with." This story continues at: http://use.perl.org/article.pl?sid=02/09/16/1448246 Discuss this story at: http://use.perl.org/comments.pl?sid=02/09/16/1448246 Links: 0. mailto:dave@dave.org.uk 1. http://developer.java.sun.com/developer/qow/archive/184/index.jsp +--------------------------------------------------------------------+ | This Week on perl5-porters (9-15 September 2002) | | posted by rafael on Monday September 16, @16:17 (summaries) | | http://use.perl.org/article.pl?sid=02/09/16/2026255 | +--------------------------------------------------------------------+ This was not a very busy week, with people packing for YAPC::Europe, and all that... Nevertheless, the smoke tests were running, the bug reports were flying, and an appropriate amount of patches were sent. Read about printf formats, serialized tied thingies, built-in leak testing, syntax oddities, et alii. This story continues at: http://use.perl.org/article.pl?sid=02/09/16/2026255 Discuss this story at: http://use.perl.org/comments.pl?sid=02/09/16/2026255 Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
I'm a tad furry... Forwarded-by: Nev Dull <nev@sleepycat.com> Forwarded-by: newsletter@tvspy.com Excerpted: ShopTalk - September 13, 2002 "I'm a tad furry, so animal rights issues come into play." Robin Williams, telling Entertainment Weekly why he won't do nude scenes in movies. =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Johnny U: Johnny Unitas was the National Football League's most valuable player twice - and he led Baltimore to victory in "Super Bowl Five." For those of you younger than 30: this WAS modern football. The game was played on artificial turf. (Richard Burkard/ http://www.Laughline.com) Announcement: How telling is it that the death of Johnny Unitas was announced by the Baltimore Ravens - and not the Colts, who now play in Indianapolis? When the Colt owners moved out of Baltimore years ago, they apparently left all the history books behind. (Burkard) Dick Disappears: Vice President Dick Cheney remains at an undisclosed location. The move is for security reasons. The Bush Administration is trying to keep him at a safe distance from would-be subpoenas. (Alan Ray - http://www.araycomedy.com) Noelle Nabbed: Jeb Bush's daughter Noelle is in trouble with the law again. When she was a child, her dad would read her favorite bedtime story. Goldilocks and the Three Strikes. (Ray)
0
[use Perl] Headlines for 2002-09-18 use Perl Daily Headline Mailer Subscribe to The Perl Review posted by pudge on Tuesday September 17, @08:00 (links) http://use.perl.org/article.pl?sid=02/09/17/121210 Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Stories for 2002-09-18 use Perl Daily Newsletter In this issue: * Subscribe to The Perl Review +--------------------------------------------------------------------+ | Subscribe to The Perl Review | | posted by pudge on Tuesday September 17, @08:00 (links) | | http://use.perl.org/article.pl?sid=02/09/17/121210 | +--------------------------------------------------------------------+ [0]barryp writes "You can now pledge a subscription to [1]The Perl Review. The plan is to produce four print magazines per year. Cost: $12/year (US); $20/year (international). Let's all make this happen by signing up!" The web site says that they'll attempt to go print if they get enough subscription pledges. Discuss this story at: http://use.perl.org/comments.pl?sid=02/09/17/121210 Links: 0. mailto:paul.barry@itcarlow.ie 1. http://www.theperlreview.com/ Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
Virgin's latest airliner. Forwarded-by: William Knowles <wk@c4i.org> http://www.thesun.co.uk/article/0,,2-2002430339,00.html By PAUL CROSBIE Sep 16, 2002 VIRGIN'S latest airliner is being revamped after randy passengers discovered a tiny cabin was just the place to join the Mile High Club. The £130million Airbus 340-600 is fitted with a 5ft x 4ft mother and baby room with a plastic table meant for changing nappies. But couples keep wrecking it by sneaking in for a quick bonk. Virgin has replaced the table several times even though the plane only came into service a few weeks ago. It is named Claudia Nine after sexy model Claudia Schiffer, 32, who launched it in July. Now Virgin bosses have asked Airbus to build a stronger table. At first, German engineers responsible for the jet's interior were baffled by the problem. The table is designed to take the weight of a mum and baby. One Airbus worker said: "We couldn't work it out. Then the penny dropped. It didn't occur to the Germans that this might happen. It caused great amusement." The firm say the cost of strengthening the tables will be about £200. A Virgin spokeswoman said: "Those determined to join the Mile High Club will do so despite the lack of comforts. "We don't mind couples having a good time but this is not something that we would encourage because of air regulations." The new Airbus is the world's longest airliner, with teasing slogan "Mine is bigger than yours". Virgin is using it on flights to the Far East and the US.
0
Facts about sex. Forwarded-by: Flower Did you know that you can tell from the skin whether a person is sexually active or not? 1. Sex is a beauty treatment. Scientific tests find that when women make love they produce more of the hormone estrogen, which makes hair shiny and skin smooth. 2. Gentle, relaxed lovemaking reduces your chances of suffering dermatitis, skin rashes and blemishes. The sweat produced cleanses the pores and makes your skin glow. 3. Lovemaking can burn up those calories you piled on during that romantic dinner. 4. Sex is one of the safest sports you can take up. It stretches and tones up just about every muscle in the body. It's more enjoyable than swimming 20 laps, and you don't need special sneakers! 5. Sex is an instant cure for mild depression. It releases endorphins into the bloodstream, producing a sense of euphoria and leaving you with a feeling of well-being. 6. The more sex you have, the more you will be offered. The sexually active body gives off greater quantities of chemicals called pheromones. These subtle sex perfumes drive the opposite sex crazy! 7. Sex is the safest tranquilizer in the world. IT IS 10 TIMES MORE EFFECTIVE THAN VALIUM. 8. Kissing each day will keep the dentist away. Kissing encourages saliva to wash food from the teeth and lowers the level of the acid that causes decay, preventing plaque build-up. 9. Sex actually relieves headaches. A lovemaking session can release the tension that restricts blood vessels in the brain. 10. A lot of lovemaking can unblock a stuffy nose. Sex is a natural antihistamine. It can help combat asthma and hay fever. ENJOY SEX!
0
[use Perl] Headlines for 2002-09-19 use Perl Daily Headline Mailer How much does Perl, PHP, Java, or Lisp suck? posted by pudge on Wednesday September 18, @08:08 (links) http://use.perl.org/article.pl?sid=02/09/17/189201 Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Stories for 2002-09-19 use Perl Daily Newsletter In this issue: * How much does Perl, PHP, Java, or Lisp suck? +--------------------------------------------------------------------+ | How much does Perl, PHP, Java, or Lisp suck? | | posted by pudge on Wednesday September 18, @08:08 (links) | | http://use.perl.org/article.pl?sid=02/09/17/189201 | +--------------------------------------------------------------------+ [0]brian_d_foy writes "A long time ago Don Marti started the OS Sucks-Rules-O-Meter, and Jon Orwant wrote his own Sucks-Rules-O-Meter for computer languages. Recently Dan Brian improved on that with a little bit of natural language processing. Now [1]The Perl Review makes pretty pictures of it all. Based on searches of AltaVista and Google, we found that not a lot of people think PHP or Lisp sucks, a lot think C++ and Java suck, and they put Perl somewhere in the middle. Does Perl suck more than it used to suck, or has PHP just shot way ahead?" Discuss this story at: http://use.perl.org/comments.pl?sid=02/09/17/189201 Links: 0. http://www.theperlreview.com 1. http://www.theperlreview.com/at_a_glance.html Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Headlines for 2002-09-20 use Perl Daily Headline Mailer PerlQT 3 Released posted by ziggy on Thursday September 19, @10:41 (tools) http://use.perl.org/article.pl?sid=02/09/19/1443213 Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Stories for 2002-09-20 use Perl Daily Newsletter In this issue: * PerlQT 3 Released +--------------------------------------------------------------------+ | PerlQT 3 Released | | posted by ziggy on Thursday September 19, @10:41 (tools) | | http://use.perl.org/article.pl?sid=02/09/19/1443213 | +--------------------------------------------------------------------+ [0]Dom2 writes "As seen on the dot, a new version of [1]PerlQT is out! Apparently sporting a perl version of [2]uic for developing UIs from XML. A [3]tutorial is available for those wanting to know more details." Discuss this story at: http://use.perl.org/comments.pl?sid=02/09/19/1443213 Links: 0. mailto:dom@happygiraffe.net 1. http://perlqt.infonium.com/ 2. http://doc.trolltech.com/3.0/uic.html 3. http://perlqt.infonium.com/dist/current/doc/index.html Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Headlines for 2002-09-23 use Perl Daily Headline Mailer YAPC 2003 Call For Venues posted by KM on Sunday September 22, @09:08 (news) http://use.perl.org/article.pl?sid=02/09/22/1312225 Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Stories for 2002-09-23 use Perl Daily Newsletter In this issue: * YAPC 2003 Call For Venues +--------------------------------------------------------------------+ | YAPC 2003 Call For Venues | | posted by KM on Sunday September 22, @09:08 (news) | | http://use.perl.org/article.pl?sid=02/09/22/1312225 | +--------------------------------------------------------------------+ [0]Yet Another Society/[1]The Perl Foundation is making a call for venues for the 2003 YAPC::America. Don't forget to check the [2]venue requirements, the YAPC::Venue module on CPAN, and talk to the organizers of YAPCs past. The Perl Foundation aims to announce the venue in November 2002. Discuss this story at: http://use.perl.org/comments.pl?sid=02/09/22/1312225 Links: 0. http://yetanother.org 1. http://perlfoundation.org 2. http://www.yapc.org/venue-reqs.txt Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Headlines for 2002-09-24 use Perl Daily Headline Mailer This Week on perl5-porters (16-22 September 2002) posted by rafael on Monday September 23, @07:58 (summaries) http://use.perl.org/article.pl?sid=02/09/23/125230 The Great Perl Monger Cull Of 2002 posted by ziggy on Monday September 23, @16:38 (news) http://use.perl.org/article.pl?sid=02/09/23/2041201 Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Stories for 2002-09-24 use Perl Daily Newsletter In this issue: * This Week on perl5-porters (16-22 September 2002) * The Great Perl Monger Cull Of 2002 +--------------------------------------------------------------------+ | This Week on perl5-porters (16-22 September 2002) | | posted by rafael on Monday September 23, @07:58 (summaries) | | http://use.perl.org/article.pl?sid=02/09/23/125230 | +--------------------------------------------------------------------+ It's during a week like this that you realize that lots of porters are European (and managed to free themselves for YAPC::Europe.) Or were they, on the contrary, too busy in the big blue room? On the other hand, the number of bug reports stayed at its habitual average level. This story continues at: http://use.perl.org/article.pl?sid=02/09/23/125230 Discuss this story at: http://use.perl.org/comments.pl?sid=02/09/23/125230 +--------------------------------------------------------------------+ | The Great Perl Monger Cull Of 2002 | | posted by ziggy on Monday September 23, @16:38 (news) | | http://use.perl.org/article.pl?sid=02/09/23/2041201 | +--------------------------------------------------------------------+ [0]davorg writes "If you take a look at the [1]list of local groups on the [2]Perl Mongers web site, you'll see that it's just got a good deal shorter. Over the last month or so, I've been making strenuous efforts to contact all of the groups we had listed to see which ones were still active. What you see is the result of this exercise. Almost half of the groups have been removed because they haven't responded to my emails. If your local group still exists but is no longer listed, then that means that I don't have an up-to-date contact for your group. Please [3]let me know if that's the case." Discuss this story at: http://use.perl.org/comments.pl?sid=02/09/23/2041201 Links: 0. mailto:dave@dave.org.uk 1. http://www.pm.org/groups/ 2. http://www.pm.org/ 3. mailto:user_groups@pm.org Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0
[use Perl] Headlines for 2002-09-26 use Perl Daily Headline Mailer Using Web Services with Perl and AppleScript posted by pudge on Wednesday September 25, @08:12 (links) http://use.perl.org/article.pl?sid=02/09/25/129231 Copyright 1997-2002 pudge. All rights reserved. ====================================================================== You have received this message because you subscribed to it on use Perl. To stop receiving this and other messages from use Perl, or to add more messages or change your preferences, please go to your user page. http://use.perl.org/my/messages/ You can log in and change your preferences from there.
0