text (string, 2–299k chars) | is_spam (int64: 0 or 1) |
---|---|
Re: [Razor-users] Fire.cloudmark.com is having issues
Error 230 occurs when you report a signature, but the server doesn't
know about the signature, so it wants the full content. It's
basically an optimization.
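In other words: report the signature first, and only if the server replies
230 send the whole message. A toy restatement of that flow in Python (all
names here are made up for illustration; razor's actual client is Perl and
its API differs):

    def report(msg, report_signature, report_content):
        # Cheap path first: report only the signature (a hash of msg).
        err = report_signature(msg)
        if err == 230:
            # Server hasn't seen this signature and wants the full
            # content, so send the whole message.
            err = report_content(msg)
        return err

    # Toy stand-ins for the real network calls, to show the flow:
    seen = set()
    def sig_call(m): return 0 if hash(m) in seen else 230
    def content_call(m): seen.add(hash(m)); return 0

    report("some spam", sig_call, content_call)  # 230, then full content
    report("some spam", sig_call, content_call)  # signature now known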
Beyond that, I'm not sure how to interpret that output.. what version?
Vipul?
--jordan
On Wed, Oct 02, 2002 at 11:59:01PM -0400, Rose, Bobby wrote:
# What does this mean? I set up procmailrc for a spamtrap but I'm getting
# an error. I also am reporting to pyzor and dcc and they aren't
# registering an error. What's weird is that it works sometimes.
#
# .
# Oct 02 23:46:11.470523 report[14051]: [ 4] honor.cloudmark.com >> 20
# Oct 02 23:46:11.470805 report[14051]: [ 6] response to sent.3 -res=1 err=230
# .
# Oct 02 23:46:11.471825 report[14051]: [ 5] mail 1, orig_email, special case eng 1: Server accepted report.
# Oct 02 23:46:11.472228 report[14051]: [ 8] mail 1.0, eng 4: err 230 - server wants mail
| 0 |
Re: [Razor-users] Fire.cloudmark.com is having issues
If you examine the log further, you'll see debug messages generated by the
(content) reporting process that follows error 230.
On Wed, Oct 02, 2002 at 10:20:49PM -0700, Jordan Ritter wrote:
> Error 230 occurs when you report a signature, but the server doesn't
> know about the signature, so it wants the full content. It's
> basically an optimization.
>
> Beyond that, I'm not sure how to interpret that output.. what version?
> Vipul?
>
> On Wed, Oct 02, 2002 at 11:59:01PM -0400, Rose, Bobby wrote:
> # What does this mean? I set up procmailrc for a spamtrap but I'm getting
> # an error. I also am reporting to pyzor and dcc and they aren't
> # registering an error. What's weird is that it works sometimes.
> #
> # .
> # Oct 02 23:46:11.470523 report[14051]: [ 4] honor.cloudmark.com >> 20
> # Oct 02 23:46:11.470805 report[14051]: [ 6] response to sent.3 -res=1 err=230
> # .
> # Oct 02 23:46:11.471825 report[14051]: [ 5] mail 1, orig_email, special case eng 1: Server accepted report.
> # Oct 02 23:46:11.472228 report[14051]: [ 8] mail 1.0, eng 4: err 230 - server wants mail
--
Vipul Ved Prakash | "The future is here, it's just not
Software Design Artist | widely distributed yet."
http://vipul.net/ | -- William Gibson
| 0 |
Re: [Razor-users] "No razor servers available at this time"
I'm still seeing razor reporting bomb out with a small variety of messages.
Would reinitializing something here help, or does razor do that as needed
anyway, and this just reflects trouble at the servers?
Also, since these are reports I'm making from within Mutt, it's annoying
that the error messages garble the X window a bit. If there's nothing
constructive to do about razor failing, can I at least turn off the failure
messages? One way or the other I'd rather not see them.
Whit
On Wed, Sep 25, 2002 at 12:57:13PM -0400, Whit Blauvelt wrote:
> I'm getting "no servers available" about half the time in the last few days.
> This is with Razor 2. Is there something I need to adjust in the installation
> here, or are the servers just down/overloaded?
>
> Thanks,
> Whit
-------------------------------------------------------
This sf.net email is sponsored by:ThinkGeek
Welcome to geek heaven.
http://thinkgeek.com/sf
_______________________________________________
Razor-users mailing list
Razor-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/razor-users
| 0 |
[Razor-users] problems with hubris and/or discovery
Trying to report spam [razor chooses hubris], I time out on the connection
(which seems to have gotten slower all morning) and receive the following
error message:
razor-report error: connect4: nextserver: discover1: Error reading socket
connect4: nextserver: discover1: Error reading socket
I then try to run razor-admin -discover and receive the same error .....
problems with the servers today? only one discovery server?
Sven
| 0 |
Re: [Razor-users] problems with hubris and/or discovery
Folks,
There have been several major internet outages this morning,
across major providers UUNet, Genuity, and god knows who else.
Various routes across the Internet backbones have been
disappearing, repropagating, and disappearing again.
This has caused and exacerbated several problems which we are
working on correcting right now. We apologize for the
inconvenience.
Best,
--jordan
On Thu, Oct 03, 2002 at 01:18:25PM -0400, Sven Willenberger wrote:
# trying to report spam [razor chooses hubris] I timeout on the connection
# (which seems to have gotten slower all morning) and receive the following
# error message:
#
# razor-report error: connect4: nextserver: discover1: Error reading socket
# connect4: nextserver: discover1: Error reading socket
#
# I then try to run razor-admin -discover and receive the same error .....
# problems with the servers today? only one discovery server?
#
# Sven
| 0 |
[Razor-users] FW: [EVENT NOTIFICATION] UUnet North American
For everyone's benefit/edification, regardless of Razor usage.
--jordan
-----Original Message-----
From: InterNAP Backbone Status Reports [mailto:noc@internap.com]
Sent: Thursday, October 03, 2002 11:30 AM
To: InterNAP Backbone Status Reports
Subject: [EVENT NOTIFICATION] UUnet North American Backbone Problems 20021003@06:00 PDT [TIX106448] (fwd)
Hello,
I have just spoken with a UUnet technician at 11:23 PDT and he was unable
to provide me with any new information regarding the outage. He did
confirm that problems had spread beyond the east coast and are now seen
nationwide throughout their network. They are currently working with
their hardware vendor to try to isolate the issue.
A large number of Internap's connections to UUnet remain shut down and
traffic is being routed over alternate providers. The scale of this
outage is causing a large number of peering points with UUnet and other
providers to be flooded due to the large traffic shifts.
If you have problems reaching specific sites please send an email to
noc@internap.com containing a traceroute and source/destination IP
addresses and we will investigate. If it is possible we will attempt to
move specific prefixes onto an alternate provider leaving an Internap PNAP
if that will provide better performance.
Regards,
Andrew
--------------------------------------------------------------------
Andrew Dul Network Operations Center
InterNAP Network Services E-Mail: noc@internap.com
andrew@internap.com 206.256.9500 - 1.877.THE.INOC
The contents of this email message are confidential and proprietary.
---------- Forwarded message ----------
Date: Thu, 3 Oct 2002 08:51:26 -0700 (PDT)
From: Andrew Dul <andrew@internap.com>
Subject: [EVENT NOTIFICATION] UUnet North American Backbone Problems
[TIX106448]
UUnet is currently experiencing a large number of problems in their North
American backbone. Internap has been aware of these problems since approx
06:00 PST. Most of the problems appear to be concentrated on the East
Coast but we have reports of problems from other geographic regions.
This is being tracked under Internap ticket 106448 and UUnet master ticket
651751.
UUnet currently does not have any ETA for a fix for this event.
If you have issues reaching a specific site please send an email to
noc@internap.com with a traceroute showing the path which has a problem.
Internap has shut down peerings to UUnet in various cities to help reduce
the number of problems that customers will experience.
Regards,
Andrew
--------------------------------------------------------------------
Andrew Dul Network Operations Center
InterNAP Network Services E-Mail: noc@internap.com
andrew@internap.com 206.256.9500 - 1.877.THE.INOC
The contents of this email message are confidential and proprietary.
| 0 |
[Razor-users] Re: Can't call method "log" with SA/Razor2
This is happening due to insufficient write access to the
"razor-agent.log" file. I was getting the same error, but
only as a non-root user. As a quick workaround, you can do
"chmod go+w razor-agent.log".
In Agent.pm, when then the Logger object is created, it
doesn't check whether the logfile is writable by the current
user. Then, when a write attempt is made, it bails out with
the "unblessed reference" error.
Hope that helps,
Michael
> I just noticed the following log entries in my syslog with the latest
> Spamassassin CVS (set up using spamc/spamd) and razor-agents 2.14:
>
> Jul 26 17:30:09 timmy spamd[54928]: razor2 check skipped: No such file or
> directory Can't call method "log" on unblessed reference at
> /usr/local/lib/perl5/site_perl/5.6.1/Razor2/Client/Agent.pm line 211,
> <STDIN> line 25.
>
> I saw this after checking if my upgrade from razor-agents 2.12 to 2.14 went
> okay, but the problem is still there after downgrading back to 2.12. I
> don't really know when this started happening, :-/
>
> Any ideas on the problem?
>
> - Robert
| 0 |
Re: [Razor-users] Collision of hashes?
On Thursday 29 August 2002 at 13:36, Fox wrote:
> The following was personal correspondence between two people. I can't
>
> fathom how Razor thinks it is spam:
Was it sent in HTML? If so, and it had a background, the background may have
been sent in a spam.
cmeclax
| 0 |
Re: [Razor-users] Collision of hashes?
On Thu, Aug 29, 2002 at 07:33:44PM -0400, cmeclax po'u le cmevi'u ke'umri wrote:
>
> Was it sent in HTML? If so, and it had a background, the background may have
> been sent in a spam.
razor-agents 2.14 needs all parts to be spam to make a positive decision
(though this will change with the next release), so it couldn't have been
a background. Could you send me the debug log?
cheers,
vipul.
--
Vipul Ved Prakash | "The future is here, it's just not
Software Design Artist | widely distributed."
http://vipul.net/ | -- William Gibson
| 0 |
[Razor-users] Razor and Pine
I've been testing Razor, invoked from sendmail/procmail and so far it
seems pretty copacetic. Last night's spam to the list provided a good test
- the spam itself as well as several of the responses were flagged, as
other list members reported.
This morning I piped the messages out from pine, being careful to use the
RAW mode, to razor-check -d. None of the messages come back as spam, even
the spam. Since folks revoked the false positives, I understand why they
would not come up, but not the spam itself, unless that also was revoked.
Is this spam just a bad one to test against, or is there some setting in
pine or razor that I am missing?
--
sc
| 0 |
Re: [Razor-users] Razor and Pine
On Wed, 9 Oct 2002 09:15:03 -0400 (EDT), Samuel Checker <sc@pffcu.org> wrote:
> I've been testing Razor, invoked from sendmail/procmail and so far it
> seems pretty copacetic. Last night's spam to the list provided a good test
> - the spam itself as well as several of the responses were flagged, as
> other list members reported.
>
> This morning I piped the messages out from pine, being careful to use the
> RAW mode, to razor-check -d. None of the messages come back as spam, even
> the spam. Since folks revoked the false positives, I understand why they
> would not come up, but not the spam itself, unless that also was revoked.
>
> Is this spam just a bad one to test against, or is there some setting in
> pine or razor that I am missing?
Are you using Spamassassin on the input side? I've just changed my sendmail installation and am looking for the 'proper' way to pass mail through it, systemwide, before accepting it and sending it to the users. It's kinda problematic to set up procmail scripts for every user when the users' home directories are NFS mounted... and the source is on my own machine, on which I try new things. (And it's the only machine with the drive space...)
------------------------------------------------------------------------
Brian Fahrländer Linux Zealot, Conservative, and Technomad
Evansville, IN My Voyage: http://www.CounterMoon.com
ICQ 5119262
------------------------------------------------------------------------
Get going, snake hips, get it all the way over now. Buckle in tight,
because it's a long, sweet ride.
| 0 |
Re: [Razor-users] Razor and Pine
On Wed, 9 Oct 2002, Brian Fahrlander wrote:
> On Wed, 9 Oct 2002 09:15:03 -0400 (EDT), Samuel Checker <sc@pffcu.org> wrote:
>
> > I've been testing Razor, invoked from sendmail/procmail and so far it
> > seems pretty copacetic. Last night's spam to the list provided a good test
> > - the spam itself as well as several of the responses were flagged, as
> > other list members reported.
> >
>
> Are you using Spamassassin on the input side? I've just changed my sendmail installation and am looking for the 'proper' way to pass mail through it, systemwide, before accepting it and sending it to the users. It's kinda problematic to set up procmail scripts for every user when the users' home directories are NFS mounted... and the source is on my own machine, on which I try new things. (And it's the only machine with the drive space...)
>
I've not used Spamassassin, on the KISS principle. I just have procmail
add an X-header and optionally modify the Subject if razor-check comes
back positive.
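Roughly the same logic in Python, for anyone filtering outside procmail
(this assumes razor-check exits 0 when the message is known spam; verify
that against the razor-agents manpage before relying on it):

    import email
    import subprocess
    import sys

    raw = sys.stdin.buffer.read()

    # razor-check reads the message on stdin; exit status 0 is assumed
    # here to mean "known spam".
    known_spam = subprocess.run(["razor-check"], input=raw).returncode == 0

    msg = email.message_from_bytes(raw)
    if known_spam:
        msg["X-Razor-Check"] = "positive"      # add an X-header
        subject = msg["Subject"] or ""
        del msg["Subject"]                     # no-op if the header is absent
        msg["Subject"] = "[SPAM] " + subject   # optionally tag the Subject

    sys.stdout.buffer.write(msg.as_bytes())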
--
sc
| 0 |
[Razor-users] razor vs cloudmark - merging?
I am somewhat puzzled by a phone call I (or rather the CIO at the ISP for
whom I work) received from an individual claiming to represent Cloudmark.
The gist of the call was that since we were using razor and checking our
mail against the razor servers and that since those servers contain
information proprietary to Cloudmark, we would [in the near future?] be
required to begin paying Cloudmark/spamnet $1.00/user per year. I was
wondering if anyone else has received such a call?
I am curious as to whether a spammer has begun to try contacting razor users
with the above tactic in an effort to get them to stop using razor or
whether the open source/community aspect of the razor project is going by
the wayside in favor of a strictly commercial approach (à la Brightmail
and the like).
Sven
| 0 |
use of base image / delta image for automated recovery from attacks
Hi,
I was inspired by a mode of operation supported by VMWare. You can have a
base disk image shared by multiple virtual machine (vm) instances. That base
image is never altered by a vm instance. Instead, each vm instance writes
changes to its own "redo" log. Future hard disk reads from that vm instance
incorporate both the base image and the appropriate redo log to present the
current disk image for that specific virtual machine.
This is described here (thanks to Duane for providing this link on the
honeypots mailing list)
http://www.vmware.com/support/reference/common/disk_sharing.html
Could this basic concept be used to easily make self-fixing client/server
applications that efficiently and automatically recover from most attacks,
even before those attacks have been discovered? Here is what I imagine.
The physical architectures of most production client/server systems are
layered. For example, your basic web application might have a web server
running Apache, connected to an application server running some J2EE or .Net
business logic, connected to a database server for persistence. The only one
of these whose disk image really should evolve over time is the database
server, and even here you often put the static RDBMS software on one
partition and the changeable datafiles on another partition. It is only the
partition with the volatile datafiles that must be allowed to change from
one boot to the next. Other partitions may need to be writable for, say, swap
space, but these changes could be eliminated on each reboot.
When someone cracks this system, they will probably change an image that
shouldn't be changed. E.g., they might leverage a buffer overflow in IIS or
Apache to install a trojan or a backdoor on the more exposed web server. But
what if the web server ran off a base image, writing changes to a "delta" or
"redo" partition? And then what if every night it automatically erased the
redo partition and rebooted? The downtime involved for each machine would be
minimal, because it is only deleting data - rather than restoring from
backup. In a system with redundant web servers for load balancing or high
availability, this could be scheduled in a way such that the system is
always accessible. This base/redo partition concept could be implemented at
the same level as a feature of hardware RAID, allowing for greater
performance, reliability, and hack resistance. This concept could also be
applied to the application servers, and even the database server partitions
(except for those partitions which contain the table data files, of course.)
Does anyone do this already? Or is this a new concept? Or has this concept
been discussed before and abandoned for some reasons that I don't yet know?
I use the physical architecture of a basic web application as an example in
this post, but this concept could of course be applied to most server
systems. It would allow for the hardware-separation of volatile and
non-volatile disk images. It would be analogous to performing nightly
ghosting operations, only it would be more efficient and involve less (or
no) downtime.
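To make the base/redo idea concrete, here is a toy sketch of the concept
(not VMWare's actual mechanism, just an illustration): reads fall through
to the immutable base unless a block has been overwritten, and "rebooting"
is nothing more than discarding the delta:

    class DeltaDisk:
        """Copy-on-write view over an immutable base image (a sketch)."""

        def __init__(self, base_blocks):
            self.base = base_blocks   # never modified after creation
            self.delta = {}           # block number -> overwritten contents

        def read(self, n):
            # Prefer the redo log; fall through to the base image.
            return self.delta.get(n, self.base[n])

        def write(self, n, data):
            # All writes land in the delta, never in the base.
            self.delta[n] = data

        def reboot(self):
            # Nightly reset: discard every change made since the base image.
            self.delta.clear()

    disk = DeltaDisk(["boot", "apache", "content"])
    disk.write(1, "trojaned apache")     # an attacker modifies block 1
    assert disk.read(1) == "trojaned apache"
    disk.reboot()                        # the scheduled nightly reset
    assert disk.read(1) == "apache"      # base image restored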
Thanks for any opinions,
Ben
| 0 |
Re: Secure Sofware Key
On Tue, Sep 03, 2002 at 09:03:40PM -0400, Yannick Gingras wrote:
> This makes me wonder about the relative protection of smart cards. They have
> an internal processing unit around 4MHz. Can we consider them as trusted
> hardware? The ability to ship smart cards periodically upon cashing of a
> monthly subscription fee would not raise the cost of "renting" the
> system too much. Smart cards do their own self-encryption. Can they be used to
> decrypt data needed by the system? The input of the system could be mangled
> and they would keep a reference of how long it was in service.
>
> This sounds really feasible but I may be totally wrong. I may also be wrong
> about the safety of a smart card.
>
> What do you think ?
That's similar to using hard-locks (either the old parallel-port dongles,
or the new USB ones).
The problem is that that piece of hardware is trustworthy, but the rest of
the PC isn't, so a cracker just needs to simulate the lock/smart card, or
peek at the executable after the lock has been deactivated.
Regards,
Luciano Rocha
--
Consciousness: that annoying time between naps.
| 0 |
RE: use of base image / delta image for automated recovery from a
I did something crudely along those lines for VMS VAX maybe 13 years
ago; there is at least one product that does it for PC though I don't
recall its name. It is also handy for cases where you have a CD image
of some filesystem (or some other image of a filesystem) that is
intrinsically readonly but whose filesystem will not accept (or is
not graceful) readonly storage. It is also more or less necessary
if you want to work with WORM file structures, which are older still.
There have been a number of filesystems for those dating back to the
early 1980s if not before.
A generic facility of the type you mention is also one way to implement
snapshots on top of an existing filesystem. The written information
must (obviously!) be seekable so you can provide the illusion that you
wrote to the storage. A device level implementation is however perfectly
adequate.
It does not, of course, distinguish for you what should have been changed
and what should not. If you truly know a device (or perhaps a partition)
must not be written, it can be simpler to either return error on writes,
or to just return a fake success on writes yet discard the data. (NTFS
lives with the latter strategy just fine from my experiments. I have not
tried it on ext3 or reiser.)
BTW, think about your mention of RAID and consider the complexity of
writing to RAID4 or RAID5...
I would contend that with cheaper storage these days, it makes little sense
to use RAID, save for shadowing and possibly striping. Those at least do
not have the complexity and slowdown dangers that higher RAID levels have,
and there is less need to save on the cost of disk when a single disk may
hold 200 gigs and up. Why not dedicate another whole disk to fault recovery
and lose the complexity and (sometimes) slow writes of RAID?
Glenn Everhart
| 0 |
Re: FW: use of base image / delta image for automated recovery
Scott MacKenzie wrote:
>There is a software package that is used (or was up through w2k)
>on MicroSloth for this purpose. Ghost, or some such. One essentially
>"takes a picture" of the machine's proper config, and then upon
>schedule or demand replaces the machine's current config with the
>proper picture. It essentially over-writes the entire disk drive.
>Especially good for student access machines at libraries, etc.
>
And it is pretty common practice in some environments with public
workstations to just wipe and re-install Windows machines on a weekly
(or even daily) basis. It's easier than trying to maintain Windows.
Crispin
--
Crispin Cowan, Ph.D.
Chief Scientist, WireX http://wirex.com/~crispin/
Security Hardened Linux Distribution: http://immunix.org
Available for purchase: http://wirex.com/Products/Immunix/purchase.html
| 0 |
RE: Secure Sofware Key
the only way to ensure a safe key is to use all the storage space in the
universe. too big to decrypt.
my point is there will never be a "safe" key. what I would consider is how
long the data needs to be protected. if you need to protect the data for
longer than 6 months, do not release it to the public. if you are trying to
stop the general public (your customer) from copying the data then use what
is available on the market. if you want to stop the bootleggers do not
release the data to the public.
I have never seen a lock that could not be unlocked. the act of unlocking
the key gives away its secret.
the tougher the lock the more pissed-off your customers will be. take MS-XP
for example. only the home user is forced to register. think of the
nightmare if business had to register every copy. how many times have we
needed to reinstall our laptop OS? notice the amount of Macs sold after the
XP release. these were mostly home users that converted to Mac OS.
the new audio CDs that have digital copy protection do not play on my
computer. does this stop me from copying the CD? no. however it does make me
return them and get my money back.
jef
| 0 |
Re: use of base image / delta image for automated recovery from
> Simple approximation to this: make /usr a separate partition, and mount it
> read-only:
>
> * The good news: attackers that want to trojan your software have to
> reboot, at least.
> * The bad news: administrators that want to update your software
> have to reboot, at least.
No reboot is required, you just need to remount it:
# mount -o remount,rw /usr
This requires root access, but presumably /usr is safe from non-root
users anyway.
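Once the update is done, the reverse invocation puts it back read-only
(assuming nothing still holds files on /usr open for writing):
# mount -o remount,ro /usr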
The only way to disable this is to have the kernel compiled with something
that compartmentalizes capabilities (LIDS/etc on Linux for example) or to
remove CAP_SYS_ADMIN with lcap, which would definitely require a reboot,
and possibly break some other functionality to boot. (Pun intended. My
apologies.)
--
Brian Hatch "Are you expected?"
Systems and "No. Dreaded."
Security Engineer
www.hackinglinuxexposed.com
Every message PGP signed
| 0 |
Re: FW: use of base image / delta image for automated recovery
There is a software package that is used (or was up through w2k)
on MicroSloth for this purpose. Ghost, or some such. One essentially
"takes a picture" of the machine's proper config, and then upon
schedule or demand replaces the machine's current config with the
proper picture. It essentially over-writes the entire disk drive.
Especially good for student access machines at libraries, etc.
Ben Mord wrote:
>
> -----Original Message-----
> From: Crispin Cowan [mailto:crispin@wirex.com]
> Sent: Wednesday, September 04, 2002 5:46 PM
> To: Ben Mord
> Cc: Webappsec Securityfocus.Com; SECPROG Securityfocus
> Subject: Re: use of base image / delta image for automated recovery from
> attacks
>
> > I did my dissertation work in this area (Optimistic Computing) and so was
> >interested in applying it to the security problem. Unfortunately, you hit a
> >bunch of problems:
>
> > a. When can you "commit" a state as being "good"? You can't run from a
> > redo log forever; the performance and storage penalties accumulate. Even
> > log structured file systems garbage collect eventually. So you have to
> > commit sometime. The problem is that if you commit too eagerly, you might
> > commit corrupted state. If you commit too conservatively, you eat
> > performance and storage penalties.
> > b. What do you do if you discover that there is corrupted state in the
> > *middle* of your redo log, and you want some of the critical state that
> > comes after it? You need some way to dig the corruption out of the middle
> > and save the rest. My dissertation solves this problem, but you have to
> > re-write everything in my programming language :)
> > c. Just doing this at all imposes substantial performance penalties. I
> > love VMWare, and use it every day (the best $200 I ever spent on software)
> > but it is not very fast.
>
> My proposed solution to the first two problems you mention is to be less
> ambitious. The idea is that you *never* commit - instead, you simply revert
> to base state on reboot. Obviously, you can't do this with partitions that
> accrue important state, e.g. a partition that stores database table data.
> But in your typical web application, most partitions do not accrue important
> state. For example, your typical web server or application server could have
> their entire state reset back to a known base state during each reboot
> without harm.
> The advantage of being less ambitious is that we have a quick and easy way
> to frustrate certain attacks without rewriting all of our software or
> spending lots of money on additional application-specific coding.
>
> The first two problems you describe only occur if we become more ambitious
> and try to apply these same techniques to, for example, the database table
> partitions, where state changes remain important across reboots. That would
> certainly be a nice touch! But as you point out, many problems would have to
> be addressed first, and the hardest of these can not be abstracted away from
> the particular application. Not the least of these is the problem of writing
> heuristics for delineating good from malevolent state. That task is roughly
> analogous to what antiviral software authors do for a living, only this work
> could not be shared across many different systems as it would be specific to
> a particular application.
>
> The third problem you mention - performance penalty - is an argument for
> doing this in hardware, much like hardware raid. Another argument for doing
> this in hardware is hack resistance. Changing the base instance should
> require physical access to the console, e.g. by requiring that you first
> flip a physical switch on your RAID hardware or modify a bios setting. If
> the base image can be modified remotely or by software, then you have to
> worry about whether an implementation flaw might permit a cracker to modify
> the base image remotely.
>
> Ben
--
( ______
)) .-- Scott MacKenzie; Dine' College ISD --. >===<--.
C|~~| (>--- Phone/Voice Mail: 928-724-6639 ---<) | ; o |-'
| | \--- Senior DBA/CARS Coordinator/Etc. --/ | _ |
`--' `- Email: scottm@crystal.ncc.cc.nm.us -' `-----'
| 0 |
Re: Secure Sofware Key
> However, cracking and reverse engineering tools are not so ubiquitous on
> UNIX as they are on Windows platform for two main reasons:
>
> 1. The main customers of commercial Unices (Solaris, HP-UX, Aix, SCO...)
> are respectable companies. They are ready to pay big bucks for software
> they need: the reputation matters.
>
> 2. Most software for free and open source Unices like Linux and xBSD (this
> software often may be used on commercial unices as well) is, well, free and
> open source.
Thanks to your answers, I'm starting to see where I should head. What are
your suggestions for protecting a system that must be 100% online?
--
Yannick Gingras
Coder for OBB : Observing Bantu-speaking Butanone
http://OpenBeatBox.org
| 0 |
Re: use of base image / delta image for automated recovery from
Ben Mord wrote:
> -----Original Message-----
> *From:* Crispin Cowan [mailto:crispin@wirex.com]
> *Sent:* Wednesday, September 04, 2002 5:46 PM
> *To:* Ben Mord
> *Cc:* Webappsec Securityfocus.Com; SECPROG Securityfocus
> *Subject:* Re: use of base image / delta image for automated
> recovery from attacks
>
> Ben Mord wrote:
>
>>I was inspired by a mode of operation supported by VMWare. [use VMWare's ability to roll back state to recover from intrusions]
>>
> I did my dissertation work in this area (Optimistic Computing
> <http://www.cse.ogi.edu/%7Ecrispin/hope.pubs.html>) and so was
> interested in applying it to the security problem. Unfortunately,
> you hit a bunch of problems:
>
> * When can you "commit" a state as being "good"? You can't
> run from a redo log forever; the performance and storage
> penalties accumulate. Even log structured file systems
> garbage collect eventually. So you have to commit sometime.
> The problem is that if you commit too eagerly, you might
> commit corrupted state. If you commit too conservatively,
> you eat performance and storage penalties.
> * What do you do if you discover that there is corrupted state
> in the *middle* of your redo log, and you want some of the
> critical state that comes after it? You need some way to dig
> the corruption out of the middle and save the rest. My
> dissertation solves this problem, but you have to re-write
> everything in my programming language :)
> * Just doing this at all imposes substantial performance
> penalties. I love VMWare, and use it every day (the best
> $200 I ever spent on software) but it is not very fast.
>
> My proposed solution to the first two problems you mention is to be
> less ambitious. The idea is that you *never* commit - instead, you
> simply revert to base state on reboot.
Ah. In that case, you can use something considerably less powerful than
VMWare. All you need is a machine configured to boot from CD-ROM and use
a RAM disk for scratch space. Numerous Linux distros are available that
let you boot a stateless but functional system from CD-ROM.
> Obviously, you can't do this with partitions that accrue important
> state, e.g. a partition that stores database table data.
... but if you *do* want some state to persist, then you need a
mountable writable partition. To protect it, you need some kind of
access control management to decide who can do what to the writable
partition, blah blah blah ... and before you know it, the security
problem starts to look just like it does for conventional servers.
Simple approximation to this: make /usr a separate partition, and mount it
read-only:
* The good news: attackers that want to trojan your software have to
reboot, at least.
* The bad news: administrators that want to update your software
have to reboot, at least.
Crispin
--
Crispin Cowan, Ph.D.
Chief Scientist, WireX http://wirex.com/~crispin/
Security Hardened Linux Distribution: http://immunix.org
Available for purchase: http://wirex.com/Products/Immunix/purchase.html
| 0 |
RE: use of base image / delta image for automated recovery from
-----Original Message-----
From: Crispin Cowan [mailto:crispin@wirex.com]
Sent: Wednesday, September 04, 2002 7:30 PM
To: Ben Mord
Cc: Webappsec Securityfocus.Com; SECPROG Securityfocus
Subject: Re: use of base image / delta image for automated recovery from
attacks
Ben Mord wrote:
>> -----Original Message-----
>> *From:* Crispin Cowan [mailto:crispin@wirex.com]
>> *Sent:* Wednesday, September 04, 2002 5:46 PM
>> *To:* Ben Mord
>> *Cc:* Webappsec Securityfocus.Com; SECPROG Securityfocus
>> *Subject:* Re: use of base image / delta image for automated
>> recovery from attacks
>>
>> Ben Mord wrote:
>>
>> My proposed solution to the first two problems you mention is to be
>> less ambitious. The idea is that you *never* commit - instead, you
>> simply revert to base state on reboot.
>Ah. In that case, you can use something considerably less powerful than
>VMWare. All you need is a machine configured to boot from CD-ROM and use
>a RAM disk for scratch space. Numerous Linux distros are available that
>let you boot a stateless but functional system from CD-ROM.
But RAM is expensive, and the directory structures of many systems (e.g.
Windows) are not sufficiently organized and standardized to make this
combination of bootable CDs and RAM drives practical. Even if you are
fortunate enough to be using Linux (or another FHS-compliant *nix), you
still can't fit a lot on a CD. It's not unusual today to have gigabytes of
static multimedia content on the web server. This particular problem can be
alleviated somewhat by using DVDs, but this is a temporary solution at best
which will become outdated quickly as our data requirements grow and hard
drives become cheaper.
>> Obviously, you can't do this with partitions that accrue important
>> state, e.g. a partition that stores database table data.
>... but if you *do* want some state to persist, then you need a
>mountable writable partition. To protect it, you need some kind of
>access control management to decide who can do what to the writable
>partition, blah blah blah ... and before you know it, the security
>problem starts to look just like it does for conventional servers.
Right. This is why you would consolidate all state of any long-term
significance on just a couple partitions, and why you would not put static
application code on these changeable partitions. Fortunately, most large
client/server application physical architectures do this anyhow, because
these are two fundamentally different kinds of state with two very different
sets of administrative, security, RAID, and backup requirements. People also
tend to do this anyhow because layered logical architectures are popular
with the GUI at one end, business logic in the middle, and persistence
services at the other. This logical architecture maps naturally to a
physical architecture that has a static web server, a static application
server, and a database server that has static and changeable partitions. (I
use the word static versus changeable instead of writeable versus unwritable
because the "unchangeable" partitions might be written to for temporary swap
space. Who knows what Windows does internally?)
My point is that there should be a market out there for a hardware RAID
device that can split designated partitions into a permanent base image
partition and a temporary delta image partition, that has some simple but
solid security measures to prevent the unauthorized remote modification of
base images, and that can be configured to clear the delta image when the
server is rebooted. If some vendor wished to implement this, they could then
market this as a mechanism to help frustrate broad classes of attack that
rely on the permanent modification of system or application files via buffer
overflows, platform and middleware bugs, etc. The prevention of unauthorized
modification of application data, of course, would not be addressed by this
particular product. But there are many other techniques out there to defend
application data. But those techniques all assume that your system itself
has not been compromised at a lower level, which is where this product could
help.
I would have to think that these features would be relatively easy for a
hardware RAID vendor to implement. (I'm just guessing, of course, with no
knowledge of how hardware RAID works internally.) If anyone knows of such a
product, I'd love to hear about it.
Ben
| 0 |
Re: Secure Sofware Key
Hi
On Wed, 4 Sep 2002, Yannick Gingras wrote:
> BTW thanks to all of you who provided interesting insight. I'm playing with
> gdb's disassembler now but I don't think it's what a typical cracker would
> use. Any hints on UNIX cracking tools?
There's also an 'objdump' program, and 'biew' hex viewer/disassembler. A good
starting point to search is http://www.freshmeat.net/
However, cracking and reverse engineering tools are not so ubiquitous on UNIX as
they are on Windows platform for two main reasons:
1. The main customers of commercial Unices (Solaris, HP-UX, Aix, SCO...) are
respectable companies. They are ready to pay big bucks for software they need: the reputation matters.
2. Most software for free and open source Unices like Linux and xBSD (this
software often may be used on commercial unices as well) is, well, free and
open source.
Regards
/Artem
--
Artem Frolov <frolov@ispras.ru>
/------------------------------------------------------------------\
Software Engineer, System Administrator
Institute for System Programming, Russian Academy of Sciences
Tel. +7 095 912-5317 (ext 4406), Cellular: +7 095 768-7067
C7 40 CA 41 2A 18 89 D6 29 45 DF 50 75 13 6D 7A A4 87 2B 76
\------------------------------------------------------------------/
------------------------------------------------------------------
Basic Definitions of Science:
If it's green or wiggles, it's biology.
If it stinks, it's chemistry.
If it doesn't work, it's physics.
------------------------------------------------------------------
| 0 |
Re: use of base image / delta image for automated recovery from
> Ben Mord said:
>
> >Ah. In that case, you can use something considerably less powerful than
> >VMWare. All you need is a machine configured to boot from CD-ROM and use
> >a RAM disk for scratch space. Numerous Linux distros are available that
> >let you boot a stateless but functional system from CD-ROM.
>
> But RAM is expensive, and the directory structures of many systems (e.g.
> Windows) are not sufficiently organized and standardized to make this
> combination of bootable CDs and RAM drives practical. Even if you are
> fortunate enough to be using Linux (or another FHS-compliant *nix), you
> still can't fit a lot on a CD. It's not unusual today to have gigabytes of
> static multimedia content on the web server. This particular problem can be
> alleviated somewhat by using DVDs, but this is a temporary solution at best
> which will become outdated quickly as our data requirements grow and hard
> drives become cheaper.
So, just write-protect the hard disk for partitions that are static.
I seem to recall an article on this (early 80's, Byte magazine, perhaps?)
for BBS systems or for testing unknown (perhaps trojan horse) software.
- George
--
----------------------------------------------------------------------
George Dinwiddie gdinwiddie@alberg30.org
The gods do not deduct from man's allotted span those hours spent in
sailing. http://www.Alberg30.org/
----------------------------------------------------------------------
| 0 |
Re: The risks of client systems writing to server registry
On 5 Sep 2002, Richard Bartlett wrote:
Richard,
> I have a customer who is developing some printer driver code to allow
> custom driver settings (n-up, booklet, duplex etc.) to be saved up to the
> server to be retrieved by other users. The data is being written by a
> printer driver (using the logged-on user's authentication) to a registry
> key: HKLM\SYSTEM\CurrentControlSet\Control\Print\Environments\Windows NT
> x86\Drivers\Version-3\{Driver Name}\{Custom Key}\Subkey.
Let me get this straight; a registry key is loaded from the server onto
the client workstations who can modify it, then write it back onto the
server's own registry - which is not going to use it?
> The question is, what are the security risks of allowing users to write
> to this key? The data is string data, in the form of delimited numeric
> values. This data is then retrieved by capable printer drivers and
> interpreted.
>
> The risks as I see it are twofold;
> (1) The risks of a compromise to the server using this registry key. I
> think this is unlikely as the server itself does not use this data, only
> client PC's do. Unless someone knows a way to travel out of a hive up
> the registry bypassing the permissions set using regedt32.
What is the reason to write a registry key to a server if the server
itself is not using it?
I don't think you should worry too much about someone travelling out of
the hive, but again, I'm curious as to how the driver actually modifies
the keys on the server.
> (2) The risks of a compromise to the client (far more likely). This
> would probably be by a malformed or extremely long string in the key
> value, which would presumably lead to either DOS or system compromise by
> buffer overflow on the client system.
And if the client writes the key back onto the server, yes, that leaves it
wide open for something nasty.
Two other things spring to mind;
1) If anyone can modify the key, how do you make sure that two users are
not overwriting the same key, thereby causing undesirable effects.
2) If anyone has permission to write to the key (and below), anyone can
create thousands of extra keys under this key, thereby filling up the
registry. The result of such a thing is obvious.
If I got this all wrong, I'd be happy that you clarify a bit more and tell
me where I might have misunderstood.
Med venlig hilsen / Best regards,
-Allan Jensen
If you can read this sign, you can obtain good work in the exciting and
fruitful world of Latin!
| 0 |
RE: Secure Sofware Key
Bryan Feir [mailto:bryan@sgl.crestech.ca] wrote:
> Of course, once one player key was broken, dealing with the rest became
> a known plaintext attack, and the rest of the player keys went down like
> a row of dominoes.
The actual follow-up to the Xing player break was more interesting than
that. The mere knowledge of known plaintext (a corresponding input and
output) does not necessarily make it trivial to break a properly
designed system and/or algorithm. The primary reason it was easy for
CSS is because the CSS key was only 40-bits, and thereby easy to break
with exhaustive search attacks. It was only 40-bits (speculated)
because of a misunderstanding of the government cryptography export
rules at the time.
Even more interesting, to me at least, was that soon after the Xing
player break, people started studying the CSS algorithm itself. They
rapidly found serious design flaws which left the 40-bit CSS algorithm
with an actual strength of around 23-bits (from memory, and new attacks
might have further reduced the strength). This is another great example
showing why proprietary cryptography algorithms should be viewed with
the greatest of suspicion.
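To put rough numbers on that difference, a back-of-the-envelope sketch in
Python; the keys-per-second rate below is an arbitrary assumption, and the
ratio between the two key sizes is the point:

    RATE = 10_000_000.0  # assumed keys tried per second; pick your own figure

    for bits in (40, 23):
        # On average an exhaustive search covers half the keyspace.
        expected = (2 ** bits / 2) / RATE
        print("%d-bit key: about %.2f seconds on average" % (bits, expected))

    # At this rate a 40-bit key falls in roughly 15 hours and a 23-bit key
    # in well under a second, which is why the design flaws mattered as
    # much as the short key length.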
On Tue, Sep 03, 2002 at 09:03:40PM -0400, Yannick Gingras wrote:
> This makes me wonder about the relative protection of smart cards. They
> have an internal processing unit around 4MHz. Can we consider them as
> trusted hardware?
Yes and no. You can put a limited amount of trust in a smart card.
There have been any number of very clever attacks against smartcards
(Ross Anderson in the UK has documented quite a few of these), and
smartcard manufacturers are usually one step behind these attacks. A
well designed system assumes that a system smartcard will be completely
compromised at some point, giving an adversary all of the secrets
contained in the smartcard. The cryptography industry has developed a
variety of techniques that can reduce the impact of a compromise,
including unique keys per smartcard and forward security techniques.
Luciano Rocha [strange@nsk.yi.org] wrote:
> The problem is that that piece of hardware is trustworthy, but the rest
> of the PC isn't, so a cracker just needs to simulate the lock/smart
> card, or peek at the executable after the lock has been deactivated.
Going back to the original question, once the encrypted material goes
outside the trusted hardware, it is impossible to "unbreakably" protect
it. There may be some mitigation steps you can take, such as the SDMI
watermarking, but most schemes to date have been easily broken.
Another consideration is the value of what you are trying to protect.
While there is no such thing as unbreakable, adding more cost (both in
terms of price and hassle-factor) can greatly improve the protection.
Since you are talking about the use of standard PC workstations, I
presume what you are trying to protect is not THAT valuable. I'm afraid
most security measures don't come for free.
Michael McKay
Director of Software Development
mmckay@iscubed.com
Information Security Systems & Services Inc.
19925 Stevens Creek Blvd. Cupertino, CA 95014
Phone: 408.725.7136 x 4138 Fax: 408.973.7239 www.iscubed.com
| 0 |
Re: Secure Sofware Key
On Thursday 19 September 2002 16:44, Michael McKay wrote:
> On Tue, Sep 03, 2002 at 09:03:40PM -0400, Yannick Gingras wrote:
> > This makes me wonder about the relative protection of smart cards. They
> > have an internal processing unit around 4MHz. Can we consider them as
> > trusted hardware?
SmartCards do not have fixed clock rates (more often than not), as the ISO
spec dictates that they are externally powered and clocked, but SmartCards
used for security purposes (usually JavaCards) have built-in crypto
co-processors that make clock rate irrelevant. 4MHz SmartCards can often
perform triple-DES faster than general-purpose processors clocked at ten
times the speed.
That said, clock rate has nothing to do with how trustworthy a card is. As Michael
pointed out, there's something of an arms-race between manufacturers and
attackers which has nothing to do with clock rate, and time and time again
what we've seen is that it's not a question of "is it secure", it's a
question of "who is it secure from and for how long?" Security is rarely a
question of absolutes (despite the often boolean nature of a break), rather
it's a question of assessing, quantifying, and managing risk. SmartCards are
designed to address threats in which the cost of protection cannot exceed the
$1-20 range (depending on the application).
As to whether or not they are "trusted hardware", the question again revolves
around attacker and timeframe. One might expect a bored undergrad EE student
to have more trouble revealing the contents of a pilfered smartcard than,
say, a government intelligence service. If your goal is to keep undergrad
EEs from perpetrating mass fraud in the cafeteria, then a smartcard is
likely "trustworthy" enough for your application. If your aim is to protect
ICBM launch codes, then it's probably the wrong tool. In either application,
a risk/cost ratio must justify the use of the protection measure in question.
--
Alex Russell
alex@SecurePipe.com
alex@netWindows.org
| 0 |
Re: use of base image / delta image for automated recovery from
reply to the mail from Ben Mord (bmord@icon-nicholson.com):
> Hi,
Hello,
< ... snipped for brevity ... >
> ... This concept could also be
> applied to the application servers, and even the database server partitions
> (except for those partitions which contain the table data files, of course.)
Although the data might just be the information that needs protecting.
> Does anyone do this already? Or is this a new concept?
I've seen this implemented for a shell server, although they chose
to keep their root filesystem on a CD-RW in a read-only CD drive,
which meant that even when compromised it was only possible to
examine other users' data. AFAIR they just swapped CDs when a root
exploit was found.
> Thanks for any opinions,
NP
blaze your trail
--
redhat
'I am become Shiva, destroyer of worlds'
| 0 |
Encryption approach to secure web applications
Hi everybody!
I'm writing a web application in java (tomcat + jsp/servlets + database
access with postgreSQL).
This will be released under the GPL and will eventually be useful as a
framework for other web applications.
The application main focus is e-commerce, but not limited to that.
I would like to use some form of cryptography to protect data in the
database, but I'm having some trouble figuring out the right approach.
Above all: how to store passwords and keys on a shared web server.
A problem that I was unable to solve is how to store keys for
encryption/decryption. The API that I'm using is the JCA (JDK 1.4.x),
and my attempts to save generated keys in keystores always fail.
I can serialize the key object and store it in the database, but this
is not the most secure approach: the key is needed to decrypt data in
the database, yet the database itself is accessible from the web
application.
Assuming that I can find a good and secure place to store the
database password, I could use a different database with a different
user... Argh... too complex, and it doesn't really solve the problem.
Where can I find good documentation about this topic?
There is another approach that I would like to share with the list,
something I thought might be of some interest, but is probably wrong
and insecure. After all, I'm a real beginner in secure programming,
and I'm here to learn methods and techniques.
First of all, I need a way to keep database passwords secure, so
I have to keep them separate from the main server. The right approach
could be a small Java bean application that runs as a normal user
(not tomcat, so it is not shared with other web services or, worse,
the nobody user), that has no shell login, but has a default home
directory or some place where it can hold passwords and keys.
The web application could then open an SSL connection (this could be
done in the init method at server startup) to get the database
passwords. The small bean could check via code signature/RMI/whatever
else that the source is the right one, and either handle all the
database connections itself or hand the db connection/password to the
main web application.
In this way, we solve the problem of keeping keys and passwords in
shared directories, and an attacker would need the root or bean user
account to read the data. This is not perfect, and works only if your
provider gives you the opportunity to run a separate Java application
(which really means another server running in the background).
Any suggestions?
Thank you,
Mario Torre
--
Please avoid sending me Word or PowerPoint attachments.
See http://www.fsf.org/philosophy/no-word-attachments.html
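A minimal sketch of the "separate process holds the secrets" idea above,
translated to Python for illustration (the proposal itself is in Java; the
socket path and permissions are assumptions): a daemon running as its own
user serves the database password over a Unix socket whose file permissions
control who may connect.

    import os
    import socket

    SOCK = "/var/run/secretd.sock"    # owned by the secret-holding user

    def serve(password: str) -> None:
        if os.path.exists(SOCK):
            os.unlink(SOCK)
        srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        srv.bind(SOCK)
        os.chmod(SOCK, 0o660)         # only the web app's group may connect
        srv.listen(1)
        while True:
            conn, _ = srv.accept()
            conn.sendall(password.encode())
            conn.close()

    def fetch() -> str:
        # Called once from the web application's init code at startup.
        cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        cli.connect(SOCK)
        return cli.recv(1024).decode()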
| 0 |
[Spambayes] test sets?
On 05 September 2002, Tim Peters said:
> Greg Ward is
> currently capturing a stream coming into python.org, and I hope we can get a
> more modern, and cleaner, test set out of that.
Not yet -- still working on the required config changes. But I have a
cunning plan...
> But if that stream contains
> any private email, it may not be ethically possible to make that available.
It will! Part of my cunning plan involves something like this:
    if folder == "accepted":    # ie. not suspected junk mail
        if (len(recipients) == 1 and
            recipients[0] in ("guido@python.org", "barry@python.org", ...)):
            folder = "personal"
If you (and Guido, Barry, et al.) prefer, I could change that last
statement to "folder = None", so the mail won't be saved at all. I
*might* also add a "and sender doesn't look like -bounce-*, -request,
-admin, ..." clause to that if statement.
> Can you think of anyplace to get a large, shareable ham sample apart from a
> public mailing list? Everyone's eager to share their spam, but spam is so
> much alike in so many ways that's the easy half of the data collection
> problem.
I believe the SpamAssassin maintainers have a scheme whereby the corpus
of non-spam is distributed, ie. several people have bodies of non-spam
that they use for collectively evolving the SA score set. If that
sounds vague, it matches my level of understanding.
Greg
--
Greg Ward <gward@python.net> http://www.gerg.ca/
Reality is for people who can't handle science fiction.
| 0 |
[Spambayes] test sets?
> I believe the SpamAssassin maintainers have a scheme whereby the corpus
> of non-spam is distributed, ie. several people have bodies of non-spam
> that they use for collectively evolving the SA score set. If that
> sounds vague, it matches my level of understanding.
See if you can get a hold of that so we can do a level-playing-field
competition. :-)
--Guido van Rossum (home page: http://www.python.org/~guido/)
| 0 |
[Spambayes] Re: [Python-Dev] Getting started with GBayes testing
On 06 September 2002, Anthony Baxter said:
> A snippet, hopefully not enough to trigger the spam-filters.
As an aside: one of the best ways to dodge SpamAssassin is by having an
In-Reply-To header. Most list traffic should meet this criterion.
Alternately, I can whitelist mail to spambayes@python.org -- that'll
work until spammers get ahold of the list address, which usually seems
to take a few months.
Greg
--
Greg Ward <gward@python.net> http://www.gerg.ca/
Gee, I feel kind of LIGHT in the head now, knowing I can't make my
satellite dish PAYMENTS!
| 0 |
[Spambayes] test sets?
>>>>> "TP" == Tim Peters <tim.one@comcast.net> writes:
>> Any thought to wrapping up your spam and ham test sets for
>> inclusion w/ the spambayes project?
TP> I gave it all the thought it deserved <wink>. It would be
TP> wonderful to get several people cranking on the same test
TP> data, and I'm all in favor of that. OTOH, my Data/ subtree
TP> currently has more than 35,000 files slobbering over 134
TP> million bytes -- even if I had a place to put that much stuff,
TP> I'm not sure my ISP would let me email it in one msg <wink>.
Check it into the spambayes project. SF's disks are cheap <wink>.
-Barry
| 0 |
[Spambayes] test sets?
> Check it into the spambayes project. SF's disks are cheap <wink>.
Perhaps more useful would be if Tim could check in the pickle(s?)
generated by one of his training runs, so that others can see how
Tim's training data performs against their own corpora. This could
also be the starting point for a self-contained distribution (you've
got to start with *something*, and training with python-list data
seems just as good as anything else).
--Guido van Rossum (home page: http://www.python.org/~guido/)
| 0 |
[Spambayes] test sets?
>>>>> "GW" == Greg Ward <gward@python.net> writes:
GW> If you (and Guido, Barry, et al.) prefer, I could change that
GW> last statement to "folder = None", so the mail won't be saved
GW> at all.
I don't care if the mail is foldered on python.org, but personal
messages regardless of who they're for, shouldn't be part of the
public spambayes repository unless specifically approved by both the
recipient and sender.
Note also that we are much more liberal about python.org/zope.org
mailing list traffic than most folks. Read list-managers for any
length of time and you'll find that there are a lot of people who
assert strict copyright over their collections, are very protective of
their traffic, and got really pissed when gmane just started
gatewaying their messages without asking.
Which might be an appropriate for their lists, but not for ours (don't
think I'm suggesting we do the same -- I /like/ our laissez-faire
approach).
But for personal email, we should be more careful.
-Barry
| 0 |
[Spambayes] Deployment
Quite independently from testing and tuning the algorithm, I'd like to
think about deployment.
Eventually, individuals and postmasters should be able to download a
spambayes software distribution, answer a few configuration questions
about their mail setup, training and false positives, and install it
as a filter.
A more modest initial goal might be the production of a tool that can
easily be used by individuals (since we're more likely to find
individuals willing to risk this than postmasters).
There are many ways to do this. Some ideas:
- A program that acts both as a pop client and a pop server. You
configure it by telling it about your real pop servers. You then
point your mail reader to the pop server at localhost. When it
receives a connection, it connects to the remote pop servers, reads
your mail, and gives you only the non-spam. To train it, you'd only
need to send it the false negatives somehow; it can assume that
anything is ham that you don't say is spam within 48 hours.
- A server with a custom protocol that you send a copy of a message
and that answers "spam" or "ham". Then you have a little program
that is invoked e.g. by procmail that talks to the server. (The
server exists so that it doesn't have to load the pickle with the
scoring database for each message. I don't know how big that pickle
would be, maybe loading it each time is fine. Or maybe
marshalling.)
- Your idea here.
Takers? How is ESR's bogofilter packaged? SpamAssassin? The Perl
Bayes filter advertised on slashdot?
--Guido van Rossum (home page: http://www.python.org/~guido/)
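A minimal sketch of the second idea in the list above (the long-running
scoring daemon); the port, threshold, and score() placeholder are arbitrary
stand-ins, not project decisions:

    import socketserver

    THRESHOLD = 0.90

    def score(message_text):
        # Stand-in for the real classifier; returns a spam probability.
        return 0.0

    class ScoreHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # The client sends the whole message, then closes its write side.
            msg = self.rfile.read().decode("utf-8", "replace")
            verdict = b"spam" if score(msg) > THRESHOLD else b"ham"
            self.wfile.write(verdict)

    if __name__ == "__main__":
        with socketserver.TCPServer(("127.0.0.1", 7777), ScoreHandler) as srv:
            srv.serve_forever()

The little program invoked by procmail then just connects, writes the
message, and reads back "spam" or "ham".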
| 0 |
[Spambayes] test sets?
>>>>> "GvR" == Guido van Rossum <guido@python.org> writes:
GvR> Perhaps more useful would be if Tim could check in the
GvR> pickle(s?) generated by one of his training runs, so that
GvR> others can see how Tim's training data performs against their
GvR> own corpora.
He could do that too. :)
-Barry
| 0 |
[Spambayes] Deployment
On 6 Sep 2002 at 10:31, Guido van Rossum wrote:
> your mail, and gives you only the non-spam. To train it, you'd only need
> to send it the false negatives somehow; it can assume that anything is
> ham that you don't say is spam within 48 hours.
I have folks who leave their email programs running 24 hours a day, constantly
polling for mail. If they go away for a long weekend, lots of "Friday night
spam" will become ham on Sunday night. (Friday night seems to be the most
popular time.)
> - Your idea here.
Ultimately I'd like to see tight integration into the "most popular email clients".. As a
stop-gap to the auto-ham ..
How about adding an IMAP server with a spam and deleted-ham folder. Most email
clients can handle IMAP. Users should be able to quickly move "spam" into the spam
folder.
Instead of deleting messages (or, by reprogramming the delete function) they can
quickly move ham into the ham folder.
In either case, the message would be processed and then destroyed.
Brad Clements, bkc@murkworks.com (315)268-1000
http://www.murkworks.com (315)268-9812 Fax
AOL-IM: BKClements
| 0 |
[Spambayes] test sets?
[Tim]
> OTOH, my Data/ subtree currently has more than 35,000 files slobbering
> over 134 million bytes -- even if I had a place to put that much stuff,
> I'm not sure my ISP would let me email it in one msg <wink>.
[Skip]
> Do you have a dialup or something more modern <wink>?
Much more modern: a cable modem with a small upload rate cap. There's a
reason the less modern uncapped @Home went out of business <wink>.
> 134MB of messages zipped would probably compress pretty well - under 50MB
> I'd guess with all the similarity in the headers and such. You could zip
> each of the 10 sets individually and upload them somewhere.
I suppose this could finish over the course of an afternoon. Now where's
"somewhere"? I expect we'll eventually collect several datasets;
SourceForge isn't a good place for it (they expect projects to distribute
relatively small code files, and complain if even those get big).
> ...
> How about random sampling lots of public mailing lists via gmane or
> something similar, manually cleaning it (distributing that load over a
> number of people) and then relying on your clever code and your
> rebalancing script to help further cleanse it?
What then are we training the classifier to do? Graham's scoring scheme is
based on an assumption that the ham-vs-spam task is *easy*, and half of that
is due to that the ham has a lot in common. It was an experiment to apply
his scheme to all the comp.lang.python traffic, which is a lot broader than
he had in mind (c.l.py has long had a generous definition of "on topic"
<wink>). I don't expect good things to come of making it ever broader,
*unless* your goal is to investigate just how broad it can be made before it
breaks down.
> The "problem" with the ham is it tends to be much more tied to one person
> (not just intimate, but unique) than the spam.
Which is "a feature" from Graham's POV: the more clues, the better this
"smoking guns only" approach should work.
> I save all incoming email for ten days (gzipped mbox format) before it
> rolls over and disappears.  At any one time I think I have about
> 8,000-10,000
> messages. Most of it isn't terribly personal (which I would cull before
> passing along anyway) and much of it is machine-generated, so would be of
> marginal use. Finally, it's all ham-n-spam mixed together. Do we call
> that an omelette or a Denny's Grand Slam?
Unless you're volunteering to clean it, tag it, package it, and distribute
it, I'd call it irrelevant <wink>.
| 0 |
[Spambayes] Deployment
> > your mail, and gives you only the non-spam. To train it, you'd only need
> > to send it the false negatives somehow; it can assume that anything is
> > ham that you don't say is spam within 48 hours.
>
> I have folks who leave their email programs running 24 hours a day,
> constantly polling for mail. If they go away for a long weekend,
> lots of "friday night spam" will become ham on sunday night.
> (Friday night seems to be the most popular time)
So we'll make this a config parameter.
> > - Your idea here.
>
> Ultimately I'd like to see tight integration into the "most popular
> email clients".. As a stop-gap to the auto-ham ..
What's an auto-ham?
> How about adding an IMAP server with a spam and deleted-ham
> folder. Most email clients can handle IMAP. Users should be able to
> quickly move "spam" into the spam folder.
I personally don't think IMAP has a bright future, but for people who
do use it, that's certainly a good approach.
> Instead of deleting messages (or, by reprogramming the delete
> function) they can quickly move ham into the ham folder.
Yes.
--Guido van Rossum (home page: http://www.python.org/~guido/)
| 0 |
[Spambayes] test sets?
[Anthony Baxter]
> I've got a test set here that's the last 3 and a bit years email to
> info@ekit.com and info@ekno.com - it's a really ugly set of 20,000+
> messages, currently broken into 7,000 spam, 9,000 ham, 9,000 currently
> unclassified. These addresses are all over the 70-some different
> ekit/ekno/ISIConnect websites, so they get a LOT of spam.
>
> As well as the usual spam, it also has customers complaining about
> credit card charges, it has people interested in the service and
> asking questions about long distance rates, &c &c &c. Lots and lots
> of "commercial" speech, in other words. Stuff that SA gets pretty
> badly wrong.
Can this corpus be shared? I suppose not.
> I'm currently mangling it by feeding all parts (text, html, whatever
> else :) into the filters, as well as both a selected number of headers
> (to, from, content-type, x-mailer), and also a list of (header,
> count_of_header). This is showing up some nice stuff - e.g. the
> X-uidl that stoopid spammers blindly copy into their messages.
If we ever <wink> have a shared corpus, an easy refactoring of timtest
should allow to plug in different tokenizers. I've only made three changes
to Graham's algorithm so far (well, I've made dozens -- only three survived
testing as proven winners), all the rest has been refining the tokenization
to provide better clues.
> I did have Received in there, but it's out for the moment, as it causes
> rates to drop.
That's ambiguous. Accuracy rates or error rates, ham or spam rates?
> I'm also stripping out HTML tags, except for href="" and src="" - there's
> so so much goodness in them (note that I'm only keeping the contents of
> the attributes).
Mining embedded http/https/ftp thingies cut the false negative rate in half
in my tests (not keying off href, just scanning for anything that "looked
like" one); that was the single biggest f-n improvement I've seen. It
didn't change the false positive rate.  Do you know whether src added
additional power, or did you do both at once?
| 0 |
[Spambayes] Deployment
Guido> Takers? How is ESR's bogofilter packaged? SpamAssassin? The
Guido> Perl Bayes filter advertised on slashdot?
Dunno about the other tools, but SpamAssassin is a breeze to incorporate
into a procmail environment. Lots of people use it in many other ways. For
performance reasons, many people run a spamd process and then invoke a small
C program called spamc which shoots the message over to spamd and passes the
result back out. I think spambayes in incremental mode is probably fast
enough to not require such tricks (though I would consider changing the
pickle to an anydbm file).
Basic procmail usage goes something like this:
:0fw
| spamassassin -P
:0
* ^X-Spam-Status: Yes
$SPAM
Which just says, "Run spamassassin -P, reinjecting its output into the
processing stream.  If the resulting mail has a header which begins
'X-Spam-Status: Yes', toss it into the folder indicated by the variable
$SPAM."
SpamAssassin also adds other headers as well, which give you more detail
about how its tests fared. I'd like to see spambayes operate in at least
this way: do its thing then return a message to stdout with a modified set
of headers which further processing downstream can key on.
Skip
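A sketch of the behavior Skip describes in the last paragraph, as a Python
filter procmail could invoke in place of spamassassin; score() is a
placeholder for the real classifier:

    import email
    import sys

    def score(msg):
        return 0.0                # stand-in for the real classifier

    def main():
        msg = email.message_from_file(sys.stdin)
        msg["X-Spam-Status"] = "Yes" if score(msg) > 0.90 else "No"
        sys.stdout.write(msg.as_string())

    if __name__ == "__main__":
        main()

The matching procmail recipe is the same as the spamassassin one above,
with the pipe pointed at this script instead.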
| 0 |
[Spambayes] Deployment
Did you want this on the list? I'm replying to the list..
On 6 Sep 2002 at 10:43, Guido van Rossum wrote:
> What's an auto-ham?
Automatically marking something as ham after a given timeout.. regardless of how long
that timeout is, someone is going to forget to submit the message back as spam.
How many spams-as-hams can be accepted before the f-n rate gets unacceptable?
> > How about adding an IMAP server with a spam and deleted-ham
> > folder. Most email clients can handle IMAP. Users should be able to
> > quickly move "spam" into the spam folder.
>
> I personally don't think IMAP has a bright future, but for people who
> do use it, that's certainly a good approach.
>
> > Instead of deleting messages (or, by reprogramming the delete
> > function) they can quickly move ham into the ham folder.
>
> Yes.
I view IMAP as a stop-gap measure until tighter integration with various email clients
can be achieved.
I still feel it's better to require classification feedback from the recipient, rather than
make any assumptions after some period of time passes. But this is an end-user issue
and we're still at the algorithm stage.. ;-)
Brad Clements, bkc@murkworks.com (315)268-1000
http://www.murkworks.com (315)268-9812 Fax
AOL-IM: BKClements
| 0 |
[Spambayes] Deployment
> > What's an auto-ham?
>
> Automatically marking something as ham after a given
> timeout.. regardless of how long that timeout is, someone is going
> to forget to submit the message back as spam.
OK, here's a refinement. Assuming very little spam comes through, we
only need to pick a small percentage of ham received as new training
ham to match the new training spam. The program could randomly select
a sufficient number of saved non-spam msgs and ask the user to
validate this selection. You could do this once a day or week (config
parameter).
> How many spams-as-hams can be accepted before the f-n rate gets
> unacceptable?
Config parameter.
> I view IMAP as a stop-gap measure until tighter integration with
> various email clients can be achieved.
>
> I still feel it's better to require classification feedback from the
> recipient, rather than make any assumptions after some period of
> time passes. But this is an end-user issue and we're still at the
> algorithm stage.. ;-)
I'm trying to think about the end-user issues because I have nothing
to contribute to the algorithm at this point. For deployment we need
both!
--Guido van Rossum (home page: http://www.python.org/~guido/)
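The random-validation refinement above fits in a few lines; a sketch
(function and parameter names are made up):

    import random

    def pick_validation_candidates(saved_msgs, per_day=10):
        # Ask the user to confirm a small random sample of received
        # mail as ham before it enters the training set.
        k = min(per_day, len(saved_msgs))
        return random.sample(saved_msgs, k)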
| 0 |
[Spambayes] Deployment
> Dunno about the other tools, but SpamAssassin is a breeze to incorporate
> into a procmail environment. Lots of people use it in many other ways. For
> performance reasons, many people run a spamd process and then invoke a small
> C program called spamc which shoots the message over to spamd and passes the
> result back out. I think spambayes in incremental mode is probably fast
> enough to not require such tricks (though I would consider changing the
> pickle to an anydbm file).
>
> Basic procmail usage goes something like this:
>
> :0fw
> | spamassassin -P
>
> :0
> * ^X-Spam-Status: Yes
> $SPAM
>
> Which just says, "Run spamassassin -P, reinjecting its output into the
> processing stream.  If the resulting mail has a header which begins
> 'X-Spam-Status: Yes', toss it into the folder indicated by the variable
> $SPAM."
>
> SpamAssassin also adds other headers as well, which give you more detail
> about how its tests fared. I'd like to see spambayes operate in at least
> this way: do its thing then return a message to stdout with a modified set
> of headers which further processing downstream can key on.
Do you feel capable of writing such a tool? It doesn't look too hard.
--Guido van Rossum (home page: http://www.python.org/~guido/)
| 0 |
[Spambayes] Deployment
>> Ultimately I'd like to see tight integration into the "most popular
>> email clients"..
The advantage of using a kitchen sink (umm, make that highly programmable)
editor+email package like Emacs+VM is that you can twiddle your key bindings
and write a little ELisp (or Pymacs) glue to toss messages in the right
direction (spam or ham). For this, spambayes would have to operate in an
incremental fashion when fed a single ham or spam message.
(No, I have no idea what an "auto-ham" is. A pig run over by a car,
perhaps?)
give-a-dog-a-bone-ly, y'rs,
Skip
| 0 |
[Spambayes] Deployment
>> Dunno about the other tools, but SpamAssassin is a breeze ...
>> SpamAssassin also adds other headers as well, which give you more
>> detail ...
Guido> Do you feel capable of writing such a tool? It doesn't look too
Guido> hard.
Sure, but at the moment I have to stop reading email for a few hours and do
some real work. ;-) I'll see if I can modify GBayes.py suitably over the
weekend.
Skip
| 0 |
[Spambayes] Deployment
Guido van Rossum wrote:
> I personally don't think IMAP has a bright future, but for people who
> do use it, that's certainly a good approach.
Writing an IMAP server is a non-trivial task. The specification is huge
and clients do all kinds of weird stuff. POP is very easy in
comparison. Perhaps you could forward messages to a special address or
save them in a special folder to mark them as false negatives.
Alternatively, perhaps there could be a separate protocol and client
that could be used to review additions to the training set. Each day a
few random spam and ham messages could be grabbed as candidates.
Someone would periodically startup the client, review the candidates,
reclassify or remove any messages they don't like and add them to the
training set.
Neil
| 0 |
[Spambayes] Deployment
>>>>> "NS" == Neil Schemenauer <nas@python.ca> writes:
NS> Writing an IMAP server is a non-trivial task.
That's what I've been told by everyone I've talked to who's actually
tried to write one.
NS> Alternatively, perhaps there could be a separate protocol and
NS> client that could be used to review additions to the training
NS> set. Each day a few random spam and ham messages could be
NS> grabbed as candidates. Someone would periodically startup the
NS> client, review the candidates, reclassify or remove any
NS> messages they don't like and add them to the training set.
I think people will be much more motivated to report spam than ham. I
like the general approach that copies of random messages will be
sequestered for some period of time before they're assumed to be ham.
Matched with a simple spam reporting scheme, this could keep the
training up to date with little effort. I've sketched out an approach
a listserver like Mailman could do along these lines and if I get some
free time I'll hack something together.
I like the idea of a POP proxy which is classifying messages as
they're pulled from the server. The easiest way for such a beast to
be notified of spam might be to simply save the spam in a special
folder or file that the POP proxy would periodically consult.
-Barry
| 0 |
[Spambayes] Corpus Collection (Was: Re: Deployment)
On Fri, 6 Sep 2002, Guido van Rossum wrote:
>Quite independently from testing and tuning the algorithm, I'd like to
>think about deployment.
>
>Eventually, individuals and postmasters should be able to download a
>spambayes software distribution, answer a few configuration questions
>about their mail setup, training and false positives, and install it
>as a filter.
>
>A more modest initial goal might be the production of a tool that can
>easily be used by individuals (since we're more likely to find
>individuals willing to risk this than postmasters).
My impression is that a pre-collected corpus would not fit most individuals
very well, but each individual (or group?) should collect their own corpus.
One problem that comes up immediately: individuals are lazy.
If I currently get 50 spam and 50 ham a day, and have to
press the 'delete' button once for each spam, I'll be happy
to press the 'spam' button instead. However, if I in addition
have to press a 'ham' button for each ham, it starts to look
much less like a win to me. Add the time to install and set up
the whole machinery, and I'll just keep hitting delete.
The suggestions so far have been to hook something on the delete
action, that adds a message to the ham corpus. I see two problems
with this: the ham will be a bit skewed; mail that I keep around
without deleting will not be counted. Secondly, if I by force of
habit happen to press the 'delete' key instead of the 'spam' key,
I'll end up with spam in the ham, anyways.
I would like to look for a way to deal with spam in the ham.
The obvious thing to do is to trigger on the 'spam' button,
and at that time look for messages similar to the deleted one
in the ham corpus, and simply remove them. To do this we
need a way to compare two word count histograms, to see
how similar they are. Any ideas ?
Also, I personally would prefer to not see the spam at all.
If they get bounced (preferably already in the SMTP),
false positives become the sender's problem, to rewrite
to remove the spam smell.
In a well tuned system, then, the spam corpus will be much
smaller than the ham corpus, so it would be possible to be
slightly over-aggressive when clearing potential spam from
the ham corpus. This should make it easier to keep it clean.
Having a good way to remove spam from the ham corpus,
there's less need to worry about it getting there by mistake,
and we might as well simply add all messages to the ham corpus,
that didn't get deleted by the spam filtering.
It might also be useful to have a way to remove messages from
the spam corpus, in case of user oops.
/Paul
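One possible answer to Paul's histogram question: treat each message's word
counts as a vector and compare with cosine similarity; ham-corpus messages
scoring close to 1.0 against a reported spam are candidates for removal.
A sketch:

    import math

    def cosine(h1, h2):
        # h1, h2 map word -> count.  Returns similarity in [0, 1].
        common = set(h1) & set(h2)
        dot = sum(h1[w] * h2[w] for w in common)
        n1 = math.sqrt(sum(c * c for c in h1.values()))
        n2 = math.sqrt(sum(c * c for c in h2.values()))
        return dot / (n1 * n2) if n1 and n2 else 0.0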
| 0 |
[Spambayes] Deployment
> I guess MUA-level filtering is just a fallback for people who don't have
> 1) a burning, all-consuming hatred of junk mail, 2) root access to all
> mail servers they rely on, and 3) the ability and inclination to install
> an MTA with every bell and whistle tweaked to keep out junk mail.
Sure. But for most people, changing their company's or ISP's server
requires years of lobbying, while they have total and immediate
control over their own MUA.
That said, I agree that we should offer a good solution to
postmasters, and I trust that your ideas are right on the mark!
--Guido van Rossum (home page: http://www.python.org/~guido/)
| 0 |
[Spambayes] Deployment
I think one step towards deployment is creating a re-usable tokenizer
for mail messages. The current codebase doesn't expose an easy-to-use
or easy-to-customize tokenizer.
The timtest module seems to contain an enormous body of practical
knowledge about how to parse mail messages, but the module wasn't
designed for re-use. I'd like to see a module that can take a single
message or a collection of messages and tokenize each one.
I'd like to see the tokenizer be customizable, too.  Tim had to exclude
some headers from his test data, because there were particular biases
in the test data. If other people have test data without those
biases, they ought to be able to customize the tokenizer to include
them or exclude others.
Jeremy
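A sketch of the kind of interface Jeremy is asking for: a self-contained
generator that takes a raw message string and yields tokens, with the
header skip-list as a parameter instead of hard-coded. The regex and
defaults are illustrative, and header folding is ignored:

    import re

    WORD_RE = re.compile(r"[\w$.%-]+")

    def tokenize(msg_text, skip_headers=("received", "date")):
        headers, _, body = msg_text.partition("\n\n")
        for line in headers.splitlines():
            name, _, value = line.partition(":")
            name = name.lower()
            if name in skip_headers:
                continue
            for w in WORD_RE.findall(value):
                yield "%s:%s" % (name, w.lower())
        for w in WORD_RE.findall(body):
            yield w.lower()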
| 0 |
[Spambayes] test sets?
[Barry A. Warsaw, gives answers and asks questions]
Here's the code that produced the header tokens:
    x2n = {}
    for x in msg.keys():
        x2n[x] = x2n.get(x, 0) + 1
    for x in x2n.items():
        yield "header:%s:%d" % x
Some responses:
> 0.01 19 3559 'header:X-Mailman-Version:1'
> 0.01 19 3559 'header:List-Id:1'
> 0.01 19 3557 'header:X-BeenThere:1'
>
> These three are definitely MM artifacts, although the second one
> /could/ be inserted by other list management software (it's described
> in an RFC).
Since all the ham came from Mailman, and only 19 spam had it, it's quite
safe to assume then that I should ignore these for now.
> 0.01 0 3093 'header:Newsgroups:1'
> 0.01 0 3054 'header:Xref:1'
> 0.01 0 3053 'header:Path:1'
>
> These aren't MM artifacts, but are byproducts of gating a message off
> of an nntp feed. Some of the other NNTP-* headers are similar, but I
> won't point them out below.
I should ignore these too then.
> 0.01 19 2668 'header:List-Unsubscribe:1'
> 0.01 19 2668 'header:List-Subscribe:1'
> 0.01 19 2668 'header:List-Post:1'
> 0.01 19 2668 'header:List-Help:1'
> 0.01 19 2668 'header:List-Archive:1'
>
> RFC recommended generic listserve headers that MM injects.
Ditto.
> So why do you get two entries for this one?
>
> 0.99 519 0 'header:Received:8'
> 0.99 466 1 'header:Received:7'
Read the code <wink>. The first line counts msgs that had 8 instances of a
'Received' header, and the second counts msgs that had 7 instances. I
expect this is a good clue! The more indirect the mail path, the more of
those thingies we'll see, and if you're posting from a spam trailer park in
Tasmania you may well need to travel thru more machines.
> ...
> Note that header names are case insensitive, so this one's no
> different than "MIME-Version:". Similarly other headers in your list.
Ignoring case here may or may not help; that's for experiment to decide.
It's plausible that case is significant, if, e.g., a particular spam mailing
package generates unusual case, or a particular clueless spammer
misconfigures his package.
> 0.02 65 3559 'header:Precedence:1'
>
> Could be Mailman, or not. This header is supposed to tell other
> automated software that this message was automated. E.g. a replybot
> should ignore any message with a Precedence: {bulk|junk|list}.
Rule of thumb: if Mailman inserts a thing, I should ignore it. Or, better,
I should stop trying to out-think the flaws in the test data and get better
test data instead!
> 0.50 4 0 'header:2:1'
>
> !?
> ...
> 0.50 0 2 'header:'
>
> Heh?
I sucked out all the wordinfo keys that began with "header:". The last line
there was probably due to unrelated instances of the string "header:" in
message bodies. Harder to guess about the first line.
> ...
> Some headers of course are totally unreliable as to their origin. I'm
> thinking stuff like MIME-Version, Content-Type, To, From, etc, etc.
> Everyone sticks those in.
The brilliance of Anthony's "just count them" scheme is that it requires no
thought, so can't be fooled <wink>. Header lines that are evenly
distributed across spam and ham will turn out to be worthless indicators
(prob near 0.5), so do no harm.
| 0 |
[Spambayes] test sets?
[Anthony Baxter]
> The other thing on my todo list (probably tonight's tram ride home) is
> to add all headers from non-text parts of multipart messages. If nothing
> else, it'll pick up most virus email real quick.
See the checkin comments for timtest.py last night. Adding this code gave a
major reduction in the false negative rate:
    def crack_content_xyz(msg):
        x = msg.get_type()
        if x is not None:
            yield 'content-type:' + x.lower()
        x = msg.get_param('type')
        if x is not None:
            yield 'content-type/type:' + x.lower()
        for x in msg.get_charsets(None):
            if x is not None:
                yield 'charset:' + x.lower()
        x = msg.get('content-disposition')
        if x is not None:
            yield 'content-disposition:' + x.lower()
        fname = msg.get_filename()
        if fname is not None:
            for x in fname.lower().split('/'):
                for y in x.split('.'):
                    yield 'filename:' + y
        x = msg.get('content-transfer-encoding')
        if x is not None:
            yield 'content-transfer-encoding:' + x.lower()

    ...

    t = ''
    for x in msg.walk():
        for w in crack_content_xyz(x):
            yield t + w
        t = '>'
I *suspect* most of that stuff didn't make any difference, but I put it all
in as one blob so don't know which parts did and didn't help.
| 0 |
[Spambayes] test sets?
TP> A false positive *really* has to work hard then, eh? The long
TP> quote of a Nigerian scam letter is one of the two that made
TP> it, and spamprob() looked at all this stuff before deciding it
TP> was spam:
Here's an interesting thing to test: discriminate words differently if
they are on a line that starts with `>' or, to catch styles like the
above, where the first occurrence on a line of < or > is > (to eliminate
html).
Then again, it may not be worth trying to un-false-positive that
Nigerian scam quote.
-Barry
| 0 |
[Spambayes] Deployment
So then, Guido van Rossum <guido@python.org> is all like:
> > Basic procmail usage goes something like this:
> >
> > :0fw
> > | spamassassin -P
> >
> > :0
> > * ^X-Spam-Status: Yes
> > $SPAM
> >
>
> Do you feel capable of writing such a tool? It doesn't look too hard.
Not to beat a dead horse, but that's exactly what my spamcan package
did. For those just tuning in, spamcan is a thingy I wrote before I
knew about Tim & co's work on this crazy stuff; you can download it from
<http://woozle.org/~neale/src/spamcan/spamcan.html>, but I'm not going
to work on it anymore.
I'm currently writing a new one based on classifier (and timtest's
booty-kicking tokenizer). I'll probably have something soon, like maybe
half an hour, and no, it's not too hard. The hard part is storing the
data somewhere. I don't want to use ZODB, as I'd like something a
person can just drop in with a default Python install. So anydbm is
looking like my best option.
I already have a setup like this using Xavier Leroy's SpamOracle, which
does the same sort of thing. You call it from procmail, it adds a new
header, and then you can filter on that header. Really easy.
Here's how I envision this working. Everybody gets four new mailboxes:
train-eggs
train-spam
trained-eggs
trained-spam
You copy all your spam and eggs* into the "train-" boxes as you get it.
How frequently you do this would be up to you, but you'd get better
results if you did it more often, and you'd be wise to always copy over
anything which was misclassified. Then, every night, the spam fairy
swoops down and reads through your folders, learning about what sorts of
things you think are eggs and what sorts of things are spam. After she's
done, she moves your mail into the "trained-" folders.
This would work for anybody using IMAP on a Unix box, or folks who read
their mail right off the server. I've spoken with some fellows at work
about Exchange and they seem to believe that Exchange exports
appropriate functionality to implement a spam fairy as well.
Advanced users could stay ahead of the game by reprogramming their mail
client to bind the key "S" to "move to train-spam" and "H" to "move to
train-eggs". Eventually, if enough people used this sort of thing, it'd
start showing up in mail clients. That's the "delete as spam" button
Paul Graham was talking about.
* The Hormel company might not think well of using the word "ham" as the
opposite of "spam", and they've been amazingly cool about the use of
their product name for things thus far. So I propose we start calling
non-spam something more innocuous (and more Monty Pythonic) such as
"eggs".
Neale
| 0 |
[Spambayes] Deployment
[Jeremy Hylton]
> I think one step towards deployment is creating a re-usable tokenizer
> for mail messages. The current codebase doesn't expose an easy-to-use
> or easy-to-customize tokenizer.
tokenize() couldn't be easier to use: it takes a string argument, and
produces a stream of tokens (whether via explicit list, or generator, or
tuple, or ... doesn't matter). All the tokenize() functions in GBayes.py
and timtest.py are freely interchangeable this way.
Note that we have no evidence to support that a customizable tokenizer would
do any good, or, if it would, in which ways customization could be helpful.
That's a research issue on which no work has been done.
> The timtest module seems to contain an enormous body of practical
> knowledge about how to parse mail messages, but the module wasn't
> designed for re-use.
That's partly a failure of imagination <wink>. Splitting out all knowledge
of tokenization is just a large block cut-and-paste ... there, it's done.
Change the
from timtoken import tokenize
at the top to use any other tokenizer now. If you want to make it easier
still, feel free to check in something better.
> I'd like to see a module that can take a single message or a collection of
> messages and tokenize each one.
The Msg and MsgStream classes in timtest.py are a start at that, but it's
hard to do anything truly *useful* here when people use all sorts of
different physical representations for email msgs (mboxes in various
formats, one file per "folder", one file per msg, Skip's gzipped gimmick,
...). If you're a Python coder <wink>, you *should* find it very easy to
change the guts of Msg and MsgStream to handle your peculiar scheme.
Defining interfaces for these guys should be done.
> I'd like to see the tokenizer be customizable, too.  Tim had to exclude
> some headers from his test data, because there were particular biases
> in the test data. If other people have test data without those
> biases, they ought to be able to customize the tokenizer to include
> them or exclude others.
This sounds like a bottomless pit to me, and there's no easier way to
customize than to edit the code. As README.txt still says, though, massive
refactoring would help. Hop to it!
| 0 |
[Spambayes] Deployment
I think that when considering deployment, a solution that supports all
Python platforms and not just the L|Unix crowd is desirable.
Mac and PC users are more apt to be using a commercial MUA that's unlikely
to offer hooking ability (at least not easily). As mentioned elsewhere, even
L|Unix users may find an MUA solution easier to use than getting it added to
their MTA. (SysOps make programmers look like flaming liberals ;).)
My notion of a solution for Windows/Outlook has been, as Guido described, a
client-server. The client side does pop3/imap/mapi fetching (of which I'm
only going to implement pop3 initially), potentially on several hosts,
classifies the incoming mail as spam or ham, and puts it into one file per
message (qmail style?). The
MUA accesses this "eThunk" as a server to obtain all the ham. Spam is
retained in the eThunk and a simple viewer would be used for manual
oversight on the spam for ultimate rejection (and training of spam filter)
and the ham will go forward (after being used for training) on the next MUA
fetch. eThunk would sit on a timer for 'always online' users, but I am not
clear on how to support dialup users with this scheme.
Outbound mail would use a direct path from the MUA to the MTA. Hopefully all
MUAs can split the host fetch/send URL's
IMO, end users are likely to be more interested in n-way classification. If
this is available, the "simple viewer" could be enhanced to support viewing
via folders and (at least for me) the Outlook nightmare is over - I would
use this as my only MUA. (N.B. according to my recent readings, the best
n-way classifier uses something called a "Support Vector Machine" (SVM)
which is 5-8% more accurate than Naive Bayes (NB)).
I wonder if the focus of spambayes ought not to be a classifier that leaves
the fetching and feeding of messages to auxiliary code? That way, it could
be dropped into whatever harness that suited the user's situation.
David LeBlanc
Seattle, WA USA
| 0 |
[Spambayes] Deployment
> I wonder if the focus of spambayes ought not to be a classifier that
> leaves the fetching and feeding of messages to auxiliary code? That
> way, it could be dropped into whatever harness that suited the
> user's situation.
I see no reason to restrict the project to developing the classifier
and leave the deployment to others. Attempts at deployment in the
real world will surely provide additional feedback for the classifier.
--Guido van Rossum (home page: http://www.python.org/~guido/)
| 0 |
[Spambayes] test sets?
On 06 September 2002, Tim Peters said:
> > Note that header names are case insensitive, so this one's no
> > different than "MIME-Version:". Similarly other headers in your list.
>
> Ignoring case here may or may not help; that's for experiment to decide.
> It's plausible that case is significant, if, e.g., a particular spam mailing
> package generates unusual case, or a particular clueless spammer
> misconfigures his package.
Case of headers is definitely helpful. SpamAssassin has a rule for it
-- if you have headers like "DATE" or "SUBJECT", you get a few more
points.
Greg
--
Greg Ward <gward@python.net> http://www.gerg.ca/
God is omnipotent, omniscient, and omnibenevolent
---it says so right here on the label.
| 0 |
[Spambayes] Deployment
Greg> In case it wasn't obvious, I'm a strong proponent of filtering
Greg> junk mail as early as possible, ie. right after the SMTP DATA
Greg> command has been completed. Filtering spam at the MUA just seems
Greg> stupid to me -- by the time it gets to my MUA, the spammer has
Greg> already stolen my bandwidth.
The two problems I see with filtering that early are:
1. Everyone receiving email via that server will contribute ham to the
stew, making the Bayesian classification less effective.
2. Given that there will be some false positives, you absolutely have to
put the mail somewhere. You can't simply delete it. (I also don't
like the TMDA-ish business of replying with a msg that says, "here's
what you do to really get your message to me." That puts an extra
burden on my correspondents.) As an individual, I would prefer you
put spammish messages somewhere where I can review them, not an
anonymous sysadmin who I might not trust with my personal email
(nothing against you Greg ;-).
I personally prefer to manage this stuff at the user agent level. Bandwidth
is a heck of a lot cheaper than my time.
Skip
| 0 |
[Spambayes] Deployment
On Friday 06 September 2002 20:48, Skip Montanaro wrote:
> Greg> In case it wasn't obvious, I'm a strong proponent of filtering
> Greg> junk mail as early as possible, ie. right after the SMTP DATA
> Greg> command has been completed. Filtering spam at the MUA just seems
> Greg> stupid to me -- by the time it gets to my MUA, the spammer has
> Greg> already stolen my bandwidth.
>
> The two problems I see with filtering that early are:
>
> 1. Everyone receiving email via that server will contribute ham
> to the stew, making the Bayesian classification less effective.
>
> 2. Given that there will be some false positives, you absolutely
> have to put the mail somewhere. You can't simply delete it. (I also
> don't like the TMDA-ish business of replying with a msg that says,
> "here's what you do to really get your message to me." That puts an
> extra burden on my correspondents.) As an individual, I would prefer
> you put spammish messages somewhere where I can review them, not an
> anonymous sysadmin who I might not trust with my personal email
> (nothing against you Greg ;-).
>
> I personally prefer to manage this stuff at the user agent level.
> Bandwidth is a heck of a lot cheaper than my time.
>
I see no reason why both approaches couldn't be used together.
MTA level filtering would just need to use a different corpus, one that
would contain illegal or otherwise commonly unapproved material for the
group of people using that MTA. I'm sure that such an approach would
significantly reduce the mail traffic as a first step, without giving
false positives.
MUA corpus would then be personally trained -- although I'd like the
option of 'downloadable' corpora and merge functionality.
Harri
PS. Just joined the list, so pardon if my thoughts have been hashed
through before.
| 0 |
[Spambayes] Deployment
[Guido]
> ...
> - A program that acts both as a pop client and a pop server. You
> configure it by telling it about your real pop servers. You then
> point your mail reader to the pop server at localhost. When it
> receives a connection, it connects to the remote pop servers, reads
> your mail, and gives you only the non-spam.
FYI, I'll never trust such a scheme: I have no tolerance for false
positives, and indeed do nothing to try to block spam on any of my email
accounts now for that reason. Deliver all suspected spam to a Spam folder
instead and I'd love it.
| 0 |
[Spambayes] Deployment
> > - A program that acts both as a pop client and a pop server. You
> > configure it by telling it about your real pop servers. You then
> > point your mail reader to the pop server at localhost. When it
> > receives a connection, it connects to the remote pop servers, reads
> > your mail, and gives you only the non-spam.
>
> FYI, I'll never trust such a scheme: I have no tolerance for false
> positives, and indeed do nothing to try to block spam on any of my email
> accounts now for that reason. Deliver all suspected spam to a Spam folder
> instead and I'd love it.
Another config parameter.
The filter could add a header field.  Or a ~ to the subject if you like
that style. :-)
--Guido van Rossum (home page: http://www.python.org/~guido/)
| 0 |
[Spambayes] Deployment
[Guido]
> ...
> I don't know how big that pickle would be, maybe loading it each time
> is fine. Or maybe marshalling.)
My tests train on about 7,000 msgs, and a binary pickle of the database is
approaching 10 million bytes. I haven't done anything to try to reduce its
size, and know of some specific problem areas (for example, doing character
5-grams of "long words" containing high-bit characters generates a lot of
database entries, and I suspect they're approximately worthless). OTOH,
adding in more headers will increase the size. So let's call it 10 meg
<wink>.
| 0 |
[Spambayes] Deployment
[Guido]
> Takers? How is ESR's bogofilter packaged? SpamAssassin? The Perl
> Bayes filter advertised on slashdot?
WRT the last, it's a small pile of Windows .exe files along with
cygwin1.dll. The .exes are cmdline programs. One is a POP3 proxy. If I
currently have an email server named, say, mail.comcast.net, with user name
timmy, then I change my email reader to say that my server is 127.0.0.1, and
that my user name on that server is mail.comcast.net:timmy. In that way the
proxy picks up both the real server and user names from what the mail reader
tells it the user name is.
This is an N-way classifier (like ifile that way), and "all it does" is
insert a
X-Text-Classification: one_of_the_class_names_you_picked
header into your email before passing it on to your mail reader. The user
then presumably fiddles their mail reader to look for such headers and "do
something about it" (and even Outlook can handle *that* much <wink>).
The user is responsible for generating text files with appropriate examples
of each class of message, and for running the cmdline tools to train the
classifier.
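The user-name trick Tim describes takes one line to undo on the proxy side;
a sketch:

    def split_proxy_user(login):
        # "mail.comcast.net:timmy" -> ("mail.comcast.net", "timmy")
        server, _, user = login.partition(":")
        return server, user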
| 0 |
[Spambayes] Deployment
You missed the part that said that spam is kept in the "eThunk" and was
viewable by a simple viewer for final disposition?
Of course, with Outbloat, you could fire up PythonWin and stuff the spam
into the Junk Email folder... but then you lose the ability to retrain on
the user classified ham/spam.
David LeBlanc
Seattle, WA USA
> -----Original Message-----
> From: spambayes-bounces+whisper=oz.net@python.org
> [mailto:spambayes-bounces+whisper=oz.net@python.org]On Behalf Of Tim
> Peters
> Sent: Friday, September 06, 2002 12:24
> To: spambayes@python.org
> Subject: RE: [Spambayes] Deployment
>
>
> [Guido]
> > ...
> > - A program that acts both as a pop client and a pop server. You
> > configure it by telling it about your real pop servers. You then
> > point your mail reader to the pop server at localhost. When it
> > receives a connection, it connects to the remote pop servers, reads
> > your mail, and gives you only the non-spam.
>
> FYI, I'll never trust such a scheme: I have no tolerance for false
> positives, and indeed do nothing to try to block spam on any of my email
> accounts now for that reason. Deliver all suspected spam to a Spam folder
> instead and I'd love it.
>
>
> _______________________________________________
> Spambayes mailing list
> Spambayes@python.org
> http://mail.python.org/mailman-21/listinfo/spambayes
| 0 |
[Spambayes] Deployment
So then, Tim Peters <tim.one@comcast.net> is all like:
> [Guido]
> > ...
> > I don't know how big that pickle would be, maybe loading it each time
> > is fine. Or maybe marshalling.)
>
> My tests train on about 7,000 msgs, and a binary pickle of the database is
> approaching 10 million bytes.
My paltry 3000-message training set makes a 6.3MB (where 1MB=1e6 bytes)
pickle. hammie.py, which I just checked in, will optionally let you
write stuff out to a dbm file. With that same message base, the dbm
file weighs in at a hefty 21.4MB. It also takes longer to write:
Using a database:
real 8m24.741s
user 6m19.410s
sys 1m33.650s
Using a pickle:
real 1m39.824s
user 1m36.400s
sys 0m2.160s
This is on a PIII at 551.257MHz (I don't know what it's *supposed* to
be, 551.257 is what /proc/cpuinfo says).
For comparison, SpamOracle (currently the gold standard in my mind, at
least for speed) on the same data blazes along:
real 0m29.592s
user 0m28.050s
sys 0m1.180s
Its data file, which appears to be a marshalled hash, is 448KB.
However, it's compiled O'Caml and it uses a much simpler tokenizing
algorithm written with a lexical analyzer (ocamllex), so we'll never be
able to outperform it. It's something to keep in mind, though.
I don't have statistics yet for scanning unknown messages. (Actually, I
do, and the database blows the pickle out of the water, but it scores
every word with 0.00, so I'm not sure that's a fair test. ;) In any
case, 21MB per user is probably too large, and 10MB is questionable.
On the other hand, my pickle compressed very well with gzip, shrinking
down to 1.8MB.
Neale
| 0 |
[Spambayes] Deployment
[Tim]
> My tests train on about 7,000 msgs, and a binary pickle of the database is
> approaching 10 million bytes.
That shrinks to under 2 million bytes, though, if I delete all the WordInfo
records with spamprob exactly equal to UNKNOWN_SPAMPROB. Such records
aren't needed when scoring (an unknown word gets a made-up probability of
UNKNOWN_SPAMPROB). Such records are only needed for training; I've noted
before that a scoring-only database can be leaner.
In part the bloat is due to character 5-gram'ing, part due to that the
database is brand new so has never been cleaned via clearjunk(), and part
due to plain evil gremlins.
| 0 |
[Spambayes] Deployment
So then, Tim Peters <tim.one@comcast.net> is all like:
> [Tim]
> > My tests train on about 7,000 msgs, and a binary pickle of the database is
> > approaching 10 million bytes.
>
> That shrinks to under 2 million bytes, though, if I delete all the WordInfo
> records with spamprob exactly equal to UNKNOWN_SPAMPROB. Such records
> aren't needed when scoring (an unknown word gets a made-up probability of
> UNKNOWN_SPAMPROB). Such records are only needed for training; I've noted
> before that a scoring-only database can be leaner.
That's pretty good. I wonder how much better you could do by using some
custom pickler. I just checked my little dbm file and found a lot of
what I would call bloat:
>>> import anydbm, hammie
>>> d = hammie.PersistentGrahamBayes("ham.db")
>>> db = anydbm.open("ham.db")
>>> db["neale"], len(db["neale"])
('ccopy_reg\n_reconstructor\nq\x01(cclassifier\nWordInfo\nq\x02c__builtin__\nobject\nq\x03NtRq\x04(GA\xce\xbc{\xfd\x94\xbboK\x00K\x00K\x00G?\xe0\x00\x00\x00\x00\x00\x00tb.', 106)
>>> d.wordinfo["neale"], len(`d.wordinfo["neale"]`)
(WordInfo'(1031337979.16197, 0, 0, 0, 0.5)', 42)
Ignoring the fact that there are too many zeros in there, the pickled
version of that WordInfo object is over twice as large as the string
representation. So we could get a 50% decrease in size just by using
the string representation instead of the pickle, right?
Something about that logic seems wrong to me, but I can't see what it
is. Maybe pickling is good for heterogeneous data types, but every
value of our big dictionary is going to have the same type, so there's a
ton of redundancy. I guess that explains why it compressed so well.
Neale
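A sketch of the custom-serializer idea Neale is circling: every record has
the same five-field shape, so a fixed-width struct beats a general pickle
by a wide margin. The field layout below is inferred from the WordInfo
repr shown above and is an assumption:

    import struct

    # atime (double), spamcount/hamcount/killcount (ints), spamprob (double)
    FMT = "<diiid"                    # 28 bytes per record, no padding

    def pack_wordinfo(atime, spamcount, hamcount, killcount, spamprob):
        return struct.pack(FMT, atime, spamcount, hamcount, killcount,
                           spamprob)

    def unpack_wordinfo(raw):
        return struct.unpack(FMT, raw)    # -> 5-tuple

Compare the 106-byte pickle shown above with 28 bytes here.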
| 0 |
[Spambayes] understanding high false negative rate
>> > ##Remove: jeremy@alum.mit.edu##
Tim> Yuck: it got two 0.01's from embedding your email address at the
Tim> bottom here.
Which suggests that tagging email addresses in To/CC headers should be
handled differently than in message bodies?
Skip
| 0 |
[Spambayes] understanding high false negative rate
> >> > ##Remove: jeremy@alum.mit.edu##
>
> Tim> Yuck: it got two 0.01's from embedding your email address at the
> Tim> bottom here.
>
> Which suggests that tagging email addresses in To/CC headers should be
> handled differently than in message bodies?
I don't know whether it suggests that, but they would be tagged differently
in to/cc if I were tagging them at all right now. If I were tagging To:
addresses, for example, the tokens would look like
'to:email addr:mit'
instead of
'email addr:mit'
as they appear when an email-like thingie is found in the body. Whether
email addresses should be stuck in as one blob or split up as they are now
is something I haven't tested.
| 0 |
[Spambayes] [ANN] Trained classifier available
http://sf.net/project/showfiles.php?group_id=61702
This is the binary pickle of my classifier after training on
my first spam/ham corpora pair. All records with
spamprob == UNKNOWN_SPAMPROB have been purged.
It's in a zip file, and is only half a meg.
Jeremy, it would be interesting if you tried that on your data. The false
negative rates across my other 4 test sets when run against this are:
0.364%
0.400%
0.400%
0.909%
| 0 |
[Spambayes] understanding high false negative rate
>>>>> "TP" == Tim Peters <tim.one@comcast.net> writes:
>> The false positive rate is 0-3%. (Finally! I had to scrub a
>> bunch of previously unnoticed spam from my inbox.) Both
>> collections have about 1100 messages.
TP> Does this mean you trained on about 1100 of each?
The total collections are 1100 messages. I trained with 1100/5
messages.
TP> Can't guess. You're in a good position to start adding more
TP> headers into the analysis, though. For example, an easy start
TP> would be to uncomment the header-counting lines in tokenize()
TP> (look for "Anthony"). Likely the most valuable thing it's
TP> missing then is some special parsing and tagging of Received
TP> headers.
I tried the "Anthony" stuff, but it didn't make any appreciable
difference that I could see from staring at the false negative rate.
The numbers are big enough that a quick eyeball suffices.
Then I tried a dirt simple tokenizer for the headers that tokenizes the
words in each header and emits them like this: "%s: %s" % (hdr, word).
That worked too well :-). The received and date headers helped the
classifier discover that most of my spam is old and most of my ham is
new.
So I tried a slightly more complex one that skipped received, date,
and x-from_, which all contained timestamps. I also skipped the X-VM-
headers that my mail reader added:
class MyTokenizer(Tokenizer):

    # Headers whose values are dominated by timestamps.
    skip = {'received': 1,
            'date': 1,
            'x-from_': 1,
            }

    def tokenize_headers(self, msg):
        for k, v in msg.items():
            k = k.lower()
            # Also skip the X-VM- headers my mail reader adds.
            if k in self.skip or k.startswith('x-vm'):
                continue
            for w in subject_word_re.findall(v):
                for t in tokenize_word(w):
                    yield "%s:%s" % (k, t)
This did moderately better. The false negative rate is 7-21% over the
tests performed so far. This is versus 11-28% for the previous test
run that used the timtest header tokenizer.
It's interesting to see that the best discriminators are all ham
discriminators. There's not a single spam indicator in the list.
Most of the discriminators are header fields. One thing to note is
that the presence of Mailman-generated headers is a strong non-spam
indicator. That matches my intuition: I got an awful lot of
Mailman-generated mail, and those lists are pretty good at suppressing
spam. The other thing is that I get a lot of ham from people who use
XEmacs. That's probably Barry, Guido, Fred, and me :-).
One final note. It looks like many of the false positives are from
people I've never met with questions about Shakespeare. They often
start with stuff like:
> Dear Sir/Madam,
>
> May I please take some of your precious time to ask you to help me to find a
> solution to a problem that is worrying me greatly. I am old science student
I guess that reads a lot like spam :-(.
Jeremy
238 hams & 221 spams
false positive: 2.10084033613
false negative: 9.50226244344
new false positives: []
new false negatives: []
best discriminators:
'x-mailscanner:clean' 671 0.0483425
'x-spam-status:IN_REP_TO' 679 0.01
'delivered-to:skip:s 10' 691 0.0829876
'x-mailer:Lucid' 699 0.01
'x-mailer:XEmacs' 699 0.01
'x-mailer:patch' 699 0.01
'x-mailer:under' 709 0.01
'x-mailscanner:Found' 716 0.0479124
'cc:zope.com' 718 0.01
"i'll" 750 0.01
'references:skip:1 20' 767 0.01
'rossum' 795 0.01
'x-spam-status:skip:S 10' 825 0.01
'van' 850 0.01
'http0:zope' 869 0.01
'email addr:zope' 883 0.01
'from:python.org' 895 0.01
'to:jeremy' 902 0.185401
'zope' 984 0.01
'list-archive:skip:m 10' 1058 0.01
'list-subscribe:skip:m 10' 1058 0.01
'list-unsubscribe:skip:m 10' 1058 0.01
'from:zope.com' 1098 0.01
'return-path:zope.com' 1115 0.01
'wrote:' 1129 0.01
'jeremy' 1150 0.01
'email addr:python' 1257 0.01
'x-mailman-version:2.0.13' 1311 0.01
'x-mailman-version:101270' 1395 0.01
'python' 1401 0.01
| 0 |
[Spambayes] Ditching WordInfo
[Neale Pickett]
> I hacked up something to turn WordInfo into a tuple before pickling,
That's what WordInfo.__getstate__ does.
> and then turn the tuple back into WordInfo right after unpickling.
Likewise for WordInfo.__setstate__.
> Without this hack, my database was 21549056 bytes. After, it's
> 9945088 bytes. That's a 50% savings, not a bad optimization.
I'm not sure what you're doing, but suspect you're storing individual
WordInfo pickles. If so, most of the administrative pickle bloat is due to
that, and doesn't happen if you pickle an entire classifier instance
directly.
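A quick way to see the difference, using an invented stand-in class
(absolute numbers depend on the pickle protocol; the ratio is the point):

import pickle

class WordInfo(object):  # invented stand-in, not classifier.WordInfo
    def __init__(self, spamprob):
        self.spamprob = spamprob

words = {}
for i in range(1000):
    words["word%d" % i] = WordInfo(0.5)

# One pickle per object repeats the class/module metadata 1000 times;
# one pickle of the whole dict writes it once and memoizes it.
separate = sum(len(pickle.dumps(v)) for v in words.values())
combined = len(pickle.dumps(words))
print("separate: %d bytes" % separate)
print("combined: %d bytes" % combined)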
> So my question is, would it be too painful to ditch WordInfo in favor of
> a straight out tuple? (Or list if you'd rather, although making it a
> tuple has the nice side-effect of forcing you to play nice with my
> DBDict class).
>
> I hope doing this sort of optimization isn't too far distant from the
> goal of this project, even though README.txt says it is :)
>
> Diff attached. I'm not comfortable checking this in,
I think it's healthy that you're uncomfortable checking things in with
> + # XXX: kludge kludge kludge.
comments <wink>.
> since I don't really like how it works (I'd rather just get rid of
> WordInfo).
> But I guess it proves the point :)
I'm not interested in optimizing anything yet, and get many benefits from
the *ease* of working with utterly vanilla Python instance objects. Lots of
code all over picks these apart for display and analysis purposes. Very few
people have tried this code yet, and there are still many questions about it
(see, e.g., Jeremy's writeup of his disappointing first-time experiences
today). Let's keep it as easy as possible to modify for now. If you're
desperate to save memory, write a subclass?
Other people are free to vote in other directions, of course <wink>.
| 0 |
[Spambayes] test sets?
[Barry]
> Here's an interesting thing to test: discriminate words differently if
> they are on a line that starts with `>' or, to catch styles like
> above, that the first occurrence on a line of < or > is > (to eliminate
> html).
Give me a mod to timtoken.py that does this, and I'll be happy to test it.
> Then again, it may not be worth trying to un-false-positive that
> Nigerian scam quote.
If there's any sanity in the world, even the original poster would be glad
to have his kneejerk response blocked <wink>. OTOH, you know there are a
great many msgs on c.l.py (all over Usenet) that do nothing except quote a
previous post and add a one-line comment. Remove the quoted sections from
those, and there may be no content left to judge except for the headers. So
I can see this nudging the stats in either direction. The only way to find
out for sure is for you to write some code <wink>.
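Not the timtoken.py mod itself, but a sketch of one way the tagging
could go, with an invented "quoted:" prefix so words on quoted lines
score separately:

def tokenize_body_lines(text):
    for line in text.split("\n"):
        quoted = line.lstrip().startswith(">")
        for word in line.split():
            if word == ">":
                continue  # drop the quote marker itself
            if quoted:
                yield "quoted:" + word
            else:
                yield word

print(list(tokenize_body_lines("> previous post\nmy one-line comment")))
# ['quoted:previous', 'quoted:post', 'my', 'one-line', 'comment']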
| 0 |
[Spambayes] understanding high false negative rate
[Jeremy Hylton]
> The total collections are 1100 messages. I trained with 1100/5
> messages.
I'm reading this now as that you trained on about 220 spam and about 220
ham. That's less than 10% of the sizes of the training sets I've been
using. Please try an experiment: train on 550 of each, and test once
against the other 550 of each. Do that a few times making a random split
each time (it won't be long until you discover why directories of individual
files are a lot easier to work with -- e.g., random.shuffle() makes this kind of
thing trivial for me).
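For concreteness, a minimal version of that split, assuming the corpora
are already plain Python lists (the placeholder lists below just stand
in for real messages):

import random

def random_split(msgs, n):
    # Shuffle a copy, then split into a training set and a test set.
    msgs = list(msgs)
    random.shuffle(msgs)
    return msgs[:n], msgs[n:]

hams = ["ham %d" % i for i in range(1100)]    # placeholder corpus
spams = ["spam %d" % i for i in range(1100)]  # placeholder corpus
train_ham, test_ham = random_split(hams, 550)
train_spam, test_spam = random_split(spams, 550)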
| 0 |
[Spambayes] understanding high false negative rate
[Jeremy]
> The total collections are 1100 messages. I trained with 1100/5
> messages.
While that's not a lot of training data, I picked random subsets of my
corpora and got much better behavior (this is rates.py output; f-p rate per
run in left column, f-n rate in right):
Training on Data/Ham/Set1 & Data/Spam/Set1 ... 220 hams & 220 spams
0.000 1.364
0.000 0.455
0.000 1.818
0.000 1.364
Training on Data/Ham/Set2 & Data/Spam/Set2 ... 220 hams & 220 spams
0.455 2.727
0.455 0.455
0.000 0.909
0.455 2.273
Training on Data/Ham/Set3 & Data/Spam/Set3 ... 220 hams & 220 spams
0.000 2.727
0.455 0.909
0.000 0.909
0.000 1.818
Training on Data/Ham/Set4 & Data/Spam/Set4 ... 220 hams & 220 spams
0.000 2.727
0.000 0.909
0.000 0.909
0.000 1.818
Training on Data/Ham/Set5 & Data/Spam/Set5 ... 220 hams & 220 spams
0.000 1.818
0.000 1.364
0.000 0.909
0.000 2.273
total false pos 4 0.363636363636
total false neg 29 2.63636363636
Another full run with another randomly chosen (but disjoint) 220 of each in
each set was much the same. The score distribution is also quite sharp:
Ham distribution for all runs:
* = 74 items
0.00 4381 ************************************************************
2.50 3 *
5.00 3 *
7.50 1 *
10.00 0
12.50 0
15.00 1 *
17.50 1 *
20.00 1 *
22.50 0
25.00 0
27.50 0
30.00 1 *
32.50 0
35.00 0
37.50 0
40.00 1 *
42.50 0
45.00 0
47.50 0
50.00 0
52.50 0
55.00 0
57.50 1 *
60.00 0
62.50 0
65.00 0
67.50 1 *
70.00 0
72.50 0
75.00 0
77.50 0
80.00 0
82.50 0
85.00 0
87.50 1 *
90.00 0
92.50 2 *
95.00 0
97.50 2 *
Spam distribution for all runs:
* = 73 items
0.00 13 *
2.50 0
5.00 4 *
7.50 5 *
10.00 0
12.50 2 *
15.00 1 *
17.50 1 *
20.00 2 *
22.50 1 *
25.00 0
27.50 1 *
30.00 0
32.50 3 *
35.00 0
37.50 0
40.00 0
42.50 0
45.00 1 *
47.50 3 *
50.00 16 *
52.50 0
55.00 0
57.50 0
60.00 1 *
62.50 0
65.00 2 *
67.50 1 *
70.00 1 *
72.50 0
75.00 1 *
77.50 0
80.00 3 *
82.50 2 *
85.00 1 *
87.50 2 *
90.00 2 *
92.50 4 *
95.00 4 *
97.50 4323 ************************************************************
It's hard to say whether you need better ham or better spam, but I suspect
better spam <wink>. 18 of the 30 most powerful discriminators here were
HTML-related spam indicators; the top 10 overall were:
'<font' 266 0.99
'content-type:text/plain' 275 0.172932
'</body>' 312 0.99
'</html>' 329 0.99
'click' 334 0.99
'<html>' 335 0.99
'wrote:' 381 0.01
'skip:< 10' 398 0.99
'python' 428 0.01
'content-type:text/html' 454 0.99
The HTML tags come from non-multipart/alternative HTML messages, from which
HTML tags aren't stripped, and there are lots of these in my spam sets.
That doesn't account for it, though. If I strip HTML tags out of those too,
the rates are only a little worse:
Training on Data/Ham/Set1 & Data/Spam/Set1 ... 220 hams & 220 spams
0.000 1.364
0.000 1.818
0.455 1.818
0.000 1.818
Training on Data/Ham/Set2 & Data/Spam/Set2 ... 220 hams & 220 spams
0.000 1.364
0.455 1.818
0.455 0.909
0.000 1.818
Training on Data/Ham/Set3 & Data/Spam/Set3 ... 220 hams & 220 spams
0.000 2.727
0.000 0.909
0.909 0.909
0.455 1.818
Training on Data/Ham/Set4 & Data/Spam/Set4 ... 220 hams & 220 spams
0.000 1.818
0.000 0.909
0.455 0.909
0.000 1.364
Training on Data/Ham/Set5 & Data/Spam/Set5 ... 220 hams & 220 spams
0.000 2.727
0.000 1.364
0.455 2.273
0.455 2.273
total false pos 4 0.363636363636
total false neg 34 3.09090909091
The 4th-strongest discriminator *still* finds another HTML clue, though!
'subject:Python' 164 0.01
'money' 169 0.99
'content-type:text/plain' 185 0.2
'charset:us-ascii' 191 0.127273
"i'm" 232 0.01
'content-type:text/html' 248 0.983607
'&nbsp;' 255 0.99
'wrote:' 372 0.01
'python' 431 0.01
'click' 519 0.99
Heh. I forgot all about &nbsp;.
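For reference, the crude tag stripping used for that second run can be
approximated like this; the real code is more careful, this just blanks
out anything tag-shaped:

import re

html_tag_re = re.compile(r"<[^>]*>")

def strip_html_tags(text):
    # Replace each tag with a space so adjacent words don't run together.
    return html_tag_re.sub(" ", text)

print(strip_html_tags("<html><body>click <a href='x'>here</a></body></html>"))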
| 0 |
[Spambayes] test sets?
> > Note that header names are case insensitive, so this one's no
> > different than "MIME-Version:". Similarly other headers in your list.
>
> Ignoring case here may or may not help; that's for experiment to decide.
> It's plausible that case is significant, if, e.g., a particular spam mailing
> package generates unusual case, or a particular clueless spammer
> misconfigures his package.
I found it made no difference for my testing.
> The brilliance of Anthony's "just count them" scheme is that it requires no
> thought, so can't be fooled <wink>. Header lines that are evenly
> distributed across spam and ham will turn out to be worthless indicators
> (prob near 0.5), so do no harm.
zactly. I started off doing clever clever things, and, as always with
this stuff, found that stupid with a rock beats smart with scissors,
every time.
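It really is just counting; a sketch, with an invented token spelling:

def header_count_tokens(header_names):
    # One token per header name, tagged with how often it appeared.
    counts = {}
    for name in header_names:
        name = name.lower()
        counts[name] = counts.get(name, 0) + 1
    for name, n in sorted(counts.items()):
        yield "header:%s:%d" % (name, n)

print(list(header_count_tokens(["Received", "Received", "To", "From"])))
# ['header:from:1', 'header:received:2', 'header:to:1']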
--
Anthony Baxter <anthony@interlink.com.au>
It's never too late to have a happy childhood.
| 0 |
[Spambayes] understanding high false negative rate
> Looks like your ham corpus by and large has To: jeremy@alum.mit.edu in
> a header while your spam corpus by and large doesn't. But this one
> does.
Interestingly, for me, one of the highest value spam indicators was
the name of the mail host that the spam was delivered to, in the To:
line. So mail to info@gin.elax2.ekorp.com was pretty much a dead cert
for the filters.
--
Anthony Baxter <anthony@interlink.com.au>
It's never too late to have a happy childhood.
| 0 |
[Spambayes] understanding high false negative rate
>>> Jeremy Hylton wrote
> Then I tried a dirt-simple tokenizer for the headers that tokenized the
> words in each header and emitted tokens like "%s:%s" % (hdr, word).
> That worked too well :-). The received and date headers helped the
> classifier discover that most of my spam is old and most of my ham is
> new.
Heh. I hit the same problem, but the other way round, when I first
started playing with this - I'd collected spam for a week or two,
then mixed it up with randomly selected messages from my mail boxes.
course, it instantly picked up on 'received:2001' as a non-ham.
Curse that too-smart-for-me software. Still, it's probably a good
thing to note in the documentation about the software - when collecting
spam/ham, make _sure_ you try and collect from the same source.
Anthony
--
Anthony Baxter <anthony@interlink.com.au>
It's never too late to have a happy childhood.
| 0 |
[Spambayes] understanding high false negative rate
> course, it instantly picked up on 'received:2001' as a non-ham.
-spam.
*sigh*
--
Anthony Baxter <anthony@interlink.com.au>
It's never too late to have a happy childhood.
| 0 |
[Spambayes] hammie.py vs. GBayes.py
There seem to be two "drivers" for the classifier now: Neale Pickett's
hammie.py, and the original GBayes.py. According to the README.txt,
GBayes.py hasn't been kept up to date. Is there anything in there
that isn't covered by hammie.py? About the only useful feature of
GBayes.py that hammie.py doesn't (yet) copy is -u, which calculates
spamness for an entire mailbox. This feature can easily be copied
into hammie.py. (GBayes.py also has a large collection of tokenizers;
but timtoken.py rules, so I'm not sure how interesting that is now.)
Therefore I propose to nuke GBayes.py, after adding a -u feature.
Anyone against? (I imagine that Skip or Barry might have a stake in
GBayes.py; Tim seems to have moved all code he's working to other
modules.)
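FWIW, the guts of -u are small. A sketch using the stdlib mailbox
module, where classifier and tokenize stand in for whatever the driver
actually wires together (the scoring method name may differ):

import mailbox

def score_mailbox(path, classifier, tokenize):
    # Print a spam probability and subject line for each message.
    for msg in mailbox.mbox(path):
        prob = classifier.spamprob(tokenize(msg))
        print("%.3f %s" % (prob, msg.get("subject", "")))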
--Guido van Rossum (home page: http://www.python.org/~guido/)
| 0 |
[Spambayes] Maybe change X-Spam-Disposition to something else...
I actually like Neale's X-Spam-Disposition header, I just wonder if maybe we
should choose something with a different prefix than "X-Spam-" so that
people don't confuse it with SpamAssassin, all of whose headers begin with
that prefix.
Also, some sort of version information would be useful. Perhaps the CVS
version number for the classifier module could be tacked on.
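One cheap way to get that, assuming the module uses CVS keyword
expansion (CVS rewrites $Revision$ on checkout; the 1.7 below is made
up):

import re

__revision__ = "$Revision: 1.7 $"   # expanded by CVS; value made up here

def classifier_version():
    # Pull the bare number out of the expanded keyword string.
    m = re.search(r"\d+(\.\d+)*", __revision__)
    if m:
        return m.group(0)
    return "unknown"

print(classifier_version())
# could then be tacked onto the X-Spam-Disposition header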
Skip
| 0 |
[Spambayes] Maybe change X-Spam-Disposition to something else...
>>> Skip Montanaro wrote
>
> I actually like Neale's X-Spam-Disposition header, I just wonder if maybe we
> should choose something with a different prefix than "X-Spam-" so that
> people don't confuse it with SpamAssassin, all of whose headers begin with
> that prefix.
I think it's fine, in general, just so long as no-one checks in anything
that puts it into my test corpus.
Or alternately, whatever is chosen should be ignored by the tokenizer.
I know my mail host (interlink) runs SA, but I also run it, with my own
set of rules and scores. I don't want my spam-filter to be getting messed
up by an upstream spam filter.
| 0 |
[Spambayes] hammie.py vs. GBayes.py
Guido> Therefore I propose to nuke GBayes.py, after adding a -u feature.
Guido> Anyone against? (I imagine that Skip or Barry might have a stake
Guido> in GBayes.py; Tim seems to have moved all code he's working to
Guido> other modules.)
No argument here.
Skip
| 0 |
[Spambayes] speed
If the frequency of my laptop's disk chirps is any indication, I'd say
hammie is about 3-5x faster than SpamAssassin.
Skip
| 0 |
[Spambayes] can't write to CVS...
I'm listed as a developer on SF and have the spambayes CVS module checked
out using my SF username, but I'm unable to write to the repository. CVS
complains:
% cvs add unheader.py
cvs [server aborted]: "add" requires write access to the repository
Any thoughts?
Skip
| 0 |
[Spambayes] Ditching WordInfo
So then, Tim Peters <tim.one@comcast.net> is all like:
> I'm not sure what you're doing, but suspect you're storing individual
> WordInfo pickles. If so, most of the administrative pickle bloat is
> due to that, and doesn't happen if you pickle an entire classifier
> instance directly.
Yeah, that's exactly what I was doing--I didn't realize I was incurring
administrative pickle bloat this way. I'm specifically trying to make
things faster and smaller, so I'm storing individual WordInfo pickles
into an anydbm dict (keyed by token). The result is that it's almost 50
times faster to score messages one per run out of procmail (.408s vs
18.851s).
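Roughly what that looks like; anydbm is the Python 2 name (plain dbm in
Python 3), and the filename is arbitrary:

import anydbm    # "dbm" in Python 3
import pickle

db = anydbm.open("wordinfo.db", "c")

def put(token, info):
    # One small pickle per token; scoring a message only touches the
    # records for the tokens it actually contains.
    db[token] = pickle.dumps(info)

def get(token):
    try:
        return pickle.loads(db[token])
    except KeyError:
        return None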
However, it *does* say all over the place that the goal of this project
isn't to make the fastest or the smallest implementation, so I guess
I'll hold off doing any further performance tuning until the goal starts
to point more in that direction. .4 seconds is probably fast enough for
people to use it in their procmailrc, which is what I was after.
> If you're desparate to save memory, write a subclass?
That's probably what I'll do if I get too antsy :)
Trying to think of ways to sneak "administrative pickle boat" into
casual conversation,
Neale
| 0 |
[Spambayes] Ditching WordInfo
> Yeah, that's exactly what I was doing--I didn't realize I was
> incurring administrative pickle bloat this way. I'm specifically
> trying to make things faster and smaller, so I'm storing individual
> WordInfo pickles into an anydbm dict (keyed by token). The result
> is that it's almost 50 times faster to score messages one per run
> out of procmail (.408s vs 18.851s).
This is very worthwhile.
> However, it *does* say all over the place that the goal of this
> project isn't to make the fastest or the smallest implementation, so
> I guess I'll hold off doing any further performance tuning until the
> goal starts to point more in that direction. .4 seconds is probably
> fast enough for people to use it in their procmailrc, which is what
> I was after.
Maybe. I batch messages using fetchmail (don't ask why), and adding
.4 seconds per message for a batch of 50 (not untypical) feels like a
real wait to me...
--Guido van Rossum (home page: http://www.python.org/~guido/)
| 0 |
[Spambayes] can't write to CVS...
[Skip Montanaro]
> I'm listed as a developer on SF and have the spambayes CVS module checked
> out using my SF username, but I'm unable to write to the repository. CVS
> complains:
>
> % cvs add unheader.py
> cvs [server aborted]: "add" requires write access to the repository
>
> Any thoughts?
Not really. Try again? About half the developers on the spambayes project
were missing some permission or other, so I ran thru all of them and checked
every damned box and clicked on every damn dropdown list I could find. As
far as SF is concerned, you're all sitting on God's Right Hand now, so if it
still doesn't work I suggest you upgrade to Win98 <wink>.
| 0 |
[Spambayes] can't write to CVS...
Tim> About half the developers on the spambayes project were missing
Tim> some permission or other, so I ran thru all of them and checked
Tim> every damned box and clicked on every damn dropdown list I could
Tim> find. As far as SF is concerned, you're all sitting on God's Right
Tim> Hand now, so if it still doesn't work I suggest you upgrade to
Tim> Win98 <wink>.
Time to upgrade I guess. :-(
% cvs add unheader.py
cvs [server aborted]: "add" requires write access to the repository
I'll try a checkout into a new directory...
S
| 0 |
[Spambayes] Maybe change X-Spam-Disposition to something else...
[Skip Montanaro, to Anthony Baxter]
> ...
> Accordingly, I wrote unheader.py, which is mostly a ripoff of something
> someone else posted to python-dev or c.l.py within the last week or so to
> strip out SA-generated headers.
Unless I've grown senile tonight, you got it from Anthony to begin with.
Please check it in to the project, and add a short blurb to README.txt!
| 0 |
[Spambayes] Maybe change X-Spam-Disposition to something else...
>> ... choose something with a different prefix than "X-Spam-" so that
>> people don't confuse it with SpamAssassin ...
Neale> How about X-Spambayes-Disposition (or X-Hammie-Disposition if
Neale> there'll be other classifier front-ends)?
I kinda like "hammie". Your front end was there first, so I suspect it will
rule the front end roost.
S
| 0 |
[Spambayes] can't write to CVS...
Skip> I'll try a checkout into a new directory...
Which didn't help. At the very least I think that means it's time for
bed...
Skip
| 0 |
[Spambayes] Maybe change X-Spam-Disposition to something else...
>> Accordingly, I wrote unheader.py, which is mostly a ripoff of
>> something someone else posted to python-dev or c.l.py within the last
>> week or so to strip out SA-generated headers.
Tim> Unless I've grown senile tonight, you got it from Anthony to begin
Tim> with. Please check it in to project, and add a short blurb to
Tim> README.txt!
Raring to go, once I can write to CVS...
S
| 0 |
[Spambayes] Ditching WordInfo
So then, Guido van Rossum <guido@python.org> is all like:
> Maybe. I batch messages using fetchmail (don't ask why), and adding
> .4 seconds per message for a batch of 50 (not untypical) feels like a
> real wait to me...
Yeesh. Sounds like what you need is something to kick up once and score
an entire mailbox.
Wait a second... So *that's* why you wanted -u.
If you can spare the memory, you might get better performance in this
case using the pickle store, since it only has to go to disk once (but
boy, does it ever go to disk!) I can't think of anything obvious to
speed things up once it's all loaded into memory, though. That's
profiler territory, and profiling is exactly the kind of optimization
I just said I wasn't going to do :)
Neale
| 0 |
[Spambayes] Ditching WordInfo
> > Maybe. I batch messages using fetchmail (don't ask why), and adding
> > .4 seconds per message for a batch of 50 (not untypical) feels like a
> > real wait to me...
>
> Yeesh. Sounds like what you need is something to kick up once and score
> an entire mailbox.
>
> Wait a second... So *that's* why you wanted -u.
>
> If you can spare the memory, you might get better performance in this
> case using the pickle store, since it only has to go to disk once (but
> boy, does it ever go to disk!) I can't think of anything obvious to
> speed things up once it's all loaded into memory, though. That's
> profiler territory, and profiling is exactly the kind of optimization
> I just said I wasn't going to do :)
We could have a server mode (someone described this as an SA option).
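Nothing like that exists yet, so purely to pin the idea down: a
bare-bones sketch that holds the classifier in memory and scores one
message per connection. Names and protocol are invented, and the client
is assumed to half-close after sending:

import socket

def serve(classifier, tokenize, port=7777):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", port))
    s.listen(5)
    while True:
        conn, addr = s.accept()
        chunks = []
        while True:
            data = conn.recv(4096)
            if not data:    # client shut down its sending side
                break
            chunks.append(data)
        prob = classifier.spamprob(tokenize(b"".join(chunks)))
        conn.sendall(("%.3f\n" % prob).encode("ascii"))
        conn.close()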
--Guido van Rossum (home page: http://www.python.org/~guido/)
| 0 |
[Spambayes] understanding high false negative rate
>>>>> "TP" == Tim Peters <tim.one@comcast.net> writes:
TP> [Jeremy Hylton[
>> The total collections are 1100 messages. I trained with 1100/5
>> messages.
TP> I'm reading this now as that you trained on about 220 spam and
TP> about 220 ham. That's less than 10% of the sizes of the
TP> training sets I've been using. Please try an experiment: train
TP> on 550 of each, and test once against the other 550 of each.
This helps a lot! Here are results with the stock tokenizer:
Training on <mbox: /home/jeremy/Mail/INBOX 0> & <mbox: /home/jeremy/Mail/spam 8>
... 644 hams & 557 spams
0.000 10.413
1.398 6.104
1.398 5.027
Training on <mbox: /home/jeremy/Mail/INBOX 0> & <mbox: /home/jeremy/Mail/spam 0>
... 644 hams & 557 spams
0.000 8.259
1.242 2.873
1.242 5.745
Training on <mbox: /home/jeremy/Mail/INBOX 2> & <mbox: /home/jeremy/Mail/spam 3>
... 644 hams & 557 spams
1.398 5.206
1.398 4.488
0.000 9.336
Training on <mbox: /home/jeremy/Mail/INBOX 2> & <mbox: /home/jeremy/Mail/spam 0>
... 644 hams & 557 spams
1.553 5.206
1.553 5.027
0.000 9.874
total false pos 139 5.39596273292
total false neg 970 43.5368043088
And results from the tokenizer that looks at all headers except Date,
Received, and X-From_:
Training on <mbox: /home/jeremy/Mail/INBOX 0> & <mbox: /home/jeremy/Mail/spam 8>
... 644 hams & 557 spams
0.000 7.540
0.932 4.847
0.932 3.232
Training on <mbox: /home/jeremy/Mail/INBOX 0> & <mbox: /home/jeremy/Mail/spam 0>
... 644 hams & 557 spams
0.000 7.181
0.621 2.873
0.621 4.847
Training on <mbox: /home/jeremy/Mail/INBOX 2> & <mbox: /home/jeremy/Mail/spam 3>
... 644 hams & 557 spams
1.087 4.129
1.087 3.052
0.000 6.822
Training on <mbox: /home/jeremy/Mail/INBOX 2> & <mbox: /home/jeremy/Mail/spam 0>
... 644 hams & 557 spams
0.776 3.411
0.776 3.411
0.000 6.463
total false pos 97 3.76552795031
total false neg 738 33.1238779174
Is it safe to conclude that avoiding any cleverness with headers is a
good thing?
TP> Do that a few times making a random split each time (it won't be
TP> long until you discover why directories of individual files are
TP> a lot easier to work -- e.g., random.shuffle() makes this kind
TP> of thing trivial for me).
You haven't looked at mboxtest.py, have you? I'm using
random.shuffle(), too. You don't need to have many files to make an
arbitrary selection of messages from an mbox.
I'll report some more results when they're done.
Jeremy
| 0 |