

Journal of dws (341)

Monday August 19, 2002
03:14 PM

Bayesian filtering code to play with

[ #7176 ]

Several people have asked, so here (link is forever 404, sorry) is a partial implementation of Paul Graham's Bayesian spam filtering algorithm, suitable for experimenting with. I'm currently reworking the final filter so that it doesn't slurp the entire token weighting table into memory (which is great when you want to run it against an mbox, but is overkill for testing a single email).

See the README for some ideas on how to expand this into something usable.
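For reference, the heart of Graham's scheme is the combining step: rank a message's tokens by how far their spam probability sits from neutral, keep the 15 most interesting, and combine those into a single score. A minimal Perl sketch, assuming a %weights hash (token => spam probability) has already been built; the names here are illustrative, not from the posted code:

    sub spam_probability {
        my ($weights, @tokens) = @_;

        # Consider each distinct token once.
        my %seen;
        @tokens = grep { !$seen{$_}++ } @tokens;

        # Keep the 15 tokens whose weights are furthest from the
        # neutral 0.5, per Graham's paper.
        my @interesting =
            (sort { abs($weights->{$b} - 0.5) <=> abs($weights->{$a} - 0.5) }
             grep { exists $weights->{$_} } @tokens)[0 .. 14];

        my ($prod, $inv_prod) = (1, 1);
        for my $t (grep { defined } @interesting) {
            $prod     *= $weights->{$t};
            $inv_prod *= 1 - $weights->{$t};
        }

        # Combined probability that the message is spam.
        return $prod / ($prod + $inv_prod);
    }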

  • Similar code... (Score:3, Insightful)

    by Matts (1087) on 2002.08.20 2:01 (#11960) Journal
    I also posted very similar code [sourceforge.net] to the SpamAssassin-Talk mailing list yesterday, in case anyone is interested in a slightly different implementation of the algorithm.

    I use my own mail parser class that doesn't hold the whole message in memory (it uses temp files instead) and decodes all the MIME stuff for you. Might be worth checking out too.

    We'll probably plug this into SA 2.41+ or SA3 (whichever comes first).
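    (For the curious, the temp-file trick is roughly the following; a hypothetical sketch using File::Temp, not Matts's actual parser class:)

        use File::Temp ();

        # Spool the incoming message to disk instead of holding it
        # in memory, then rewind and process it a line at a time.
        my $tmp = File::Temp->new();
        while (my $line = <STDIN>) {
            print {$tmp} $line;
        }
        seek $tmp, 0, 0;
        while (my $line = <$tmp>) {
            # decode MIME parts / tokenize here, line by line
        }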
    • If you've got SpamAssassin covered, I'll keep going on a Mail::Audit plugin (which also handles MIME). I've reworked the algorithm to scan the weighting file after tokenizing the message body. No more sucking everything into memory.

      By the way, a simple tokenizer tweak cut my false negatives in half. I only force a token to lowercase if at least one character is already lowercase. This has the effect of keeping separate (high) weights for "MILLION" and "EMAILS", distinct from "million" and "emails", which have lower weights.
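      (A quick sketch of both tweaks, assuming a tab-separated token/weight file; the file and variable names are made up for illustration:)

          # Only fold a token to lowercase if it already contains a
          # lowercase letter, so "MILLION" keeps its own weight.
          sub normalize {
              my ($token) = @_;
              return $token =~ /[a-z]/ ? lc $token : $token;
          }

          # Scan the weight file after tokenizing, keeping only the
          # entries for tokens that appear in this message.
          my %wanted = map { $_ => 1 } @message_tokens;
          my %weights;
          open my $fh, '<', 'weights.txt' or die "weights.txt: $!";
          while (my $line = <$fh>) {
              chomp $line;
              my ($token, $w) = split /\t/, $line;
              $weights{$token} = $w if $wanted{$token};
          }
          close $fh;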
      • I've reworked mine to use SQLite. Seems to work well. Database is 17MiB though, so I think I need to investigate other options too.

        The upper/lower case thing didn't make one squat of a difference for me.
        • Are you sure you're tokenizing consistently? I got bit by that once.

          When querying, are you going after tokens one at a time, batching up requests using IN (), or trying to get them all at once using a JOIN?
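          (For anyone comparing those options: with DBI and DBD::SQLite, the IN () batching looks roughly like this, assuming an open handle $dbh and a weights(token, weight) table; both names are made up for illustration:)

              # Fetch the weights for all of a message's tokens in
              # one statement via an IN (...) placeholder list.
              my $in = join ',', ('?') x @tokens;
              my $sth = $dbh->prepare(
                  "SELECT token, weight FROM weights WHERE token IN ($in)"
              );
              $sth->execute(@tokens);

              my %weights;
              while (my ($token, $w) = $sth->fetchrow_array) {
                  $weights{$token} = $w;
              }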