NOTE: use Perl; is on undef hiatus. You can read content, but you can't post it. More info will be forthcoming forthcomingly.

All the Perl that's Practical to Extract and Report

  • The problem with a P2P blacklist is that you allow people into your network that you don't trust. Spammers would just get smart and use zombies to join the network and un-blacklist everything.

    • Yup, P2P networks have problems with concurrency: deciding which is the most up-to-date version of the list.
      Plus, as you say, there is no authentication — anyone can post a file labelled sbl-blacklist.txt that looks identical to the official file, but with completely different IP addresses.
      But they are very good at surviving denial-of-service attacks, which is a big problem for the DNS blacklists at the moment.
      • Doesn't a simple digital signature on the text file ensure its authenticity? I believe the FSF is going to do this for their software after their FTP servers were cracked. This should prevent future rogue code injections, or at least keep them from going unnoticed. The same approach should work to authenticate blacklists, no?
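The signing scheme described above can be sketched as follows. A real deployment would use a public-key scheme (e.g. a GPG detached signature), so recipients only need the publisher's public key; here HMAC with a shared secret stands in so the example runs with the standard library alone, and the key and filenames are purely illustrative.

```python
import hashlib
import hmac

PUBLISHER_KEY = b"demo-secret"  # hypothetical key, for illustration only


def sign_blacklist(data: bytes) -> str:
    """Publisher side: produce a detached signature for the list."""
    return hmac.new(PUBLISHER_KEY, data, hashlib.sha256).hexdigest()


def verify_blacklist(data: bytes, signature: str) -> bool:
    """Recipient side: accept the file only if the signature matches."""
    expected = hmac.new(PUBLISHER_KEY, data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)


blacklist = b"192.0.2.1\n192.0.2.7\n198.51.100.23\n"
sig = sign_blacklist(blacklist)

assert verify_blacklist(blacklist, sig)        # untampered copy passes
tampered = blacklist.replace(b"192.0.2.1", b"10.0.0.1")
assert not verify_blacklist(tampered, sig)     # altered copy is rejected
```

With a detached signature published alongside the file, a tampered copy fails verification no matter which peer served it.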

        • If I were going to be evil, instead of serving a signed file I'd just join the network with as many machines as possible and run a distributed honeypot. Making the spam check time out would work just as well as returning bad data. You could also run a DDoS against the root nodes.
          • Wasn't freenet designed to withstand just such an attack?

            I think the idea is to not have a root node — just a single public key you can validate the text file against. The blacklist could be distributed and mirrored using a variety of technologies — http, ftp, nntp, irc, konspire, email, jabber, kazaa, freenet, etc. It seems not just possible but pretty straightforward to have the original blacklist be periodically injected into the 'net, rather than depending on a particular server being up and running.
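The transport-agnostic distribution idea above can be sketched like this: a client pulls the list from any available mirror and trusts the first copy that verifies against the publisher's key. Mirrors are simulated as in-memory blobs, and HMAC with a shared demo secret stands in for a real public-key signature; all names are illustrative.

```python
import hashlib
import hmac

PUBLISHER_KEY = b"demo-secret"  # hypothetical, illustration only


def signature(data: bytes) -> str:
    return hmac.new(PUBLISHER_KEY, data, hashlib.sha256).hexdigest()


official = b"192.0.2.1\n198.51.100.23\n"
official_sig = signature(official)

# Three "mirrors": one down, one serving a forged file, one honest.
mirrors = [
    None,                            # unreachable mirror
    (b"10.0.0.1\n", official_sig),   # forged content, signature won't match
    (official, official_sig),        # honest mirror
]


def fetch_verified(mirrors):
    """Return the first mirrored copy whose signature checks out."""
    for copy in mirrors:
        if copy is None:
            continue                 # mirror down; try the next one
        data, sig = copy
        if hmac.compare_digest(signature(data), sig):
            return data              # any transport works once it verifies
    return None


assert fetch_verified(mirrors) == official
```

Because verification depends only on the key and not on the channel, it makes no difference whether the bytes arrived over http, nntp, or a P2P network.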

            • Blacklists are dynamic: new IP addresses get added, some get delisted. A single authoritative source is required; otherwise you have big problems with version control.
              Signed incremental updates are the way to go, but I'm still unsure how you authenticate the identity of a signer without blindly believing Verisign — which would then leave Verisign as a single point of failure in the system.
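A minimal sketch of the signed-incremental-update idea: each update carries a sequence number plus add/delete entries, and clients apply updates strictly in order, so every replica converges on the same list without a live connection to one authoritative server. As above, HMAC stands in for a real public-key signature, and the update format is an assumption invented for illustration.

```python
import hashlib
import hmac
import json

PUBLISHER_KEY = b"demo-secret"  # hypothetical shared key, illustration only


def sign(update: dict) -> str:
    """Canonicalise the update and sign it."""
    blob = json.dumps(update, sort_keys=True).encode()
    return hmac.new(PUBLISHER_KEY, blob, hashlib.sha256).hexdigest()


def apply_update(blacklist: set, version: int, update: dict, sig: str) -> int:
    """Apply one signed delta in-place; return the new version number."""
    if not hmac.compare_digest(sign(update), sig):
        raise ValueError("bad signature")        # forged or corrupted update
    if update["seq"] != version + 1:
        raise ValueError("out-of-order update")  # a delta was missed
    blacklist |= set(update["add"])
    blacklist -= set(update["delete"])
    return update["seq"]


ips = {"192.0.2.1"}
version = 0
u1 = {"seq": 1, "add": ["198.51.100.23"], "delete": []}
u2 = {"seq": 2, "add": [], "delete": ["192.0.2.1"]}
version = apply_update(ips, version, u1, sign(u1))
version = apply_update(ips, version, u2, sign(u2))
assert ips == {"198.51.100.23"} and version == 2
```

The strict sequence check is what gives every replica the same view: a client that misses a delta refuses to apply later ones until it resyncs, which addresses the version-control worry without requiring the origin server to be always reachable.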