These days most people rsync DNSBL zones rather than relying on AXFR, which is slower and uses more bandwidth. We do this ourselves with a large internal DNSBL zone that averages about 150MB (around 2.7 million entries).
Unfortunately we found that every rsync run was transferring the entire file across the wire, which gives us no benefit over AXFR except perhaps reliability.
The first thing I found was that we were creating the zone from an in-memory hash by just iterating over keys(). Since this is a hash table, the key order was effectively randomised on every run (give or take), and changing the ordering every time forces rsync to transfer the whole lot. Switching to sort(keys()) fixed that (despite my concerns about blowing out the run time or the memory used, it was all OK).
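A minimal sketch of the ordering fix (the entry format and names here are hypothetical, not our actual zone code): dumping a hash by raw key order is unstable, so two dumps of identical data can differ and rsync sees a brand-new file, while sorting the keys makes the dump byte-for-byte reproducible between runs.

```python
# Hypothetical zone entries: ip -> answer record.
entries = {"203.0.113.7": "127.0.0.2", "198.51.100.9": "127.0.0.2"}

def dump_zone(entries):
    # sorted() is O(n log n) plus one list of keys -- still cheap even
    # at millions of entries, matching what we saw in practice.
    return "".join("%s\t%s\n" % (k, entries[k]) for k in sorted(entries))

# The same data inserted in a different order still dumps identically:
shuffled = dict(reversed(list(entries.items())))
print(dump_zone(entries) == dump_zone(shuffled))  # prints True
```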
Unfortunately this didn't fix the basic problem of transferring the whole thing. Something else was wrong.
It occurred to me that since this was a sorted list of IP addresses, inserting a new IP every 20 lines or so would probably blow rsync's view of the differences out of the water. The way rsync works is to split the receiver's copy of the file into "blocks", checksum each block, and transfer those checksums over the wire; the sender then scans its copy for matching blocks and sends only the data in between. rsync chooses its block size heuristically from the size of the file (larger blocks for bigger files), so for my DNSBL it was using a block size of around 12KB, clearly big enough to mismatch every time. What I needed was a block size closer to a few lines; through trial and error I found the required size was around 400B.
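The block-size effect can be seen in a toy model of the rsync scan (a deliberately simplified sketch: a byte-sum stands in for the real rolling checksum, MD5 for the strong one, and the zone data is synthetic):

```python
import hashlib

def matched_bytes(old: bytes, new: bytes, block: int) -> int:
    """Bytes of `new` that block-matching could reuse from `old`."""
    if len(new) < block:
        return 0
    # Receiver side: a weak rolling sum plus a strong hash per fixed-size
    # block of the old copy.
    table = {}
    for i in range(0, len(old) - block + 1, block):
        chunk = old[i:i + block]
        table.setdefault(sum(chunk), set()).add(hashlib.md5(chunk).digest())
    # Sender side: slide a one-block window over the new file; on a weak
    # hit, confirm with the strong hash, count the block, and jump ahead.
    matched, i, s = 0, 0, sum(new[:block])
    while True:
        digests = table.get(s)
        if digests and hashlib.md5(new[i:i + block]).digest() in digests:
            matched += block
            i += block
            if i + block > len(new):
                break
            s = sum(new[i:i + block])
        else:
            if i + block >= len(new):
                break
            s += new[i + block] - new[i]  # O(1) roll to the next offset
            i += 1
    return matched

# Synthetic sorted zone: ~43-byte lines, one new entry inserted every
# 20 lines (the churn rate described above).
old_lines = [b"10.0.%d.%d.dnsbl.example.com. IN A 127.0.0.2\n"
             % (i // 256, i % 256) for i in range(2000)]
new_lines = []
for n, line in enumerate(old_lines):
    new_lines.append(line)
    if n % 20 == 19:
        new_lines.append(b"192.0.2.%d.dnsbl.example.com. IN A 127.0.0.2\n"
                         % (n % 250))
old, new = b"".join(old_lines), b"".join(new_lines)

for block in (12 * 1024, 400):
    pct = 100 * matched_bytes(old, new, block) / len(new)
    print("block size %5dB: %2.0f%% of the file reusable" % (block, pct))
```

With 12KB blocks every block straddles at least one inserted line, so nothing matches; with 400B blocks most of the unchanged runs between insertions are reusable.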
Now comes the problem with rsync: rsync is stupid.
The reason rsync chooses larger block sizes for larger files isn't that fewer checksums get transferred; it's to minimise the chance of a hash collision (essentially a guard against the birthday paradox). By default rsync uses a pretty small checksum per block: a 32-bit rolling checksum plus the first 2 bytes of the block's MD4. So what we've done by shrinking the block size is massively increase our chances of collisions.
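A back-of-the-envelope calculation with the numbers from this post shows why:

```python
# Shrinking the block size multiplies the number of checksummed blocks,
# while the 2-byte strong-checksum tag can only take 65,536 values.
size = 150 * 1024 ** 2           # ~150MB zone file
blocks_12k = size // (12 * 1024)
blocks_400 = size // 400
print(blocks_12k, blocks_400)    # 12800 393216
```

Nearly 400,000 blocks against a 16-bit tag guarantees repeated tags by pigeonhole alone; the 32-bit rolling checksum still has to disambiguate them, and since the sender tests it at every byte offset, false matches stop being rare.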
Luckily rsync recognises this, but not until after it has done the ENTIRE TRANSFER. Once it notices "Oh, we've had collisions, duh!", it switches to full-checksum mode, which simply transfers the full MD4 checksum for each block, basically doing the whole thing twice (the rsync output says "redoing file(0)").
And yes, there is no command-line switch to just tell it to use the large checksums up front. Lame. I did find a patch in the mailing-list archives to use the CRC plus 4 bytes of the MD4, which would also fix this, but it breaks the rsync protocol for everything else you want to rsync with, so we'd have to maintain a custom dnsbl-rsync.
It turns out it would be easier (though MUCH less reliable) to simply do a diff(1) and send that. I'm going to work on a proposal that tries the diff first, and falls back to rsync if that fails.
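The idea can be sketched like this (hypothetical, with difflib standing in for diff(1)): when both ends agree on the previous sorted zone, a plain text diff of the churn is tiny compared with re-sending the file.

```python
import difflib

# Synthetic sorted zone, one inserted entry.
old = ["10.0.%d.%d\n" % (i // 256, i % 256) for i in range(3000)]
new = list(old)
new.insert(1500, "192.0.2.1\n")

# The patch to ship is a tiny fraction of the full zone:
patch = list(difflib.unified_diff(old, new, "zone.old", "zone.new"))
print(len("".join(patch)), len("".join(new)))
```

The reliability caveat is real, though: unlike rsync, a diff is only valid against one exact previous version, which is why the fallback matters.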