Is it just me, or is everyone writing an HTTP server these days? After Apache 2 became solid, it looked like there wasn't much of interest left to do in the world of HTTP servers, and the field had been fully commodified. CPAN had a half dozen or so Perl HTTP servers, all of which were fine for entertainment but not useful for real sites. You'd hear some crank on Slashdot shouting about thttpd (I swear that wasn't me), but it didn't set the world on fire.
Then the single-threaded servers started showing up in earnest. A non-blocking I/O approach to networking is well-known to scale better than threads or processes, and it appealed to developers in a very primal way -- it's fast! Well, not so much fast, since you'd run out of bandwidth long before that mattered, but you could handle lots of open connections to slow clients without any trouble.
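The core trick is simple enough to sketch. Here's a toy illustration in Python (hypothetical code, obviously not what lighttpd or any of these servers actually do): one thread, non-blocking sockets, and a readiness loop, so a hundred slow clients cost you a hundred open sockets rather than a hundred threads.

```python
import selectors
import socket

def make_pair(sel):
    """Create a connected socket pair and register the 'server' side,
    non-blocking, with the event loop's selector."""
    server_side, client_side = socket.socketpair()
    server_side.setblocking(False)
    sel.register(server_side, selectors.EVENT_READ)
    return client_side

def pump(sel):
    """One pass of the event loop: service every socket that is ready
    right now, without ever blocking on any single client."""
    for key, _ in sel.select(timeout=0):
        sock = key.fileobj
        data = sock.recv(4096)
        if data:
            sock.sendall(data.upper())       # toy "response"
        else:                                # client went away
            sel.unregister(sock)
            sock.close()

sel = selectors.DefaultSelector()
clients = [make_pair(sel) for _ in range(3)] # three clients, one thread
for i, c in enumerate(clients):
    c.sendall(b"hello %d" % i)
pump(sel)                                    # one loop pass answers all three
replies = [c.recv(4096) for c in clients]
print(replies)
```

A real server would of course parse HTTP and buffer partial writes, but the shape is the same: the loop never waits on any one connection, so slow clients just sit in the selector costing almost nothing.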
Lighttpd quickly became a star, especially in the PHP and Ruby worlds. (Why were Rails developers looking for a faster web server rather than trying to fix Ruby's performance problems? Probably because it's a much easier problem.)
Somewhere in there, Perlbal made the scene. It's a bit of a hodgepodge of features, having been developed to suit some particular in-house project needs, but an interesting sort of glue project to fill gaps in the Perl web app deployment story.
Some of the Rails guys then decided they didn't like FastCGI and wrote their own HTTP server to replace it, called Mongrel. So far, the benchmarks I've seen suggest performance is actually worse than what they had with FastCGI, but it's still early, so maybe they'll improve that. They say they did it because the FastCGI implementations all had bugs, so maybe they don't care if it's slower anyway.
Meanwhile, people started popping up on the mod_perl list saying that they had built their own single-threaded servers. I usually ask two things when people say this, the first being what they do about operations that block, like database calls.
The only good answer I've heard to the first question so far is to ship the blocking stuff off to some separate persistent processes (e.g. mod_perl, PPerl, etc.) that you talk to over non-blocking I/O, and pick up the results when it's done. This is what Stas Bekman did with the single-threaded server he works on at MailChannels (for blocking spam). It's also what Matt Sergeant seems to be planning for his new single-threaded AxKit2 HTTP server.
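That pattern can be sketched in a few lines of Python (a hypothetical toy, not what Stas or Matt actually wrote, and Unix-only since it forks): a persistent worker process does the blocking work, and the single-threaded loop just watches the worker's socket for the answer alongside all its other file descriptors.

```python
import os
import selectors
import socket

def spawn_worker():
    """Fork a persistent worker that does the blocking work and reports
    back over a socket; return the event loop's end of that socket."""
    parent_end, child_end = socket.socketpair()
    if os.fork() == 0:                     # child: the blocking worker
        parent_end.close()
        while True:
            job = child_end.recv(4096)
            if not job:                    # parent went away
                os._exit(0)
            # pretend this is a slow spam check or database query
            child_end.sendall(b"checked: " + job)
    child_end.close()
    return parent_end

sel = selectors.DefaultSelector()
worker = spawn_worker()
worker.sendall(b"message 1")               # hand the job off, don't wait
sel.register(worker, selectors.EVENT_READ)

result = None
while result is None:
    # between select() calls the loop is free to service other clients
    for key, _ in sel.select(timeout=1):
        result = key.fileobj.recv(4096)
print(result)
```

The point is that the blocking call never happens in the event loop's process; the loop only ever does a non-blocking read on the worker's socket when select says the result is ready.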
Meanwhile, back at the Apache 2 camp, mod_proxy has picked up useful new features like basic load balancing, and people are experimenting with hybrid threaded/non-blocking I/O process models.
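The balancing support, for instance, amounts to a few lines of httpd.conf with mod_proxy_balancer (the backend addresses here are made up for illustration):

```apache
# Round-robin requests across two hypothetical mod_perl backends
<Proxy balancer://backends>
    BalancerMember http://127.0.0.1:8001
    BalancerMember http://127.0.0.1:8002
</Proxy>
ProxyPass        /app balancer://backends/
ProxyPassReverse /app balancer://backends/
```

Nothing fancy compared to a dedicated load balancer, but it's right there in the stock server now.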
It's good to see innovation happening. Sometimes I do wonder if people are chasing the right things. I find it pretty easy to make a screamingly fast web app with plain Apache and mod_perl these days, so pushing things in a direction that makes development harder (as I think single-threaded programming will be for most people) may not be the best move for all of us. High performance has an undeniable allure, though, especially for those of us who still have to convince managers that Perl is fast enough for a web site. (Duh. Maybe you've heard of Amazon?) I'll certainly be paying attention to see what Matt and everyone else cooks up.