Benchmarking is fun.
I've been seeing how fast DBD::SQLite is (or can be). So far I've only benchmarked inserts. Initially I was finding it horribly slow: 100_000 inserts of random data into a table were running at about 1000 rows every 20 seconds, i.e. roughly 50 rows per second. Then I tried wrapping the inserts in transactions, committing only every 1000 rows. Suddenly it was fast. Very fast. About 1100 rows per second. This seems to be because SQLite uses the transaction journal even in non-transaction mode - each statement just gets an implied commit of its own, and with it a journal flush to disk.
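The same trick can be sketched with Python's stdlib sqlite3 module (the original benchmark was Perl/DBI, and the table name, columns, and batch size below are made up for illustration). Opening the connection with isolation_level=None gives true autocommit, which behaves like DBD::SQLite's default AutoCommit mode; explicit BEGIN/COMMIT pairs then batch the journal flushes:

```python
import os
import sqlite3
import tempfile

# Hypothetical table and schema, just for the example.
path = os.path.join(tempfile.mkdtemp(), "bench.db")
# isolation_level=None = autocommit: without an explicit BEGIN, every
# statement runs in its own implicit transaction (one journal flush each).
conn = sqlite3.connect(path, isolation_level=None)
conn.execute("CREATE TABLE t (a INTEGER, b TEXT)")

def bulk_insert(conn, rows, batch_size=1000):
    """Wrap inserts in explicit transactions, committing once per batch,
    so the journal is flushed once per 1000 rows instead of once per row."""
    cur = conn.cursor()
    cur.execute("BEGIN")
    for i, row in enumerate(rows, 1):
        cur.execute("INSERT INTO t (a, b) VALUES (?, ?)", row)
        if i % batch_size == 0:
            cur.execute("COMMIT")  # one journal flush for the whole batch
            cur.execute("BEGIN")
    cur.execute("COMMIT")  # flush the final (possibly empty) batch

bulk_insert(conn, ((i, "row%d" % i) for i in range(10_000)))
count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # prints 10000
```

In DBI terms the equivalent is setting AutoCommit => 0 (or calling $dbh->begin_work) and invoking $dbh->commit every N rows.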
I haven't benchmarked selects yet, but I'm not sure it's worth it: from within dbish, every time I try to do something against that table with 100_000 rows in it, the results come back instantaneously. Awesome.