
All the Perl that's Practical to Extract and Report

The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • Good thinking, Ovid. We rebuild most of our tables many times throughout our test suite and it takes eons. I might try to do something similar to this. Thanks for sharing the idea!
  • We have a large test suite as well (almost 18,000 tests, with a run time of about 54 minutes). I agree that running the whole thing is annoying, but it seems like you are spending too much time trying to speed up your test suite. As your application gets larger and more complex you are guaranteed to add more tests (and even if you don't add any new features, there are probably parts of the application that are under-tested, so more tests are coming from that direction too). Seems like a losing battle.

    • Here at $work, when a developer is working on a feature, he should run the tests most relevant to that feature and nothing more.

      Shameless self promotion, etc.

      Have you looked at Devel::CoverX::Covered [cpan.org]?

    • Seems like a losing battle.

      If you only get 3 - 5% improvements here and there, it probably is. (You have to work for those after the first few.)

      Ovid, I still wonder two things. What percentage of your tests eventually perform database work? How much data is in your testing database?

  • I've given some thought to how to reload data quickly after a test. In most cases, piping mysqldump output through the mysql shell is fast enough. When there's too much data for that, LOAD DATA INFILE is very fast and can be applied to the whole database using mk-parallel-dump and mk-parallel-restore. It's also possible to keep a pristine test database and copy all of its tables when you want to restore, using CREATE TABLE ... LIKE and INSERT ... SELECT. The fastest way, though, is probably to use LVM snapshots.
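    A rough sketch of the first two approaches described above; all database and table names here (pristine_db, test_db, users) are hypothetical, and the commands assume a locally running MySQL server with appropriate credentials:

    ```shell
    # Reload the test database by piping a dump of a pristine copy
    # straight through the mysql client.
    mysqldump pristine_db > pristine.sql
    mysql test_db < pristine.sql

    # Or skip the dump file and copy table-by-table from a pristine
    # template database kept on the same server.
    mysql -e "
      DROP TABLE IF EXISTS test_db.users;
      CREATE TABLE test_db.users LIKE pristine_db.users;
      INSERT INTO test_db.users SELECT * FROM pristine_db.users;
    "
    ```

    The table-copy variant avoids re-parsing a large SQL file but needs one statement pair per table, so it is usually wrapped in a loop over SHOW TABLES.
    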
  • I'm happy to say that I don't have to work with MySQL. The test suite for our Postgres-based app has several thousand tests grouped into directories, and the database is re-initialised for each directory. The initialisation originally replayed the base SQL/DDL followed by a bunch of schema patches, but that got really slow. We switched to restoring a dump of the database with pg_restore for each test directory, and that was much faster. The next step came when someone realised that the Postgres createdb command can clone an existing database from a template (createdb -T), which is faster still.
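    The two Postgres restore strategies mentioned above might look roughly like this; database names are hypothetical, and the template clone requires that nothing else is connected to the template database:

    ```shell
    # Strategy 1: dump the pristine schema+data once in pg_dump's
    # custom format, then restore it before each test directory.
    pg_dump -Fc pristine_db > pristine.dump
    dropdb --if-exists test_db
    createdb test_db
    pg_restore -d test_db pristine.dump

    # Strategy 2: keep pristine_db around and let createdb clone it
    # as a template -- a cheap file-level copy inside Postgres.
    dropdb --if-exists test_db
    createdb -T pristine_db test_db
    ```

    The template clone skips SQL replay entirely, which is why it tends to beat pg_restore once the dataset is non-trivial.
    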

  • Are you using (MySQL) strict (mode)?
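    For reference, a minimal sketch of checking and enabling strict mode server-wide (it can also be set per session, or via sql-mode in my.cnf; connection options are omitted):

    ```shell
    # Show the current SQL mode.
    mysql -e "SELECT @@GLOBAL.sql_mode;"

    # Enable strict mode so invalid or out-of-range values raise
    # errors instead of being silently truncated or coerced.
    mysql -e "SET GLOBAL sql_mode = 'STRICT_ALL_TABLES';"
    ```
    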