  • I agree that you should try your hardest to at least have *some* sort of regression testing in place.

    But the problem with the above approach is that it's fragile. I know, because one of the codebases I work with has a *lot* of tests like that. And they're very noisy. So noisy that they tend to get ignored. And in most cases where a test complains, it's some environmental or transient issue, or a false positive (i.e. someone has made a change that breaks the tests but hasn't introduced a bug).

    This is compounded

    • by Ovid (2709) on 2008.07.21 5:34 (#63972)

      Not all of that is necessarily a symptom of using the approach you suggest (there are a lot of other issues with the test framework we're using), but I think you perhaps underestimate how hard it can be to get it right.

      I can assure you that I've been down this road many, many times. For most companies I start with, I almost immediately find out that if they have a test suite, it's usually broken or limited in very fundamental ways. If they don't have a test suite, they always have a wide variety of excuses, few of which really hold water.

      You are correct that the approach I outline can be problematic at times, but such integration tests give you two advantages that unit tests do not:

      1. You can write them against the code as it stands, whereas unit tests are much more fragile to introduce into a tightly coupled legacy codebase.
      2. You get decent coverage of user expectations, something unit tests can't give you.

      For one company I worked with, I couldn't refactor code safely because there was no test suite, but when I found myself facing 500-line functions with global variables galore, falling back on Test::WWW::Mechanize allowed me to have a bit of confidence that at least I wasn't doing anything catastrophic.
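
      Something like the rough sketch below is all I mean; the URL, form name, and page text are made up for illustration, but the shape is right:

        use strict;
        use warnings;
        use Test::More tests => 4;
        use Test::WWW::Mechanize;

        # Point the tests at whatever instance of the app is handy;
        # this URL is only a placeholder.
        my $mech = Test::WWW::Mechanize->new;

        # Can we even load the front page?
        $mech->get_ok( 'http://localhost:3000/', 'front page loads' );
        $mech->content_contains( 'Welcome', 'front page greets the user' );

        # Drive the app the way a user would: fill in a form, submit it,
        # and check that the result page looks sane.
        $mech->submit_form_ok(
            {
                form_name => 'search',
                fields    => { query => 'widgets' },
            },
            'search form submits'
        );
        $mech->content_like( qr/\d+ results? found/i, 'results page reports a count' );

      Even something that coarse will catch a broken template or a fatal error in one of those 500-line functions long before a user does.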

      Sure, there are environmental issues, false positives, renamed forms, etc., but that's the nature of having a legacy code base with few (if any) tests. In the long run, if there's not a strong commitment to testing and to solving these problems, they'll remain. Ironically, they'll then be cited as the reason why testing isn't done. The problem is very circular in nature and, with all due respect, I've been testing for too many years to agree with your argument: with any significant codebase you can't safely refactor first and then write the tests afterwards. (Refactoring will happen anyway, and I do agree that writing tests after the code exists is better than refactoring with no tests at all.)

      • Yeah, I think you're right, it's about a commitment to testing. Using integration tests is fine as an interim solution, but it must be used as a stepping stone to writing a real test suite (which to my mind is based primarily on unit tests). Otherwise you don't gain a lot.
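
        By "real test suite" I mean, roughly, plain unit tests against pieces of logic that have been pulled out of the big subs — something like the sketch below, where the module and function names are only illustrative:

          use strict;
          use warnings;
          use Test::More tests => 3;

          # Once a chunk of one of those 500-line functions has been extracted
          # into its own module, it can be tested directly, with no web server
          # or browser simulation involved at all.
          use_ok('My::App::Pricing');

          is( My::App::Pricing::total_with_tax( 100, 0.20 ), 120, 'tax is applied to the subtotal' );
          is( My::App::Pricing::total_with_tax( 0,   0.20 ), 0,   'zero subtotal stays zero' );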

        Of course, it's easy for the PHB to say "we've already got tests, haven't we?" But that's another issue, really.