

Journal of Ovid (2709)

Thursday October 30, 2003
04:05 PM

Today's Tautology: Software is Software

[ #15478 ]

Would you move software into production if it failed to perform to spec five percent of the time? Of course you would. In fact, you probably have. This sort of thing happens all the time and we call it "a bug" (or "a feature" if we want to alienate our user base). So let me rephrase the question: would you knowingly move such software into production? Probably not. Admittedly, there's a certain ill-defined level of complexity beyond which bugs are inescapable, but a five percent failure rate is ridiculously high, particularly if the feature that fails is critical to the overall correct operation of the software.

Here's a hint to all developers doing TDD (test driven development): tests are software, too. While we admittedly do some strange things in tests (I override function definitions left and right), for the most part, good software practices apply to tests because tests are software. It doesn't matter if you're not shipping these tests to your customers (though I think you probably should). What matters is that software is software and if you write perfect code for your customers and lousy tests, you still have a substandard product.
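For instance, overriding a function definition in a test usually looks something like this (the Mailer package below is a made-up stand-in, not real application code): localize the glob, swap in a predictable stub, and let the override disappear at the end of the block.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Test::More tests => 2;

    # A toy package standing in for real application code.
    package Mailer;
    sub send_mail       { die "would really talk to an SMTP server" }
    sub notify_customer { my ($addr) = @_; return send_mail( to => $addr ) }

    package main;

    # Override the slow, unpredictable function for just this block so the
    # test stays fast and deterministic.
    {
        no warnings 'redefine';
        local *Mailer::send_mail = sub { return 1 };
        ok( Mailer::notify_customer('ovid@example.com'),
            'notify_customer() succeeds when mail can be sent' );
    }

    # Outside the block the original definition is restored.
    eval { Mailer::notify_customer('ovid@example.com') };
    like( $@, qr/SMTP/, 'original send_mail() is back in place' );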

Right now, I'm dealing with intermittent software failures in tests. As it turns out, the developers who wrote these tests knew that they would likely fail at some point, but rather than make sure the tests always worked, they accepted that they would usually work. Maybe this is fine for Windows users who accept that the occasional BSOD is an acceptable price to pay for the ability to create PowerPoint presentations, but as a developer, I am the customer for the tests they're writing. I can't pay that price because I shouldn't be expected to keep track of which tests might fail and which might succeed when I'm dealing with thousands of tests. It's like having an error log full of "uninitialized" warnings. After a while, you learn to ignore the error log.
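For what it's worth, Test::More's TODO blocks at least give you a way to flag a test that is known to fail without training everyone to ignore failures (the function and reason below are made up for illustration):

    use strict;
    use warnings;
    use Test::More tests => 2;

    ok( 1, 'the reliable parts of the suite still count' );

    # A known-intermittent test belongs in a TODO block (or, better, gets
    # fixed) so the harness reports it without failing the whole run.
    TODO: {
        local $TODO = 'remote fetch occasionally times out';
        ok( fetch_remote_data(), 'fetched data from the flaky service' );
    }

    sub fetch_remote_data {
        # stand-in for the unreliable call
        return int rand 2;
    }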

Fragile code is also bad, regardless of whether or not it's in a test. We have tests that compare stack traces in error logs. The tests assume that the trace is going to be an exact match, so if someone fixes a module in a completely different section of code, they have problems running the test suite because they may have affected a stack trace in an apparently unrelated set of tests. Right now, I'm working on testing the stack trace functionality directly; then the stack trace tests will use regular expressions rather than expecting an exact match. Once that's done, we'll have much more robust tests.
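A sketch of the direction, with invented module and file names: assert on the parts of the trace that actually matter rather than on the literal dump, so unrelated line-number changes stop breaking the test.

    use strict;
    use warnings;
    use Test::More tests => 2;

    # Imagine this came out of the error log after triggering a failure.
    my $trace = <<'END_TRACE';
    Died at lib/Foo/Bar.pm line 87.
        Foo::Bar::frobnicate('customer', 42) called at lib/Foo.pm line 131
        Foo::process() called at bin/app.pl line 9
    END_TRACE

    # Brittle: breaks whenever any unrelated line number shifts.
    # is( $trace, $expected_exact_dump, 'stack trace matches' );

    # More robust: match only the calls we care about.
    like( $trace, qr/Foo::Bar::frobnicate\(.*\) called at \S*Foo\.pm line \d+/,
          'frobnicate() is called from Foo.pm' );
    like( $trace, qr/Foo::process\(\) called at \S*app\.pl line \d+/,
          'the entry point appears at the bottom of the trace' );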

We also have several helper functions that were cut and pasted into many different test programs. These should have been refactored, but weren't, so if there's a problem, I need to grep through the codebase and fix every copy. I don't know if anyone at this company has ever done a fix of the code as large-scale as the one I'm doing, but having a test suite where the rules of good software development apply would have made this job much easier.
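Something like the following is what I have in mind (file and function names are invented): the helper lives once in a module under t/lib, and every test program loads it from there, so a fix only has to land in one place.

    # t/lib/My/TestHelpers.pm
    package My::TestHelpers;
    use strict;
    use warnings;
    use base 'Exporter';
    our @EXPORT_OK = qw(slurp_error_log);

    # The helper that used to be cut and pasted into each .t file.
    sub slurp_error_log {
        my ($path) = @_;
        open my $fh, '<', $path or die "Cannot read $path: $!";
        local $/;
        return <$fh>;
    }

    1;

    # t/some_feature.t
    use strict;
    use warnings;
    use lib 't/lib';
    use My::TestHelpers qw(slurp_error_log);
    use Test::More tests => 1;

    like( slurp_error_log('t/data/error.log'), qr/stack trace/i,
          'the error log records a stack trace' );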

  • The problem with the test that expected an exact match only becomes a bug in the test when you change the output. So, you roll back your changes, refactor the failing test to accept more generic input, make sure it still passes without your changes and then redo your changes (consider doing a quick smoke test by deliberately breaking the thing it's supposed to test to make sure it fails correctly too).

    With TDD you are holding your tests to the same standard as your code; tests should be good enough for rig
  • Absolutely!

    • While we admittedly do some strange things in tests (I override function definitions left and right), for the most part, good software practices apply to tests because tests are software.

    I couldn't agree more.

    Being an ardent advocate of TDD, I wouldn't think of writing a line of test software before I have the tests for that test software in place. Of course, the test-tests are also software and require that tests for them be written first, and those tests require that tests... well, you get the idea.

    • The nice part about TDD is that the tests test the code and the code tests the test. That is, if you write a test, make sure it fails, write the code, make sure it passes, you can avoid most of the worry.

      Writing a good test library is harder, though.
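      A toy sketch of that red-green cycle (the add() function here is an invented example, not anything from the post):

          use strict;
          use warnings;
          use Test::More tests => 1;

          # In TDD this assertion is written first.  Run it before add()
          # exists and it dies with "Undefined subroutine &main::add",
          # proof that the test really exercises the code.  Then write
          # just enough code to make it pass.
          is( add( 2, 3 ), 5, 'add() sums two numbers' );

          sub add { my ( $x, $y ) = @_; return $x + $y }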