  • The sample sizes were far too small to draw any reasonable conclusion.

    I don't think you're going to find much "evidence" unless there are large-scale studies of complex projects (which have their own issues, since you don't have good controls). This is the reason why much of "social science" isn't really science.

    Speaking personally and anecdotally, however, my hypothesis is that the reported "effectiveness" of TDD is driven largely by two factors: (a) it promotes a well-articulated description of expectations (in code) prior to writing implementation code and (b) it promotes high test coverage as a side effect of development style.
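
    For illustration, point (a) might look roughly like this -- a hypothetical slugify() spelled out as tests before any implementation exists (the module and function names here are made up for the sketch):

        use strict;
        use warnings;
        use Test::More tests => 3;

        # The expectations are stated up front; My::Slug::slugify() does not
        # exist yet, so this file fails until the implementation is written.
        use My::Slug qw(slugify);

        is( slugify('Hello, World!'),    'hello-world',   'punctuation stripped, spaces become dashes' );
        is( slugify('  Leading Space '), 'leading-space', 'surrounding whitespace trimmed, lowercased' );
        is( slugify(''),                 '',              'empty string stays empty' );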

    To your point about when TDD is not helpful, then: given that hypothesis, it's pretty clear to me that it won't necessarily help in any circumstance where expectations are either obvious (method call succeeds! woohoo!), expensive to test programmatically at the unit level of granularity (Test::MockTheWorld), or unknown (experimenting with something new). Your approach of writing tests at a higher level sounds like an attempt to address the cost of very granular testing, but I see it as still in the spirit of TDD -- just with a different cost/rigor trade-off.

    The second point -- promoting coverage -- probably could only be examined outside the lab setting. When the control-group students are told "write the tests afterwards", they probably do, because they know it's part of the study. In the real world, though, various things tend to interfere with test writing, as we (anecdotally) know.

    Anecdotally yours,

    -- dagolden

    • When I write several tests first, I do think it's sort of in the spirit of TDD, but purists might take exception. One extreme TDD exercise [gojko.net] I read about sounded very frustrating for the participant, but the person coordinating the exercise responded in the comments that he wouldn't use the "one test, one bit of code, repeat" style every time, so I'm glad that he's not overzealous about it.

      Still, I often find myself writing quite a bit of code and then coming back and writing the tests. The times I usua

      • On the "purists" comment -- it could be the normal difference between the teaching environment and the real world. In the teaching environment, the importance of "proper" process tends to get exaggerated. In the real world, with experience, we take shortcuts.

        On API testing -- I've often done "can_ok" either for a class or else for "main" to confirm automatic exports. What I've never really done (well) is confirm that no other methods exist. I might need to look into Class::Sniff or related techniques for that.
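
        For what it's worth, those can_ok checks look roughly like this (My::Module and frobnicate are invented names, just to show the shape):

            use strict;
            use warnings;
            use Test::More tests => 2;

            # Hypothetical module exporting a hypothetical function.
            use My::Module qw(frobnicate);

            # Does the class provide the methods the API promises?
            can_ok( 'My::Module', qw(new frobnicate) );

            # Did the export into main actually happen?
            can_ok( 'main', 'frobnicate' );

        What this doesn't do, of course, is prove that no unexpected methods crept in -- which is where Class::Sniff or a similar stash-walking check would come in.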

      • Your idea for detecting accidental overrides seems overly complex. How about just comparing what is meant to be overridden with what Class::Sniff->overridden says has been?
        • Class::Sniff is too heavyweight for this. It also captures the code at a snapshot in time; it doesn't tell me if the method cache has since been invalidated (MRO::Compat will let me do this with mro::get_pkg_gen).
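
          Roughly the shape I'm picturing (Some::Class is a placeholder, and this is only an untested sketch):

            use strict;
            use warnings;
            use MRO::Compat;    # provides mro::get_pkg_gen() on pre-5.10 perls

            my $class = 'Some::Class';    # placeholder for the class being audited

            # Remember the method-cache generation at the time of the audit.
            my $last_gen = mro::get_pkg_gen($class);

            # ... later: if the generation has moved, methods or @ISA changed,
            # so any cached list of overrides is stale and must be rebuilt.
            if ( mro::get_pkg_gen($class) != $last_gen ) {
                warn "$class changed since last audit; rechecking overrides\n";
            }

            # A cheap override listing without Class::Sniff: any sub defined
            # directly in $class that a parent class also defines directly is
            # an override, intentional or not. (Imported subs show up here
            # too, which is good enough for a sketch.)
            no strict 'refs';
            my @parents = @{ mro::get_linear_isa($class) };
            shift @parents;    # drop $class itself
            for my $method ( grep { defined &{"${class}::$_"} } keys %{"${class}::"} ) {
                for my $parent (@parents) {
                    print "$class overrides ${parent}::$method\n"
                        if defined &{"${parent}::$method"};
                }
            }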