  • That's the problem with software - failure really is an option. It's not like we're building bridges or hospitals.

    Case in point - today we discovered a bug in my spam-scanning software that has been there for years. Hundreds of thousands of mails have triggered it, yet we only just noticed because the failure wasn't a total showstopper. Building the software with a tool like Alloy would (probably) have caught the bug, but it would also have taken a hell of a lot longer to get the software written.
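
    For what it's worth, bugs that hide like this are very often boundary cases. Here is a hedged sketch in Perl - the scanner, the threshold, and the bug are all made up - of the kind of cheap check that can flush such a bug out years earlier, well short of anything as heavyweight as Alloy:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Test::More tests => 3;

        # Hypothetical rule: mail scoring 5.0 or above counts as spam.
        sub is_spam {
            my ($score) = @_;
            return $score > 5.0;    # bug: '>' lets mail scoring exactly 5.0 through
        }

        ok(  is_spam(7.2), 'clearly spammy mail is flagged' );
        ok( !is_spam(1.0), 'clearly clean mail passes' );
        ok(  is_spam(5.0), 'boundary score is flagged - this test fails, exposing the bug' );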
    • Depending upon what you're doing, failure may not be an option. Consider the Therac-25 [wikipedia.org], a well-known radiation therapy machine whose software bugs gave at least six patients massive radiation overdoses, three of them fatal.

      Or how about the doctors who were indicted for murder [baselinemag.com] because they didn't double-check the results of some software and had several patients die as a result?

      On a less lethal scale, tests can be used to prevent software flaws from reappearing, but if the underlying design of the software is flawed, the fixes that go in place

      • Consider the Therac-25, a well-known radiation therapy machine whose software bugs gave at least six patients massive radiation overdoses, three of them fatal.

        The Therac-25 is a really important story, but it is an outlier, and ultimately not relevant to most discussions about bugs, reliability, or catastrophic failure. There is no general lesson to draw from it, except to be extremely careful when working on a system where lives are on the line (medical, embedded, or otherwise).

        Case in point: I've worked on many online publishing systems

        • I wondered how Brooks' distinction between accidental and essential complexity maps onto this distinction between acceptable and unacceptable failures.

          Whether the failure is acceptable or not depends on the values of the clients, I think. Or does it?

          I was thinking accidental complexity comes from the problem the software is supposed to solve, but it looks like Brooks didn't see it that way: for him, essential complexity is inherent in the problem itself, while accidental complexity comes from the tools and representations we use to attack it.

          He said that the use of a high-level language frees a program from much of its accidental complexity (there's a small sketch of this at the end of this comment).

          I think I have a problem with his distinction.

          An alternative classification of sources of complexity - into those coming from the nature of software itself and those coming from the nature of the problem the software is trying to solve - might give us a better fit between acceptable/unacceptable failure and complexity.

          I think programmers have to cope with failure more than people in some other professions do.
          But I am not a programmer. I just play one on the Internet.
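
          To make Brooks' high-level-language point concrete, here is a hedged little contrast in Perl (the task and the data are invented; both halves compute the same list):

              #!/usr/bin/perl
              use strict;
              use warnings;

              my @scores = (1.2, 6.7, 5.0, 0.3);

              # Accidental complexity: index bookkeeping and manual accumulation,
              # none of which is part of the problem being solved.
              my @spam_low;
              for (my $i = 0; $i < scalar @scores; $i++) {
                  push @spam_low, $scores[$i] if $scores[$i] >= 5.0;
              }

              # Closer to the essential statement of the problem: keep the
              # scores at or above the threshold.
              my @spam_high = grep { $_ >= 5.0 } @scores;

              print "@spam_low\n";     # 6.7 5
              print "@spam_high\n";    # 6.7 5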
          • Actually, you bring up a very good point.

            In the systems I can remember at the moment, catastrophic failure related to essential complexity is intolerable, while catastrophic failure related to accidental complexity is accepted as part of the "cost of doing business". Prime example: IIS and Windows servers instead of something more solid, like VMS, Trusted Solaris, or something even more paranoid that can run a webapp. :-)

            You could make a convincing case that the inherent complexity of a computer is a part of the