  • That's the problem with software - failure really is an option. It's not like we're building bridges or hospitals.

    Case in point: today we discovered a bug in my spam-scanning software that has been there for years. Hundreds of thousands of mails have triggered this bug, yet we only just noticed it because the failure wasn't a total showstopper. Creating the software with a tool like Alloy would (probably) have caught the bug, but it would also have taken a hell of a lot longer to get the software written.
    • Depending upon what you're doing, failure may not be an option. Consider the Therac-25 [wikipedia.org], a well-known radiation therapy machine which killed at least 5 patients due to a software bug.

      Or how about the doctors who were indicted for murder [baselinemag.com] because they didn't double-check the results of some software and had several patients die as a result?

      On a less lethal scale, tests can be used to prevent software flaws from reappearing (I've sketched an example at the end of this comment), but if the underlying design of the software is flawed, the fixes that go in are often patchwork messes which merely help bad systems limp along. Eventually, many systems with core flaws can turn into a big ball of mud [laputan.org]. One of my friends works at a company that has offered them a huge amount of money to stay, but key flaws in the underlying software have made it very difficult for the system to scale, and the company may go out of business because it is having trouble keeping up with demand.

      Software flaws caught at design time are usually less expensive to fix than post-deployment flaws and the larger the project, the more likely it is to have serious design flaws. Any tool which might help catch those flaws up front seems worth looking at.

      To be honest, I'm not sure what you're getting at. I'm sure plenty of programmers have horror stories about software flaws driving companies into bankruptcy (the article I point to mentions this in regards to United Airlines, but plenty of smaller companies have issues like this and I've seen it firsthand).
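
      To make the regression-test point concrete, here is a minimal sketch of the kind of test I mean, using Test::More. The My::SpamScanner module, its scan_message() function, and the folded-header bug it guards against are hypothetical stand-ins, not code from any real scanner.

          use strict;
          use warnings;
          use Test::More tests => 2;

          # Hypothetical module under test.
          use My::SpamScanner qw(scan_message);

          # Regression test: suppose messages with a folded Subject: header
          # were once skipped by the scanner entirely. Pinning the fix with
          # a test keeps that flaw from silently reappearing later.
          my $folded = "Subject: cheap\n meds\n\nbuy now\n";
          my $result = scan_message($folded);

          ok( defined $result, 'folded headers are still scanned' );
          cmp_ok( $result->{score}, '>', 0, 'obvious spam still gets a positive score' );

      Once a test like that is in the suite, a patchwork fix at least stays fixed, even if the deeper design problems remain.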

      • What I mean is that it's very easy to come up with examples where this sort of strictness of design is necessary (medical software, flight control, etc. - stuff where people's lives are at stake!), but the majority of software developers don't work in those environments. They're hacking together a tool to help put together the monthly accounts, or displaying things from a database on a web site...

        So what happens is that software developers don't get trained in formal design methodology, or if they do, they
        • I have a couple of arguments against this kind of reasoning.

          First of all, most programmers do work in environments where bugs matter - the internet. If your web-enabled CRUD app has an exploitable vulnerability, then you risk both exposing the rest of us on the internet to DDoS attacks, worms, etc. from your script-kiddie-owned host, and the fraud or damage to your reputation that can result from a more savvy attacker.

          Second, code reuse means that what looks like a non-critical bug can quickly become cata
          • The thing is, what you wrote is today's reality. So failure to catch those things clearly was an option, and the world hasn't ended. Yeah it sucks, but that doesn't mean we aren't coping with it (ok, so that is debatable too :-))

            I'm basically saying that a lot of places and jobs would like to do better, but can't afford it (again, not mostly for financial reasons - catching these bugs later is actually more expensive - but because of time-to-market pressures).
            • Yeah, that would be survivorship bias talking. When I was a kid I (ate lead paint, got shot with a bow, fell out of trees, split my head with an axe, etc.) and I survived, so clearly these things aren't harmful. ;-)

              The problem with the "I would like to, but it costs too much" argument, is that when inevitably something bad happens, the rest of us have to pay for it. Either directly in SYN floods, indirectly through the market (phishing, cc fraud, stock scams), or worst of all, forever and ever though ill tho
              • Yup exactly, it's a mess. I suspect it won't change until people's lives are at risk though, and even then clearly that isn't enough - it needs to be a publicity disaster too. Get the newspapers to make people enraged about it.

                It's funny, I almost wish we as software developers had ignored Y2K. Show the public what a real disaster looks like. Let's get it right for 2038, eh?
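
                Just so it's clear what 2038 refers to: a signed 32-bit time_t runs out one second after 03:14:07 UTC on January 19, 2038. A quick sketch, assuming nothing beyond core Perl:

                    use strict;
                    use warnings;

                    # Largest value a signed 32-bit time_t can hold.
                    my $max = 2**31 - 1;    # 2,147,483,647 seconds since the epoch

                    # A perl built with 64-bit time values prints the real dates;
                    # an old 32-bit time_t build overflows one second later - the
                    # classic wraparound back to December 1901.
                    print scalar gmtime($max),     "\n";   # Tue Jan 19 03:14:07 2038
                    print scalar gmtime($max + 1), "\n";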
      • Consider the Therac-25, a well-known radiation therapy machine which killed at least 5 patients due to a software bug.

        The Therac-25 is a really important story, but it is an outlier, and ultimately not relevant to most discussions about bugs, reliability, or catastrophic failure. There is no general lesson to learn from it, except to be extremely careful when working on a system where life is on the line (medical, embedded or otherwise).

        Case in point: I've worked on many online publishing systems

        • I wondered how Brooks' distinction between accidental complexity and essential complexity fits into this distinction between acceptable and unacceptable failures.

          Whether the failure is acceptable or not depends on the values of the clients, I think. Or does it?

          I was thinking accidental complexity comes from the problem that the software is supposed to solve, but it looks like Brooks didn't think this way.

          He said use of a high-level language frees a program from much of its accidental complexity.

          I think I ha
          • Actually, you bring up a very good point.

            In the systems I can remember at the moment, catastrophic failure related to essential complexity is intolerable. Catastrophic failure related to accidental complexity is accepted as part of the "cost of doing business". Prime example: IIS and Windows servers instead of something more solid, like VMS, TrustedSolaris or something even more paranoid that can run a webapp. :-)

            You could make a convincing case that the inherent complexity of a computer is a part of the
        • No, this is the worst-case scenario: vulnerabilities in SAP [cansecwest.com], or perhaps this: "Who turned out the lights" [cansecwest.com]. The price of catastrophic failure really can be that bad. The problem is that the same components you used in your publishing house, or to bend sheet metal, are being used everywhere else as well - and they suck!

          Now for my lovely little anecdote to debunk the rest of your point. Back before I worked for ActiveState, I was an IT consultant to a very large forestry company (who shall remain nameless
          • mock! How've you been? Where have you been hiding yourself?

            Bugs matter.

            Yes, they do, but not all bugs have equal weight. Not even security-related bugs. Do I care if a package has a known buffer overflow if it's running inside my firewall? OK, I care, but do I care as much as I would if it were in the DMZ or on a public site? Do I care enough to patch inside the firewall first, leaving a public machine wide open?

            We can trade anecdotes all day about how bugs matter or don't. In the end, thou

            • Well, I'm still kicking around Vancouver, though you might see me in London or Tokyo as well. I founded MailChannels [mailchannels.com] with another former ActiveStater, and I've been making the bits go for CanSecWest [cansecwest.com] and associated conferences for the last few years. Right now I'm reworking our conference registration system, which entailed an audit of all the bits I was planning on using. I'm not really pleased with what I found.

              While I don't disagree that perspective is necessary, obviously when limited resources are