Original article here: http://www.ekinoderm.com/wordpress/2008/08/inexact-is-good-enough
My reply, slightly edited:
Good thought, good article.
I want to comment on comp sci a bit, though. This is a field that's been badly watered down lately in all but a few schools. The way I was taught, everything is a trade-off. The trade-off between time and space is the most discussed. Programming something like the Atari 2600 is interesting, where you have very little CPU power but even less space -- 128 bytes of RAM -- and so are forced to optimize for space. Good comp sci curriculums teach about NP-complete problems and algorithms, where a perfect solution for even a normal-sized set of data is unfeasible, and about the approximate ("pretty good") solutions that are workable. Students learn -- or should learn -- that various constraints are wins. The more you know about your data or the problem space, the more you can optimize. Closed polygons versus open ones; directed versus undirected graphs; and so on. These are approximations in that a less generalized algorithm is an approximation of the more generalized algorithm. Kind of.
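To make the "pretty good" point concrete, here's a minimal sketch (Python used purely for illustration) of a classic approximation algorithm: the greedy 2-approximation for minimum vertex cover. Finding the smallest cover is NP-complete, but settling for "at most twice optimal" buys a fast, trivially simple loop.

```python
def greedy_vertex_cover(edges):
    """2-approximation for minimum vertex cover:
    repeatedly pick any uncovered edge and take BOTH endpoints.
    The picked edges form a matching, so any optimal cover must
    contain at least one endpoint of each -- hence at most 2x optimal."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# A 4-cycle: the optimal cover has 2 vertices; greedy returns at most 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cover = greedy_vertex_cover(edges)
assert all(u in cover or v in cover for u, v in edges)
```

One pass over the edges, no search at all -- exactly the kind of trade the exact algorithm can't make.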
Just looking at the history of computer gaming, the breakthroughs usually came about when someone was able to do something in an approximate fashion. Flight sims started on 8-bit machines with cos/sin tables of 256 entries of 8 bits each, which is terrible precision, but some clever chap at Sublogic thought it might just be good enough to make something kind of fly. And he was right. 8 bits isn't many, but most 8-bit games have tables somewhere in which the bytes are broken down further, assigning 4 bits to the whole part and 4 bits to the fractional part of a number, and similar tricks. The programmers spend a lot of time thinking about how much precision they really need to eke by. Early interesting enemy AIs were finite state machines. The ghosts in Pac-Man have simple personalities. I've been playing a bit of Blaster Master on the NES lately, and the little routines the various types of baddies go through are quite impressive. Wolfenstein 3D used a drastic simplification of the 3D model called "raycasting," which is piled full of constraints and trade-offs. But people had been dabbling with first-person 3D long before the PC, and probably before the C=64, where I first saw it.
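The 256-entry trig table and the 4-bits-whole/4-bits-fraction split mentioned above can be sketched in a few lines (Python standing in for the original assembly; the table size and scaling here are assumptions in the spirit of those games, not the actual Sublogic values):

```python
import math

# 256-entry sine table, each entry scaled to a signed 8-bit range
# (-127..127). Angles are a single byte: 256 steps per full circle.
SIN_TABLE = [round(127 * math.sin(2 * math.pi * i / 256)) for i in range(256)]

def sin8(angle):
    """Table lookup instead of computing sin -- one indexed read."""
    return SIN_TABLE[angle & 0xFF]

# 4.4 fixed point: high nibble = whole part, low nibble = sixteenths.
def to_fixed44(x):
    return int(round(x * 16)) & 0xFF   # wraps like a real 8-bit register

def fixed44_to_float(f):
    return f / 16.0

# Multiplying two 4.4 numbers yields an 8.8 result; shifting right by 4
# brings it back to 4.4. Precision loss is the price of fitting in a byte.
a, b = to_fixed44(2.5), to_fixed44(1.25)
product = (a * b) >> 4
```

Here 2.5 * 1.25 happens to come out exact (3.125 is a multiple of 1/16); most values don't, and deciding where that's acceptable is exactly the precision budgeting described above.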
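A finite-state-machine "personality" of the kind the ghosts use can be sketched just as briefly. This is illustrative only -- the state names, timings, and targeting are made up, not the actual arcade logic:

```python
class Ghost:
    """Toy FSM enemy: alternates between retreating to a home corner
    ("scatter") and pursuing the player ("chase"). A different
    personality is just a different target function per state."""

    def __init__(self):
        self.state = "scatter"
        self.timer = 0

    def update(self):
        # One tick: advance the timer and switch state on (made-up) timeouts.
        self.timer += 1
        if self.state == "scatter" and self.timer > 7:
            self.state, self.timer = "chase", 0
        elif self.state == "chase" and self.timer > 20:
            self.state, self.timer = "scatter", 0

    def target(self, player_pos, home_corner):
        return player_pos if self.state == "chase" else home_corner
```

A handful of states, a timer, and a per-state target: cheap enough for a 1 MHz CPU, yet it reads as intent to the player.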
If you're a smart guy playing, trying to see if something unlikely is in fact feasible, you're coming at all of this very differently from a workaday grunt. The smart guy playing is going after what's barely possible; the grunt uses the inherent difficulty of problems to excuse himself from being expected to excel, or even take the assignment seriously.
To say that people used to use slide rules to do amazing things is a bit of a simplification. The interesting things that were done were done with slide rules. But most people were using mechanical adding machines, which required less imagination to use, had even less precision, and offered nothing resembling logarithms. Most people were doing very mundane things with tried-and-true technologies that were simple to operate and quite limited in potential. And that hasn't changed. I don't expect it will.
P.S.: Sorry about the off-topic spam that has made it from me onto the front page in the past. The "Post: Pay no attention to my musings" option doesn't seem to do what I expected it to. So I'll try to keep stuff here at least somewhat related to Perl.