NOTE: use Perl; is on undef hiatus. You can read content, but you can't post it. More info will be forthcoming forthcomingly.

All the Perl that's Practical to Extract and Report

The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • but, my flying car has taken care of that.

    Seriously, talking about computers being 40 million times more powerful leads to some pretty silly conclusions.

    While raw computing power may have improved 40 million times, programming has not advanced anywhere near as much. I would hardly trust Windows (or even Unix) systems to launch programs based on what the system "thinks" I need, or trust that it really gets the gist of my natural language request. I could see this really getting out of hand.

    • >This technetcast is quite a find!

      It's pretty darned cool.

      >talking about computers being 40 Million times more powerful leads to some pretty silly conclusions.

      Take a listen to the talk. Rashid is not making the argument that improved processing speed / storage capacity magically allow us to develop cool stuff. His argument (perhaps poorly summarized by me) is that the choices that OS designers made may have been reasonable 2-3 decades ago. However, partly because of processing and storage improvements, and partly because there have been substantial genuine improvements in our programming tools (notably in language parsing, machine learning and handwriting, speech and video recognition), we can do better today.

      >I would hardly trust Windows (or even Unix) systems to do a lot of launching programs based on what it "thinks" I need, or trusting that it really gets the gist of my natural language request.

      I understand your concern, and it seems likely that a certain class of applications will require human intervention for some time to come.

      However, his claims are less outlandish than they might seem on the surface. The computer does not need to understand the "gist" of the request the way a human would understand it. The computer needs to be able to parse the request to identify the core query and determine what a reasonable reply would look like. It would then access a database of categorized content, looking for similar queries and replies. The "understanding" is "simply" the application of a set of associational rules.
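      To make "a set of associational rules" a little more concrete, here's a toy sketch (in Python, with made-up queries; a real system would use far richer features) of matching a request against a database of stored query/reply pairs by simple word overlap:

      ```python
      def tokens(text):
          """Split a string into a set of lowercase words."""
          return set(text.lower().split())

      def best_reply(request, knowledge_base):
          """knowledge_base is a list of (stored_query, stored_reply) pairs,
          standing in for the categorized content database.
          Score each stored query by Jaccard word overlap with the request
          and return the reply paired with the closest match."""
          req = tokens(request)

          def score(pair):
              q = tokens(pair[0])
              return len(req & q) / len(req | q) if req | q else 0.0

          return max(knowledge_base, key=score)[1]
      ```

      No "understanding" in the human sense is involved: the program just associates the new request with the most similar thing it has already seen.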

      As an example, Mozilla 1.3 will include a spam filter that uses Bayesian statistics to determine whether or not an email is spam, and files the email appropriately.
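      The idea behind such a filter is simple enough to sketch in a few lines. Here's a minimal naive Bayes classifier (in Python, with invented training messages; Mozilla's actual implementation is more sophisticated) that compares log-probabilities of a message under the "spam" and "ham" classes, with add-one smoothing for unseen words:

      ```python
      import math
      from collections import Counter

      def train(spam_docs, ham_docs):
          """Count word occurrences in each class of training messages."""
          spam_counts = Counter(w for d in spam_docs for w in d.lower().split())
          ham_counts = Counter(w for d in ham_docs for w in d.lower().split())
          return spam_counts, ham_counts, len(spam_docs), len(ham_docs)

      def is_spam(message, spam_counts, ham_counts, n_spam, n_ham):
          """Naive Bayes: score the message under each class and compare.
          Add-one (Laplace) smoothing keeps unseen words from zeroing out."""
          vocab = set(spam_counts) | set(ham_counts)
          spam_total = sum(spam_counts.values())
          ham_total = sum(ham_counts.values())
          # Start from the class priors.
          log_spam = math.log(n_spam / (n_spam + n_ham))
          log_ham = math.log(n_ham / (n_spam + n_ham))
          for w in message.lower().split():
              log_spam += math.log((spam_counts[w] + 1) / (spam_total + len(vocab)))
              log_ham += math.log((ham_counts[w] + 1) / (ham_total + len(vocab)))
          return log_spam > log_ham
      ```

      The filter "learns" in exactly the associational sense described above: words that have historically appeared in spam pull a message toward the spam verdict.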

      Examples that Rashid suggests include improved categorization of email based on the computer's recognition of patterns in the way you've handled email in the past. For example, if you always respond to emails from a certain address as soon as you read them, the computer might learn from your behavior and automatically assign those emails a higher priority, or even page you when one arrives. Machine learning is at the core of many of the improvements he discusses.
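      That kind of behavioral learning can also be sketched simply. Here's a toy model (in Python; the sender addresses and the 80% threshold are invented for illustration) that tracks how often you reply immediately to each sender and flags the ones you nearly always answer right away:

      ```python
      from collections import defaultdict

      class PriorityLearner:
          """Learns, per sender, how often a read email got an immediate
          reply, and flags consistently-answered senders as high priority."""

          def __init__(self, threshold=0.8):
              self.replied = defaultdict(int)  # immediate replies per sender
              self.seen = defaultdict(int)     # emails read per sender
              self.threshold = threshold

          def observe(self, sender, replied_immediately):
              """Record one read email and whether it was answered at once."""
              self.seen[sender] += 1
              if replied_immediately:
                  self.replied[sender] += 1

          def is_high_priority(self, sender):
              """A sender is high priority once the observed immediate-reply
              rate meets the threshold."""
              if self.seen[sender] == 0:
                  return False
              return self.replied[sender] / self.seen[sender] >= self.threshold
      ```

      Nothing here requires the computer to "understand" why you answer your boss faster than a newsletter; it only has to notice that you do.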