
All the Perl that's Practical to Extract and Report


Journal of nicholas (3034)

Monday March 30, 2009
07:24 AM

Thoughts from the world(s) of Python

[ #38723 ]

Ted Leung wrote about The Pycon Summits. A couple of things interested me. Emphasis mine:

The largest discussion during that section was, of course, the roadmap for 2.x versus 3.x, and how to encourage people to move from 2.x to 3.x. It looks like there is (good) reluctance to keep the 2.x series moving forward, so there may be a long (possibly infinitely long) gap between 2.7 and 2.8. At the same time, it seems clear that 3.1, which is currently in alpha, will be the first truly usable release in the 3.x line. The goal is to get 3.1 out pretty quickly and deprecate 3.0 - an odd and one-time practice for the Python community. There was also a discussion of what could be done to help library and framework developers jump to 3.x. One concrete outcome was agreement to start work on a 3to2 tool which would allow developers to develop on 3.x and then backport to 2.x, thus helping developers to flip their effort into (in my opinion) the correct release stream.

I am also happy to see that they are going to tackle removal of Python’s global interpreter lock (GIL).

I'd already commented on the latter on p5p, but I'll repeat the relevant part:

I'm curious how the plan to get rid of the global interpreter lock pans out, and what has changed since Greg Stein tried it, and got a 60% slowdown for unthreaded code (on platforms with fast threading primitives), something that Guido was not going to accept back then.
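For readers unfamiliar with why removing the GIL is such a prize: CPython's global interpreter lock means that threads doing CPU-bound work cannot run Python bytecode in parallel, however many cores the machine has. A minimal sketch (the function name and workload are illustrative, not from any of the projects discussed):

```python
import threading

# A purely CPU-bound task: count down from a large number.
def countdown(n, results, idx):
    while n > 0:
        n -= 1
    results[idx] = "done"

# Under CPython's GIL, only one thread executes Python bytecode at a
# time; the lock is released around blocking I/O and at periodic
# checkpoints, but never for two CPU-bound threads simultaneously.
# So the two threads below produce correct results, yet take roughly
# as long as running both countdowns in a single thread.
results = [None, None]
threads = [
    threading.Thread(target=countdown, args=(2_000_000, results, i))
    for i in range(2)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # ['done', 'done']
```

This is also why a GIL-removal patch that slows down the single-threaded case is such a hard sell: the vast majority of Python code is exactly the kind of unthreaded code that would pay the cost.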

Also, I'll repeat here the interesting comment on the Unladen Swallow project plan page:

Here at Red Hat we use Python for a lot of things. What we've observed is that execution performance is not the main issue (although improving it would be greatly appreciated), rather it's the memory footprint which is the problem we most often encounter. If anything can be done to reduce the massive amount of memory Python uses it would be a huge win. I would encourage you to consider memory usage as just as important a goal as execution speed if you're going to tackle optimizing CPython.
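To make the footprint complaint concrete: in CPython every value is a full heap object carrying a reference count and a type pointer, so even "small" data is tens of bytes per item. A rough sketch of where the memory goes (exact byte counts vary by CPython version and platform):

```python
import array
import sys

# Even a small integer is a complete heap object on a 64-bit build.
int_size = sys.getsizeof(1)   # typically a few dozen bytes, not 8

# A list of a million ints pays an 8-byte pointer per slot in the
# list itself, PLUS the per-object overhead of each boxed int.
boxed = list(range(1_000_000))

# array.array stores the same values unboxed, one machine word each,
# with no per-element object headers.
unboxed = array.array("l", range(1_000_000))

# Note: sys.getsizeof reports only the container's own allocation,
# so the `boxed` figure below excludes the million int objects it
# points to -- the true gap is far larger than these numbers show.
print(int_size, sys.getsizeof(boxed), sys.getsizeof(unboxed))
```

The per-object tax, not raw execution speed, is what dominates once you hold millions of small values in memory at once.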

Memory usage is something that Perl 5.10 already addresses.

  • This is something a lot of people don't seem to get -- CPU and disk are all but free in our era of 2GHz+ chips and terabyte drives, but memory (and cache) are a huge bottleneck. The limiting factors in many programs I write (bioinformatics) are memory and sometimes address space, not CPU time or disk space; and my most frequent source of UI irritation is not waiting for the CPU, but waiting for an application that has been paged out. Yet we see man-years devoted to ever-more-complicated CPU schedulers and file compression algorithms, while language runtimes become increasingly complex and approximate LRU remains the state of the art in virtual memory management.

    • Yet we see man-years devoted to ever-more-complicated CPU schedulers and file compression algorithms, while language runtimes become increasingly complex and approximate LRU remains the state of the art in virtual memory management.

      I've seen applications where the best possible optimization was to remove a cache and recalculate a modestly cheap value; many modern processors are fast enough that throwing a few extra cycles in the pipeline is more efficient than paying the price of a processor cache miss.
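The remove-the-cache observation above is easy to test for any given workload. A hypothetical sketch of the comparison (Python's timeit can't show CPU-cache misses directly, but it does show that a dict-backed cache is not free -- the lookup itself has a cost that a cheap recomputation may undercut):

```python
import timeit

# Hypothetical memoization cache for a very cheap computation.
cache = {n: n * n for n in range(1000)}

# Time recomputing the value versus fetching it from the cache.
recompute = timeit.timeit("x * x", setup="x = 123", number=1_000_000)
lookup = timeit.timeit("cache[123]", globals={"cache": cache},
                       number=1_000_000)

# Which wins depends on the machine and interpreter; the point is
# only that caching has a measurable cost, so for cheap values it
# must be benchmarked, not assumed to help.
print(f"recompute: {recompute:.3f}s  cache lookup: {lookup:.3f}s")
```

At the machine-code level the effect the commenter describes is stronger still: a recomputation that stays in registers avoids the memory traffic, and potential cache miss, of touching the cached copy at all.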

    • I think this just goes to show how different problems need different optimizations. I mainly work on web applications, and in those instances we are mostly concerned with disk performance, CPU, and network, not memory. With 64-bit boxes all over the place, having 16GB+ of memory is pretty common. So memory is never my problem, and I appreciate all the work that's been put into making CPUs faster.

      • This really surprises me. From my admittedly outdated experience, I would have guessed that you would either be network-bound or, with too many simultaneous (slow) connections, memory-bound. What's sucking up all that CPU? Is it the kernel copying buffers around, or your app generating a bunch of dynamic content?

        • We do have a lot of IO bound issues (especially with the DB) and most of our content is dynamic. Since we try to minimize the load on the shared resources (the DB) it means we push the computation into the web servers as much as possible, so having fast executing code is nice.

          Memory is so cheap that throwing a few extra gigs into a box is a non-issue. I've never found myself thinking "wow, I wish Perl used less memory". Again, this is just my experience with the kinds of applications I develop.