  • I have a suggestion, which is to try implementing at least some of the benchmarks over at The Computer Language Benchmarks Game (http://shootout.alioth.debian.org/) in Perl 6. Is there anyone around here who is interested in trying this out? Perhaps we could make a challenge out of it or something :-)
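
    Just to give a flavour of what an entry might look like, here is a rough, hypothetical sketch of one of the small recursive tasks (Ackermann) in Perl 6 -- the argument handling and output format are guesses, not a tuned entry:

        # hypothetical sketch of an "ackermann"-style shootout task in Perl 6
        sub ack(Int $m, Int $n) {
            return $n + 1                       if $m == 0;
            return ack($m - 1, 1)               if $n == 0;
            return ack($m - 1, ack($m, $n - 1));
        }

        # made-up default; a real entry would take N on the command line
        my $n = (@*ARGS[0] // 3).Int;
        say "Ack(3,$n): ", ack(3, $n);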

    I'd really like to see this happen. There's already a shootout/ directory in the perl6-examples repository [1] -- I'll gladly give commitbits to anyone who wants to work on this or anything else in that repo. Just email me with your github id. If we decide that the benchmarks deserve their own repository, I can set that up also.

    I'm also going to see if I can find a place where I can regularly run and post benchmark timings for rakudo, similar to the way we perform daily updates on Rakudo's spectest progress. I've been wanting to report how long it takes to run the spectests, but I need a stable platform to do that from.
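
    A rough sketch of the kind of timing wrapper this could use (hypothetical; the command shown is just a stand-in for whatever gets timed):

        my $start = now;                        # Instant before the run
        my $proc  = run 'make', 'spectest';     # stand-in command being timed
        my $took  = now - $start;               # Duration in seconds
        say "exit code: {$proc.exitcode}, wall-clock: {$took.Int} seconds";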

          [1] http://github.com/perl6/perl6-examples/

    Thanks!

    Pm

    • I'll see what I can do.

      Some of the benchmarks are inspired by bioinformatics, and I might have a go at those since I am familiar with that kind of problem (I guess this is fairly typical for "non-programmer" programmers: it is far easier to solve tricky problems when they relate to concepts you already know, such as biology and DNA data in my case).
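
      For instance, a stripped-down reverse-complement (one of the DNA-flavoured tasks) might look roughly like this in Perl 6 -- a simplified sketch that only handles the four plain bases and skips the 60-column output wrapping the official task requires:

          sub revcomp(Str $seq) {
              # complement each base, then reverse the whole sequence
              $seq.trans('ACGTacgt' => 'TGCAtgca').flip
          }

          my ($header, $seq) = '', '';
          for $*IN.lines -> $line {
              if $line.starts-with('>') {
                  say "$header\n{revcomp($seq)}" if $header;   # flush previous record
                  ($header, $seq) = $line, '';
              }
              else {
                  $seq ~= $line;                               # accumulate sequence lines
              }
          }
          say "$header\n{revcomp($seq)}" if $header;           # flush the last record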

      However, it will need to wait for a few months since I am hoping to finish my thesis in July :-)

      I'll definitely continue loo…
    • I am interested in coding benchmarks for rakudo as well.
    • I would be very interested in seeing regular outputs and graphs from such a thing, and would put some effort into writing code as tuits become available.

      One thing to consider is having more than one way to do it -- i.e. including multiple versions of a benchmark in the regular runs. This would show when alternative spellings like "$var += 1" and "$var++" diverge in performance and when they converge.
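
      A hypothetical sketch of what such a side-by-side run could look like (the time-it helper and the iteration count are made up for illustration):

          # tiny helper: run a block once and report wall-clock time
          sub time-it(Str $name, &code) {
              my $start = now;
              code();
              say "$name: {now - $start} s";
          }

          my $iterations = 100_000;

          time-it '$var += 1', { my $var = 0; $var += 1 for ^$iterations; };
          time-it '$var++',    { my $var = 0; $var++   for ^$iterations; };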