NOTE: use Perl; is on undef hiatus. You can read content, but you can't post it. More info will be forthcoming forthcomingly.

All the Perl that's Practical to Extract and Report

  • Thanks, I like this and may use it for my own website.

    I have a question about the integration. In one case several links were aggregated into one post; in other cases, each link got its own post.

    What's the difference? Or, why do it both ways?
    • This is the data duplication issue that Paul mentions in the article I link to. You're seeing the same data from two different sources.

      I've been using the delicious "daily blog posting" tool to wrap up a day's links and add them as a single entry to my blog. But the planet includes both the blog feed and the raw delicious feed. So delicious links appear twice on the planet.

      I'm going to turn off the delicious daily posting tool.
  • I hope to be able to use this someday. I've been needing such a tool, but on a smaller scale than Plagger. Unfortunately I'm so busy with $NEW_JOB I'm not sure when I'll get to try it out. :)

    J. David works really hard, has a passion for writing good software, and knows many of the world's best Perl programmers.
  • Unfortunately, XML::Feed and XML::Atom don't produce valid Atom feeds [].

    There are, however, two RT tickets (#33881 [], adding accessors for some required feed types, and #29684 [], which stops 'convert' generating empty summaries) which will help you get towards a valid feed.

    I really should write up the other changes I had to make to generate a valid Atom feed []. I think convert needs help with applying an updated date to RSS feeds, for example. There are also issues with RSS and Atom taking different values for th
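    For reference, the convert under discussion is XML::Feed's method for switching feed formats. A minimal sketch (the URL is a placeholder, and per the tickets above the output may not validate until those patches are applied):

    ```perl
    use strict;
    use warnings;
    use XML::Feed;
    use URI;

    # Parse an RSS feed and convert it to Atom (placeholder URL).
    my $feed = XML::Feed->parse( URI->new('http://example.org/index.rss') )
        or die XML::Feed->errstr;

    my $atom = $feed->convert('Atom');    # returns an Atom-format XML::Feed
    print $atom->as_xml;
    ```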
    • Ah. I've been concentrating so hard on getting the HTML page valid[1] that I'd forgotten to look at the feed.

      I'll apply the patches from RT locally and wait for the next release of XML::Feed to fix the problems.

      [1] Often a pointless task on a planet as so much of the markup is out of your control.
      • My blog's (X)HTML isn't valid [] mainly because Vox use lots of attributes, and their visual HTML editor can get quite confused about wrapping spans. At some point, I'll either start running the HTML through Tidy before presenting it, or I'll edit the raw HTML before saving it to Vox.

        Similar attributes cause issues with the Atom feed, but they don't prevent validation, merely cause warnings, because Atom does at least recognise the concept of extensibility.
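        The Tidy pass mentioned above could be sketched with the HTML::Tidy module (a hypothetical snippet; the option name follows its documentation, and $entry_html stands in for the raw Vox markup):

        ```perl
        use strict;
        use warnings;
        use HTML::Tidy;

        # Placeholder for an entry's raw HTML as fetched from Vox.
        my $entry_html = '<p><span>example content</p>';

        # Normalise the markup to XHTML before presenting it on the planet.
        my $tidy  = HTML::Tidy->new( { output_xhtml => 1 } );
        my $clean = $tidy->clean($entry_html);
        ```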
  • While I understand the general pain of installing Plagger, I'd argue a little with that "Plagger dev team need to look to slim down the distro" statement. []

    Plagger's "requirement" modules are all generic, from Cache, DateTime and LWP to HTML, XML and URI fetch modules. They are all necessary to all kinds of data sources and output as well. Other "plugins" are all pure Perl, and you can just skip installing the dependencies, which are disabled by
    • Plagger's "requirement" modules are all generic, from Cache, DateTime and LWP to HTML, XML and URI fetch modules. They are all necessary to all kinds of data sources and output as well.

      You're absolutely right (of course!) The vast majority of the pre-requisites are optional. I had forgotten that.

      I still think, however, that you're bundling too much stuff together. If I was bundling Plagger I'd create a core distribution that just read feeds, combined them and published new ones. I'd then relegate all

      • Right. Splitting the distro into separate modules, or at least into bundles, was the original idea.

        The only reason we haven't done this was probably that we never got to the point of "yes, we're done", and I just didn't want to see people uploading their random Plagger plugins to CPAN that would eventually be unmaintained, abandoned, out of sync with core, of poor code quality, etc.

        It doesn't take you a minute to name a few Catalyst modules that are "out-of-date" or "was a total mistake".
  • I tried to install this today on a personal hosting account on a shared system where I have SSH access, but did not want to install the dependencies as root.

    First, I tried several alternatives for using PAR, building a packed file on my Linux laptop and uploading it to the FreeBSD server. That failed in part because PAR failed to detect all the dependencies, including some of the XML::Feed namespace modules and some of the DateTime modules. It was also overly conservative about which modules it thought n
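    One way to work around PAR's incomplete scanning is to force-include the missed modules by hand with pp's -M option (the module names and script path below are only illustrative of the kind of thing that was missed):

    ```shell
    # Pack Plagger, explicitly adding modules the dependency scanner missed.
    pp -o plagger.par \
       -M XML::Feed \
       -M DateTime::Format::Mail \
       plagger
    ```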
    • From this experience, I'd suggest that the dependency chain of XML::Feed is similar in complexity to that of Plagger.

      The dependency checker indicates that XML::Feed has a 44% chance [] of installing correctly, whereas Plagger has a 26% chance []. Of course, Plagger uses XML::Feed, so Plagger is never going to be easier to install than XML::Feed.

      I had exactly the same problem with XML::LibXML that you did. My server is a Fedora Core 6 system and I install all of my modules using rpm (buildi

  • The likelihood of success calculated by cpandeps only takes into account the mandatory dependencies (those in 'requires' and 'build_requires') and ignores those that are merely recommended - taking recommended modules into account would in fact *reduce* the chance of success. It's also worth noting that the link you give is for the dependency tree and test results when using the latest version of Perl (5.10.0) and for any operating system. It's always a good idea to change the filters to match your Perl version

    • I just raised the probability of Plagger's Cygwin deps from 0% to 30% by doing a cpan5.10.0 Feed::Find with CPAN::Reporter installed. This had 2 FAILs, though it works fine for me.