  • What we need is less bad not-quite-XML and more good XML. For generating good XML, the libraries that guarantee correct output are the way to go. My impression is that they are easier to use than the not-so-nice ones because you don't have to worry about screwing up. They do things like always escaping strings, always using UTF-8, handling namespaces, and complaining about improperly nested tags (a short sketch follows below).

    If you have to generate not-quite-XML or bad XML, then you have a bigger problem. I am no
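
    To make the "guaranteed correct" point concrete, here is a minimal sketch using XML::Writer (named purely as an illustration; the comment above doesn't single out a specific module). It escapes character data for you, writes UTF-8, and croaks rather than emit mismatched tags.

        use strict;
        use warnings;
        use XML::Writer;
        use IO::File;

        my $output = IO::File->new('greeting.xml', 'w') or die "open: $!";

        # DATA_MODE/DATA_INDENT add newlines and indentation; ENCODING forces UTF-8.
        my $writer = XML::Writer->new(
            OUTPUT      => $output,
            DATA_MODE   => 1,
            DATA_INDENT => 2,
            ENCODING    => 'utf-8',
        );

        $writer->xmlDecl('UTF-8');
        $writer->startTag('greeting', lang => 'en');
        $writer->characters('Fish & chips <tonight>');  # & and < escaped automatically
        $writer->endTag('greeting');
        $writer->end();                                 # croaks if any tag is left open
        $output->close();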

      • But there's plenty of bad XML out there already, and there are programmers who have no choice but to implement it, particularly if it's a third-party requirement. That bad XML usually causes plenty of problems on its own. Why add even more by creating yet another hand-rolled module which may or may not do what you want? Programmers in this unfortunate situation should at least be able to get the job done without wasting time reimplementing something.

      The good thing about Data::XML::Variant [cpan.org] is that it can also produce perfectly valid XML. Thus, if you are lucky enough to be able to migrate from malformed XML to good XML, you already have code in place that can do it. Yes, some would argue that you should then switch to modules which validate the XML, but that argument tends to ignore the cost and time constraints most projects are under.
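
      If and when that migration happens, checking the generated output is cheap. A minimal sketch, assuming XML::LibXML is installed (the comments above don't prescribe a particular validator, and Data::XML::Variant's own interface isn't shown here); 'order.xsd' is a hypothetical schema file:

          use strict;
          use warnings;
          use XML::LibXML;

          # $xml is whatever your generator produced.
          my $xml = '<order id="42"><item>socks</item></order>';

          # Well-formedness check: parse_string dies on malformed input.
          my $parser = XML::LibXML->new;
          my $doc    = eval { $parser->parse_string($xml) };
          if ($@) {
              warn "Output is not well-formed XML: $@";
          }
          else {
              # Optional schema validation (requires libxml2 schema support).
              my $schema = XML::LibXML::Schema->new( location => 'order.xsd' );
              eval { $schema->validate($doc) };
              warn "Schema validation failed: $@" if $@;
          }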