use Perl

All the Perl that's Practical to Extract and Report
Matts (1087)


I work for MessageLabs in Toronto, ON, Canada. I write spam filters, MTA software, high performance network software, string matching algorithms, and other cool stuff mostly in Perl and C.

Journal of Matts (1087)

Friday January 24, 2003
05:12 AM

Reason for AxKit

[ #10161 ]

I keep forgetting to mention this, but I want to make sure I write it down *somewhere*...

One of the best reasons for developing a site in AxKit XSP is that it totally eradicates XSS bugs. No need to check what you output - it doesn't matter - there's no way to bypass the strict output checking that XML gives you.
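A minimal sketch of why that works, assuming nothing about AxKit's internals: XSP builds the response as an XML document, so user input is emitted as character data and escaped by the serializer. The hand-rolled escaper below (`xml_escape` is my name for illustration, not an AxKit API) shows why injected markup comes out inert:

```perl
# Hand-rolled illustration of XML output escaping -- not AxKit code.
sub xml_escape {
    my ($text) = @_;
    $text =~ s/&/&amp;/g;    # ampersands first, or we'd double-escape
    $text =~ s/</&lt;/g;
    $text =~ s/>/&gt;/g;
    $text =~ s/"/&quot;/g;
    return $text;
}

my $evil = '<script>alert(document.cookie)</script>';
print xml_escape($evil), "\n";
# prints: &lt;script&gt;alert(document.cookie)&lt;/script&gt;
```

The browser renders that as literal text, so the injected script never executes.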

This is all TRACE notwithstanding ;-)

The other thing I found out about TRACE is that it totally bypasses any Apache handler installed (mod_perl or otherwise). This seems like a bug to me - if I could handle TRACE in axkit I could disable it very easily. Bah.
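Lacking a handler hook, the usual workaround at the time was to refuse TRACE earlier in the request cycle. A sketch with mod_rewrite (assumes mod_rewrite is loaded; this is not mod_perl or AxKit configuration):

```apache
# Refuse TRACE requests with 403 Forbidden before any handler runs
RewriteEngine On
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
```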

  • Do I get it right that with AxKit it is impossible to inject new tags into the output? If yes, then it makes creating XSS holes much harder, but not impossible, because XSS is not just the injection of dangerous tags into output. Imagine for example a public web service that lets people exchange URLs (let's call it "Shared bookmarks"). An obvious possible XSS hole is not verifying that the scheme part of a submitted URL is free of dangerous schemes like 'javascript'. I doubt AxKit can automatically protect from this kind of attack.

    Ilya Martynov

    • You're absolutely right, and I'm no security expert, so: just how much damage can you do with the javascript: scheme? Is it limited to one function call, or can you chain lots of javascript into one method?
      • At least with Mozilla you can chain several function calls. BTW there are other types of XSS attacks which, as I understand it, AxKit cannot protect against, like arbitrary user input passed into response HTTP headers.

        Ilya Martynov

        • arbitrary user input passed into response HTTP headers.

          Mind explaining how this works? I still don't know enough about XSS, but it's a technique that has fascinated me ever since I watched Jeffrey Baker demo it at the Open Source Conference 2.5 years ago.
          • Take this Perl CGI for example:

            use CGI;

            my $cgi = CGI->new;
            # print headers
            print "Content-type: text/html\n";
            print "Set-Cookie: cookie=" . $cgi->param('cookie') . "\n";
            print "\n";
            # print content
            print "<html>.....</html>";

            An attacker can pass, as the value of the "cookie" parameter, something like "\n\n<script>....</script>", so this CGI ends up printing:

            Content-type: text/html
            Set-Cookie: cookie=

            <script>....</script>

            <html>.....</html>

            See? Since arbitrary user input goes straight into the headers, the attacker can end the header block early and inject script into the page body.


            Ilya Martynov
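The standard fix for the hole above is to strip (or reject) CR and LF in anything that goes into a header, so a parameter can never terminate the header block early. A sketch (`header_safe` is a made-up helper, not a CGI.pm function):

```perl
# Strip CR/LF so a user-supplied value cannot end the header block early.
sub header_safe {
    my ($value) = @_;
    $value = '' unless defined $value;
    $value =~ s/[\r\n]//g;
    return $value;
}

# In the CGI above, the fix is one call at print time:
#   print "Set-Cookie: cookie=" . header_safe($cgi->param('cookie')) . "\n";

print header_safe("evil\r\n\r\n<script>bad()</script>"), "\n";
# prints: evil<script>bad()</script> -- the markup stays inside the header value
```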

      • I don't think it's limited to one call, but even if it is you always have Good Ol' eval() there so it don't make much of a diff I'm afraid.


        -- Robin Berjon

        • OK, so given any link you have to check that it starts with /^(https?|ftp):/. Sounds fairly straightforward (though I don't think I do any checking in the AxKit wiki).

          Still, I think overall that means you've got a lot less coding to do with AxKit than with other (inferior ;-) solutions.
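That check could look like the following; `link_is_safe` is an illustrative name and the pattern is the one quoted above:

```perl
# Whitelist check: accept only links whose scheme is http, https or ftp.
sub link_is_safe {
    my ($url) = @_;
    return $url =~ /^(https?|ftp):/ ? 1 : 0;
}

print link_is_safe('')    ? "ok\n" : "rejected\n";   # ok
print link_is_safe('javascript:alert(1)') ? "ok\n" : "rejected\n";   # rejected
```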
          • AxKit has one pro re XSS: it will be more likely to blow up, given some treacherous charset, than other solutions will be, especially if you charconv from UTF-8 to Latin-X at the end. Apart from that, it's prolly just as open as anything that deals with user-provided content.

            I'm not sure there's much to protecting the Wiki. A Wiki is, by definition, well, XSS enabled :) It pretty much works based on trusting other people. At any rate if you want to protect against javascript URLs, I'd check on !/


            -- Robin Berjon

            • I'm not sure there's much to protecting the Wiki

              Very, very, very wrong. The security model of client-side scripting is that there is a single trust zone per hostname. If you have, say, a properly coded ecommerce shop and a wiki with XSS bugs sitting on the same domain, then the ecommerce shop is also vulnerable. An attacker only needs to lure an ecommerce shop user to the part of the wiki with the XSS bug and, bummer, the user's auth cookie is known to the "bad" guy.


              Ilya Martynov

              • Oh yeah, that I know. And I must say I haven't seen many Wikis that were on the same domain as an e-commerce site; it would be quite dangerous imho. There are so many ways to get JS code to run (URL, cookie-munging, on* event handlers, redirects, script elements...).


                -- Robin Berjon

            • I'd explicitly list "good" schemes and reject all others. Various browsers used to support various dangerous schemes in addition to javascript. For example, I recall some versions of MSIE had a bug which allowed running javascript via the about: scheme.

              Ilya Martynov

              • Depends on what kind of security you want. For's Wiki I'd allow everything, including javascript:, so that we can have bookmarklets in there. For a site that has sensitive information I wouldn't use a Wiki.


                -- Robin Berjon

                • Too late. Pod::SAX now explicitly only allows (https?|ftp|mailto|s?news|nntp).

                  Javascript is just too dangerous.

                  There are probably still bugs in the wiki in that it allows XML input, so you may be able to sneak something by that way, but hopefully the XSLT should disallow anything but known tags (and filter attributes sanely).
      • I forget the exact code, but you can do something like "" + document.cookie). You really need to explicitly check for JavaScript, which also means checking for derivatives like ecmascript:, as well as all the URL-encoded forms (%xx encoding). IIRC, we URI-unescape it in Slash, then check the scheme against /script$/.
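That Slash-style check might be sketched as follows; the function name is mine, and I'm assuming the CPAN module URI::Escape for the unescaping:

```perl
use URI::Escape qw(uri_unescape);

# Reject any URL whose decoded scheme ends in "script": catches
# javascript:, ecmascript:, vbscript:, and their %xx-encoded forms.
sub scheme_is_scripty {
    my ($url) = @_;
    my $decoded = uri_unescape($url);
    my ($scheme) = $decoded =~ /^\s*([A-Za-z][A-Za-z0-9+.-]*):/;
    return 0 unless defined $scheme;
    return lc($scheme) =~ /script$/ ? 1 : 0;
}

print scheme_is_scripty('java%73cript:alert(1)') ? "blocked\n" : "allowed\n";   # blocked
```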