NOTE: use Perl; is on undef hiatus. You can read content, but you can't post it. More info will be forthcoming forthcomingly.

All the Perl that's Practical to Extract and Report



Journal of pudge (1)

Tuesday April 08, 2003
02:47 PM


[ #11525 ]

I just turned on Sendmail on my Mac OS X box that I use as a server. /var/spool/mqueue was empty, but I was getting hundreds of old cron emails. cfedde on #perl pointed me to clientmqueue, where I found 99,798 files. Oops.

killall sendmail and rm * ... argument list too long. perl -le 'opendir $x, "." or die $!; for (readdir $x) { unlink; print }'. There we go.
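The same cleanup can be sketched in shell against a throwaway directory (a stand-in for clientmqueue; the directory name and file count here are made up for illustration, and find's -exec is used in place of the Perl one-liner):

```shell
#!/bin/sh
# Build a scratch directory holding many files -- a small stand-in
# for the 99,798 queue files in the journal entry.
dir=$(mktemp -d)
i=0
while [ "$i" -lt 500 ]; do
    : > "$dir/msg$i"
    i=$((i + 1))
done

# 'rm "$dir"/*' would expand every filename onto one command line,
# which is what triggers "argument list too long" past the kernel's
# limit. find never builds that giant argv: -exec runs rm per file
# (or pipe to 'xargs rm' to batch them).
find "$dir" -type f -exec rm {} \;

remaining=$(find "$dir" -type f | wc -l)
echo "files left: $remaining"
rmdir "$dir"
```

Like the readdir/unlink one-liner, this works regardless of how many files are in the directory, because no single command line ever contains all the names.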

The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • $ echo * | xargs rm
    • Yeah, you probably use sed too! :)
      • xargs is especially useful with find - it lets you run a command (like rm) on large numbers of files, breaking them into batches so no single command line gets too long. (Using -exec from find runs a separate command for each file found, which can be very slow for a large/deep file tree.)
        • Yeah, I use xargs quite a bit, esp. with find ... however, I didn't think about using echo *. An ls took forever, I don't know how long echo * would take. readdir was surprisingly, relatively, fast.
          • ls -1 is better than echo *, because your shell still has to do the globbing for the latter... and you may lose.
            • That's sorta what I figured ... I don't know if the shell could have handled echo *. And again, ls took forever. Stoopid Unix!
              • Doh! I missed that the first time through. ls shouldn't be slow though. Stoopid MacBSD? :-P Of course it depends on the filesystem; many filesystems don't cope well with single directories holding that many files. It might have been quicker to remove the directory and recreate it afterwards ;-)
                • ls -f has been around since version 7 days, back when men were men and computers were slow. The -f option tells ls to use the order that it finds entries in the directory - which means that there is no time required to sort the entries before displaying them. With a really large number of entries in a directory, that sort could take a while (especially when Unix was running on a PDP-11).
          • ls sorts the entries in the directory (and may stat each one), which means ls will generally be slower for large directories. echo * | xargs rm simply uses the shell's glob operator, which is most likely implemented with readdir.

        • My understanding is that only works for GNU xargs, though.

          Of course, who would want to use a proprietary xargs?

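The -exec vs. xargs distinction discussed in the thread can be sketched like this (a minimal example on a throwaway directory; note that a plain pipe to xargs mishandles filenames containing whitespace, which the numbered queue files here wouldn't have):

```shell
#!/bin/sh
dir=$(mktemp -d)
touch "$dir/a" "$dir/b" "$dir/c"

# find -exec ... \; forks one rm process per file found:
find "$dir" -type f -exec rm {} \;

touch "$dir/a" "$dir/b" "$dir/c"

# xargs reads names from stdin and packs as many as fit onto each
# rm invocation, so a 100,000-file tree needs only a handful of
# forks instead of 100,000:
find "$dir" -type f | xargs rm

left=$(find "$dir" -type f | wc -l)
echo "files left: $left"
rmdir "$dir"
```

Either way, find walks the directory itself, so the shell never has to expand a glob over all the filenames.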
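The ls -f behaviour described above is easy to check (a sketch assuming a BSD- or GNU-style ls; -f lists entries in directory order without sorting, and also implies -a, hence the grep to drop the dot entries):

```shell
#!/bin/sh
dir=$(mktemp -d)
touch "$dir/b" "$dir/a" "$dir/c"

# Plain ls sorts its output before printing...
sorted=$(ls "$dir")

# ...while ls -f reports entries in whatever order readdir returns
# them, skipping the sort entirely (useful for huge directories):
raw=$(ls -f "$dir" | grep -v '^\.')

echo "$sorted"
count=$(printf '%s\n' "$raw" | wc -l)
rm -r "$dir"
```

For a directory with tens of thousands of entries, skipping the sort (and any per-entry stat that options like -l would force) is where the time savings come from.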