
All the Perl that's Practical to Extract and Report

use Perl


robert at perl dot org

Journal of Robrt (1414)

Thursday August 12, 2004, 11:55 PM

peeling the onion

[ #20388 ]

If you've been following Planet Perl or Ask's Journal you'll know that we lost our main mailing list server (onion) on Monday when we installed really cool Cyclades AlterPath Power Management hardware. And then we had a big release at work, and couldn't get back to the data center to fix it.

Last night around 11:02 PM, Ask started memtest86+ to see if that was the problem.

It wasn't.

This morning I went to the data center to try to fix it up. First goal was making it work under FreeBSD. Sadly, this didn't work. Basically, onion wouldn't re-stabilize under FreeBSD. It liked to spontaneously reboot. (And usually while fscking.) The FreeBSD 5.2 Rescue CD threw lots of errors. Single user mode was no better.

The only thing that I could keep running on the machine for more than 2 minutes was the (R)ecovery (I)s (P)ossible Linux rescue system. It booted. It let me ssh. It let me mount the FreeBSD partitions. It even let me run agetty so I could use our wonderful console server.
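For anyone retracing these rescue steps from a Linux environment, they look roughly like the sketch below. The device name, mount point, and baud rate are assumptions, not details from my session; the one real gotcha is that FreeBSD 5.x uses UFS2, which Linux's ufs driver will only mount if you tell it so.

```shell
# Mount a FreeBSD 5.x partition read-only from Linux (device name is
# illustrative; Linux exposes BSD slices under its own numbering):
mount -t ufs -o ro,ufstype=ufs2 /dev/hda5 /mnt/onion

# Start a login on the first serial port so the console server can attach
# (-L means the line is local, i.e. don't wait for carrier detect):
agetty -L ttyS0 9600 vt100
```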

Using RIP, I was able to get all of the raw partitions over to our big disk machine, and from there we moved the files to the new mailserver. Along the way, I discovered that somewhere in our all-100Mb network was something limiting me to 10Mb/s. This meant that the onion backup would have taken over a day. Not good. A little bit of quick repatching, and we were back up to real speeds.
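A raw-partition copy like this is commonly done by piping dd over ssh, and a little back-of-envelope math shows why the 10Mb/s bottleneck mattered. The device name, hostname, and total partition size below are illustrative assumptions, not figures from onion itself.

```shell
# Hypothetical raw-partition copy to the big disk machine:
#   dd if=/dev/ad0s1a bs=1M | ssh bigdisk 'cat > onion-ad0s1a.img'

# Transfer time at each link speed, assuming ~120GB of partitions total:
SIZE_GB=120
for MBPS in 10 100; do
  BYTES_PER_SEC=$(( MBPS * 1000000 / 8 ))
  HOURS=$(( SIZE_GB * 1000000000 / BYTES_PER_SEC / 3600 ))
  echo "${MBPS}Mb/s: about ${HOURS} hours"
done
```

At 10Mb/s that works out to roughly a day; at the full 100Mb/s it drops to a couple of hours, which is why finding the bad patch cable mattered so much.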

Anyway, as I write this, Ask is tweaking the configuration, I'm messing with postfix, and hopefully mail will start to flow in the next few hours.
