However, over time the data grew increasingly inaccurate. A voting system was put in place to try to minimize the problems, but it didn't help. Since December the site has been in static mode: no new entries are being accepted.
An alternative site, The Cover Songs DB, has started up, but input is restricted to authorized users only (which appears to be just 3 people, although impressively they've amassed 2417 cover tunes).
My question is: how can a person design one of these sites with a reasonable amount of accuracy, without keeping such an iron grip on the input?
The first thing that came to mind was to cross-reference the input data with an already existing DB, specifically freedb. Assuming freedb is mostly reliable, you could probably weed out some bogus data whenever the cover and/or original doesn't exist in the DB. However, you can still get bad links between an original and a cover (although I guess comparing release dates would catch any chronological errors, i.e. a cover supposedly released before its original). But you could still attribute a cover as being the original, so I guess you'd then have to keep confirmed originals in a local DB to check submissions against. And then what about two different songs with the same name? It gets really hairy, really quick.
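To make the idea concrete, here's a minimal sketch of that validation step. Everything here is hypothetical: the in-memory `REFERENCE_DB` dict stands in for a real freedb lookup, and the function names are made up for illustration. The check is just the two rules above: both songs must exist in the reference catalog, and the cover can't predate the claimed original.

```python
# Hypothetical sketch: validating a submitted cover entry against a
# reference catalog. REFERENCE_DB is a stand-in for a real freedb query;
# it maps (artist, title) to the earliest known release year.
REFERENCE_DB = {
    ("otis redding", "respect"): 1965,
    ("aretha franklin", "respect"): 1967,
}

def lookup_year(artist, title):
    """Return the release year for (artist, title), or None if unknown."""
    return REFERENCE_DB.get((artist.lower(), title.lower()))

def validate_cover(orig_artist, cover_artist, title):
    """Return (ok, reason) for a submitted original/cover pairing."""
    orig_year = lookup_year(orig_artist, title)
    cover_year = lookup_year(cover_artist, title)
    if orig_year is None:
        return False, "original not found in reference DB"
    if cover_year is None:
        return False, "cover not found in reference DB"
    if cover_year < orig_year:
        # Chronology check: a cover can't come out before its original.
        return False, "cover predates claimed original"
    return True, "plausible"

print(validate_cover("Otis Redding", "Aretha Franklin", "Respect"))
# -> (True, 'plausible')
print(validate_cover("Aretha Franklin", "Otis Redding", "Respect"))
# -> (False, 'cover predates claimed original')
```

Note this still can't tell which recording is the original when both exist and the dates are consistent, and it falls over exactly where you'd expect: two unrelated songs sharing a title collide on the same key.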