In several tightly controlled projects over the past few years, I seem to either follow or approximate this sequence of steps:
I haven't seen such a way of working mentioned elsewhere, so I thought I'd make note of it here.
The idea behind the first step is to separate most of the thinking from the mechanical task of writing the tests. I find that if I do this, I get better test coverage, because the separation lets me retain an eagle-eye view of the model; if I switched back and forth between thinking about the whole and writing about the parts, I'd lose sight of the whole, at least to some degree. Also, having something to flesh out counters the impulse to cheat and skip writing tests.
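A minimal sketch of what that first step might look like, if the project happened to be in Python (the class and test names here are hypothetical, not from any actual project): the test names are written down while the whole model is still in view, and the bodies are left empty to be fleshed out later.

```python
import unittest

# Hypothetical example: only the names are written in step one, capturing
# the shape of the model; the empty bodies get fleshed out in step two.
class TestDependencyResolution(unittest.TestCase):
    def test_resolves_single_package_with_no_dependencies(self):
        pass  # fleshed out later

    def test_orders_dependencies_before_dependents(self):
        pass  # fleshed out later

    def test_detects_circular_dependencies(self):
        pass  # fleshed out later
```

The empty stubs make the skipped work visible: an unwritten test shows up as a name with no body, which is much harder to ignore than a test that was never named at all.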
Step two ignores the mandate of TDD to write only one failing test at a time. I still prefer to have the whole test suite done before starting the implementation, again because it removes some context-switching. Usually I treat the implementation process in much the same way as if I had written the tests on demand. It occasionally happens that a test already passes as soon as I write the minimal scaffold needed to run the tests. As I currently understand TDD, this is also "frowned upon". I leave such tests in, because they're still part of the specification and might even catch regressions in the future.
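To illustrate that last point, here's a hypothetical sketch (the `resolve` function and its signature are invented for this example): the scaffold does nothing, yet one test in the suite passes against it immediately.

```python
# Hypothetical scaffold: just enough implementation for the suite to run.
def resolve(requested, available):
    """Return the list of packages to install, in dependency order."""
    return []

# This test passes the moment the scaffold exists -- "frowned upon" by
# strict TDD, but it still documents part of the specification.
def test_resolving_nothing_installs_nothing():
    assert resolve(requested=[], available={}) == []

test_resolving_nothing_installs_nothing()
```

Deleting such a test would lose a stated requirement; keeping it costs nothing and guards against a future regression.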
I tried this out last weekend, and it was a really nice match with the problem domain — an I/O-free core of a package installer:
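To give a flavour of what "I/O-free core" means here, a sketch under assumed names (none of this is the actual package's code): the core computes an install plan from plain data, and the caller is responsible for the actual downloads and file writes.

```python
# Hypothetical I/O-free core: pure data in, pure data out, no side effects.
def plan_install(requested, installed, dependencies):
    """Return the packages to install, dependencies first.

    requested    -- iterable of package names to install
    installed    -- set of package names already present
    dependencies -- dict mapping each package to its direct dependencies
    """
    plan = []
    seen = set(installed)

    def visit(name):
        if name in seen:
            return
        seen.add(name)  # marking before recursing also breaks cycles
        for dep in dependencies.get(name, []):
            visit(dep)
        plan.append(name)

    for name in requested:
        visit(name)
    return plan


# Example: "app" needs "libssl" and "libc"; "libc" is already installed.
plan = plan_install(
    requested=["app"],
    installed={"libc"},
    dependencies={"app": ["libssl", "libc"], "libssl": ["libc"]},
)
# plan == ["libssl", "app"]
```

Because the core touches no files and no network, the test suite from the earlier steps can exercise it exhaustively with plain data structures, which is exactly what makes this way of working such a comfortable fit.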
And presto, a complete (core) implementation with great test coverage.
Those who follow the links to the actual commits will note that mistakes are corrected during the implementation phase. That's a symptom of the halting-problem-esque nature of code in general: you don't know its true quality until you've run it in all possible ways.