So you load a 10,000-record file. On record 9,990, an error surfaces. Thankfully, Oracle throws an exception and all your work is rolled back. Otherwise, you'd have a real dickens of a time starting over when you need to reload the same file.
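That all-or-nothing shape is simple to express. Here's a minimal sketch in Python using the python-oracledb driver; the table name `load_table`, the two-column CSV format, and the connection details are all made up for illustration, not taken from the actual job.

```python
import oracledb  # python-oracledb, Oracle's DB-API 2.0 driver

def load_file(path, user, password, dsn):
    conn = oracledb.connect(user=user, password=password, dsn=dsn)
    try:
        cur = conn.cursor()
        with open(path) as f:
            for line in f:
                # Hypothetical file format: two comma-separated fields per line.
                col_a, col_b = line.rstrip("\n").split(",")
                cur.execute(
                    "INSERT INTO load_table (col_a, col_b) VALUES (:1, :2)",
                    (col_a, col_b),
                )
        conn.commit()    # one commit: all 10,000 rows land together, or not at all
    except Exception:
        conn.rollback()  # a bad record 9,990 lines in undoes the entire load
        raise
    finally:
        conn.close()
```

Autocommit is off by default, so nothing becomes permanent until that single commit at the end.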
Except that your DBA just called you, as mine did, and said, "You're hurting performance by not committing frequently! You need to commit every 1,000 records or so!" That's great, but the whole point of transaction processing is that all my records go in as a unit. I want them all in at once, or not at all.
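For contrast, here's roughly what the DBA is asking for, under the same made-up table and file format. The problem is visible right in the error path: each intermediate commit makes the preceding rows permanent, so the rollback only covers the current batch.

```python
def load_file_chunked(path, user, password, dsn, batch=1000):
    """The DBA's version: commit every `batch` rows. If record 9,990
    blows up, the first 9,000 rows are already committed; only the
    in-flight batch rolls back, and the file is now half-loaded."""
    conn = oracledb.connect(user=user, password=password, dsn=dsn)
    try:
        cur = conn.cursor()
        n = 0
        with open(path) as f:
            for line in f:
                col_a, col_b = line.rstrip("\n").split(",")
                cur.execute(
                    "INSERT INTO load_table (col_a, col_b) VALUES (:1, :2)",
                    (col_a, col_b),
                )
                n += 1
                if n % batch == 0:
                    conn.commit()  # rows so far are now permanent
        conn.commit()    # final partial batch
    except Exception:
        conn.rollback()  # only undoes work since the last commit
        raise
    finally:
        conn.close()
```

Rerunning this after a failure means first figuring out which rows already made it in, which is exactly the mess the single transaction was protecting me from.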
This is not the database I regularly work on, nor the DBA I regularly work with. As a matter of fact, the real problem seems to be that this database is extraordinarily slow. The database I normally work on would load all 10,000 records in a heartbeat and not even shrug. On this instance, loading 24 hours' worth of data seems to take more than 24 hours...
So what do I do? If I commit before the transaction is done, I lose the whole point of transaction processing: a half-loaded file means working out exactly which records made it in before I can safely reload. Isn't the database supposed to handle this kind of crap on its own? How do I appease the DBA without losing the fundamental reason I need it to work this way?