
J. David Blackstone has a Bachelor of Science in Computer Science and Engineering and nine years of experience at a wireless telecommunications company, where he learned Perl and never looked back. J. David has an advantage in that he works really hard, he has a passion for writing good software, and he knows many of the world's best Perl programmers.

Journal of jdavidb (1361)

Wednesday December 14, 2005
03:21 PM

Let's defeat the purpose of transaction processing, shall we?

[ #27982 ]

So you load a 10000-record file. On record 9990, an error is detected. Thankfully, Oracle throws an exception, and all your work is rolled back. Otherwise, you'd have a real dickens of a time starting over when you need to reload the same file.

Except that your DBA just called you, as mine did, and said, "You're affecting performance by not committing frequently! You need to commit every 1000 records or so!" That's great, but the whole point of transaction processing is that all my records go in as a unit. I want them all to go in at once, or not at all.
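
For concreteness, an all-or-nothing load along these lines looks roughly like this with DBI; the connection string, file format, table, and column names here are made up for illustration:

use strict;
use warnings;
use DBI;

# AutoCommit off: nothing becomes visible until the explicit commit below.
my $dbh = DBI->connect('dbi:Oracle:SOMEDB', 'someuser', 'somepass',
    { AutoCommit => 0, RaiseError => 1, PrintError => 0 });

my $sth = $dbh->prepare('INSERT INTO load_table (col1, col2) VALUES (?, ?)');

eval {
    open my $fh, '<', 'records.dat' or die "records.dat: $!";
    while (my $line = <$fh>) {
        chomp $line;
        my ($col1, $col2) = split /\|/, $line;
        $sth->execute($col1, $col2);
    }
    close $fh;
    $dbh->commit;    # all 10000 rows appear at once
};
if (my $err = $@) {
    warn "load failed, rolling back: $err";
    $dbh->rollback;  # record 9990 blew up, so nothing stays
}
$dbh->disconnect;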

This is not the database I regularly work on, nor the DBA I regularly work with. As a matter of fact, the real problem to me seems to be that this database is extraordinarily slow. The regular database I work on would load all 10000 records in a heartbeat and not even shrug. Over there, loading 24 hours' worth of data took 24 hours; on this database instance, loading 24 hours' worth of data seems to take over 24 hours...

So what do I do? If I commit before the transaction is done, I lose the whole point of transaction processing. Isn't the database supposed to handle this kind of crap on its own? How do I appease the DBA without losing the fundamental point of why I need it to work this way?

Comments
  • So what do I do?

    Er... can't you split your 10000 records into, say, 10 chunks of 1000 records each, and insert them separately?

    I've seen people hit this very same problem and solve it exactly that way (there's a rough DBI sketch of the chunked approach after the comments below)...

    Or am I missing something?
  • PLAN A - Do what is natural in the business logic. Many a bulk file load has no transactional nature at the business-logic level -- any one record could be cancelled separately. And sometimes the whole file is a transaction to the business, too. For some of my feeds we are required to reject the whole file if any record rejects, but for others we are required to accept the rest of the records; I would claim the natural "transaction scope" for loading those files is different.

    However, business logic / natural

    --
    Bill
    # I had a sig when sigs were cool
    use Sig;
  • Let's see, you have a transaction of a mere 10k rows, and the DBA comes complaining to you? Sounds to me like the DBA has a problem, not you. It should be you going to him, saying that you're trying to insert 10k rows and the database is really slow, and asking him whether his database is badly configured.
    • Thanks. I think in addition to the plans n1vux offered, I needed that validation to know I'm not being completely unreasonable.

      I did do some checking after my post and there are some 40K-record files I am loading. But that's not even a difference of an order of magnitude.

      --
      J. David works really hard, has a passion for writing good software, and knows many of the world's best Perl programmers
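
For reference, the chunked load suggested in the first comment (commit every 1000 records) would look roughly like this with DBI. The connection string, file format, and table are again made up, and note what the trade-off is: a failure partway through leaves the earlier chunks committed, so a restart has to delete or skip them.

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Oracle:SOMEDB', 'someuser', 'somepass',
    { AutoCommit => 0, RaiseError => 1, PrintError => 0 });
my $sth = $dbh->prepare('INSERT INTO load_table (col1, col2) VALUES (?, ?)');

my $count = 0;
eval {
    open my $fh, '<', 'records.dat' or die "records.dat: $!";
    while (my $line = <$fh>) {
        chomp $line;
        my ($col1, $col2) = split /\|/, $line;
        $sth->execute($col1, $col2);
        $dbh->commit if ++$count % 1000 == 0;   # what the DBA is asking for
    }
    close $fh;
    $dbh->commit;    # flush the final partial chunk
};
if (my $err = $@) {
    $dbh->rollback;  # only the current chunk is undone; earlier chunks stay
    warn "load died with roughly ", int($count / 1000) * 1000,
        " rows already committed: $err";
}
$dbh->disconnect;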