
All the Perl that's Practical to Extract and Report

  • At least not against a good database. (e.g. Oracle, PostgreSQL, etc. - but MySQL would be fine.)

    The problem is that there is a trade-off between time spent preparing a query and query performance. It would not be inappropriate to think of preparing a query as a "compile and optimize this SQL" step. MySQL spends very little energy preparing, and therefore its ability to handle complex queries suffers. Most other databases put a lot of energy into preparing, and so it is very important to try to avoid recompiling.
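To see concretely why placeholders matter when preparing is expensive, here is a minimal sketch in plain Perl (no DBI, hypothetical function names): interpolating values into the SQL yields a distinct statement per call, so any prepared-statement cache gets no hits, while a placeholder keeps the statement text constant and reusable.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Interpolating values: every distinct id yields a distinct SQL string,
# so a statement cache (server-side or DBI's prepare_cached) never hits.
sub sql_interpolated { my ($id) = @_; return "SELECT * FROM posts WHERE id = $id"; }

# Placeholder form: the statement text never changes, so it can be
# prepared (compiled and optimized) once and then executed many times.
sub sql_placeholder { return 'SELECT * FROM posts WHERE id = ?'; }

my (%interp, %ph);
for my $id (1 .. 100) {
    $interp{ sql_interpolated($id) }++;
    $ph{ sql_placeholder() }++;
}
printf "distinct interpolated statements: %d\n", scalar keys %interp;   # 100
printf "distinct placeholder statements:  %d\n", scalar keys %ph;       # 1
```

One prepared statement serving a hundred executions is the whole point: the expensive compile-and-optimize step is paid once instead of a hundred times.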

    • Actually I am going to use it on a very very busy site ;) SQL preparing does not make much sense for us, since our platform will usually work in the context of a PostgreSQL PL/Proxy cluster *AND* the query could be dynamic enough to defeat ordinary DBI param binding.

      Actually runtime performance is the reason to choose source-filter solutions in the first place ;)

      Also, the SQL example is, well, merely an example... Filter::QuasiQuote's power reveals itself in the context of true DSLs ;)

      • My experience says that with a sane design you can run one of the top couple thousand busiest websites on the internet on a handful of webservers, paying only a modest amount of attention to performance of the code on your webservers.

        That same experience says that tiny mistakes in how you handle your database can cause that same site to melt unexpectedly.

        The lesson is to not worry about webserver performance, but be paranoid about database performance. Which means use placeholders properly. If you do it dynamically, sure, you might get query variants that are not in cache, and you have to pay the overhead of preparing. But what matters is that most of the time for most of your queries you avoid that overhead.

        Sure, clustering the database helps. But from what I've heard, splitting such a cluster after you run out of headroom is not really fun. And why buy yourself problems that you don't need to have? There are plenty of ways to dynamically build complex queries and pass in parameters. There is a huge performance benefit to doing so. That performance benefit comes at a known major bottleneck. Why wouldn't you do this?
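"Plenty of ways to dynamically build complex queries and pass in parameters" can be made concrete with a small sketch (hypothetical helper name): build the WHERE clause dynamically but keep every value as a bind parameter, so two queries with the same shape share byte-identical SQL text and therefore one prepared statement.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical helper: build a SELECT dynamically from a filter hash,
# but pass every value as a bind parameter.  Two calls with the same
# set of filter keys produce byte-identical SQL, so prepare_cached
# (or the server's statement cache) can reuse the compiled plan.
sub build_select {
    my ($table, $filters) = @_;
    my (@where, @binds);
    for my $col (sort keys %$filters) {    # sort => stable SQL text
        push @where, "$col = ?";
        push @binds, $filters->{$col};
    }
    my $sql = "SELECT * FROM $table";
    $sql .= ' WHERE ' . join(' AND ', @where) if @where;
    return ($sql, \@binds);
}

my ($sql, $binds) = build_select('posts', { author => 'tilly', year => 2008 });
print "$sql\n";      # SELECT * FROM posts WHERE author = ? AND year = ?
print "@$binds\n";   # tilly 2008
# Later, with a real handle: $dbh->prepare_cached($sql)->execute(@$binds);
```

The query structure varies freely, yet most of the time the statement text is one you have already prepared, which is exactly the cache behavior the paragraph above argues for.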

        • I must say that all you say is indeed true for an ordinary web application :)

          But unfortunately I can't use prepare+execute in my OpenResty platform in particular. Why? Because it must scale by design to serve lots of apps here in Yahoo! China and Alibaba. So it must be a cluster or something like that.

          The PL/Proxy database server requires frontend queries to be of the following form:

                select xquery('account', 'select * from posts...', 1);

          That is, the user SQL query itself must be a dynamically constructed string.
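Because PL/Proxy forces the user's SQL into a string argument of xquery(), ordinary DBI placeholders cannot reach inside it. A sketch of a hypothetical wrapper that at least escapes the embedded SQL as a proper literal (standard SQL single-quote doubling):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical helper: wrap a user query in the xquery(...) call that
# the PL/Proxy frontend expects.  The inner SQL travels as a string
# literal, so single quotes must be doubled (standard SQL escaping).
sub xquery_call {
    my ($account, $user_sql, $limit) = @_;
    (my $quoted = $user_sql) =~ s/'/''/g;
    return "select xquery('$account', '$quoted', $limit);";
}

print xquery_call('account', "select * from posts where title = 'hi'", 1), "\n";
# select xquery('account', 'select * from posts where title = ''hi''', 1);
```

This only handles quoting of the literal itself; any further validation of the embedded query would have to happen before wrapping.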

          • Responding out of order.

            On sending multiple insert statements at once. Yes, that can be a big win because you're cutting down on round trips to the database. Each round trip takes unavoidable resources on the client, server, and network, with network latency typically being the biggest deal. However, there is an upper limit to the win from that. A compromise that works fairly well is to prepare a bulk insert that inserts multiple records, thereby bypassing the prepare and reducing round trips. YMMV.
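The bulk-insert compromise can be sketched in plain Perl (hypothetical names; the real execute calls are shown as comments): one multi-row INSERT statement handles a fixed chunk of rows per round trip, and its text never changes, so it is prepared once; the leftover rows at the end need a differently sized statement handled specially.

```perl
#!/usr/bin/perl
use strict;
use warnings;

use constant CHUNK => 3;   # rows per prepared bulk INSERT (toy value)

# Hypothetical helper: build a multi-row INSERT with placeholders.
# For a fixed ($ncols, $nrows) the text is constant and cacheable.
sub bulk_insert_sql {
    my ($table, $ncols, $nrows) = @_;
    my $row = '(' . join(', ', ('?') x $ncols) . ')';
    return "INSERT INTO $table VALUES " . join(', ', ($row) x $nrows);
}

my @rows = ([1, 'a'], [2, 'b'], [3, 'c'], [4, 'd'], [5, 'e']);
while (@rows >= CHUNK) {
    my @chunk = splice @rows, 0, CHUNK;
    my $sql   = bulk_insert_sql('posts', 2, CHUNK);   # same text every time
    # $dbh->prepare_cached($sql)->execute(map { @$_ } @chunk);
    print "$sql\n";   # INSERT INTO posts VALUES (?, ?), (?, ?), (?, ?)
}
# The "last few" rows: a differently sized statement, prepared once.
if (@rows) {
    my $sql = bulk_insert_sql('posts', 2, scalar @rows);
    print "$sql\n";   # INSERT INTO posts VALUES (?, ?), (?, ?)
}
```

Each full chunk costs one round trip and zero new prepares, which is where the win over row-at-a-time inserts comes from.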

            • Right, preparing a bunch of insert statements first would be faster :) We merely have to deal with the last few rows specially :) Thanks for the tip.

              We use PgBouncer at the PL/Proxy level to cache connections to the data nodes. On the FastCGI level, a pre-forked lighttpd is used. Database connections to the PL/Proxy nodes are reused across FastCGI loops in a similar fashion to what you described :)
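Connection reuse across request loops boils down to a cache keyed by DSN, the same idea DBI exposes as connect_cached. A minimal sketch with a mock connect (hypothetical names; a counter stands in for the expensive real connect):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of connection reuse across request loops: cache handles by
# DSN so each worker pays the connect cost once, not once per request.
# DBI ships this idea as $dbh = DBI->connect_cached($dsn, ...).
my %conn_cache;
my $connects = 0;

sub mock_connect {   # stand-in for an expensive DBI->connect
    my ($dsn) = @_;
    $connects++;
    return { dsn => $dsn };
}

sub get_dbh {
    my ($dsn) = @_;
    return $conn_cache{$dsn} //= mock_connect($dsn);
}

# Simulated FastCGI loop: many requests, one physical connection.
get_dbh('dbi:Pg:host=plproxy1') for 1 .. 1000;
print "connects: $connects\n";   # connects: 1
```

PgBouncer does the analogous caching one hop further back, between the PL/Proxy layer and the data nodes, so neither hop pays a fresh connect per request.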

              Well, I don't think the use of PL/Proxy necessarily means loss of relationality. Relational constraints still hold for the data residing on each individual node.

              • Tell me if a donation could help the open-sourcing decision. I'm btilly, at gmail dot com.