I keep seeing people refer to some kind of crisis in the world of cheap Perl web hosting. They talk about ISPs not supporting mod_perl and invoke PHP as an evil horde conquering all hosts.
Well, I'm fine with PHP, and congratulate it on its success. However, Perl is not in trouble.
First, for those who do want the full power of a mod_perl installation, or want to choose their own version of perl, their own webserver, etc., virtual hosting with root is available for about $30-$35 a month. That's not much for getting your own box. And yes, there's mod_perl hosting out there too, but I can't see why you'd use it when you can get full control this cheaply.
But obviously not everyone wants to administer their own system, so for everyone else there is FastCGI. Hosts with FastCGI are available for $5-$10 a month! Even if PHP became available for $2, that wouldn't be enough of a difference to matter. And FastCGI works just fine for perl.
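For anyone who hasn't seen what a FastCGI script looks like, here's a minimal sketch using the FCGI module from CPAN. The handler body is made up, but the accept loop is the standard idiom:

```perl
use strict;
use warnings;
use FCGI;

my $request = FCGI::Request();

# Each Accept() call blocks until the web server hands us a request.
# The perl interpreter stays resident between requests, which is
# where the speed advantage over plain CGI comes from.
while ( $request->Accept() >= 0 ) {
    print "Content-Type: text/plain\r\n\r\n";
    print "Hello from a persistent perl process (pid $$)\n";
}
```

The point is that your code loads and compiles once, then serves requests in a loop, just like mod_perl, without needing anything more from the host than FastCGI support.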
There are other things you can complain about if you really want to: installing modules on some hosts is not obvious, and configuring FastCGI on some is not well-documented. However, unlike pricing, those are problems you can solve yourself. So please stop with the ISP panic. Perl web hosting is alright.
Just curious. I usually try to arrange the subs in a file so that most calls to other subs in the same file are forward references, and the internal subs are defined close to where they are called most prominently. Does anyone else think about this?
If you haven't seen it yet, CHI is the new replacement for the nice-but-slow Cache::Cache modules. I uploaded a DBI driver for it to CPAN. If you're skeptical of the value, try benchmarking MySQL against memcached. It's pretty close for this kind of thing, and on a local connection MySQL wins by a fair amount. It's also a lot simpler than trying to get memcached or Cache::FastMmap going on some random shared hosting provider.
I have another driver for IPC::MMA mostly done, which is looking extremely fast. After that will be BerkeleyDB.
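Using the DBI driver looks like any other CHI backend. A sketch, assuming a local MySQL database; the connection details and key names here are made up:

```perl
use strict;
use warnings;
use CHI;
use DBI;

# Illustrative connection info -- substitute your own.
my $dbh = DBI->connect( 'dbi:mysql:test', 'someuser', 'somepass',
    { RaiseError => 1 } );

# CHI dispatches to CHI::Driver::DBI, which stores entries in a table.
my $cache = CHI->new(
    driver => 'DBI',
    dbh    => $dbh,
);

# Cache a computed value for ten minutes, then read it back.
$cache->set( 'user:42', { name => 'Alice' }, '10 minutes' );
my $user = $cache->get('user:42');
```

Swapping in a different backend later (memcached, FastMmap) is just a matter of changing the driver arguments, which is the main appeal of CHI.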
The worst features of a language aren't the features that are obviously dangerous or useless. Those are easily avoided. The worst features are the attractive nuisances, the features that are both useful and dangerous.
The problem is that they're measuring the wrong thing. Is your program taking too much CPU, or is it taking too much time? If you're running on a computer built in the last decade, it's probably time that's the issue. So why do we profile CPU cycles?
Nearly every program outside of the scientific community (or other math-heavy situations) is I/O bound. You don't see the I/O when you profile CPU time. It looks like it's not even there.
I've seen this happen so many times on the Class::DBI, Template Toolkit, and mod_perl lists. People show up desperate to eliminate every method call or rewrite their templating module in C based on a CPU profile they ran. Then you tell them to profile by wall time instead, and they find out that 99.9% of the run time is actually spent waiting for database calls to come back, or something similar.
I hate that.
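You can see the gap for yourself without any profiler at all. A crude sketch, where a fractional sleep stands in for a database call that burns no CPU while the program waits:

```perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval sleep);

my @cpu_start  = times();            # user/system CPU seconds so far
my $wall_start = [gettimeofday()];

sleep(0.5);    # stand-in for a database call: pure waiting, no CPU

my @cpu_end = times();
my $wall    = tv_interval($wall_start);
my $cpu     = ( $cpu_end[0] + $cpu_end[1] )
            - ( $cpu_start[0] + $cpu_start[1] );

# A CPU profile would attribute almost nothing to this call; a
# wall-time profile shows where the program actually spent its life.
printf "wall: %.2fs  cpu: %.2fs\n", $wall, $cpu;
```

Half a second of wall time, essentially zero CPU. That half second is exactly what a CPU profile hides.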
Who would've thought? A Ruby programmer finally acknowledged that writing code which modifies other people's classes at runtime might not have been the best idea for writing maintainable code. He advocates writing and using actual APIs instead.
When Mac OS X came out, combining the things I like about the Mac with Unix, I was tempted. But I waited. I figured I'd give them time to iron out the bugs, time for the hardware to catch up to the new rendering technology, etc.
Now that Macs are running Intel chips, I figured I had waited long enough, and I bought one of the new iMacs. It's a really nice machine, and except for some glitchy rendering here and there (tooltips that remain on the screen, for example), it's a solid OS.
However, getting a dev environment that's as productive as Linux has been a real chore. Here are some of the things I've been dealing with:
You don't expect to spend a lot of time fussing with terminals these days. They should just work, right? Well, I first tried the terminal app that comes with the OS. It had broken key mappings for page up/down, which I fixed. It also makes only a half-hearted attempt at X11-style mouse paste, which is irritating since I'm so accustomed to it.
I tried an alternative, called iTerm. It has better mouse paste support, but it has broken arrow key mappings. When was the last time you had to fix your arrow keys on Linux? It's like 1997 all over again.
Both of these seem to confuse my screen session about their terminal capabilities. iTerm does better at showing color (e.g. in man page headings) but both of them cause screen to do its horrific visual bell ("Wuff wuff!") until I manually tell it to use a normal bell.
Everyone seems to love TextMate. I do plan to try it, but it's very unlikely that I'm going to pay $80 for a closed source text editor in this day and age. If you're one of those people who loves TextMate and thinks it's worth the money, I'd be interested to hear your reasons.
I have tried TextWrangler, and found it pretty decent in some ways. I got perltidy wired up to it without trouble, and the SFTP browser, while pretty lame, works. I also like the Emacs key support. However, it has no code folding, and the syntax coloring is weak compared to most Linux editors, like Kate, vim, and Emacs.
I tried Eclipse + EPIC, which is actually better than I expected. Syntax coloring is nice, and it feels pretty responsive. On the downside, I can't figure out how to make a filter for perltidy to work on just a selected region rather than a whole file. I also can't figure out how to get
I tried a promising-looking Emacs port called Aquamacs. It's Emacs with keys remapped to normal Mac bindings, and easy-to-use font menus, separate windows for each buffer, etc. It looked pretty nice, but none of the elisp extensions I want to use (tramp, mmm-mode) seem to work with it. I admit to being a novice at this stuff, but I gave it a pretty good try and couldn't make them work.
The Carbon Emacs package from Apple worked great with elisp packages. In fact, it's really nice all-around: good syntax coloring from cperl and mmm-mode, tramp for SSH access, easy perltidy integration. The only real issue I've had with it is how poorly it plays with the rest of the Mac. Copying and pasting something from the terminal or Firefox into it seems to require some tricky incantations. I haven't been able to do it without resorting to clicking on menus. I suspect there's three or more different copy/paste systems happening at once here and they are not playing well together. It also took me a crazy amount of effort to change my font size, which is very non-Mac but somewhat expected from Emacs.
Overall I do like my Mac, but it's disturbing how reminiscent of the early days of Linux this has been. I may resort to running X11 with an xterm, and maybe Kate if I can figure out how to get it running. I figure if I have to I can always dual-boot (or run Parallels) with Fedora. If anyone has tips about getting X11 mouse paste to work on the Mac, or taming Carbon Emacs, or getting Kate to run, pass them over.
Straw poll time. Rank these factors in order of how much effect they have on whether or not a project will succeed:
At Plus Three, we built a large project using Class::DBI. When we started the project, Class::DBI was the ORM that best met our needs. I applied some patches from the mailing list and added a couple of CPAN modules and some custom code in order to get these features:
- LIMIT support for MySQL on all search queries.
- Ability to retrieve all records from one table with a sort ("retrieve_all_sorted").
- Ability to run any search query as a count instead of returning records.
- A safe version of find_or_create. The existing one was not atomic.
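The last one deserves a word of explanation. The stock find_or_create does a search followed by a create, so two processes can race between those steps. One way to close the gap (a sketch, not our actual code — the duplicate-key check here assumes MySQL's error wording and a unique constraint on the lookup columns):

```perl
package My::SafeCreate;
use strict;
use warnings;

# Try the INSERT first; if a unique constraint rejects it, another
# process won the race, so fall back to fetching the existing row.
sub safe_find_or_create {
    my ( $class, $args ) = @_;
    my $obj = eval { $class->create($args) };
    if ( my $err = $@ ) {
        die $err unless $err =~ /Duplicate entry/i;    # MySQL-specific
        ($obj) = $class->search(%$args);
    }
    return $obj;
}

1;
```

The database's unique index does the real work; the application code just has to be prepared for either outcome.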
Surprisingly, after these enhancements, Class::DBI 0.96 proved up to the task for the entire project.
Since then Rose::DB::Object and DBIx::Class have matured, and some other interesting things have come along. These have much more complete querying abilities than Class::DBI. They generate multi-table joins with no trouble. They can fetch related objects in one query in order to avoid multiple trips to the db.
The obvious question is: how much different would the code be if it had been written with one of these instead? And the answer? It would be smaller, but not by as much as you might imagine.
See, the great thing about Class::DBI is that it's very easy to add custom SQL to your classes. You just write a SQL statement that returns the fields you want from the current class, give it a name, and you have a custom search. You can even generate SQL programmatically at run-time and use it to select objects. It is limited by the fact that it can only produce a list of objects from one class, but in practice that is rarely an issue.
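The mechanism is Class::DBI's set_sql. A sketch of the pattern, with made-up class and column names:

```perl
package My::Music::CD;
use strict;
use warnings;
use base 'Class::DBI';    # in real code, a project-wide base class

__PACKAGE__->connection( 'dbi:mysql:music', 'someuser', 'somepass' );
__PACKAGE__->table('cd');
__PACKAGE__->columns( Essential => qw(cdid artist title year) );

# set_sql() registers a named query. Because it selects the class's
# Essential columns, Class::DBI also generates a search_recent()
# method that runs it and returns My::Music::CD objects.
__PACKAGE__->set_sql( recent => qq{
    SELECT __ESSENTIAL__
    FROM   __TABLE__
    WHERE  year >= ?
    ORDER BY year DESC
} );

# Elsewhere:
# my @cds = My::Music::CD->search_recent(2000);
```

The `__ESSENTIAL__` and `__TABLE__` placeholders are expanded by Class::DBI, so the query stays correct even if the column list changes, and the hand-written WHERE clause can be as hairy as the problem demands.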
We used this feature extensively. We had a complex, normalized db, and many carefully tuned SQL queries. Looking at the SQL we wrote, I would guess that about half of it is relatively simple JOIN and LEFT JOIN queries that would be eliminated (or automated) by a more capable ORM. The rest though is beyond the capabilities of existing ORMs.
What's in it? Subqueries, both as derived tables (in the FROM clause) and as NOT EXISTS queries. Transaction control, with SELECT...FOR UPDATE. Database-specific extensions like INTERVAL and GROUP_CONCAT. Import/export commands like LOAD DATA INFILE. UPDATE statements that use joins. Temporary table creation. Not all applications require this level of sophistication with SQL, but I suspect all of the ones with large amounts of data and moderately complex schemas do.
In a few cases, an ORM could get the same results by writing the query in a simpler way. However, performance would suffer. You can get on your soapbox and cite some C.J. Date stuff you read about the relational model and how the phrasing of the query shouldn't matter, but in the real world it matters a great deal. And this is as true with Oracle as it is with MySQL.
Could ORMs learn this? They could probably learn some of it. They could expand their coverage of SQL to include many of these operations. They could embed some common wisdom about how to optimize certain types of queries for one database or another, although this would still not always be correct.
In the end though, what's the point of an ORM that has all the complexity of SQL? It doesn't gain you anything unless it makes things simpler, which means it has to ignore a large amount of the capabilities of SQL. An ORM is mostly about making the easy work of simple fetches and saves as automated as possible, not about creating an impenetrable shield between your programmers and SQL.
What all this means to me in practical terms is that an easy way to use custom SQL is the most important feature to look for in any ORM. With a simplistic one like Class::DBI you have to go to custom SQL too soon, but even with a more powerful one you will eventually have to go there.