Journal of scrottie (4167)

Friday June 04, 2010
02:19 AM

Moose, DBIx::Class, and the _new_ OO

Simula introduced OO in the 60s. Smalltalk took it to its logical and pure extreme in the 80s. C++ brought it to systems programming and gave it the performance that only static optimized code can enjoy.

Really smart people wrote really smart code using really powerful, really futuristic features in C++... and created great big steaming piles of crap.

Efforts to create any reasonable operating system or database system using C++ failed. Telephone switches, previously infallible, failed spectacularly in cascade. Countless clunky, slow, buggy, bloated, unmaintainable Windows apps were written. Government agencies swore off of it in favor of other systems, some avoiding OO entirely and favoring APL or C. Windows programmers eschewed the complexity of C++ in favor of VisualBasic.

This created a bit of a paradox.

Was the language responsible for all of these flawed designs and executions? Did adding objects, destructors, multiple inheritance, and so on create a language in which it just isn't possible to write clean code?

For a long time, people thought so. "C++ is a messy language", they said, conflating the learning curve of the syntax of the language with learning to design objects and APIs that made sense.

Gosling and those behind Java seemed to think so. They threw away multiple inheritance, operator overloading, and a pile of other things. For a time, Java code was clean. So clean that they started teaching it in schools. Projects started demanding it and all across the world, people with no prior programming knowledge quickly ramped up on the language. They joined the workforce and wrote buggy, overly complex, unmaintainable code.

Simula's idea of OO was to give programmers a new abstraction with which to better model the interactions of the parts of a complex system. If a programmer could conceptualize the program in a way that drew parallels to real objects acting on each other, the objects and their interactions could all be better understood. Insofar as that's true, there's nothing wrong with OO and no reason it should lead to unmaintainable code.

Much earlier, Donald Knuth wrote _Literate Programming_. Local variables and functions were the celebrated abstraction with which people were making huge messes. Knuth sat down, asked what was going wrong, and speculated about how those problems might be avoided. It proved far easier to offer people a new abstraction that they hadn't yet ruined than to get them to examine the psychological, social, and technical forces driving them to write crap.

When the C++ guys realized that not only were they writing terrible code but that they were predictably and consistently writing terrible code, they too sat down and put some of their profound intelligence into asking why this was. This was the birth of the field of "Object Oriented Design and Analysis". There were a lot of things that people tried to do with objects that just didn't make sense and didn't work in the long run. There are a lot of early warning signs of failures to conceptualize things well and map them to objects.

The Java guys, determined not to repeat the mistakes of the C++ guys, adopted less analytical, more rule-of-thumb versions of this and called them "Design Patterns", "Code Smells", and so on. They fairly successfully marketed to their own ranks the ideas of studying design for design's sake rather than merely learning a language.

The Perl camp briefly and very superficially flirted with these ideas too, but the glamour wore off; sitting down and just winging it and seeing where it goes is just so gosh darn much fun. History is boring. Of course, if you've read history, repeating it is boring.

Even this sort of backfired for the Java camp; understanding so well how to build certain types of abstractions, they went nuts building them all over the place and then managed to construct great big steaming piles of crap at a much higher level -- piles composed of the very large-scale constructs devised to avoid problems at lower levels.

While Java failed to convince everyone to actually study the inherent follies and blunders novices make when designing software in hopes of avoiding them, it did introduce the world to a different idea: rather than using objects to model programs in terms of parts that interact, use objects to create massive APIs.

It's a big step backwards, but it kind of wins in the "worse is better" department in that it's a lot easier for people to wrap their heads around than OODA. The language vendor created a whole lot of objects for you to use that represent different parts of the system they built, and if you base your application largely around using these pre-created objects, you're less likely to fuck things up. The win of objects became easily traversing large sets of documentation to figure out how things are done. If you get a Foo back from calling Bar.bar, you can instantly look up the docs for it and see if maybe you can weasel a Baz out of that to pass to Quux. This began the umpteenth documentation push, which started with flowcharts, spent years on massive shelves of spiral-bound printed pages from vendors, and at some point culminated in programmers having bookshelves full of O'Reilly books before branching out into online CPAN module docs.

Everyone got tired of Java and gave up hope that any sort of cure for the world's programming ills would emerge from that camp. Yo dawg, I heard you like MVCs, so I put an MVC in your MVC! Even reading histories of how Java fucked things up and repeatedly missed the point is boring, and history is interesting. Java just smears boring all over stuff.

Meanwhile, the Python folks were writing much better software without nearly so large of sticks up their butts. No one knows how the Python people did it so I guess they just supposed that Python is magical and code written in it is clean. Likewise, no one understands why Ruby is so magically delicious, so like amateurs banging on a piano keyboard after the maestro gets up, they're trying their hand at it.

Back to present day and Perl. OO and OO abstractions mean something entirely different than what the Simula guys had in mind.

Now, when we sit down and create a system, we don't conceptualize the parts of the system and their interactions. We don't model the problem space using analogues to real world ideas and real world interactions. We don't search for the correct idiom.

Instead, we use APIs, and the more, the better. There is no User; instead, there are DBIx::Class ResultSet objects for a user or login table; there are admin screens that poke at those; there are Apache Response objects to talk to the user through; there are piles of Moose meta things that have nothing to do with hypothetical objects modeling a hypothetical universe but do a neat job of setting up neat things. Everything is abstracted -- except for the concepts the program is actually trying to model in its own logic. If there are objects definitively, uniquely, and authoritatively representing things and nothing but those things, in a normalized-SQL sense, then those objects are all thrown together in a big pile. A lot of the C++ OODA textbooks' pages are concerned with finding idioms for how things interact, to model those interactions. In Perl, we just pass stuff. And we're proud of our redneck selves.

In C++, and again in Java, and most certainly in Perl, we've shat on the grave of Simula. Smalltalk did not; Ruby had a blessed birth by virtue of drawing so heavily from Smalltalk. Python programmers tried hard to keep the faith even though OO is borderline useless -- nearly as bad as Perl's -- in that language.

If we were to sit down and try to represent our actual problem space as objects -- what the program is trying to do rather than the APIs it's trying to use -- we'd find that we're knee deep in shit.

This isn't one man's opinion. C++ programmers trying to claw their language's way out of its grave named parallel inheritance hierarchies as a thing to avoid; Java redubbed them as "code smells". If you have multiple objects each representing a user in some capacity, but not representing some limited part or attribute of the user, you have this disease. If you're using a bunch of abstraction layers to represent the User, for example, you have this disease.
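A minimal sketch of the disease, with hypothetical package names -- three classes that each "are" the user, none of them authoritative:

    # Hypothetical illustration: three parallel "User" classes.
    package My::Schema::Result::User;    # the ORM's idea of a user (a row)
    sub new   { my ($class, %args) = @_; bless { %args }, $class }
    sub email { $_[0]->{email} }

    package My::Session::User;           # the web layer's idea of a user
    sub new   { my ($class, %args) = @_; bless { %args }, $class }
    sub email { $_[0]->{email} }         # duplicated accessor; diverges someday

    package My::Admin::UserView;         # the admin screen's idea of a user
    sub new   { my ($class, %args) = @_; bless { %args }, $class }
    sub email { $_[0]->{email} }         # change one, forget the others

    # None of these model what a user *does*; each re-models what a user *is*.
    1;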

Yet people have escaped it. There are cures. You can have your objects representing things which your program deals with and have good abstractions to program in too. MVC frameworks aren't the cure, but some people benefit from any restraint they can get.

And here it is, 2010. Everyone wants to learn the language syntax and fuck around with it, which is fine and great. But not everyone is here, reading this, being told that you can and will paint yourself into a corner in any language -- even assembly language -- and that it isn't the language's fault but especially: the language won't help you.

Perl programmers and Perl programs suck because Perl programmers think that rather than Perl fostering bad code, it'll help you dig your way out, with all of the magical things it can do. This is what C++ programmers thought. Perl programmers would be far better off if they actually thought that Perl fostered bad code and worked against this imagined doom.

So, let me say it: every programmer in every language, if he lets himself tackle large or ill defined enough tasks, will code himself into a corner. Neither he nor perhaps anyone who follows him will be able to dig his way out. The house of cards will implode. Trusting in abstractions of the language to save you will accelerate this process unless, just perhaps, you're privy to the lore. Books talk about how to design inheritance hierarchies that make sense. They talk about how to handle multiple inheritance and how to conceptualize things as objects. There's lots of benefit to modeling your problem space not as "objects" in the sense of the API you're using but in the sense of actors and props in a drama.

Like C++ programmers of yore, Perl programmers reliably, consistently build houses of cards.

As Ruby programmers start to build larger systems and have the time to grow things to that point, they'll discover that merely representing things as objects isn't enough, and that the interactions between objects are out of hand.

This isn't to say that I'm not susceptible to these same forces. I most certainly am. And I fear them.

Wednesday June 02, 2010
02:14 AM

Walking into a new codebase: tips

Find out where the cache is. Rig tests or a test script to delete it before each run.

Make a file with the commands necessary to reinitialize the database to a known good state. Make that an optional step in running the tests.
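As a sketch -- the paths, database name, and .sql files here are hypothetical stand-ins for whatever the codebase actually keeps around:

    #!/usr/bin/perl
    # t/bin/reset.pl -- hypothetical helper: wipe the cache, reload the DB
    use strict;
    use warnings;
    use File::Path qw(remove_tree);

    # nuke the cache so stale entries can't mask (or cause) failures
    remove_tree('/tmp/myapp-cache');

    # reinitialize the database to a known good state
    system('mysql -u myapp myapp_test < t/sql/schema.sql')   == 0 or die "schema: $?";
    system('mysql -u myapp myapp_test < t/sql/fixtures.sql') == 0 or die "fixtures: $?";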

Use the test suite as docs to find out which APIs are important and how they're used.

Use the debugger to figure out which cross section of the code runs for a given task. Step through with a mixture of 'n' (next statement) and 's' (single step including stepping into function/method calls). As you trace through execution, you'll learn which utility subroutines can safely be taken for granted and skipped over. Note which tables data gets stuffed into and pulled from.
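A typical session looks something like this; the sub name is made up, the debugger commands are real:

    $ perl -d myapp.pl
      DB<1> b MyApp::Order::place    # break where the interesting work starts
      DB<2> c                        # continue until we hit it
      DB<3> n                        # next: run a statement, skip over calls
      DB<4> s                        # step: descend into the call on this line
      DB<5> x $order                 # dump a structure to see what we've got
      DB<6> r                        # seen enough; return from this sub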

Strategically pop over to another window and inspect database tables while in the middle of single stepping.

Write SQL queries to inspect database state. Even if you like DBIx::Class, it's handy to be able to simply write "select count(*) from foo group by state order by state desc" and other things.

If tests don't already inspect the tables for the correct resulting state, add tests that do. The utility queries will start life in a notebook or scratch file, get refined, then maybe wind up in a stub .pl, but don't stop there. Add them to the tests. Yes, tests should only test function, not implementation, but, in one sense, the API is probably just a database diddler with side effects, and its correct operation could be specified as not mucking up the database.
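Something in this spirit, say -- the table, column, and API names are placeholders:

    # Sketch: assert that the API left the database in the expected state.
    use strict;
    use warnings;
    use Test::More tests => 2;
    use DBI;
    use MyApp::API;   # hypothetical module under test

    my $dbh = DBI->connect('dbi:mysql:myapp_test', 'myapp', '',
                           { RaiseError => 1 });

    MyApp::API::cancel_order(42);

    my ($status) = $dbh->selectrow_array(
        'select status from orders where id = ?', undef, 42);
    is $status, 'cancelled', 'order row was marked cancelled';

    my ($orphans) = $dbh->selectrow_array(
        'select count(*) from order_items where order_id = ?', undef, 42);
    is $orphans, 0, 'cancel cleaned up its order_items';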

Get the code running on your local machine -- that should go without saying. Mock services, APIs, commands, and whatever is necessary to get the core code to run. Mock stuff until you get the code passing tests again and then start modifying the code. From one project, I have a mock implementation of a Vegas slot machine. My cow-orker and I referred to it affectionately as "ASCII Slots". It handshook, spoke XML over SSL, had a credit meter, tilted on protocol violations, and the whole nine yards. Furthermore, it could be told to abuse the client with a string of simulated network errors including every possible scenario for the connection having gone away after a packet was sent but before it was received, including for packet acknowledgments.

Before you start work, run the test harness piped into a file. After work, pipe it into a different file and diff it, or add the first one to git and let git show you what tests have started passing/failing when you do 'git diff'.
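Concretely, something like:

    $ prove -v t/ > before.txt 2>&1     # snapshot before touching anything
      ... hack hack hack ...
    $ prove -v t/ > after.txt 2>&1
    $ diff before.txt after.txt         # or git add before.txt, then git diff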

Comment the source code you're working on with questions, speculation, and so on. This will help you find stuff you were looking at by way of 'git diff'. You can always checkout HEAD on that file to get rid of it or else just delete the comment, but you may find that the comments you write to yourself as notes while you're exploring the code have lasting value.

Similarly to saving test harness output, save full program traces created with perl -d:Trace t/Whatever.t. Trace again later and diff it if you find that an innocent seeming change causes later tests to break. This can dig up the turning point where one datum value causes a while/if to take a different route.
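The same trick, sketched as commands (Devel::Trace writes to stderr, hence the redirect):

    $ perl -d:Trace t/Whatever.t > trace-before.txt 2>&1
      ... make the innocent-seeming change ...
    $ perl -d:Trace t/Whatever.t > trace-after.txt 2>&1
    $ diff trace-before.txt trace-after.txt    # shows where execution diverged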

If execution takes a route that it shouldn't have and meanders for a while before actually blowing up, add a sanity check earlier on.

-scott

Wednesday May 19, 2010
04:59 PM

Bane of the existence of ISPs...

In Minnesota around '93, an ISP started offering unlimited dialup. Through an arrangement with the state to make the Internet more widely available beyond government offices and college campuses, they were to resell access from the UMN.edu modem pool and the one T1 coming into the state to the general public. After my brother and I tag-teamed 24/7 for about two months, they changed their unlimited usage policy "due to the actions of one user".

That's after the 24 hour labs I was sleeping in, in one case ruining a keyboard with drool. Oops.

For years, I paid for dedicated SLIP connections, first in Minnesota then in Arizona, then ran a PPP emulator that takes a dial-up shell account and gives you full PPP with NAT. I helped kill Global Crossing by staying dialed in to the "shell account" 24x7. Downloads ran at night while I was sleeping.

When Ricochet hit Phoenix, I couldn't resist. $70 was not too much for total freedom of movement. Getting a real IP address and having the option of a dedicated IP was just awesome. Ricochet went bust, sadly. The modems were too expensive to make, they had Alps Electric of Japan making them, and they had too many made, riding the dot com optimism, and uptake was too low. God bless Ricochet. I seriously doubt wireless will be half that awesome again.

Then there was Cox in the days before you could get a home NAT appliance. I had a FreeBSD machine doing NAT -- strictly illegal. One modem (yes, cable modem) per computer on the 'net was the policy. And no serving. I carefully recorded scans against my network for a month and a half and firewalled to refuse, not drop, any future probes from those that probed me, then started hosting crap. Cox never caught me. When I moved, their system completely hosed my account. They could ping me, even as I held the modem in my hand completely disconnected from cable, power, Ethernet, or any other connection. Clearly it wasn't me they were pinging but I could never establish that with them. This was the first time I tried to convince techs that the system was *un*plugged. Cox started off on the wrong foot and just stayed there.

After months of paying for service that didn't work, I had to cancel and went DSL for the first time. Not many people had DSL at that time. Qwest was giving out Cisco 675s that you configured by typing commands at them over a serial cable from a terminal program -- not xterm but minicom or HyperTerminal or whatever. Having good enough lines was a really big deal. Most people didn't. Most people still don't, but they sell you the service anyway, refuse to fix the lines, and, for many users, DSL sucks in comparison to cable.

When wireless data again became available, I hopped back on the bandwagon, this time with T-Mobile. For $30/month, you got unlimited dial-up-speed data using GPRS. They ran you through a "transparent" HTTP proxy that recompressed images and cached shit, or made it look like a page hadn't changed when it had. This was extremely disruptive to a would-be web developer. It also ran you through two levels of NAT. It is and was ghetto. Outages were frequent. It was like Atlas holding the globe up, but instead of a big strong guy who cares, it was a fat stoner dude who could just barely reach the fridge.

The next major trend I just had to jump on was aiming high-gain antennas down the road. Shitty MMCX connectors with a service life of 0.5 insertions, plus increased WiFi noise as more and more APs go in and move more and more data, seem to be making this unusable.

So, back to DSL. Except that it disconnects constantly and has less throughput than 56k dialup. It keeps training down to 64kbps when I try to use it, and then probably drops, and if not, has an error rate that's through the roof. I'm pretty sure that dial-up would handle the noise better. This was after it didn't work at all until I took it into the ISP (little local ISPs rule -- I sat in the FastQ office for 2.5 hours while they messed with the thing). It worked on DHCP but not with static IPs, and no one could figure out why, but everyone was interested. I didn't press for details, but I guess it turned out to be "technical reasons". I had a great time chatting about tech and old school shit while this went on, though. I feel like I need to go hang out at the ISP more, and bring pizza next time.

I'm seriously tempted to turn that cell data back on. I might also give CDMA data a whirl, care of Cricket, who wants $40/month, no contract, with a 5GB soft limit.

I could easily imagine having the gateway machine with the dipole antenna soldered onto the mini-PCI card and a 56k modem plugged in, this Kyocera CDMA data-card-sharing appliance box, and the DSL modem all running concurrently, with me continuing to fumble around for a good connection all day long.

Friday April 09, 2010
06:14 PM

Nerd communication

I've written before about how being at the computer makes you look non-busy. Old people especially assume that if you're sitting there silently staring, you must be *desperate* for conversation. That you could be goofing off or concentrating hard on work makes this ambiguous to peers too.

I've also written about how it's impossible to communicate to clients what constitutes an emergency. Giving out my cell number to clients has never worked. I've been drunk dialed by chatty clients. Not being able to get to ESPN.com is an emergency related to the shopping cart somehow. If you yell at them, then they don't call you when orders get wedged or the site goes down. The amount of emphasis required to convince a client to only call you in case of an emergency exceeds the amount of emphasis needed to make them never call you. Same thing goes for the people sitting next to you. Add in that coding sessions can easily be 16 hours long and you're making completely unreasonable demands of people, socially speaking.

If you're going to write code in a non-trivial sense, you have to jealously guard your concentration.

That's nerd-non-nerd communication. Nerd conversation is something else entirely.

I had a hissy fit on Twitter recently when I realized -- or rather, when it was pointed out to me -- that RT (request tracker) queues were automatically, silently created for my various modules and people had been filing reports in them for *years* without any notification being sent to me. I, Marc Lehmann, and apparently no one else think this is a huge problem. I stewed for a while pondering how anyone else could possibly think that this design is okay, until I realized that it fits this model: opt-in communication. The fact that bug reports get bit bucketed until the programmer goes looking for them is exactly what programmers want.

Let me tell you a story of Life in Programmerville. The highway department schedules road construction a month in advance using shared calendaring. If not everyone is able to make time for the road construction event, they'll postpone and attempt to reschedule for next month. Your large house with a second story (with an attic) has a basement. The doorbell has auto-away and won't ring unless you've been seen unidle in the past 10 minutes checking your doorbell status. If someone presses the doorbell while you're (auto-)away, it'll log that someone pushed the doorbell, and when you next go to check your doorbell status, you'll see that someone came and went and probably take a digital photo of them because that seems like a spiffy feature. Houses all have fiber so that there's no phone number that telemarketers can call that causes ssh sessions over DSL to time out (and god knows you don't have a phone connected to the land-line anyway). The cell is set to silent before bed so no one wakes you up too early, and then set to ring again before you go out for dinner in the evening so friends can tell you if they're going to be late. During the day, when the inclination strikes, you check Twitter, Facebook, LiveJournal, use.perl.org, perlbuzz, CPAN's new module feeds, SMS messages, email, RSS feeds, github, work email, various bug trackers including RT for your Perl modules, your httpd_access logs, dmesg, top, reddit, digg, and Slashdot. If one unexpected pop-up appears on your screen, you flip the hell out and don't stop modifying things until you've discovered five more holes and plugged those too. ... which is a ranty way to say that programmers, in the name of controlling when they're distracted, strongly favor polling over an interrupt-based model. And that's generalizing, of course. All generalizations are false. I know.

Programmers have a lot of sympathy for other programmers, but sometimes this model makes it very difficult to reach another programmer. It might take hopping between different communication mediums for a while before you catch them where they're polling, and the polling cycle might be very slow indeed. Events that are too old get discarded, either by having been pushed past the point where they're seen or by being recognized as probably expired and intentionally ignored. Sometimes you have to try to get a message to a programmer weeks or months later. Though they follow vast numbers of channels -- or more likely because they follow vast numbers of channels -- each message is treated as far less significant.

Broadcast-subscriber-poll mechanisms are especially popular -- Foursquare, Twitter, commit logs, IRC, etc.

I critically failed to generate chatter on git... specifically, on the email gateway for commit messages. I accidentally opted out of a very important communication mechanism, unable to see other people's communications due to a snafu. Everyone probably took it as an intentional concentration-defense tactic that they didn't like but were extremely disinclined to mention to my face, out of this same programmer sympathy for concentration.

A recent commenter (hello!) spoke of Asperger's and perceived arrogance. I don't have good data to speak from, but I have to wonder if years and years of trying to jealously guard concentration and tending to prefer quiet over spurious interrupts would create this social veil. Practicing ignoring verbal communication can't be good for learning non-verbal communication. And isn't arrogance essentially being more interested in what's going on in your own head than what other people have to say? Isn't everyone interested in their own thoughts essentially arrogant?

-scott

05:15 PM

"Do you think Nintendo would allow Flash-cross-compiled..."

Apple fanboys never cease to amaze me.

This gem is circulating on Twitter: "Do you think Nintendo would allow Flash-cross-compiled games?"

Um... _yes_. Quite a lot of companies are in exactly that market -- game engines. Any game made in the last ten years is a mish mash of licensed code, runtimes, toolkits, translators, portability wedges, cross platform libraries, and so on.

Apple is not acting like Nintendo. It's acting like Nokia did with the N-Gage. Except Apple actually put 3D accelerated hardware in their device. But Nokia at least had a D-pad and buttons.

20 years ago, Nintendo tried to lock developers into exclusives. In those days, writing a game for one system was so radically different from writing it for another (the Ricoh 2A03, a 6502 variant, in the NES vs. the Z80 in the Sega Master System, with the games written in assembly with custom bank switching, for example) that porting code was out of the question. The SNES/Genesis era continued roughly along these lines, except C became a possibility. By the time the PlayStation 1 and N64 came around, development was mostly C (with some custom assembly for the DSP). Then you could actually write the bulk of the game for both systems at the same time. Attempts to abstract away differences in GPUs have continued by 3rd parties ever since.

There's a lot of great stuff being written for the iPhone, but there's also a lot of great stuff being written for the Wii and Xbox 360 online marketplaces. To tell developers that they have to write entirely separate versions for each (requiring code to be originally written in Objective C and not translated) is to add a significant amount of work for anyone who would want to write games for multiple platforms. Apple has created an artificial us-vs-them scenario, apparently hoping the other guys lose out.

When you advocate without having your facts straight, you've gone from being a tech enthusiast to a marketing tool.

-scott

Friday March 26, 2010
01:36 AM

On the bright side...

My six month contract is up. It's been a busy six months.

I tell people that consulting is feast or famine. I've enjoyed the feast. I've been eating good food from the farmer's market. Before that, I've killed a 25 pound sack of rice and a 50 pound one along with crazy amounts of dried black beans. I've put on weight. I was down to 155 pounds for a while.

The firewall/gateway machine was apparently suffering some severe hardware problem that caused it to crash more and more frequently. I got it booting memtest only to see memtest reporting the same unhandled page faults that Linux was reporting. Daaamn. That Toughbook was replaced with another. I have to find somewhere to recycle a 20 pound blob of magnesium now. That's just one misadventure of mine these guys had to endure that's now taken care of, so the next guys won't have to.

Speaking of Toughbooks, poor old fluffy, the CF-R1, got replaced and retired, first with a CF-51 then with a CF-73. The 51 proved too bulky to haul around under my own power all over creation. It actually worked the backpack zipper loose once and leapt out, hitting the sidewalk at a good speed (and survived, minus a chink in its armour). I could have pulled the 32gb Compact Flash card out of fluffy and stuck it directly into the 51 and then the 73, but fluffy was still on a 2.4 kernel and upgrading to 2.6 was one of the goals. 2.6 wasn't setting up AGP correctly on the thing so I had to keep it on 2.4, and glibc had long since passed 2.4 by, and that was making it difficult to use things like alien and klik to install software on it. That and the 256 megs of RAM made upgrading critical. I decided to try to upgrade the existing OS and keep the software. That proved to be a huge time sink and a disaster. If I had it to do over again, I'd have just done a clean install. I used to be able to manage that but low level Linux deps have gotten far more complex.

Worse, Slackware on these Toughbooks with the Intel WiFi hardware -- and this blows my mind but it's perfectly replicable -- loses my packets in Chicago. I don't think it's the Intel WiFi either, though the firmware crashes constantly and constantly gets restarted. traceroute in Knoppix on the same machine shows almost no packet loss or lag; traceroute in Slackware on the same machine shows massive packet loss, terrible network performance, and high ping times. It may or may not impact wired connections. It may not really have anything to do with the router in Chicago; the WiFi stack may have just been systematically losing replies and leaning heavily on retransmits of unack'd packets. This problem proved disastrous. Trips to Seattle and Minnesota as well as in-town coffee shops (often when the home network was down!) left me stranded without network.

An income left me the chance to upgrade hardware and that chance bit me in the ass. But, on the bright side, it's sorted out, and next go, I won't be fighting with this one. I should be able to squeeze a couple of years out of this machine. And I have a spare. These puppies cost me $60 each on eBay. I've been bashing the Linux users who act like Microsoft users, reinstalling the latest version of the OS at the first sign of trouble, but I have to give it to these guys for being plucky and jumping into battle with just the tools du jour and doing a fantastic job of wielding them.

The thing with Slackware and Chicago reminds me of a certain FreeBSD NAT at a small company in Scottsdale that absolutely would not speak to a Windows webserver that one of their clients needed for work.

I got to spend some more time with jquery and I love it. I almost hate to say it, but HTML (heh, DHTML) is turning into a first class windowing toolkit. Compared to AWT or Swing or most things built on top of X, all of the redraw event and clipping stuff is hidden from you and still optimized, and HTML+CSS is far richer for describing a GUI than the XML that can be used to build GTK windows. HTML isn't a programming language but it's a fantastic example of a declarative language nevertheless. It declares things. Perl does things, one at a time. Creating apps that run in a webbrowser feels like a terrible abstraction inversion, but I have to remember that these things change with time. Hell, Apples run "display PDF" and render HTML widgets in the toolbar. Anything is possible. I was playing with JavaScript in Netscape 1.2 (or was that 1.3?). Almost everything was arrays with numeric subscripts. Elements didn't have ids. You'd diddle document.form[5]. Things were buggy beyond description. It's come a looong way, baby.

I got to spend some serious quality time with DBIx::Class. I tried hard to be open minded, but this has really cemented my feelings about ORMs. SQL is a powerful, expressive language. We're living in an age that finally values domain specific languages. Regex is one; it rocks. SQL is another. It rocks at declarative "programming" against relational datasets. Trying to replace SQL with Perl is dumb. That would be like trying to rewrite a good awk script in QBASIC. Or like writing a Perl program to, step by step, add visual elements to a screen (hey, that's what Java Swing does!). Sure, QBASIC is a more general purpose language, but it does not do what awk does, at least not cleanly, and that's in the simple case. In the complex case, it's just downright painful. I know people don't like to mix Perl and SQL, but for chrissakes, we're working with HTML, JavaScript and CSS already and probably lots of other things.

There are some useful abstractions in DBIx::Class. My stabs at abstracting DBI dealt with those rather nicely, I think. I should release some of that. I guess I was doubting that it's still relevant, but I think it is. One thing DBIx::Class does do that's neat is deploy the DDL (data definition) for you. If you deploy Perl somewhere and point it at a database, it'll create the tables, constraints, and so on. Sweet!

I described using DBIx::Class as reminding me of a Mr. Bean episode where he winds up trying to carry a sofa and other furniture home in a tiny car and has to rig up a system for steering, braking and accelerating using broom sticks that he actuates from on top of the sofa on top of the car. All of the indirection did not help; it didn't even just get in the way; it made the job almost impossible, comically so. Rather than just using identifiers that exist in the database, relations get nicknames in the Schema files. With the auto-generated stuff, you're referring to foreign tables by using the name that that table uses to refer to the current table's primary key. I think I have a hard time wrapping my head around anything I cannot fathom; so much of my hackery has been based on colliding my own design sense with others' and anticipating good designs that I find it almost impossible to anticipate a bad design. But I'm better educated in this department now, enough to brush up my code and release it.
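For flavor, the same query asked both ways -- the schema handle and resultset name here are hypothetical:

    # Raw SQL via DBI: say exactly what you mean.
    my $rows = $dbh->selectall_arrayref(
        'select state, count(*) from foo group by state order by state desc');

    # DBIx::Class: the same question, routed through resultset plumbing.
    my $rs = $schema->resultset('Foo')->search(undef, {
        select   => [ 'state', { count => '*' } ],
        as       => [ 'state', 'state_count' ],
        group_by => 'state',
        order_by => { -desc => 'state' },
    });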

Then I got to spend some quality time with git. I did sit down and read about its internal datastructures. Bootstrapping knowledge is hard. A lot of stuff on the Web is misleading. This happens with young technologies -- people pretend to be experts who really shouldn't. Tutorials assume overly simplistic scenarios that get blown away when you're working with people who know what they're doing. You need to know a certain amount to be able to see signs that something is misleading or incorrect.

I think git's reputation stems from the sort of alpha geek who early-adopted git. These people get excited by cool technology but lack some human-to-human teaching instinct. They declare things to be "easy!" very readily and rattle off strings of commands that can easily go wrong if considerations aren't taken into account. They have no idea they're doing this. I'm generalizing from several git users I've been exposed to for some time, here. I think Perl users tend to fit this same class. We're so good at what we do and we've been doing it for so long, we forget the pitfalls that novices step into. Every problem is "easy!". We jump to give the correct bit of code but fail to communicate what's needed to conceptualize what's happening or to otherwise generalize the knowledge. To those on the outside, this creates the impression that the technology in question is overly fickle and overly complex -- somewhat ironically, since the attempt was to portray it as easy. Anything unpredictable and hard to conceptualize is going to seem "hard".

But I'm beginning to be able to conceptualize this thing. At least one individual in the git camp showed enough self awareness to communicate that understanding git's datastructures is the key to understanding git. git does awesome things, no doubt, and with power comes a certain amount of necessary complexity. This complexity, be it in git or Perl, cannot be swept under the rug.
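For what it's worth, those datastructures are directly inspectable, which is how it finally clicked for me:

    $ git cat-file -t HEAD           # 'commit' -- every object has a type
    $ git cat-file -p HEAD           # a commit is just pointers: tree + parents
    $ git cat-file -p HEAD^{tree}    # a tree: (mode, type, sha1, name) entries
    $ git log --oneline --graph      # the commit DAG, drawn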

Then there's Zeus. I spent a week, on and off, just looking all over creation for the WSDL, to use as a cheat to figure out what the data being sent to the thing was supposed to look like. Turns out that even though the Zeus site insists that it has it, it doesn't, but it can be found in the most unlikely place -- the help screen of the Zeus machine itself. Even though Zeus's admin interface and API are implemented in Perl, there are no Perl examples in the docs of using the datastructures. The various resources that must get built to set up a functioning loadbalancer through the API are numerous, haphazard, and badly non-normalized. The documentation lists almost every function as taking (Char[] names, Char[] values). Bloody thanks a lot. Names of which other resource? What's valid for the values? Sorting out the API took a lot of the same sort of reverse engineering I was doing right around 2000, trying to puzzle out the numerous bank credit card processing gateways before Authorize.net came along, published some *good* documentation, and ran everyone else out of business overnight (even though Authorize.net ran on Windows and had outages that would sometimes be long enough that the credit card companies reversed the charges -- something like 5 days). It's always good to get the chance to work with a product that's *expensive*. I can play with Linux all day long, but you have to get a job at the right place to get to touch certain things. I should do an online article detailing what I've learned.

Oh yeah. And I got to spend a little time with Nagios. The logic for sucking Perl plugins in is just cranky. It commits the anti-pattern of caching failure, and it doesn't signal failure in any useful way. I actually doubt that it knows the difference itself.

I have to thank these guys for investing in me to learn one thing after another. I wish I could repay that investment.

I haven't lost my touch in tickling bizarre bugs.

I practiced being nice, at the urging of friends. I'm still a deer in the headlights when confronted with hu-mans; I can never conceptualize what's going on in their heads, and I think my stress in facing them stresses them back out. It's like if you encounter an alien in the woods and you're all like "ARHGH!" and he's all like "ARHGGH!" and so you're all like "ARGHGH!". It's just no good. Even being nice, I have to learn how to distribute warm fuzzies. My normal mode seems to be to antagonize people into answering questions. If things don't make sense, I tease people for being apparently silly. People *hate* being unintentionally, apparently silly, so this is a fantastic way to get them to answer things -- they stop what they're doing and vigorously explain away. Putting down that tool was hard. Learning other tools, I expect, will also be hard. Anyway, this was a welcome chance to experiment with that one.

Friday February 19, 2010
02:23 AM

A programmer just like me!

There was a Red Dwarf episode (spoofish British lowbrow scifi) where Rimmer, the officious, slimy roommate, managed to get another hologram/clone of himself going. For a while, the two of them annoyed the hell out of Lister, the other roommate. But then the two Rimmers became each other's worst nightmare.

Everyone thinks, gee, wouldn't it be great if I had another programmer just like myself? You kick out tons of code each day. You don't really get stuck on stupid things. Everything is logical and orderly, using all the right technologies.

Everyone wants this mysterious ninja coder. I'm sick of hearing about him. He writes lots of code and gets things done fast. He doesn't get stuck on things or need a lot of help. He agrees with your style and your taste in modules, and he agrees that you know software architecture.

Today, I kept hearing "we need really good programmers". I also kept hearing "I'm concerned those guys have been doing things their own way too long and aren't adaptable enough to our way".

You think you know how to architect but you don't. If you had someone just like you, they'd still think your code sucks.

The people that wrote the code can quickly add features to it and work on it. That's always how it is. That means nothing. Look at awstats. It's a fucking nightmare. Yet, for ages, it grew rapidly, adding features left and right, doing more and more things, and was one of the de facto stats packages. It's a mess of global code, global data structures, and undocumented transforms.

But really I'm making excuses for myself. I suck. I've taken stabs at it already but I need to compile a list of things I've done to blow this gig. That's another post. Subconsciously not being willing to put code in unless I can convince myself that it's correct, coupled with this mass of undocumented side-effect-riddled code, is perhaps the worst offender. Two gigs ago didn't work out so well, the last time I worked for a medium-sized company, so I swore to do things differently. I think I preferred that old approach -- I instrumented when I needed to instrument, and when people didn't like it, I told them to piss off. Trying to do things other people's way was educational. I know a lot more about several technologies now. Trying to swap out my development philosophy was just destructive and pointless. I will never, ever get the hang of this.

-scott

Tuesday February 16, 2010
12:41 PM

"Mozilla too much like Windows?" addendum

http://news.cnet.com/8301-13505_3-10453399-16.html?part=rss&subj=news&tag=2547-1_3-0-20

I thought, oh, goody, finally someone else has woken up to this and documented it in a mainstream news outlet. But no. His analogy is limited to sluggishness and "community".

Look, people. Mozilla crashes. It lacks the stability that used to be the hallmark of Unix software. Unix software didn't blame the user -- "you have too many tabs open, no wonder it's crashing". Unix software has traditionally withstood abuse.

If a user on my system launches firefox while another user is running it, the user already running it gets a new window with all of their start tabs opened again, and the user who just launched it gets nothing. What the fucking hell?! That's another way that Firefox is like Windows -- it's so preoccupied with the single-user experience that they throw security out the window.

There's this bug report of Mozilla ruining an engagement when it leaked data from one user profile to another: https://bugzilla.mozilla.org/show_bug.cgi?id=330884

That's not an isolated case. Any "privacy" has been half-assed, and the creators repeatedly forget the foibles of earlier versions and repeat them in various incarnations.

If one user launching firefox affects another user running it, then I seriously doubt that there's any real security -- security that's *free* with Unix unless you work to defeat it -- protecting the first user's data from the second. Actively working to negate free security in the name of rushing a whiz-bang product to market is decidedly from the Windows camp, and Firefox does that over and over again.

It nags you to be your "primary browser". On Unix. I'm probably not running IE on Unix and if I am, it's not successfully hijacking "primary browserhood" from Firefox. Instead, it's fighting against things like Konq. Unix software shouldn't and, aside from Firefox, generally doesn't fight with other software in petty little squabbles like this. This was entirely a Windows thing.

Security. Abysmal. Even the legendarily bad Unix software such as sendmail and NFS/rpc required extremely few patches and settled down into extremely infrequent vulnerabilities much more quickly. Firefox makes sendmail look secure. That's pathetic. Bind 9 is a fortress by comparison to Firefox. And no, frequent security fixes are not the same as security. Bugs are inevitable, but endless torrents of them with no slowdown in sight was not the norm before, and it's tragic that it is now.

It's very much the Unix way that there be multiple competing offerings for any task. Unix is modular. People readily replace 'ls' and 'more' with versions they like better. Firefox effectively had complete market share. The next contender was Konqueror, which in olden days was fast and light but quickly turned into another bloated, crashing, sluggish monster. Now there's Chrome.

I held out with links -g and w3m (with the graphics patch, inside screen) as long as I could. Both of those are stable, well behaved programs. galeon was also fun. Eventually, I found I had to use Firefox or Seamonkey. Granted, the Web has long caused problems for Unix. Anyone remember client side VBScript/ASP, or Microsoft's IIS forcing clients to do NT domain authentication to access Web resources?

User interface race conditions were squarely in the domain of Windows. You should never have a field selected and then find that the program is modifying it at the same time you are. Yet Firefox's URL bar will suggest completions and then take unrelated keystrokes as confirmation that you want them; moments after you click into the URL box and after you've started typing, it'll highlight a chunk of text, causing you to start typing over stuff; it'll move the cursor moments after you carefully click into the field where you want the cursor, either to the end or the beginning. I recently found that I couldn't disable searching Google. I was trying to paste a javascript: bookmarklet into the URL bar and every time I hit enter or clicked the button, it searched Google for the JavaScript. I found an explanation of how to disable that behavior -- remove all of the .xml in wherever/firefox/searchplugins and restart. The behavior continued. I looked in searchplugins and Firefox had replaced the .xml files. Fuck! There's no allowance for people who hate Google. The choice was made for you and thrust upon you -- they want money from Google, therefore you're going to be sacrificed at Google's altar, and if you don't like that, fuck you.

Just like Windows software, Firefox treats its users like shit. It's controlling, patronizing, combative, and demanding.

-scott

Tuesday February 09, 2010
12:04 PM

Appalling unprofessionalism and sexism at Frozen Perl

Someone -- male -- noticed and loudly proclaimed having noticed that a "sexist" word was used on the workshop's IRC channel. Of course, they didn't say what or who said it. I'm concerned it might have been me. I said that a certain talk "sucked donkey dong. and not in a good way". Perhaps I said other things.

Since the Ruby conf fiasco (link below), we've been a bit trigger happy to identify, point out, and fix anything possibly offensive to... well, I guess, anyone who could be offended.

http://theworkinggeek.com/2009/06/dirty-presentations-xkcd-and-the-perils-of-140-cha.html

Commenting on that link (which commented on the Ruby presentation): it seems like they're saying that because it's stereotypical of adolescent geek sexuality, it's bad. Yes, there are bad aspects of geek sexuality (especially the underdeveloped parts), but not everything in geek sexuality is automatically bad.

Grown men can't be juvenile; neither can geeks. The other day, I threw a small beanbag embroidered with the word "douchelord" at a woman (a well-humored one -- she's married to a sometimes-adolescent college roommate I had, and she's the one who bought the beanbag) who was going off about the plot intricacies of the _three_ Twilight books, which she read. I was being subjected to female adolescent sexuality and it was painful. Thank heavens (I believe in empty heavens free of deities and free of undead humans) that, as adults, we're able to communicate with the opposite gender. Or not.

I don't claim to offer wisdom here. Perhaps I should claim to offer the point of view of the average sexist idiot.

As it stands (and I was told this -- I didn't count), two women attended the workshop, which (ditto) had about 140 people at it. Should our concern really be to try to make it even more "professional" in hopes that the extremely few, extremely tiny lapses are what's keeping the floodgate of women closed? And if it's so important that gender balance be struck in gatherings, why aren't we going to pottery class and book club?

Often I suggest on IRC and in person, through off-color jokes, that I like sheep. I am essentially from Minnesota, after all. Sheep are not offended by this, I'm pretty sure -- the jokes, I mean. But humans often are, often in the same way and to the same degree as with any other remark I make, and as far as I know, there's no goal to try to make Frozen Perl more friendly to sheep.

I suppose my goal is, as it has always been, to offend people. It's my non-outgrown geek adolescent sexuality behind the wheel. I realize that it's possible to make inside jokes where the only people who would be offended don't get the joke, but that doesn't serve my goal. I suppose I want to tweak the normals. I hang out with freaks. I adore them. Lack of freaks makes me anxious. Frozen Perl makes me anxious. The phrase "white bread" comes to mind. The one mohawk went a long way to making me feel at home. Minnesota does pretty well on the freak scale -- better than Phoenix. Perhaps -- I can't say -- this feeling out of place gives me some shared empathy with the primary demographic that we're so eager to make feel comfortable.

I guess when it boils down to it, on one hand, we're a barren wasteland of human sexuality trying to make people of different sexes and sexualities feel comfortable, and on the other hand, even though we've trained ourselves to react with horror and disapproval at anything resembling a gender issue, geeks are generally among the most non-judgmental and open-minded.

If a certain level of color -- and that standard is set to a very low, non-demanding level -- if a certain level of color can't be established at a con, my bored brain might make it its goal to get thrown out. I might fancy myself the first person to get thrown out of a Perl con.

Hmm. "Uncomfortable and boring"... I've heard that phrase before.

-scott

Monday February 08, 2010
03:55 PM

Frozen Perl and Genetic Algorithms + Floating Point Tests

My Frozen Perl Lightning Talk was on the topic of large legacy codebases, sort of. Specifically, it was on how to continue to grow them without having to look at them. A more serious talk might follow on this one -- one that involves useful code visualization and inspection. It's a topic that's been on my mind for a long time and one that my friends and I often discuss.

I monkey patched Test::Harness to accept floating point values for test results, rather than merely "ok" and "not ok", and then used that as a fitness test for a genetic algorithm combined with a Markov chains chatterbox trained on Perl source code. It kept alive, and bred, the permutations of the Markov-chain-generated code that did better on the tests.
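The fitness plumbing amounted to something like the following sketch. This is not the actual Test::Float code; the file names, sub name, and output format are from-memory approximations:

    # Sketch: score a generated candidate by running the test file and
    # summing floating point results (e.g. "ok 3 0.72") instead of ok/not ok.
    sub fitness {
        my ($candidate_source) = @_;
        open my $fh, '>', 'candidate.pl' or die $!;
        print $fh $candidate_source;
        close $fh;
        my $score = 0;
        # t/fitness.t exercises candidate.pl, emitting one scored line per test
        for my $line (split /\n/, `perl t/fitness.t 2>/dev/null`) {
            $score += $1 if $line =~ /^ok \d+ ([\d.]+)/;
        }
        return $score;   # higher scores survive and breed
    }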

It was part political commentary -- "Perl programmers pioneered building massive unmaintainable codebases and it's Perl programmers who will take it to the next level!". Also with my monkey patching and the genetic algorithm's mucking about in the Markov Chains' package, I was showing off _hackish_ styled code. In what I consider good Perl spirit, interesting modules were wired together in interesting ways.

The first thing people heard during the day was that you shouldn't use anonymous subroutines (!!) or vi or emacs but instead use an IDE (!!). This was from a presentation done in Keynote with Comic Sans. The last thing they heard during the day was "experiment, have fun, explore".

I'm tired of the message at these sorts of things being "Perl is dangerous and has lots of sharp pointy bits that'll put your eye out... we don't want Perl to get any more of a bad rep so stay on the trail and keep your arms in the bus". Fuck that. There are precious few "at your own risk" languages. Assembly is one of them. C++ can be. Ruby hackers indulge playfulness in its place.

I keep telling people that if Perl turns its back on hackery, it'll lose that ecosystem *and* it still won't break into Java's and Python's. It'll wind up without a user base at all. Computer companies, languages, automobiles -- any product at all -- that's unclear on its identity winds up with no user base. Perl programmers need to know that Perl came out of sed, awk, BASIC-PLUS, and a pile of other languages, and that the sharp edges exist for a reason, and a lot of parts of Perl don't make sense in the context of yet-another-C-like-language.

Also, FYI... I've been writing a lot on http://scrottie.livejournal.com lately, including stuff that I should probably put or copy here. And there's me on Twitter too.

Oh yeah... the code is at http://slowass.net/~scott/tmp/Test-Float-0.1.tar.gz and in http://slowass.net/~scott/tmp/Test-Float-0.1/

-scott