NOTE: use Perl; is on undef hiatus. You can read content, but you can't post it. More info will be forthcoming forthcomingly.

All the Perl that's Practical to Extract and Report


Journal of scrottie (4167)

Thursday January 21, 2010
06:22 PM

Don't worry about it

I struggle with corporate culture. The last time I worked for a medium-sized company, I resolved to adopt a "don't worry about it" attitude. I was worrying about things that were other people's jobs; about things that could not be fixed; and, worst of all, about things that existed only in my interpretation of my project, as opposed to my actual assignment. Right now, I'm looking at a device that has to be programmatically configured. The Web UI maintains and pulls state from the database, not the device, creating the possibility that the two will get out of sync. That's one of many examples. I keep running through "what if" scenarios in my head. Working on code alone or in very small teams, I have to. Here, I can't do this. I have to do something, get feedback, and go from there. I have to leave most of the problems unsolved. For this feature, I need to start with an on/off switch.

Friday January 15, 2010
10:01 AM

Less non-constructive thoughts on DBIx::Class

Working with the Web, there's going to be some kind of dispatch system for getting requests to handlers, and there are going to be the handlers. Then, with a database, there's logic to fetch from and store to the database. You've probably just stepped into a trap. In a useful object design, objects are named after the nouns the system was written to operate on, and after the verbs it performs on those objects. The program is a reflection of the problem it's meant to solve and enough of the problem's world. If you're writing an accounting system, you would, ideally, have objects for line items, accounts, etc., and if you're doing a visitor type thing, then for actions that can be done on other objects. But, chances are, that's not what you're doing. Your problem has shifted from accounting to that of dealing with the computer you're programming. Your objects are named after this problem space instead: request handlers, the database, and so on.

MVC won't save you from defaulting to dumping business logic into big classes because OO-ifying the business logic isn't as pressing as setting up the web side and database glue.

You could subclass your DBIx::Class resultset objects and trust your schema to model the actual problem. That could work nicely if your database is full of useful views and uses views to abstract away the changing, growing actual schema. Or you could make a dedicated effort at putting objects in front of resultset objects for the important things ("account", "customer", etc.) and letting those serialize/deserialize themselves using DBIx::Class, with a has-a sort of relationship to the resultset objects they correspond to. There are other things you could use instead of DBIx::Class if you only want persistence -- KiokuDB comes to mind.
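
For concreteness, here's roughly what that has-a arrangement might look like. This is a sketch, not anyone's actual code; the class name, column names, and methods are all invented for illustration:

```perl
# Hypothetical sketch: a domain object named after the problem
# ("Account"), which holds a DBIx::Class row object rather than being one.
package MyApp::Account;
use strict;
use warnings;

sub new {
    my ($class, %args) = @_;
    # $args{row} is the DBIx::Class result object we wrap (has-a, not is-a)
    return bless { row => $args{row} }, $class;
}

# Business logic lives here, in the problem's vocabulary...
sub apply_credit {
    my ($self, $amount) = @_;
    die "credit must be positive\n" if $amount <= 0;
    $self->{row}->balance( $self->{row}->balance + $amount );
    return $self;
}

# ...and persistence is delegated to the wrapped row.
sub save {
    my ($self) = @_;
    $self->{row}->update;
    return $self;
}

1;
```

The point isn't the particulars; it's that MyApp::Account is named after accounting, not after the database, and the DBIx::Class object becomes a detail it happens to hold.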

I don't have any brilliant thoughts right now other than to say, don't do that.

-scott

Tuesday January 12, 2010
12:07 PM

Race condition Deja Vu

Part of the last gig involved doing high availability and extreme high reliability (large amounts of money involved) between two systems without locking primitives. Perhaps a future version of XML or SOAP will include locking primitives. Rather than speaking HTTP, it was just raw XML over SSL (with, as per regs, authentication repeated inside of the SSL connection... no single point of failure was the guiding design). Either the server could be rebooted, or any of the clients, or both, at any moment, and they'd figure out where they were. This wasn't properly planned for to start with, and it proved to be a major bugbear that kept cropping up. The regs also required that once something was done, it would not be un-done. The nanosecond that a random number was produced, it had to be preserved. Even if someone had an ultra-sensitive EMI reader and could pick the randoms out of RAM as they were generated (and the generator churned constantly, so picking up the seeds with EMI would be of limited utility), and had the ability to crash the server at any moment if the randoms selected weren't to their liking, it still wouldn't matter, because they would just re-appear after the server came back up. This means that the server could make an important decision, such as what a random was going to be, and send it to the client; the client could crash before it actually got it; and when it came back up, it would have to figure out that the server was ahead of it and replay the things that, from its point of view, hadn't happened yet. Sync without locking is a bugbear.
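
A minimal sketch of the persist-then-send idea, with invented names and an in-memory array standing in for durable storage (the real system was far more involved): the server journals the decision before anything observable happens, so a crash on either side just replays from the journal.

```perl
# Hypothetical sketch of "decide, persist, then send" with crash replay.
package TinyJournal;
use strict;
use warnings;

sub new { return bless { seq => 0, log => [] }, shift }

# The nanosecond a value (e.g. a random) is produced, record it durably
# under a sequence number -- before it is ever sent to a client.
sub decide {
    my ($self, $value) = @_;
    my $seq = ++$self->{seq};
    push @{ $self->{log} }, { seq => $seq, value => $value };
    return $seq;
}

# After a crash, a client reports the last seq it saw and receives
# everything it missed -- the exact same values, re-sent, never re-rolled.
sub replay_from {
    my ($self, $last_seen) = @_;
    return grep { $_->{seq} > $last_seen } @{ $self->{log} };
}

1;
```

Because the decision is journaled before it's sent, crashing the server at an inconvenient moment buys an attacker nothing: the same values re-appear on restart, and a client that died mid-receive can catch up from its last acknowledged sequence number.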

So, now I'm doing web stuff with XML/SOAP, memcached, DBIx::Class, etc with async agents that push...

-scott

Wednesday November 11, 2009
01:50 PM

Every Toughbook ever made

I'm thinking out loud here but also telling a bit of a tale.

I could have taken $1000-odd and bought a new laptop and really hoped I'd get on well with it, or else psyched myself into liking it by virtue of cognitive dissonance. Instead, I decided to buy used and not spend too much. So I bought a CF-51. It has a 15" display. This is really bugging me. Among other things, it has worked the zipper loose on my backpack and leaped out of the bag while I was cruising down the canal on my bicycle. It survived, minus a chunk. It probably wouldn't survive hitting the same corner again. This is a semi-rugged Toughbook. I ordered the other machine I was eyeing, a CF-T2. Compared to the R1 it's meant to replace, it's only slightly larger, its touchpad works (the button on the R1 is on the motherboard and it's worn out), and, critically, it takes twice as much RAM. I tried and failed to get ACPI to work. This is a dark art. There are no BIOS updates published by Panasonic for it, and exactly which revision of the machine, the BIOS, and Linux all interact here. I demand suspend of some sort. So I've been shopping again. I bought a T4, which is newer, takes even more RAM, and, most critically, has a better armoured LCD. I cracked the screen on the R1 twice, though both times under extreme circumstances (eg, getting run down by a Buick). But I'm also looking at the 73, which is a pound lighter than the 8-pound 51 and has a 13" rather than 15" screen. It's better armoured than the T4 but has worse battery life at 3 hours, and it still weighs 7 pounds vs the 3 pounds of the T series.

This must be awfully boring to read (seriously, why is anyone here?) but I'm obsessed with this.

I want light (my laptop goes lots of places), small enough screen to use on long Greyhound trips, good battery, durable (I wear laptops out but they also suffer harsh backpack conditions, and yes, the R1 was in a laptop case when it got smashed), suspend in Linux...

I already have a CF-27 fully rugged machine. So if I order that 73, I'll have a model from almost every line of Toughbook that Panasonic makes, excepting only the swivel-screen version of the fully rugged machines and the hand-held industrial computers. I think I probably should write some comparative reviews...

-scott

01:38 PM

My own worst enemy

"by revdiablo (1502) Neutral on Tuesday November 10, @11:32PM (#71086) ...
I guess rants are meant to be hyperbolic, but what the hell? You might want to ask yourself whether your wounds are self-inflicted. Frankly, it sounds to me like you're the cause of your own problems."

I'm elevating this comment into a journal entry and duplicating (with modifications) my reply.

Yes, I struggle with that question -- to what degree my problems are self-inflicted. As I said (either here or previously), I've tried very hard to do exactly what other people do and just run Debian and let it update itself. I don't know why things that just work for other people don't work for me. Maybe I push them harder. Maybe my mindset is different. My brother and I used to hang out in an arcade, and we knew how to crash half a dozen of the games there. It's well known that programmers make terrible testers of their own software. It's just too difficult to be caught up in the minutiae of the implementation and, at the same time, keep the larger picture in mind of how people are going to try to use the whole thing. Vast numbers of Windows users swore that NT 3.51 was stable, and that NT 4 was rock solid, and so on. Turns out that they are, if you rationalize away all of the crashes as somehow being your fault and accordingly constantly modify your behavior so as not to provoke the OS. I think there's some of that same mindset going on in the Linux camp.

As with the arcade, a lot of my problems stem from one thing: I try to do a _lot_ of stuff. I build hundreds of packages from source. I cluster machines using single-image patches. When I learned to crash Primal Rage 2.x, I wasn't trying to learn how to crash it. I was merely trying to game the game so that only the challenger would ever have to put a new quarter in it. If you got challenged during the final sequence -- where you have to fight each enemy from the entire game, one at a time, with one (double-sized) bar of health -- the game would attempt to resume after the challenge. This worked the first time, but the second time, after playing all the way through again, boom, red 68k register dump screen.

A lot of my motivation for posting these sorts of entries is that I really want to be contradicted. If I can be corrected, I can be educated, and sources of frustration eliminated. It also connects me to how other people think, even if I don't choose to adopt that way of thinking. Often I do choose to. Frankly, I spent a fuck of a lot of time hacking around by myself before I was exposed to other programmers, and I willfully shut myself off from other people. It's easy for people to take for granted how socialized almost everyone is. "Common sense" is hard to define, but it seems to include a lot of implicit knowledge, picked up through observation, about what works and what doesn't.

And part of it is that I'm stupid. I've accomplished a lot through doggedness, but now I'm old, tired, lazy and burnt out, and very pissed about that fact.

Another part of my frustration is that, in doing things that other people don't, I notice when those things stop working. I used to be really good at de-hosing hosed Linux (and other Unix) systems. This was a time-honored sysadmin skill. It's gone by the wayside. It's just too difficult now. I used to have fun installing Slackware over top of Knoppix, for example, then sorting out all of the libraries, /etc files, and so on, to come up with a system with all of Knoppix's canned packages but with system updates and the streamlined design of Slackware.

Sometimes my yelling at people works. I used to go around complaining all the time about how fragile five- and six-stage bootloaders are and how much they suck compared to the old two-stagers. Most people have no idea how convoluted Linux bootloaders have gotten, how many problems that can cause, and all of the ways this can go wrong -- but eventually I ran into someone who *did*. The conversation was extremely educational. I learned a lot about what the different kinds of diagnostics mean: how much of "LILO" gets printed ("L", "LI", etc) indicates how far it got in early bootstrap before wedging. To the lay user, it's just a big bag of features that works right if you baby it in that certain way that you know how, which enables you to be a Linux user. To a developer working on it, it's a complex and cranky beast with unresolvable edge cases due to the features and complexity.

I guess ignorant, mindless fanboy-ism pisses me off about as much as my bashing on things pisses the fanboys off. And that's probably by design. Again, I very much welcome being schooled by people who know better than I.

I like to encourage people not to read my journal. I know I'm not being especially constructive here, but dammit, it's my fucking journal. If you don't find my remarks constructive and you don't have any to make, go away. I won't miss you. I can say that with confidence, from experience. If you were trying to be helpful, great, but I'm waay ahead of ya there.

As far as the kernel not building, you were close -- it wasn't me per se, but my home directory, and specifically, my .bashrc. By way of experimenting, I eventually ssh'd back into the same machine as guest to ditch the environment, and then it worked. My home directory was a constant between different OS installs. This was after I pruned out environment variables that seemed related -- LDFLAGS including /usr/local/lib, similar for CFLAGS -- with no love. Whichever variable or file in my home directory is responsible, I'll never know.

So Slackware is off the hook. RedHat probably is too. Still, from reading through Google, a _lot_ of things can cause this. It's one symptom with a myriad of causes. That's frustrating. I think you'll find that if you get into the implementation guts of gcc, you'll be revolted. I guess that's key... never look at things beyond the surface. Never have to. Refuse to. Maintain the illusion. That's the only way to keep a handle on this stuff.

-scott

Friday November 06, 2009
02:43 PM

The day Linux stopped being self hosting; or, Linux sucks

Background: I bought a new used CF-51 semi-rugged Toughbook to replace the ailing and failing CF-R1 [1]. I've wasted entirely too much time trying to get all of my crap moved over to a new OS install on the new machine.

The history of which OSes I've tried and what I've done to them has gotten quite long now, but most recently, I blew away CentOS and stuck Slackware back in as CentOS couldn't even compile its own kernel. I cursed CentOS as being stupid and decided I'd deal with the limited number of packages afforded by Slackware. So I go to build a kernel in Slackware because there's always something you need that's disabled... and lo, exact same problem. I had just been Googling for the problem with the word "centos" tacked on but I got curious and dropped any mention of any vendor and discovered that Gentoo and other systems had floods of bug reports of the same problem:

http://bugs.gentoo.org/show_bug.cgi?id=8132 ... every Linux vendor on Earth took a broken GCC and shipped a major release version that's not capable of building its own kernel.

Linux went non-self-hosting and most people never noticed.

There were no Slashdot headlines.

Perhaps we're still distracted by the flood of security announcements and still reeling from the profound ways that RedHat fucked up Perl, and Debian fucked up ssh, and so on. Perhaps our expectations of Linux vendors delivering a working desktop have been so thoroughly smashed that we stopped caring that there's supposed to be a *Unix* *like* operating system underneath. All we care about is that it has "Linux Inside". We don't care how much more convoluted the init system is than HP/UX's, or how many more pointless CPU-eating extensions it has than Solaris... only that it's Linux. It'll take these idiot OS vendors ages to undo all of the good will and tarnish the reputation of Linux. As long as it's pretty, no one will look under the hood. They'll happily reinstall Linux over and over again to fix all of the stupid problems.

I'd love to make a serious effort to migrate to DragonFlyBSD, since we possibly have a non-fucked-up BSD again now [2], but nowadays, "open source" software doesn't even build cleanly on Linux. Someone somewhere gets it to build just once, at great effort, and then it gets packaged into a .deb and never builds again. Try to build on something other than Linux and it's a huge project. Back when people ran shit like AIX2, it was easier to get a random package to compile for your random system. Making something compile on Ultrix was easier than getting something to compile on BSD is now. Infinite numbers of OS-specific build crutches portability does not make.

I just want to say right now, all of you suck. This is the kind of crap that makes me just want to go work for Microsoft. Fuck you all.

Footnote 1: The cute, tiny CF-R1 is six or so years old now, and the microswitch for the left trackpad button doesn't return any more. This microswitch is soldered to the main board. Also, it's maxed out at 256 megs of RAM. It's also not as rugged as I'd like -- it's on its third screen, though both breakages happened under extreme circumstances, such as my being hit by a car. I really want a laptop I can use as a weapon against motorists... also, Xorg doesn't seem to like the SiliconMotion video hardware on the CF-R1, and the 2.6 kernel had some problems.

Footnote 2: Previous rants were about how NetBSD, OpenBSD, and FreeBSD each, mostly out of fear and envy of Linux, screwed the pooch and made themselves obsolete by giving up the only thing that they had that Linux didn't: stability and sanity.

-scott

Thursday October 29, 2009
06:10 PM

Growing the team from small to medium sized

A few things have to change as a software team grows.

Your code organization system might be entirely logical to you, but that might be because you were there when it was written. Do the module names re-use the same few words over and over? "Manager", "Handler", "Master", "Server", etc are commonly overused, meaningless identifiers, but any over-used word is a symptom of the same problem: meaningless names. New programmers will have to do archeology with grep to figure out how the thing is put together. Re-using the name of the company in multiple parts of the module path to somehow distinguish two parallel code hierarchies similarly creates confusion rather than organization. If a module lacks a meaningful name, it lacks a clear distinction for what goes in there as opposed to somewhere else. Also, the module names will start to sound like Monty Python's "Spam" skit. "Oh, did you mean to edit Data Manager Manager Manager Data Manager or Manager Manager Data Manager Manager Data Manager?"

Using a chat channel is generally a good idea. However, if new programmers are supposed to draw on the entire dev team for help, rather than having any sort of per-project or general mentoring system, you have a lot of broadcast overhead. You also put each programmer in the awkward position of deciding at what point to finally step in and help, rather than hoping someone else has more time and knowledge of the matter.

There's a whole code standards thing. This combines with the organization problem: each new programmer, with a limited understanding of the existing organization system, will develop a new one on top of it. More broadly, are future programmers going to be even more bogged down? And what's going to be the priority then?

-scott

Tuesday October 20, 2009
03:13 AM

Sorts of telecommute... my routine then and now

For NG, I was a consultant in spirit even though I was W2, not 1099. On a typical day, I'd crawl out of bed somewhere between 10 and noon. I tried not to sleep past noon as I wanted to get to any important email in a reasonably timely fashion. Also Yahoo! Small Business email just loves to silently drop email. I can't tell you how much confusion and frustration all of us endured using this piece of shit that the company was actually paying money for. Yahoo! is a spectre of a previous Internet century and needs to die. For a while, I had everyone using codenames for things we were working on. "P" was the poker game (quotes included). "B" was the baccarat game, and the various slots games had two letter abbreviations. The theory was that emails were scoring so high on the spam test that they didn't even make it into the spam folder. This hypothesis proved incorrect as Yahoo! Small Business Email continued silently vanishing emails mailed between users in the same corporate domain.

Anyway... some evenings I'd work late into the night. Other evenings, after doing some email and hunting a few bugs, I'd go drink or engage in some other social activity. Sometimes these would be in clusters; I'd do no actual work for a week, only baby sit getting a release packaged or debugged; I'd do nothing but bake goodies for an upcoming bike tour, bike tour, then recover. Other times, I wouldn't leave the house for a week, working 14 hour days.

When I got something done, I'd send an email saying as much. Very seldom would I be asked for a status report. Once I got grumpy because I was woken up to a page asking for a report in the morning after I'd been asleep mere hours and after I'd sent one before going to bed, hours before. Of course, it turned out that Yahoo! Small Business Email ate it.

I tried to push for chat, just because that would let you get instant confirmation that an important factoid had been delivered, but one coworker was on wonky hours like mine, but different, and the other had to run around to the physical locations of vendors, investors, and so on, and really didn't have time to monitor chat.

The non-disclosure agreement was a verbal one. In Vegas, threats really aren't made. Rather than making pretenses that you'll be sued if you screw your employer over, employers actually ask around the grapevine about people. You don't get many chances. Of course, if you commit a felony, you'll either go to jail or else go on the lam. All of this self-importance of long contracts signed in triplicate and multiple agreements covering various aspects of work is just missing. Business is done on a handshake. The legal department can't keep people from being dumb asses.

Code is not over-engineered. Almost everything you see in Vegas is either z80 assembly written for the bare metal or else Macromedia Flash. No one talks about design patterns, best practices, architecture, API design, or any of that. They do try very hard to write, as the famous quote goes, "obviously no bugs rather than no obvious bugs". KISS is the guiding principle.

We'd all go out for beer after work, during office visits. Sometimes almost every night.

The code repo sat on a colocated Linux machine and a qualified sysadmin did what was needed for him to feel comfortable for us to be able to push and pull from it. There was a tacit understanding that we couldn't allow our laptops to get pwned or stolen with sensitive data on them.

Conference calls were done on an as-needed basis between those concerned when email wasn't cutting it.

I was at liberty to underbill rather than make a quota; this allowed me to research technologies or just read code without concern of logging unproductive hours. I never felt like I couldn't sit down with a book and read all about how to do something.

I wanted to write in praise of my experience working for Vegas, not to diss on Web companies, but I have to complete the contrast. I'll do it as quickly as possible. I've done a number of these now -- I'm not pointing my finger at any one company.

Regular hours; mornings; team dynamics; "good fit"; commitment to 40 hours; VPNs; scrum; speaker phone in to meetings; ...

Sorry, I'm rehashing a common topic I've written about before. JaWS was similar to the Vegas gig; I've been in that basic situation twice now.

-scott

Wednesday October 14, 2009
11:01 AM

Optimism strategy

Famous pro sports athletes sometimes try to transition from one sport to another. A basketball player will take up football, or martial arts, or whatever. They don't assume that they'll be successful. Often they'll say it's for the challenge and to expand themselves. This kind of optimism is why they did well in the sport they were in. They seldom make it in the sport they've joined. That doesn't mean the attitude is a bad one, though.

The world is full of kids who just learned a bit of programming and are making dynamic websites for people. Often this ends badly, as the sites get compromised or fail under load. But it's a good strategy for learning to program and to make websites.

I'm not good at being an optimist; out of necessity, I decided to try. I'm jumping head first into a pile of technologies that I've been avoiding, even though the idea of tackling another code base made my skin crawl. Honestly, the software market is better than the light industrial, retail, etc markets, at least in Phoenix. I know this. I tried. I learned that call centers really have vanished from the US. After the dot-com blowout, I worked in a call center for a while. This time, I failed to find such a job.

And, as I chose to ignore, the employer is going to be unhappy. But next time I try, I'll suck a little less. I hit the bottle pretty hard sometimes but I'm thinking at this point I need to explore other drugs that take the edge off of plummeting serotonin levels associated with exploring a new, huge code base. Or I need to transition away from working on large code bases. Branching into any new technology (for me) is going to involve a lot of not knowing.

I have to remember that optimism isn't easy. It doesn't have the predictability that cynicism does. It involves a lot of "I don't know, but I'm doing it anyway". If optimism about ActionScript doesn't pan out, I need to use some towards HP/UX. I've heard stories of idiot sysadmins. An optimist strategy is to think that I couldn't screw up any worse than them.

-scott

10:33 AM

Cynicism strategy

I badly need a newer, faster computer, as the requirements of modern software have outpaced the 1.7GHz P4 Celeron desktop and the 800MHz fanless laptop I have. I have a little cash at the moment. Yet I'm far more inclined to spend it all stockpiling dry goods and canned goods from the dented-can store. I'm extremely apprehensive about splurging on newer, faster hardware. Even a $230 gPC3 with a 2.0GHz Sempron would be a huge upgrade from this P4 Celeron -- two terrible things together at last! Obnoxious number of pipeline stages meets stripped-out ALUs and cache. What a dog.

In a very real sense, my cynicism here is a self-fulfilling prophecy. What kind of programmer carries around an 800MHz fanless computer as their main machine? How many programmers have to take pacing breaks because the drive light is pegged yet again while it thrashes around in its maxed-out 256 megs of RAM?

In a sense, through my actions, I'm saying "I think I can't, I think I can't".

There are plenty of ways to divide programmers according to philosophy. I'm one who worries about the worst case -- the worst-case run time of the algorithm, the race conditions, the boogie men lurking in the basement of the codebase, and so on. Other programmers are unarguably optimistic: they try daring things and celebrate the moment something works, perhaps rightly thinking that getting it to work well will be the easy part.

I'm drowning in leaky abstractions. I think the people who think VPNs are great are more easily able to ignore losing all of their sessions those several or hundreds of times a day the VPN disconnects. People who think that any technology is great are able to put up with the downside of that technology... whereas I'm stuck in a cost:benefit analysis that even if it doesn't rule against the technology, makes me acutely aware of its downside. From my point of view, people are overly eager to assume the negative costs of technology for the promise of benefits. Optimistic programmers put the rest of us up to our necks in abstractions we don't need and often don't benefit from at all. Yet you can't yell at someone for being an optimist.

-scott