Good and bad news for this month's Melbourne PM meeting.
Good news: a record (for the new venue) number of mongers (25.5, including what seemed to be a three-year-old Perl hacker).
Bad news: I stuffed up the air-conditioning arrangements, so the meeting room got kinda warm. Hoping not to repeat that fiasco. Also, the quest to find a broadly acceptable place to eat/drink afterwards continues. Apparently booking a place where we can sit down and be presented with a range of food and drink is harder than it should be. Targeting Young & Jacksons as a test for the next meeting.
Need to find time to check out the remote_repository stuff for PAR too. At the moment it looks like it has the potential to fix the "how do I distribute updatable code to non-root users" problem.
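If it pans out, usage could be about as simple as the sketch below. This is only a sketch: the repository URL and module name are stand-ins, not anything real.

```perl
# sketch only: repository URL and module name are made up
use PAR { repository => 'http://example.com/par-repo/' };

# fetched from the remote repository if not already installed locally,
# with no root access needed on the client machine
use My::Private::Module;
```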
A missing component seems to be putting multi-byte Unicode into a PDF document. Try this little beggar 狗 on for size.
Anyone know a tool that can do the job?
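For what it's worth, PDF::API2 can reportedly manage it if you hand it a TrueType font that actually covers the character (the built-in core PDF fonts are Latin-only). A sketch, with the font path being an assumption that will vary by system:

```perl
use PDF::API2;

my $pdf  = PDF::API2->new();
my $page = $pdf->page();
my $text = $page->text();

# assumption: any TrueType/OpenType font with CJK coverage will do
my $font = $pdf->ttfont('/usr/share/fonts/truetype/unifont/unifont.ttf');

$text->font($font, 12);
$text->translate(72, 700);
$text->text("\x{72d7}");    # the dog himself
$pdf->saveas('dog.pdf');
```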
acme was asking about best practice for logging modules, citing Log::Log4perl and Log::Dispatch. By preference and experience I'm a Unix programmer, but recently (the last couple of years) I've needed to write some stuff for the Win32 platform. One of the more difficult things I've encountered is trying to log 'natively' on both Win32 and Unix.
For me, native logging on Win32 means logging to the Windows event logger which, as seen in Win32::EventLog, requires a numeric EventId and a set of parameters joined by the NUL character. Win32 combines these with a DLL provided by the application writer to produce the final localised message seen by the user in the event viewer. Unix, on the other hand, via syslog, requires a string which is sent to the appropriate log file. This presents a big problem when trying to create a common logging API.
So far, the usual way of combining the two seems to be to provide, in the background, an EventId of 0 (or whatever fixed number) and a single parameter consisting of the entire message. This allows this sort of interface:
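Something along these lines; a minimal sketch with names of my own invention, not any particular module's API:

```perl
use strict;
use warnings;

# Hypothetical lowest-common-denominator interface: one free-form string.
sub log_message {
    my ($message) = @_;
    if ($^O eq 'MSWin32') {
        # would become EventId 0 with $message as the single parameter,
        # handed to Win32::EventLog's Report()
        return [ 0, $message ];
    }
    # on Unix the string would go straight to syslog
    return $message;
}

print log_message("backup of /var failed: disk full"), "\n";
```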
However, the EventId is the only thing visible in the event viewer until you double-click on a specific event to see more details about it, so scrolling through X completely different entries, all with an EventId of 0, is a bit tedious to say the least. This sort of interface is familiar to Unix programmers, but possibly a little misleading if they think their work will ever run seamlessly on Windows.
My best effort at creating a combined Win32/Unix log library has been to provide a call like this:
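Roughly the following shape; a runnable sketch with names and the catalogue contents invented for illustration (in the real thing the catalogue would be read from a per-locale file, and the Win32 branch would call Win32::EventLog's Report()):

```perl
use strict;
use warnings;

package DemoLog;

# stand-in for the per-locale message catalogue read from disk
my %catalogue = ( 0x100 => "cannot open %s: error %d" );

sub report {
    my (%args) = @_;
    if ($^O eq 'MSWin32') {
        # Win32 side: event_id and params would go straight to
        # Win32::EventLog's Report(); the message DLL localises them
        return;
    }
    # Unix side: render the localised message here, then send it to syslog
    return sprintf $catalogue{ $args{event_id} }, @{ $args{params} };
}

package main;
print DemoLog::report( event_id => 0x100, params => [ '/etc/motd', 13 ] ), "\n";
```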
which on Unix opens the correct localisation file, creates the log message and sends it to syslog (or wherever), and on Win32 sends the EventId and parameters direct to the event logger.
Comments on these ideas?
I gave a talk on things to think about to make a Win32 port easier at Melbourne.PM last night.
Although, as pointed out at the end of the night, it was actually more of a talk on making a Perl program behave like a native Win32 program from the POV of an external customer.
Points covered were
All up, about an hour's worth of talk, after which I had to go home instead of pubbing, due to a nasty cold.
Actually, having spent a bit more time with nginx and lighttpd, I confess that my previous accusation that they couldn't support conf.d was unfair. Both are able to do it, but for various reasons only lighttpd on Debian actually supported it (out of the four combinations I'm tracking: lighttpd/nginx on Fedora/Debian). Appropriate bugs were raised with maintainers and upstream, and nginx now supports conf.d on Fedora 9 and in Debian sid.
Very cool, and many thanks to the excellent people maintaining the nginx packages at Debian and Fedora, and to the good upstream support from Igor Sysoev.
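For reference, the whole feature amounts to a single include line in the stock config; a sketch of the sort of thing the packages now ship, with the path being the conventional one rather than a guarantee:

```nginx
http {
    # let packaged web applications drop their own config snippets in
    include /etc/nginx/conf.d/*.conf;
}
```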
I still need this bug solved for my application to run on lighttpd, though.
I seem to be evaluating a few new software packages at the moment, mostly web servers and web applications. I'm trying to think up heuristics to help me form a quick opinion of a new code base, mainly the probability that it will contain disastrous flaws, the kind that need repeated and probably unsuccessful patching (think Win32 shatter attacks). One that I came up with was
grep -r close * | less
and then checking whether the developers cared enough to actually check the return codes of system calls. I figured close is a good call to search for, since it exists in both C and Perl.
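The behaviour the heuristic is looking for is just this: close() is where buffered writes finally get flushed, so errors (disk full, quota, NFS trouble) can surface there and nowhere else, and ignoring its return value silently loses them.

```perl
use strict;
use warnings;

# the pattern the grep heuristic hopes to find: close() checked, not ignored
my $path = '/tmp/close-check-demo.txt';
open my $fh, '>', $path or die "open $path failed: $!";
print {$fh} "some data\n";
close $fh or die "close $path failed: $!";   # flush errors surface here
print "close checked ok\n";
```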
nginx seems to at least make a determined effort; I'm starting to like this web server a lot. lighttpd, of course, didn't make an attempt that I could see. So, on went the test. The next five fairly major projects (non-CPAN) I tested failed impressively as well. Bad/meaningless test, maybe?
Sadly enough, for Linux and Win32, I think all it means is that the system is broadcasting the fact that the responsible administrator is not applying security patches.
Linux, at least, seems to require a reboot for a new kernel rather a lot: at least every couple of months.
On the bright side, $work is hosting the Melbourne Perl Monger meetings for an indefinite period, which is nice.
We had a meeting on Wednesday where we had a talk on git and lightning talks on fastcgi and port knocking. Followed up with beers across the road.
I've been porting my Perl web application to as many web servers as possible, trying for CGI or better, where better is defined as stable (for well-written code) and faster. This has meant fooling around with external FastCGI applications (I have approx. 30 stub CGI programs, which tends to rule out having the web server control FastCGI, and also rules out SCGI). It has also meant finding another web server (nginx) that apparently has its own highly experimental version of mod_perl.
So far, it's been successfully ported to nginx (FastCGI), lighttpd (FastCGI), IIS (ISAPI), Apache (FastCGI && mod_perl) and Apache2 (mod_perl2).
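An "external" FastCGI application in this sense is just a long-lived process the web server connects to over a socket. A minimal responder sketch using the CPAN FCGI module; the socket address is an assumption and has to match whatever the web server's fastcgi config points at:

```perl
use strict;
use warnings;
use FCGI;

# assumption: the web server is configured to talk to 127.0.0.1:9000
my $socket  = FCGI::OpenSocket('127.0.0.1:9000', 5);
my $request = FCGI::Request(\*STDIN, \*STDOUT, \*STDERR, \%ENV, $socket);

# one long-lived process serving request after request
while ($request->Accept() >= 0) {
    print "Content-Type: text/plain\r\n\r\n";
    print "hello from an external FastCGI process\n";
}
```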
Even though my application is therefore successfully tested on at least these web servers, I can only write packages (msi/rpm/deb) for IIS and apache(?:2)?. All the others refuse to acknowledge the idea that the writer of a web application needs a conf.d-style directory to place their web server configuration snippet into. And some of these "snippets" can get quite large.
I think it's interesting that, apart from the main players, I haven't been able to find a web server whose developers sat down and thought "how are our clients actually going to interact with our program?"
I've been experimenting with sending sms.
One of the available programs for this is smstools. Smstools requires the sender of the sms to write to a directory that the sms daemon monitors.
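For context, handing a message to smsd is nothing more than dropping a text file into the outgoing spool. A sketch, assuming the conventional spool path (it varies by distro), an invented phone number, and, crucially, that you actually have the write permissions discussed below:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# assumption: the usual smstools spool location
my $spool = '/var/spool/sms/outgoing';

# write under a temporary name so smsd never picks up a half-written file
my ($fh, $tmp) = tempfile('smsXXXXXX', DIR => $spool);
print {$fh} "To: 61412345678\n\nhello from smstools\n";
close $fh or die "close failed: $!";
```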
However, Fedora requires root permissions to write to that directory. Debian requires that you are the smsd user (who is a member of the dialout group). Neither system gives write privileges to a group.
Now, from the point of view of a user interacting with these packages, both present the annoying problem that a set-uid binary is required simply to send an SMS.
However, I think this is also a security hole. Not in the actual package, but in the fact that to use the package, a normal user must go through a privilege-escalation process. Every system that wants to send an SMS therefore has to write its own custom set-uid script/binary, creating unnecessary potential for set-uid bugs and system takeover.
The Debian package, of course, only has the potential to elevate to smsd, but since smsd can send unlimited SMS and erase all trace of it, the horror is still pretty real.
Does a security bug against these packages seem justified?
I have to admit i found this quite frustrating.
I've written a set of programs that do nightly builds that feed into nightly tests, blah, blah.
One of the requirements is that we bundle our own version of perl with the software, so, every night, perl gets rebuilt and run through its test suite.
After a time, perl's make test failed in 't/op/fork.t'. The next night it worked, though.
After receiving a query about the unexpected build failure from the local sysadmin team, I did a bad thing and decided that if fork was failing and then started working again, it could indicate a hardware issue.
Sorry! It could also indicate that 't/op/fork.t' is actually testing the rand() functionality.
't/op/fork.t' doesn't do anything at all with srand before forking and calling rand in each new process.
The test depends on the rand() in each process producing different results.
Not surprisingly, 't/op/fork.t' fails fairly regularly.
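The underlying behaviour is easy to demonstrate: once the generator has been seeded (perl seeds it implicitly at the first rand call), a forked child inherits the generator state, so without a fresh srand both sides produce the same stream. A sketch; the seeding timing differs from what fork.t actually exercises, but the shared-state mechanism is the same:

```perl
use strict;
use warnings;

rand();                              # first call implicitly seeds the generator
my $pid = open(my $fh, '-|');        # fork, with the child's STDOUT piped to us
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {
    printf "%.17g\n", rand();        # child: next value from the inherited state
    exit 0;
}
chomp(my $child_value = <$fh>);
close $fh;
my $parent_value = sprintf "%.17g", rand();
print $parent_value eq $child_value ? "identical\n" : "different\n";
```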