My current favourite vim trick when writing tests in Perl is CTRL-A.
My tests contain a line near the top that looks like use Test::More tests => 12; where my test suite has 12 tests. I move my cursor over the number 12 and type mc, which sets a mark labelled "c" for test count. Then, if I write 5 new tests, I type `c (backtick c) to jump to my test count, then 5 CTRL-A to increment the test count by 5. Finally, `` (two backticks) takes me back to where I was last editing.
If I remove tests, I use CTRL-X instead of CTRL-A to decrement the test count.
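Put together, the whole round trip looks like this (vim normal-mode keystrokes, using 5 new tests as the example):

```
mc          " with the cursor on the 12, set mark 'c' on the test count
            " ...move elsewhere and write 5 new tests...
`c          " jump back to the mark on the test count
5<C-a>      " add 5 to the number under the cursor (12 becomes 17)
``          " return to where you were editing before the jump
```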
Twice this year I've given a talk on using the techniques that CPAN authors use in their public code in your own private code: originally at Milton Keynes Perl Mongers in April, then at YAPC::Europe in August.
And finally I have put my slides online!
In the talk, I noted that most companies arrange their code in an unstructured way, and contrasted this with how CPAN arranges code as small distributions packaged with documentation and tests. I advocated taking the same approach with other Perl code, even if it will never live on CPAN.
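As a rough sketch (module and file names invented for illustration), a private CPAN-style distribution might be laid out like this:

```
My-Company-Widget/
    Makefile.PL               # ExtUtils::MakeMaker or Module::Install
    MANIFEST
    Changes
    lib/My/Company/Widget.pm  # the module itself, with POD documentation
    t/00-basic.t              # tests run by 'make test'
```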
I got some useful feedback after my talk and had several interesting discussions about it at the conference. So I thought I would write up various extra ideas here.
I mentioned CPAN::Mini::Inject, but didn't know about CPAN::Inject, which works with a full CPAN mirror instead of a CPAN::Mini mirror. I also didn't know about CPAN::Site which does much the same. I don't know which circumstances each works best in.
I mainly focused on ExtUtils::MakeMaker and Module::Install for packaging modules together. I deliberately ignored Module::Build and didn't know about ExtUtils::ModuleMaker.
The DPAN project on CPAN looks like an ambitious approach to solving the issues I covered and much more too. I probably won't look at it until it has more documentation (I'm a bit of a late adopter), but I should keep an eye on it.
I was grateful for the advice of others who have tried the approaches that I mentioned, sometimes following more thoroughly than I have done. If you're breaking out your code base into lots of small CPAN-style modules, you might find judging the appropriate level of granularity awkward. Talk to your colleagues and don't rush to break things apart too early.
Last Thursday, Milton Keynes Perl Mongers held our second technical meeting of the year. How organised of us!
We had a typically varied, interesting set of talks, reflecting the different things we use Perl for.
Colin Bradford told us more about how Lovefilm used memcached to scale their popular Perl Web applications for hundreds of thousands of users. Colin gave some interesting suggestions based on hard-won experience. My favourite tip was "avoid using methods called 'set' and 'get' because you will mistype them and fail to spot mistakes in code reviews. Use 'store' and 'retrieve' instead."
Tony Edwardson gave a brief introduction to Moose, which proved useful for our attendees who aren't die hard Perl developers, and offered a good overview for those of us who haven't done much with Moose yet.
I showed how developers can use CPAN's toolchain to develop and distribute their own private code. I chose not to use slides, and went for the live demo option which fortunately went fine.
Finally, Peter Edwards told us about using wxWidgets to develop cross-platform GUI applications in Perl. The Padre Perl IDE has recently made this more popular: Peter told us about an older application he supports. I liked that he told us about supporting tools that might make life easier, without language bigotry: some of the most useful Wx tools are written in Python.
Unfortunately our meeting clashed with the London Perl Mongers' technical meeting, which affected a few people who wanted to attend both. Still, I consider this a good problem to have: We have several active Perl groups in England meeting regularly.
I'm currently organising our next set of talks some time in July. Please let me know if you would like to present a talk, or keep an eye on our Web site, mailing list or IRC channel if you would like to attend.
I find Test::Deep very useful when writing tests. It's very handy for checking complicated data structures in all sorts of different ways.
Recently I wanted to use Test::Deep in my own Test::Builder test subclass. I discovered Test::Deep::NoTest which seemed useful: it lets me call functions from Test::Deep without them expecting to run directly within a test script themselves.
So, Test::Deep::NoTest and eq_deeply() looked like what I wanted. And they did a fine job, except when comparisons failed: I could find out that a comparison had failed, but not why.
After a brisk stroll through Test::Deep's source code and a quick chat with its author, I wrote a patch to expose and document its cmp_details() function. This lets you compare data structures and get a description of any differences, if they exist.
If you want to use Test::Deep within your own test class, download version 0.104 or newer of Test::Deep and see the new documentation on Using Test::Deep with Test::Builder.
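For example, the documented interface works roughly like this (the data structures here are invented for illustration; this assumes Test::Deep 0.104 or later, as above):

```perl
use strict;
use warnings;
use Test::Deep::NoTest qw(cmp_details deep_diag ignore);

my $got      = { name => 'alice', id => 42 };
my $expected = { name => 'bob',   id => ignore() };

# cmp_details() returns a success flag and a "stack" describing the
# comparison; deep_diag() turns that stack into a human-readable
# explanation of why the two structures differ.
my ($ok, $stack) = cmp_details($got, $expected);
unless ($ok) {
    warn deep_diag($stack);
}
```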
Cambridge Perl Mongers held our first meeting in a while last night. Seven of us met in a pub and discussed all sorts of things - even a little Perl.
We plan to meet every month - next month's meeting will take place on April 1st.
Unfortunately, several people who might have attended didn't know about the meeting due to a recent mailing list migration. If you think you're on the list, it's worth signing up again. Hopefully, the old list's subscribers will soon find their way on to the new list automatically, but that hasn't happened yet.
Despite its low attendance, the meeting went well. We drank various beers with confusing TLA names like JHB and IVB. Thankfully I managed to avoid asking for IVF.
If you're anywhere near Cambridge and interested in Perl, please subscribe to the mailing list, join us in #cam.pm on irc.perl.org or join us in the pub next month.
We also plan to hold a technical meeting before long with various talks about Perl. It's all quite exciting.
Milton Keynes Perl Mongers held our first meeting of 2009 on Tuesday.
Most of the meeting consisted of lightning talks where the speakers told us how they have used Perl to solve problems at the BBC, the Open University, Lovefilm and various investment banks.
We also heard about how the new online shop Penny's Arcade relies on Perl.
I have linked to the speakers' slides from our Web site, so please take a look if you're interested in online retail with Perl, specifying module versions with only, Chart::Gnuplot, memcached, creating PDF files with Catalyst or exceptions and Perl.
Please get in touch if you would like to present at one of our future meetings, or if you'd like to come along to watch the fun unfold.
I'd promised a Milton Keynes Perl Mongers technical meeting for some time, and on Thursday we finally got round to holding one.
We've found that the Perl Mongers group has plenty in common with the local Linux User Group, so we decided to hold a joint technical meeting. We've had joint social meetings for several months.
The meeting started with Dave Cross, our guest speaker, giving an Introduction to the Template Toolkit. Dave did a great job of showing what TT does well. I'm already a fan, so I hope Dave prompted other listeners to discover its wonderfulness.
Adam Lowe told us about some of the projects he's working on at The National Museum of Computing at Bletchley Park. They have some scarily huge machines - some of which work! - that can't even store an MP3 file on their tape drives. As Dave pointed out, this seems strange given that we used to store music on tapes not that long ago. Adam's talk made me want to revisit Bletchley Park and see what the Computing Museum people are up to.
Oliver Gorwits told us about Oxford University's network and how it works. There's lots of Perl, Linux and other open source in there, as you'd expect. I particularly liked that SNMP::Effective plays a part, as Jan and Oliver have bounced around ideas for its development on the #miltonkeynes.pm IRC channel.
I've put all the slides online at http://miltonkeynes.pm.org/.
Conveniently, we found a real ale festival after the talks. We enjoyed a few beers and met up with Tony, who hadn't been able to make the talks.
So, for a group that's just over two years old, I'm pleased with what we've done. I'm looking forward to some great meetings in 2008 with the regulars, our old friends (I'm expecting visits from Birmingham.pm and Matt Trout), and hopefully a few new visitors too.
It's also great to watch the open source community grow in the city where I live. It seems we have lots of people doing interesting things here, but mostly we're not talking to each other. We're slowly improving this.
I hadn't been to any conferences this year, so I thought I'd visit the first ever PyCon UK as it's nearby, in Birmingham, and I know almost nothing about Python.
So far the conference seems very well run: we have frequent coffee breaks, good Internet connectivity, talks keeping to schedule and a fairly tasty lunch.
Fortunately, I chose some interesting talks to attend this morning. It's always hard to know what you'll enjoy beforehand.
I started out in Paul Johnston's talk about SQLAlchemy, an ORM for Python. I've been hacking on DBIx::Class lately as my job involves lots of databases at the moment, so this area interests me. The issues Paul mentioned seemed very similar to those that DBIx::Class has tackled: don't issue too many queries; work well with pre-existing databases; build database schemas from code.
Paul also showed how to incorporate SQLAlchemy with a templating system, which reminded me a little of the talk I gave on Class::DBI at the London Perl Workshop in 2004. Apparently Python programmers have strong opinions about the different templating systems available, just like we do with Perl.
Next, Michael Foord showed how to run Python code in Web browsers using IronPython (Python running on .NET).
Terry Jones's talk about representing and processing information had little technical content, but instead covered ideas about how we treat objects and attributes in the real world and in computing environments. For example, I can form an opinion about Birmingham and share that opinion without needing anyone's permission. In computing, we too often think of objects as having owners and permissions. Terry's ideas reminded me a little of a few years ago when I got excited by RDF and thought about metadata. We can regard any data as either data or metadata depending on our perspective: all data types intrinsically exist as both data and metadata, but we choose one description depending on how we model things.
Some of Terry's ideas, particularly personalised searching, reminded me of discussions I've had with Nigel Hamilton, the author of the Trexy search engine. I might have to play with Terry's marbl.es search tool when it goes live.
Finally, Michael Sparks talked about Managing Creativity. Michael mostly described how his colleagues at the BBC structure their project's Subversion repository to maximise creativity, experimentation and sharing. Essentially, all work goes in the repository: even if one developer considers something they made worthless, another developer might value it or part of it. As well as the traditional trunk and branch structure, each developer has a sketchbook where they put their experiments. Developers can view each other's sketchbooks, but have no guarantee what state they will be in. Finally, you can't merge your own work onto the trunk. This struck me as a very good way to enforce peer review.
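A sketch of the repository structure as Michael described it (the directory names here are my own invention):

```
project/
    trunk/         # shared, reviewed code; you can't merge your own work here
    branches/      # conventional feature and release branches
    sketches/
        alice/     # one sketchbook per developer: experiments, no guarantees
        bob/
```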
After lunch, I felt tired so I've sat in the sun and slept a little. I'm just about to head back for the lightning talks, and I'm looking forward to some interesting informal conversation over the conference dinner this evening. I have a place at the Memory table: each table has the name of a Python exception type.
I've been looking forward to today's London Perl Workshop for ages. I've put together a talk, "Testing When You Don't Have Time", that I thought would fit well with the workshop's theme.
I felt really tired last night, so I went to bed early despite friends texting me, inviting me to the pub.
Today I woke up feeling rough. I've only just got up, as of 11am, and I can't face the journey down to London.
Until now I had spoken at every (well, both) London Perl Workshops, so it's a shame to miss this one at the last minute. I hope everything goes well for the organisers, speakers and attendees, and I hope to see everyone I'd looked forward to talking to soon.
Like many people, I use DBD::SQLite in my unit tests so I don't need to rely on a live database.
I've been using File::Temp::tempfile() to create temporary files for me to use as SQLite database files. Yesterday, I noticed that tempfile() and DBD::SQLite don't play nicely together. I could use tmpnam(), but as this doesn't create the temporary file, a small race condition exists.
I discussed this with Matts last night and he pointed me towards SQLite's in-memory databases. My test cases don't use huge amounts of data, so I can run everything in memory and avoid hitting the disk:
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '');
This means I don't need to bother with temporary files any more, making my tests simpler and faster.
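Putting it together, a self-contained test file might look like this (the table, column and test names are invented for illustration):

```perl
use strict;
use warnings;
use DBI;
use Test::More tests => 1;

# Each connection to :memory: gets its own fresh, private database,
# so tests can't interfere with each other or leave files behind.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1 });

$dbh->do('CREATE TABLE widgets (id INTEGER PRIMARY KEY, name TEXT)');
$dbh->do(q{INSERT INTO widgets (name) VALUES ('sprocket')});

my ($count) = $dbh->selectrow_array('SELECT COUNT(*) FROM widgets');
is($count, 1, 'in-memory database stores rows as expected');
```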