This week I participated in the Test Automation Conference hosted by Google in London. They are based just by Victoria Station and have pretty shiny offices indeed, with a pool table, table football and lots of little perks like fridges full of Innocent smoothies. This two-day conference appealed to me as I'm really interested in large-scale testing - as the t-shirt we got pointed out: "because life is too short for manual testing".
It's not easy organising a conference for 230 people, but this was wonderfully pulled off: the schedule was full, with a good mix of academic, Google staff and real-world experience, and we were very well fed. The Google staff talks were particularly interesting. However, I think the talks could easily have been 30 minutes instead of an hour. They are planning to stick videos of all the talks on Google Video, but I thought I'd share some highlights with you.
It all kicked off with the SmartFrog crew asserting that "any application without adequate system tests does not exist". It seemed to be a research project which can automatically deploy and test entire systems across many machines - perhaps onto virtual machines. In this talk, as in many others, there was a demo. However, watching characters scroll up the screen isn't very exciting (and that's if the demo doesn't crash and burn): please show me pretty charts and statistics instead. It looked interesting, and really pushed the idea of full system tests at the beginning of the conference.
On the other side, there were a few talks which amounted to "FIT is interesting". I do like the idea of letting the user choose the functional tests, but at the moment I rarely have users that know what they want, so it's hard for me to put into practice.
Some of the talks did mention coding little domain-specific languages in Java. I'm a fan of DSLs, but in Java you need so many braces that I'm not sure it's worth it: in more dynamic languages you can get away with it much more easily.
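To illustrate what I mean, here's a hypothetical sketch (the names `Expect` and `expect` are my invention, not from any of the talks) of a tiny testing DSL in a dynamic language - method chaining reads almost like a sentence, with no type declarations or braces in sight:

```python
# A hypothetical sketch of a tiny testing DSL in Python: fluent method
# chaining with no braces or type declarations getting in the way.

class Expect:
    def __init__(self, value):
        self.value = value

    def to_equal(self, other):
        assert self.value == other, f"expected {other!r}, got {self.value!r}"
        return self  # return self so calls can be chained

    def to_be_less_than(self, other):
        assert self.value < other, f"expected {self.value!r} < {other!r}"
        return self

def expect(value):
    return Expect(value)

# Reads as plain English:
expect(2 + 2).to_equal(4).to_be_less_than(5)
```

The Java equivalent needs a class declaration, generics and a build step before you get anywhere near this readability.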
I particularly liked AutoTest, in which they extracted contracts from Eiffel applications and generated random tests, with clever optimisations like reducing the test state space and producing minimal test cases for failures using static program slicing. Something similar for Perl is LectroTest.
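The core idea can be sketched in a few lines - this is my own toy illustration (not AutoTest itself, and `broken_abs` and `postcondition` are invented examples): draw random inputs, check the routine's postcondition, and shrink any failure towards a minimal test case:

```python
import random

# A hypothetical sketch of contract-driven random testing in the spirit
# of AutoTest/LectroTest: generate random inputs, check a postcondition
# (the "contract"), and shrink failures to a minimal test case.

def broken_abs(x):
    return x  # deliberate bug: forgets to negate negative numbers

def postcondition(x, result):
    return result == abs(x)

def shrink(fn, contract, x):
    # Walk the failing input towards zero while it keeps failing.
    while x != 0:
        candidate = x + 1 if x < 0 else x - 1
        if contract(candidate, fn(candidate)):
            break
        x = candidate
    return x

def random_test(fn, contract, trials=1000, seed=1):
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.randint(-100, 100)
        if not contract(x, fn(x)):
            return shrink(fn, contract, x)  # minimal failing input
    return None  # no counterexample found

print(random_test(broken_abs, postcondition))  # -1: the minimal failure
```

AutoTest does the clever part with static program slicing rather than this naive walk-towards-zero, but the generate-check-shrink loop is the same shape.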
The highlight of the second day was Goranka Bjedov explaining how she used open source tools for performance testing (mostly JMeter). She shared a great depth of knowledge with us all, covering performance, stress, load (don't stress Linux beyond 80% load or memory used), benchmark, scalability, reliability (because at any point in time a thousand systems are failing) and availability testing. She liked open source tools ("Why do we build tools from subatomic particles when we have bricks?") such as JMeter ("free as in puppy") and shared some pretty stats (with memory and CPU both as percentages) showing that "developers are totally delusional about software". A totally wonderful talk.
There was a Selenium talk, which should have been very interesting. Instead of talking about Selenium, however, the speaker tried to show us a demo of using Selenium to record screencasts of testing web apps in virtual machines so that you can see what went wrong. It crashed and burned. Never do live demos, it's not worth it: always fake it with a screencast.
I also liked "Testing Metro Wifi": throwing cheap LinkSys routers and palmtops around Mountain View and testing with iperf. Very nice.
The shorter the talk, the better it is. So we finished up the conference with lightning talks: all were wonderful, particularly Ovid introducing the testing world to TAP.
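For those who haven't met TAP (the Test Anything Protocol, born in the Perl world): it's just a plan line followed by one "ok"/"not ok" line per test, which is why any language can produce or consume it. A minimal sketch of an emitter (my own illustration, not Ovid's code):

```python
# A minimal sketch of emitting TAP (Test Anything Protocol) output:
# a "1..N" plan line, then one numbered ok/not ok line per test.

def run_tap(tests):
    lines = [f"1..{len(tests)}"]
    for number, (description, passed) in enumerate(tests, start=1):
        status = "ok" if passed else "not ok"
        lines.append(f"{status} {number} - {description}")
    return "\n".join(lines)

print(run_tap([
    ("addition works", 2 + 2 == 4),
    ("strings concatenate", "a" + "b" == "ab"),
]))
# 1..2
# ok 1 - addition works
# ok 2 - strings concatenate
```

The plain-text format is the whole point: a harness like Perl's prove can aggregate results from tests written in any language that can print.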
Many thanks to Google and all the speakers and attendees. I learnt a lot from you all, especially the hallway track.