Tuesday, April 7, 2009

The Gospel of Record and Playback

Microsoft unveiled its UI Automation capabilities for VSTS 2010 at Mix09. If you are interested, watch this video. Watching it was painful for me.

Some Highly Opinionated Observations
I felt truly bad for Brian Keller (Brian, if you're ever reading this, you did a great job). He was stuck practically sweeping up the ticker-tape from previous presentations. Even he found it ironic that his presentation was stuck at the end of the conference...like so many testing efforts. And like so many testers, he pleaded for feedback from a roomful of developers to help determine if there was significant interest in future talks about testing. Oh, and the technology wasn't helping either.

In 17 minutes, I saw a perfect example of the inherent difficulty with record-and-playback, even though it was capturing a lot of deep information. A functional test was recorded, and the first failure detected upon playback was triggered not by the application under test but by errors made while editing the generated code.

Recordin' Payback
Now, don't get me wrong. I am VERY impressed by what Microsoft is putting into the hands of the team - but at what premium? This is where I would like to insert my favorite argument, "Why do I need to pay $X,XXX.YY/seat for a tool that gives me the honor of pushing the button our team was tasked with creating?" How expensive is it to do any/all of the following during development:

- Define unique control IDs
- Define Active Accessibility parameters (y'know...those weird parameter fields you NEVER fill out when adding controls to GUIs in Visual Studio)
- Override controls' WndProc methods with custom logic to persist user interactions to log files that can later be used for playback
- Keep logic OUT of the forms so that your application can run headlessly
- Implement Michael Hunter's Automation Stack
- Charge a junior test engineer with the task of learning how Win32 windowing APIs, MSAA, the Java Active Accessibility Bridge, the .NET UIAutomation namespaces, etc. really work.
- Make testability hooks critical features in the application
- Make log files critical features in the application
- Report Source Code changes to the entire team
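The "persist interactions to a log" and "keep logic out of the forms" points above can be sketched together. Here is a minimal, hypothetical illustration (in Python for brevity, though the post is about .NET; all names are my own invention, not any real API): the application core has no GUI dependency, every user interaction is recorded as a log entry, and a headless test replays that log against a fresh core.

```python
import json


class Calculator:
    """Application core: no GUI dependencies, so it can run headlessly."""

    def __init__(self):
        self.total = 0

    def apply(self, action, value):
        # Each user interaction maps to one core-level action.
        if action == "add":
            self.total += value
        elif action == "subtract":
            self.total -= value
        else:
            raise ValueError(f"unknown action: {action}")


class InteractionLog:
    """Persists user interactions as JSON lines for later playback."""

    def __init__(self):
        self.entries = []

    def record(self, action, value):
        # In a real app this would append to a log file.
        self.entries.append(json.dumps({"action": action, "value": value}))

    def replay(self, core):
        # Drive the core directly -- no window, no recorder tool needed.
        for line in self.entries:
            event = json.loads(line)
            core.apply(event["action"], event["value"])


# A GUI event handler would call both core.apply(...) and log.record(...);
# the headless test below replays the log against a fresh core instance.
log = InteractionLog()
log.record("add", 5)
log.record("subtract", 2)

replayed = Calculator()
log.replay(replayed)
print(replayed.total)  # 3
```

The point of the sketch is the seam: because the form never owns the logic, "playback" is just feeding the same log back into the core, which is exactly the button the team can build for itself.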

Does it take longer to implement these things? Maybe. It will definitely take longer to impress any management that merely hovers over the process with all the disinterest one could expect from a supermodel on a blind date with <insert NASA scientist here>. But I can say this much: if the department I'm in is considering necessary economic cutbacks, I'm glad I won't be competing with maintenance fees for COTS tool X.

My opinion is that these practices are more valuable to the organization because they invest in the individual(s) and their knowledge, rather than dumping money into 3rd party solutions to solve what are perceived as technological issues. To me, this is the same as if my parents, after being diagnosed with diabetes, determined that the problem wasn't their diet but was actually their cooking, and then proceeded to "solve" this by eating the same foods at restaurants because...heck...they know how to cook. Sure, it costs more, but look how much healthier we are!

Hmmm. I guess it depends on the food/restaurant.

Finally, on a more personal note: I long for the day when the ridiculous moniker of "evangelist" is purged from the tech nomenclature. I understand that it can bear secular meaning, but its use is imbued with religious connotation. An evangelist is one whose very existence bespeaks a salvific message. As a tester and a man of faith, I find the idea that a technological solution could save us...well...laughable; and it is the very reason I have a job.
