Monday, December 19, 2011

Are Test Oracles a mathematical nicety to cover up problems?

Last week I had an interesting discussion with a colleague who did not agree with my take on Model-Based Testing. He was unwilling to buy my arguments for why Model-Based Testing is a more efficient way of automating test cases than traditional scenario automation. He was a firm believer in Model-Based Testing, but his experience told him that modeling carries an expensive up-front cost that can be made up for in maintenance savings, whereas I claimed that modeling was a much cheaper automation approach to begin with.

Furthermore, he argued that my model was close to useless because it did not come with a validation oracle. At a deeper level, the discussion also revealed an interesting discrepancy to me: he believed that a model is only worth what its oracle can verify, whereas I see value in the model itself.
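For anyone unfamiliar with the term: a test oracle is the part of a test that decides whether the observed behavior is correct. A rough Python sketch, with an invented system and invented method names, of the same model action without and with an oracle:
```python
# Rough sketch of what a validation oracle means in a model action.
# The "system" object and its methods are invented for illustration.

def add_item(system, item):
    """Model action without an oracle: it only drives the system."""
    system.add(item)

def add_item_checked(system, item):
    """The same action with a validation oracle attached."""
    count_before = system.item_count()
    system.add(item)
    # Oracle: adding an item must grow the count by exactly one.
    assert system.item_count() == count_before + 1, \
        "add_item did not add exactly one item"
```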
Needless to say, we never reached agreement on the subject – but I realized later on that we were arguing from completely different perspectives.
After the discussion I started thinking: “How can our opinions differ so much?” After fumbling around with this question over the weekend, I realized that we were using Model-Based Testing for two completely separate purposes.
A notion started to form in my head – it seemed that there were different ‘schools’ (or views) within Model-Based Testing.
·         Theoretical modelers: They want strict rules around their models and conduct rigorous checks to validate the conformity of their models. They take a theoretical approach to software testing: they like to mathematically prove algorithms and construct test suites that cover exactly the necessary and sufficient cases.
·         Pragmatic modelers: They are more of the ad hoc modeling type. They take a pragmatic approach to Model-Based Testing in which a model is valuable on its own. They understand that the model should be validated, but they can live with limited validation. They see value in the model as a means of communication. (The sketch after this list contrasts the two uses.)
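To make the two views concrete, here is a toy model in Python – all names are invented, and no real MBT tool works exactly like this. A theoretical modeler would enumerate its legal action sequences and check every one against the implementation; a pragmatic modeler might simply print it and put it on a whiteboard:
```python
import itertools

# A toy model of a document that can be opened, edited and posted.
# State names and actions are invented; no specific tool's API is used.
MODEL = {
    "Open":   {"edit": "Edited", "post": "Posted"},
    "Edited": {"edit": "Edited", "post": "Posted"},
    "Posted": {},
}

def walk(start, actions):
    """Replay a sequence of actions through the model; KeyError = illegal."""
    state = start
    for action in actions:
        state = MODEL[state][action]
    return state

def legal_sequences(max_len):
    """Theoretical use: enumerate every legal action sequence up to a
    bound, ready to be checked against the implementation by a strict
    oracle (omitted here)."""
    for n in range(1, max_len + 1):
        for seq in itertools.product(["edit", "post"], repeat=n):
            try:
                walk("Open", seq)
                yield seq
            except KeyError:
                pass  # illegal per the model, skip it

# Pragmatic use: the model itself is the artifact – print it and
# discuss it with the team, even before any oracle exists.
for state, transitions in MODEL.items():
    for action, target in transitions.items():
        print(f"{state} --{action}--> {target}")

print(sum(1 for _ in legal_sequences(4)), "legal sequences up to length 4")
```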

Tuesday, December 6, 2011

Requirements and Model-Based Testing: A rocky road

This post is on something that has been bugging me a lot lately, which I haven’t been able to fully express, but I’m hoping that getting it published will help me settle my thoughts.
I had the pleasure of attending the ETSI Model-Based Testing User Conference – and let me start by giving kudos to all the great people in the field who attended and gave some sharp presentations!
At the conference I got a look at the available vendor tools for Model-Based Testing (I won’t list them here; this is not supposed to be an advertisement blog). All of them are pretty powerful tools, and they allow you to build models from a set of requirements that you gather in the beginning. In the model you specify which actions correspond to a certain requirement – e.g. in finance you may have a requirement that posting a sales order will produce a set of ledger entries, so this requirement would be associated with the posting action. Some tools can even import your requirements from external tools and keep track of changes to them. This is all pretty nice, and one tool showed the ability to visualize the impact of a requirement change directly in the rendered model view – now that is awesome!
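To illustrate the idea (not any particular vendor’s API – the decorator and requirement IDs below are invented), tagging actions with requirements can be as simple as:
```python
# Hypothetical sketch of tagging model actions with requirement IDs –
# the decorator and IDs are invented; the vendor tools each do this in
# their own way.

REQUIREMENT_MAP = {}

def requirement(req_id):
    """Associate a model action with the requirement it covers."""
    def tag(action):
        REQUIREMENT_MAP.setdefault(req_id, []).append(action.__name__)
        return action
    return tag

@requirement("REQ-017")  # e.g. "posting a sales order produces ledger entries"
def post_sales_order(system, order):
    system.post(order)

# When REQ-017 changes, the affected actions can be traced directly –
# the basis for the impact visualization one tool demonstrated.
print(REQUIREMENT_MAP["REQ-017"])  # ['post_sales_order']
```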
A significant number of presenters at the conference were also happy to report that they had discarded their existing metrics for software-testing quality and replaced them with requirements coverage – meaning they now record how many times each requirement is exercised by their generated test suite.
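Stripped of tooling, that metric amounts to counting requirement hits across the generated suite – a rough sketch with made-up requirement IDs:
```python
from collections import Counter

# Hypothetical generated suite: each test case is a sequence of steps,
# reduced here to the requirement IDs each step touches.
generated_suite = [
    ["REQ-017", "REQ-004", "REQ-017"],
    ["REQ-004"],
    ["REQ-017", "REQ-021"],
]

# Requirements coverage as described: how many times each requirement
# is exercised across the generated test suite.
hits = Counter(req for test in generated_suite for req in test)
for req_id, count in sorted(hits.items()):
    print(f"{req_id}: covered {count} time(s)")

# A requirement the model never exercises simply does not show up:
print("REQ-099 covered?", "REQ-099" in hits)
```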
But then a thought came to me:
“If I know all my requirements up front, why would I use Model-Based Testing instead of just writing one or two scenario tests per requirement, which I know cover these requirements well?”