Furthermore, he argued that my model was close to useless, because it did not come with a validation oracle. At a deeper level, the discussion also revealed some interesting discrepancies between our views.
Needless to say, we never reached agreement on the subject – but I realized later on that we were arguing from completely different perspectives.
After the discussion I started thinking: “How can our opinions differ so much?” After turning the question over in my head over the weekend, I realized that we were using Model-Based Testing for two completely separate purposes.
A notion started to form in my head – it seemed that there were different ‘schools’ (or views) within Model-Based Testing.
· Theoretical modelers: They want strict rules around their models, and conduct rigorous validation to ensure the conformity of their models. They have a theoretical approach to software testing and like to mathematically prove algorithms and construct test cases that cover exactly the necessary and sufficient cases.
· Pragmatic modelers: They are more of the ad-hoc modeling type. They have a pragmatic approach to Model-Based Testing in which a model is valuable on its own. They understand that the model should be validated, but they can live with limited validation. They see value in the model as a means of communication.
Of course, there is no single right way of modeling. Taken to its extreme, either school loses its grip on reality: if you dig too deeply into the theoretical aspects of modeling you will stagnate, whereas a too pragmatic approach to modeling most likely results in low-quality models.
However, I find myself more closely aligned with the latter group. I tend to want to see my models grow before I worry too much about validation.
My take on Test Oracles
This prelude story brings me to the actual point I want to discuss today: Test Oracles.
In software testing, a Test Oracle is an entity in the system which – just like the Oracle in the movie “The Matrix” – holds all the answers. At any point you can ask your Test Oracle “is the system in a valid state?” and it will tell you the answer. Very handy for Model-Based Testing!
Test Oracles are also these imaginary things that testers like to talk about, but no one has actually seen or written. Okay, maybe I’m being a bit harsh here, but except for very isolated cases, Test Oracles are intractable because the complexity of the underlying system is too high. I have the feeling that in many cases the problem is deferred by relying on some Test Oracle to be written in the “near future” – an effort to ignore the tough problem of writing solid validation.
I believe the two Model-Based Testing schools are fundamentally different in their view on validation.
Theoretical modelers will seek to write strict Test Oracles at first, which is a really challenging task. I believe this is one of the reasons why Model-Based Testing is perceived as being difficult. If you start thinking too much about oracles and validations, you never get started on your modeling.
Pragmatic modelers on the other hand are much more likely to explore Model-Based Testing without dealing with the validation problems initially, and rely on Heuristic Oracles later on. This makes for a lower barrier to entry.
Heuristic Oracles are approximation oracles; they will tell you if something has gone totally awry. For example, a heuristic invariant could be that a computed length value must never be negative.
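That invariant fits in a few lines of Python. This is just a sketch: `heuristic_length_oracle` is a hypothetical name, and the values fed to it stand in for whatever lengths the system under test actually computes.

```python
def heuristic_length_oracle(length):
    """Heuristic (approximate) oracle: it cannot prove the computed
    length is correct, but it flags states that are definitely wrong."""
    if length < 0:
        raise AssertionError(f"computed length must never be negative, got {length}")

# Apply the heuristic after each step of a model-based test run.
for computed_length in [0, 3.5, 42]:  # stand-in values, not real system output
    heuristic_length_oracle(computed_length)
```

The check is deliberately one-sided: it accepts many wrong answers, but it is cheap to write and will catch the kind of gross failure that matters early in a modeling effort.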
In conclusion, Test Oracles are nice for closed problems, where it is easy to identify invariants that are necessary and sufficient to prove an algorithm correct – like sorting algorithms or tree structures. But aside from the textbook examples on algorithms and data structures, frankly most test problems are infeasible to write Test Oracles for. For any large-scale system, writing an Oracle would be too time-consuming, and it would most likely end up being more complicated to maintain than the actual product you are trying to test. Heuristic Oracles are less strict in their validation, but they allow testers to get some value out of their initial modeling efforts, and they lower the barrier to entry.
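For a closed problem like sorting, such a strict oracle really is small. A sketch in Python (the name `sorting_oracle` is my own): the two invariants checked below – the output is ordered, and it is a permutation of the input – are together necessary and sufficient for any correct sort.

```python
from collections import Counter

def sorting_oracle(original, result):
    """Strict Test Oracle for sorting: passes if and only if
    `result` is a correct sort of `original`."""
    # 1. The output is ordered.
    assert all(a <= b for a, b in zip(result, result[1:])), "output not ordered"
    # 2. The output is a permutation of the input (same multiset of elements).
    assert Counter(original) == Counter(result), "output is not a permutation of input"

data = [3, 1, 2, 1]
sorting_oracle(data, sorted(data))  # a correct sort passes the oracle
```

Contrast this with a heuristic oracle: dropping either check would admit wrong answers (e.g. returning `[1, 1, 1, 1]` satisfies the ordering check alone), which is exactly why closed problems are the pleasant exception.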