This post is about something that has been on my mind a lot lately and that I haven’t been able to fully articulate, but I’m hoping that writing it down will help me settle my thoughts.
I had the pleasure of attending the ETSI Model-Based Testing User Conference – and let me start by giving kudos to all the great people in the field who attended and gave some sharp presentations!
At the conference I got a look at the available vendor tools for Model-Based Testing (I won’t list them here; this is not supposed to be an advertisement blog). All of them are pretty powerful tools that let you build models from a set of requirements gathered at the beginning. In the model you specify which actions correspond to which requirements – e.g. in finance you may have a requirement that posting a sales order will produce a set of ledger entries, so this requirement would be associated with the posting action. Some tools can even import your requirements from external tools and keep track of changes to them. This is all pretty nice, and one tool showed the ability to visualize the impact of a requirement change directly in the rendered model view – now that is awesome!
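To make that concrete, here is a rough Python sketch of what “associating an action with a requirement” could look like. The requirement ID, the decorator and the ledger details are all made up for illustration; every vendor tool has its own notation for this.

    # Hypothetical sketch: tagging a model action with the requirement it covers.
    # REQ-FIN-012, FinanceModel and post_sales_order are made-up names.
    def covers(*requirement_ids):
        # attach requirement IDs to a model action so coverage can be reported later
        def tag(action):
            action.requirements = requirement_ids
            return action
        return tag

    class FinanceModel:
        def __init__(self):
            self.ledger_entries = []

        @covers("REQ-FIN-012")  # posting a sales order produces a set of ledger entries
        def post_sales_order(self, order_amount):
            self.ledger_entries.append(("receivable", order_amount))
            self.ledger_entries.append(("revenue", -order_amount))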
A significant number of presenters at the conference were also happy to report that they had discarded their existing metrics for software-testing quality and replaced them with requirements coverage. Meaning, they now record how many times each requirement is covered by their generated test suite.
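Computing such a requirements-coverage number is conceptually simple. A small sketch – with made-up requirement IDs and data shapes, purely for illustration – could look like this:

    # Hypothetical sketch: counting how often each requirement is covered
    # by a generated test suite. Each test is assumed to be a list of steps,
    # and each step is tagged with the requirements it covers.
    from collections import Counter

    def requirements_coverage(test_suite, all_requirements):
        hits = Counter()
        for test in test_suite:
            for step in test:
                hits.update(step["covers"])
        return {req: hits[req] for req in all_requirements}

    suite = [
        [{"action": "post_sales_order", "covers": ["REQ-FIN-012"]}],
        [{"action": "post_sales_order", "covers": ["REQ-FIN-012"]},
         {"action": "delete_item", "covers": ["REQ-LIST-007"]}],
    ]
    print(requirements_coverage(suite, ["REQ-FIN-012", "REQ-LIST-007", "REQ-UI-003"]))
    # {'REQ-FIN-012': 2, 'REQ-LIST-007': 1, 'REQ-UI-003': 0}

A requirement that comes back with a count of zero is the sanity check these presenters were after.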
But then a thought came to me:
“If I know all my requirements up-front, why would I use Model-Based Testing instead of just writing one or two scenario tests per requirement, which I know will cover these requirements well?”
Well, the reason I would use Model-Based Testing is that it produces many more permutations of action sequences than I would ever care to test if I had to write the scenarios by hand. I usually start out writing a very crude and simple model, which has some limited value (it covers only a few basic requirements), and then I see what happens when I run the tests. Almost always I find that some of my tests break because some precondition was not met: the executed action was not valid from the given state, and I have a model error. I then analyze these failures and turn them into preconditions on the actions, so that they are no longer executed from states where they are invalid in the SUT.
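As a sketch of the kind of generation loop I have in mind – my own simplified illustration, not any vendor tool’s API, and with made-up actions – the generator only fires actions whose preconditions hold in the current model state:

    # Hypothetical sketch: generating action sequences by a random walk over the
    # model, firing only actions whose preconditions hold in the current state.
    import copy
    import random

    class Action:
        def __init__(self, name, precondition, effect):
            self.name = name
            self.precondition = precondition  # state -> bool
            self.effect = effect              # mutates the state in place

    def generate_sequence(actions, initial_state, length, seed=None):
        rng = random.Random(seed)
        state = copy.deepcopy(initial_state)
        sequence = []
        for _ in range(length):
            enabled = [a for a in actions if a.precondition(state)]
            if not enabled:
                break  # no action is valid from this state
            action = rng.choice(enabled)
            action.effect(state)
            sequence.append(action.name)
        return sequence

    def create_order(state):
        state["open_orders"] += 1

    def post_order(state):
        state["open_orders"] -= 1

    actions = [
        Action("create_order", precondition=lambda s: True, effect=create_order),
        Action("post_order",
               # precondition added after generated tests broke against the SUT:
               # you cannot post an order that does not exist
               precondition=lambda s: s["open_orders"] > 0,
               effect=post_order),
    ]
    print(generate_sequence(actions, {"open_orders": 0}, length=8, seed=1))

Every precondition in that action list is a lesson learned from a generated test that broke.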
So what actually just happened here? By developing the model iteratively we start to reveal more information about the SUT, and every time we have to put a precondition check on an action, we have actually uncovered an unknown requirement. For example, we didn’t actually know that “you are not allowed to delete an item from a list if the list is empty” (sounds a lot like a requirement to me…) before we tried it! Once we tried doing it, we uncovered an unknown or implicit requirement.
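To stay with that example, the implicit requirement ends up in the model as a precondition on the delete action. Again, this is a made-up sketch, not any specific tool’s syntax:

    # Hypothetical sketch: the uncovered implicit requirement, encoded as a
    # precondition on the model action.
    class ListModel:
        def __init__(self):
            self.items = []

        def add_item(self, item):
            self.items.append(item)

        def delete_item_enabled(self):
            # the implicit requirement, made explicit:
            # you are not allowed to delete an item from a list if the list is empty
            return len(self.items) > 0

        def delete_item(self):
            assert self.delete_item_enabled(), "model error: delete fired on an empty list"
            self.items.pop()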
This sparked another thought:
“When was the last time I developed a feature where I knew all the requirements up-front?”
And the truth is, it has never really happened, because most features I have worked on were either too complicated to document the requirements for fully, or they evolved over time as new ideas emerged.
In this new world of Scrum & Agile, modeling from your requirements feels very much like the old way of doing things – almost like a waterfall approach to software testing. In my view of Agile, the team cannot expect to fully understand the requirements when they set out to work on a feature.
Don’t get me wrong, I think it’s important to keep track of your requirements and verify that they are indeed covered by your model and generated test suite. It serves as a sanity check: modeling is hard, and you can easily leave a requirement out of your model or generate a test suite that misses one or more requirements.
But please leave it at that: don’t get too focused on your known requirements, because they are not what matters most. The real beauty of Model-Based Testing is its ability to uncover implicit requirements.