Monday, December 19, 2011

Are Test Oracles a mathematical nicety to cover-up problems?

Last week I had an interesting discussion with a colleague who did not agree with my take on Model-Based Testing. He was unwilling to buy my arguments for why Model-Based Testing is a more efficient way of automating test cases than traditional scenario automation. He was a firm believer in Model-Based Testing, but his experience told him that modeling carried a high up-front cost that could only be recouped through maintenance savings, whereas I claimed that modeling was a much cheaper automation approach from the start.

Furthermore, he argued that my model was close to useless because it did not come with a validation oracle. At a deeper level, the discussion also revealed some interesting discrepancies in what each of us believed.
Needless to say, we never reached agreement on the subject – but I realized later on that we were arguing from completely different perspectives.
After the discussion I started thinking: “How can our opinions differ so much?” After turning this question over in my head during the weekend, I realized that we were using Model-Based Testing for two completely separate purposes.
A notion started to form in my head – it seemed that there were different ‘schools’ (or views) within Model-Based Testing.
·         Theoretical modelers: They want strict rules around their models and rigorously check their models' conformity. They take a theoretical approach to software testing: they like to mathematically prove algorithms and construct test suites that cover exactly the necessary and sufficient cases.
·         Pragmatic modelers: They are more of the ad-hoc modeling type. They have a pragmatic approach to Model-Based Testing in which a model is valuable on its own. They understand that the model should be validated, but they can live with limited validations. They see value in the model as a means for communication.

Tuesday, December 6, 2011

Requirements and Model-Based Testing: A rocky road

This post is on something that has been bugging me a lot lately, which I haven’t been able to fully express, but I’m hoping that getting it published will help me settle my mind.
I had the pleasure of attending the ETSI Model-Based Testing User Conference – and let me start by giving kudos to all the great people in the field attending and giving some sharp presentations!
At the conference I got a look at all the available vendor tools for Model-Based Testing (I won’t list them here; this is not supposed to be an advertisement blog). All of them are pretty powerful tools, and they allow you to build models from a set of requirements that you gather in the beginning. In the model you specify which actions correspond to a certain requirement. For example, in finance you may have a requirement that posting a sales order will produce a set of ledger entries, so this requirement would be associated with the posting action. Some tools can even import your requirements from external tools and keep track of changes to requirements. This is all pretty nice, and one tool showed the ability to visualize the impact of a requirement change directly in the rendered model view – now that is awesome!
A significant number of presenters at the conference were also happy to report that they had discarded their existing metrics for software testing quality and replaced them with requirements coverage. That is, they now record the number of times each requirement is covered in their generated test suite.
But then a thought came to me:
“If I know all my requirements up-front, why would I use Model-Based Testing instead of just writing one or two scenario tests per requirement, which I know covers these requirements well?”

Sunday, November 13, 2011

Model-Based Testing of Legacy Code – Part II – Risk Profiling

Last time I left you after describing my latest challenge – making sense of a large piece of legacy code. I alluded to the fact that we started building a risk profile for the changelist that made up the difference between a well-tested root branch and a poorly tested branch.
Okay, so what did we do to tackle this immense problem? Well, our approach was to compute a risk profile based on a model of the probability of code lines containing bugs. Without digging too deep into the mathematics, this is how the profile is built:

A)     All lines of code for files that differ are loaded into the profile
B)     Each line of code is assigned two risk weights: Test coverage, and difference weights
C)     For each file the weights are summed up and normalized and a third weight is introduced: Revision weight

The three weights are combined into a total weight for each code line and aggregated up to file level. The computation is a weighted average of the normalized weights, where the test coverage and difference weights are multiplied together:
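To make the idea concrete, here is a little sketch in Python. This is not our actual implementation: the weight values, the mixing factor alpha, and the exact combination formula shown here are illustrative placeholders for the scheme described above.

```python
# Hypothetical sketch of the risk-profile computation described above.
# alpha and the per-line weights are made-up illustrative values.

def line_risk(coverage_w, diff_w, revision_w, alpha=0.7):
    """Combine per-line weights: the test coverage and difference weights
    are multiplied, then averaged with the file's revision weight."""
    return alpha * (coverage_w * diff_w) + (1 - alpha) * revision_w

def file_risk(line_risks):
    """Aggregate line risks up to file level (simple mean)."""
    return sum(line_risks) / len(line_risks)

# Example: three changed lines in one file (weights normalized to [0, 1])
lines = [line_risk(0.9, 1.0, 0.5),   # uncovered and changed -> high risk
         line_risk(0.1, 1.0, 0.5),   # covered and changed   -> lower risk
         line_risk(0.9, 0.0, 0.5)]   # uncovered, unchanged  -> diff weight zeroes the product
print(file_risk(lines))
```

The multiplication is the interesting part: a line only contributes heavily when it is both poorly covered and actually changed.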

Wednesday, November 2, 2011

Model-Based Testing of Legacy Code – Part I

It has been a while since I had time to post on this blog. I've been kept busy attending the Model-Based Testing User Conference - it had some really great presentations that I also want to find time to report on. I have a trip report ready, but I need to boil it down to the relevant parts before I post it here. Granted, my next post is not going to be on Model-Based Testing, but it will be the first in a series of posts that - with a bit of luck - will build up to some really interesting Model-Based Testing!

That being said, let's jump into it. Recently I was faced with the challenge of taking a large set of legacy code changes and testing them in a way that is sensible. I work with a colleague in a two-man team on solving this problem.
You can think of the changes as a source control branch. The root branch has good test coverage, but the new branch has no automated test coverage. The changes contain added functionality to the product, but also smaller tweaks to existing code in the form of changes to existing functionality and regulatory changes.

The objective is of course to apply Model-Based Testing somehow, but the major challenges are:
A)     How do we make sense of this new branch?
B)     How do we approach automation of legacy functionality?
C)     How much can be obtained from modeling?

I will be writing a few blog posts for the coming weeks on the progress we make as we go along, but ultimately I would like to hear from my readers, what ideas they have on how to tackle this problem.

So let’s start with the first challenge...

How do we make sense of this new branch?
We decided to analyze the changes and group them into two sets: Features and integration points. A feature is a coherent set of changes that was developed in the same go, whereas integration points cover the incoherent changes that are either smaller or made over a longer period of time. Later we will see how this categorization affects the automation strategy we would like to apply.

Luckily we have documentation that names these features and link them to source files, so we do not have to make a detailed analysis to uncover what is coherent and what is not. Some documented features are small and can be seen more as integration points.

All integration points can be identified through a simple source code difference analysis. Notice this includes all features as well. It’s an open challenge how to identify overlap...
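As a small illustration of what such a difference analysis looks like, here is a Python sketch using the standard difflib module. The file contents are made up; the real analysis of course ran over the actual source tree.

```python
# Sketch: spotting changed lines between the root branch and the new branch.
# The snippets below are made-up stand-ins for real source files.
import difflib

def changed_lines(old, new):
    """Return the line numbers (in the new file) that differ from the old file."""
    changed = []
    matcher = difflib.SequenceMatcher(None, old, new)
    for tag, _, _, j1, j2 in matcher.get_opcodes():
        if tag in ("replace", "insert"):
            changed.extend(range(j1, j2))
    return changed

old = ["procedure Post()", "  ValidateOrder;", "  CreateLedgerEntries;"]
new = ["procedure Post()", "  ValidateOrder;", "  CheckRegulatoryRules;",
       "  CreateLedgerEntries;"]
print(changed_lines(old, new))  # the inserted regulatory check shows up
```

Every line flagged this way is a candidate integration point; grouping flagged lines back into coherent features is the part that needs the documentation.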

This was a brief introduction to the upcoming project from my side.

The next step that we are working on is coming up with a structured approach to handling legacy code, where we build a risk profile over the changes - more on this next week...

Tuesday, October 18, 2011

Dealing with floating points in models - Part II

Today I’m going to follow up on Part I and give you the implementation details of the classes. Previously I provided you with a generic interface:

Now as I pointed out previously, we can choose to implement this interface for any domain type, so let’s try implementing it for a double domain:
    public class DomainDoubleSampler : IDomainVariableSampler<double>
    {
        public double Maximum { get { return 100.0; } }
        public double Minimum { get { return -100.0; } }

        public double BoundaryNegative(double boundary)
        {
            return boundary - double.Epsilon;
        }

        public double BoundaryPositive(double boundary)
        {
            return boundary + double.Epsilon;
        }

        public double Sample(double lowerBound, double upperBound)
        {
            // A fixed seed keeps the generated tests reproducible; note that
            // re-seeding on every call means repeated calls with the same
            // bounds return the same sample.
            Random rand = new Random(1);
            return lowerBound + rand.NextDouble() * (upperBound - lowerBound);
        }
    }
This implementation defines an input range of [-100.0, 100.0] and samples at random inside partitions of this interval.

Friday, October 14, 2011

James D. McCaffrey implements Recursive Binary Search Tree

Today I came across this blog article by James D. McCaffrey, where he actually implements a recursive binary search tree, and I couldn’t resist the urge to apply my own Model-Based Testing solution [1,2] to his implementation!

I had to extend his solution a bit to allow for an internal validation function of any constructed tree, but aside from that my only comment on his solution is that he treats error conditions by silently ignoring them - that is, the following actions generate no errors/exceptions:
A)     Deleting an element that does not exist in the tree.
B)     Inserting an element twice into the tree.

After patching these up in his code, all my model-generated test cases passed, so his implementation is rock-solid! Kudos for that!

[1] Application of Model Based Testing to a Binary Search Tree - Part I
[2] Application of Model Based Testing to a Binary Search Tree - Part II

Wednesday, October 5, 2011

Domain input partitioning / dealing with floating points in models – Part I

Originally I wanted to post everything in one article, but it got too long, so I decided to split it and save the hardcore details for the next post.

I find that dealing with models involving floating point input can be a tad tricky, so why not post on this topic? The problem here is infinity - or as good as. It stems from the fact that any given slice of a floating point domain contains a practically infinite number of values. Often when working with integer models we limit the input variables to a specified range, like: x in {0..2}. However, this of course does not work if x is a floating point input.

So how do we deal with infinity? First of all we need to clarify what it is we want. Clearly we do not want to test every possible floating point value between 0 and 2. Also, what we want is determined by what we are testing. So let’s make up an example. Assume we are to test the square root function (of real values, to keep it simple); what kind of inputs would we give it? If you are a software tester, your testing aptitude should kick in right about now. We want to try a sunshine scenario with a positive value. We also want to make sure any negative input gives an error, and that the boundary case 0 is handled correctly. Then we also have some extreme ranges, like a very large positive number, a large negative number, and the maximum and minimum values of the input type.

If we analyze our pattern here, all we are doing is partitioning the input domain. If you have been following this blog for a while you may recall I referenced equivalence class partitioning before – this is the same stuff. Simplifying a bit we end up with an input domain like:

The important part here is that we have fixed points of interest (the boundaries) and ranges between them. For the boundary cases, besides the actual boundary value, the points of interest are the near neighbors ±e, where e is a very small number. For the ranges we do not really care what the exact number is; for want of better we draw a random sample from the region.
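Putting the above into a small Python sketch (illustrative only - the boundary list, the input range, and the value of e are placeholders, not what a real sampler would hard-code):

```python
# Sketch of the partitioning scheme: fixed boundary points of interest,
# their near neighbors +/- e, and one random sample per open range.
import random

def partition_samples(minimum, maximum, boundaries, e=1e-9, seed=1):
    rand = random.Random(seed)           # seeded for reproducible tests
    points = []
    for b in boundaries:
        points.extend([b - e, b, b + e])  # boundary and its near neighbors
    edges = [minimum] + boundaries + [maximum]
    for lo, hi in zip(edges, edges[1:]):  # one random draw per range
        points.append(lo + rand.random() * (hi - lo))
    return points

# Square-root example: the only interesting boundary is 0
print(partition_samples(-100.0, 100.0, [0.0]))
```

For the square root function this yields the boundary 0, its two neighbors, one sample from the negative range (expected to fail), and one from the positive range (sunshine scenario).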

Note on Spec Explorer and small floating point values:
Unfortunately I found a bug in Spec Explorer: it crashes when working with small floating point values. This prevents me from trying out combinations where the parameter is e. The implementation of the automatic domain partitioning will thus also be limited to testing only at the boundary cases, and not at their near neighbors.

Tuesday, October 4, 2011

Comments fixed

I realized today that comments were not working on my blog.

This should have been fixed now, and you should be able to post anonymous comments!

Sorry for the inconvenience, and please leave a comment to test the site.

Thursday, September 22, 2011

Application of Model-Based Testing to a Stack

Today I would like to go back to basics, and also show how to easily switch the underlying implementation. Some of my most read posts are on applying Model-Based Testing to basic data structures, so why not pick up yet another basic data structure?

This time I’m keeping it ultra-simple: The Stack.

I’m sure we all know what a stack is, but to briefly remind you it’s a LIFO (Last-in, first-out) container. In a classic stack elements are added on top of the stack using the “Push” command and removed from the top one-by-one using “Pop”. When an element is popped it is removed from the stack and returned. It is not possible to reference elements that are not on the top of the stack.

A not-so-classic stack variation contains a “Peek” function that will return the top element of the stack without removing it.

So without further ado let’s jump into modeling!
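To set the stage, here is what the stack model's state and rules boil down to, sketched in plain Python (the real model is written for Spec Explorer in C#; this is just the concept, with enabling conditions expressed as assertions):

```python
# A minimal model-program sketch of the stack: the model state is a list,
# and each rule checks its enabling condition before firing.
class StackModel:
    def __init__(self):
        self.items = []          # model state

    def push(self, x):           # always enabled
        self.items.append(x)

    def pop(self):               # enabled only on a non-empty stack
        assert self.items, "pop is not enabled on an empty stack"
        return self.items.pop()

    def peek(self):              # returns the top element without removing it
        assert self.items, "peek is not enabled on an empty stack"
        return self.items[-1]

m = StackModel()
m.push(1); m.push(2)
print(m.peek())   # 2
print(m.pop())    # 2
print(m.pop())    # 1
```

Exploration of such a model enumerates the reachable states (stack contents) and only generates pop/peek steps where the enabling condition holds.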

Tuesday, September 6, 2011

Multi-Threaded Modeling – Barrier Synchronization

Okay, all my previous posts have been based on static system models, where rules and actions are static functions in classes. A lot can be accomplished with this, and it is also possible to wrap instance based applications inside static functions. However, Spec Explorer allows us to work with actual instance based models.

So what does this mean for us? Well, it means we are allowed to instantiate any number of objects of some type and Spec Explorer can reference each of these objects individually. For example, you could write a class that represents a complex number, and have Spec Explorer create two instances of this object and then add them together. This would be a basic example, but instead let’s jump to something more tricky – multi-threading!

You can imagine that instance based modeling and multi-threading are closely related. But there are some key issues one must understand first. Instance based modeling is not multi-threaded; it is exactly as sequential as any static model. Spec Explorer will not create a new thread for you when you instantiate an object. Any interaction by Spec Explorer will be from the “main” thread; you have to handle communication with your worker threads yourself.

Barrier synchronization
The example I have picked for this post is a barrier synchronization implementation. Barrier synchronization is a pattern for worker threads in multi-threaded applications. With barrier synchronization, each thread must wait for its co-workers to complete their cycle of work before the next cycle starts. Denoting the number of cycles completed by thread i as c_i, this can be stated by the invariant that |c_i - c_j| ≤ 1 for all threads i and j.
Essentially each thread reaches a “barrier” upon completion of its work. This “barrier” is only lowered once all threads have reached it. Upon release of the “barrier” the worker threads continue their work.
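The pattern itself can be sketched in a few lines of Python using its built-in barrier. This is just an illustration of the invariant, not the implementation under test; cycle and worker counts are arbitrary.

```python
# Barrier synchronization sketch: each worker waits at the barrier until all
# co-workers finish the current cycle, so cycle counts never drift apart by
# more than one.
import threading

CYCLES, WORKERS = 3, 4
barrier = threading.Barrier(WORKERS)
counts = [0] * WORKERS
lock = threading.Lock()

def worker(i):
    for _ in range(CYCLES):
        with lock:                       # one cycle of "work"
            counts[i] += 1
            # invariant: |c_i - c_j| <= 1 for all threads i, j
            assert max(counts) - min(counts) <= 1
        barrier.wait()                   # lowered once everyone arrives

threads = [threading.Thread(target=worker, args=(i,)) for i in range(WORKERS)]
for t in threads: t.start()
for t in threads: t.join()
print(counts)
```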

Sunday, August 21, 2011

Model-Based Testing in Agile Development Cycles – Part II

Last post I left you hanging after a model had emerged from our initial acceptance requirements. The model we generated looked like:

Today’s post is about what you can use this model for in agile development, and what kind of test cases it produces.

Design discussions
As I already pointed out in the previous post, models are great in discussions. Often when you explain the model to the product owner, he/she will start noticing quirks. For the example model we developed, one such quirk is that the system is not allowed to reverse a payment if said payment has been involved in a financial application. Although it is a requirement of the system, it was not mentioned in the acceptance criteria because it is not a sunshine scenario. Design discussions based on models will help you uncover these implicit requirements.

Sunday, August 7, 2011

Model-Based Testing in Agile Development Cycles – Part I

Today I’m straying away from Model-Based Integration Testing, as I’d like to ramble a bit about using Model-Based Testing in an Agile development cycle. For the few of you who haven’t heard about Agile, here’s my crude and narrow take on it.

Agile development
Agile development is often referred to as Scrum; however, Scrum is just one Agile methodology, and many more exist. In Agile development, work is broken up into short sprints of roughly two weeks' duration. At the beginning of a sprint you sit down with your team (roughly 4-7 people) and plan what you believe you can achieve during the next two weeks. The main idea is that when you complete a sprint, you are done. In contrast, way too often in waterfall you hear the developer/tester saying “I’m done” (or “almost done”), meaning the code is done but they still need to run tests, validate performance, security, stress, etc. That does not fly in Agile: done means done, your code is ready for production. The idea, of course, is that you do not have to go back and revisit your work at a later point in time – it is Agile because you do not drag a huge backlog of leftover items into the next sprint.

Okay, enough about Agile – there are lots of other (and better) sources out there online [1] and in books [2].

Model-Based Testing and Agile
Recently I’ve gained some experience applying Model-Based Testing in an Agile development cycle. It’s tricky business and you have to balance your time carefully. There are definitely some pros to applying Model-Based Testing in Agile, but one has to be very careful not to focus too much on modeling throughout the sprint - instead I suggest taking the time early in the sprint to formalize a model, which serves as a good aid in discussions as well as a great reference tool later on.

Sunday, July 24, 2011

Model-Based Integration Testing – Part II (State encapsulation)

Last time we saw how we could perform Model-Based Integration Testing of a FileSystem model and a Network model using extension models. One problem with this approach was that the first model had to expose its internal state to the other model. In this post I’m going to talk about model state space encapsulation.

A step in the right direction is to do model encapsulation. Let’s start out by obtaining the same model results but having the FileSystem model encapsulating its state. Simply change accessibility of all internal state variables to private:
        private static int drives, currentDirectory;
        private static SequenceContainer<int> directories = new SequenceContainer<int>();

Now of course our Network model won’t compile, as it was referencing the FileSystem state variables directly. We have to realize what the common concept is here – the Network layer is supposed to mimic a directory structure, and to do this it must have an interface for creating a drive in the FileSystem model's state space. One easy way to obtain this is to increase the accessibility of the CreateDrive rule to public, so the Network model can directly call a rule in the FileSystem model:
        [Rule(Action = "MapNetworkDrive()")]
        static void MapNetworkDrive()
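For a language-agnostic picture, the encapsulation step boils down to something like this Python sketch. The class and method names mirror the models above, but the bodies are made up for illustration:

```python
# Encapsulation sketch: the FileSystem model hides its state, and the
# Network model goes through a public rule instead of touching state
# variables directly.
class FileSystemModel:
    def __init__(self):
        self.__drives = 1            # private state: the default drive only

    def create_drive(self):          # public rule: the only way to add a drive
        self.__drives += 1

    def drive_count(self):           # read-only query used for validation
        return self.__drives

class NetworkModel:
    def __init__(self, fs):
        self.fs = fs

    def map_network_drive(self):     # delegates to the FileSystem rule
        self.fs.create_drive()

fs = FileSystemModel()
NetworkModel(fs).map_network_drive()
print(fs.drive_count())   # 2
```

The point is the same as in the C# version: the Network model can no longer corrupt the FileSystem state; it can only invoke the rules the FileSystem model chooses to expose.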

Monday, July 18, 2011

Model-Based Integration Testing – Part I

First let me introduce the concept of integration testing. Integration testing, as opposed to unit testing, is all about the big picture: it is designed to verify that the components of an application work together correctly. Take Windows as an example. The operating system has literally tons of components, and many of these interconnect.

One such example could be when you map a network drive on your computer. The network layer integrates into the file system by creating a virtual drive, while the file system integrates into the network layer by reading network paths. The actual network location is integrated into your file system and displayed as a drive icon under My Computer. Even though the networking layer and file system layer are both tested in isolation, there is no guarantee that the two components will work together once they are connected to each other. A ton of problems could occur in this integration! Integration testing is all about finding such issues.

Let’s start by modeling a file system. We model the following:
    static class FileSystem

        [Rule(Action = "CreateDirectory()")]
        static void CreateDirectory()

        [Rule(Action = "ChangeDirectory(directory)")]
        static void ChangeDirectory(int directory)
        [Rule(Action = "CreateFile()")]
        static void CreateFile()

        [Rule(Action = "CreateDrive()")]
        static void CreateDrive()

        [Rule(Action = "FilesOnDrive(drive)/result")]
        static int FilesOnDrive(int drive)

The idea here is of course that we have a file system with a default drive (say “C:\”). We can create new directories, and in these we can create files; we also have a validation function that counts the number of files on a drive.
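As a rough sketch of the model state behind these rules, here is a simplified plain-Python version (it ignores directories and the current-directory state, and just tracks file counts per drive - the real model tracks more):

```python
# Simplified sketch of the FileSystem model state: drives hold files, and a
# validation query counts files per drive.
class FileSystemState:
    def __init__(self):
        self.files_per_drive = {0: 0}   # drive 0 is the default drive
        self.current_drive = 0

    def create_drive(self):
        new = max(self.files_per_drive) + 1
        self.files_per_drive[new] = 0

    def create_file(self):              # files land on the current drive
        self.files_per_drive[self.current_drive] += 1

    def files_on_drive(self, drive):    # validation function
        return self.files_per_drive[drive]

fs = FileSystemState()
fs.create_file(); fs.create_file()
fs.create_drive()
print(fs.files_on_drive(0), fs.files_on_drive(1))   # 2 0
```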

Sunday, July 10, 2011

T-wise combinatorial Model-Based testing – Part II

In the previous post we saw how Model-Based Testing can be used to generate combinatorial input for the SUT. This is very nice, because we can capture this behavior in a generic way and easily extend it, and the model will automatically generate the necessary combinations.
One of the oddities we observed, however, was that the model generated equivalent test cases where the parameter order was swapped. For pair-wise testing this is an annoyance because the model generates double the number of necessary tests, but for higher orders of t this leads to big problems, as the duplications scale as n × t! (that’s t-factorial), where n is the number of unique tests and t is the dimensionality of the combinatorial test generation.
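The factorial blow-up is easy to see in a few lines of Python: every ordering of the same parameter choices is an equivalent test, so canonicalizing the generated tuples (here simply by sorting) collapses the t! duplicates down to one representative.

```python
# Why the duplication scales with t!: each unique set of t parameter choices
# is generated in every one of its t! orderings by naive exploration.
from itertools import permutations
from math import factorial

choices = (1, 2, 3)                       # one unique 3-wise test, t = 3
generated = set(permutations(choices))    # what naive exploration produces
unique = {tuple(sorted(p)) for p in generated}

print(len(generated), factorial(3), len(unique))  # 6 6 1
```

Sorting is only a valid canonical form when parameter order genuinely doesn't matter for the SUT, which is exactly the equivalence observed above.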

Wednesday, July 6, 2011

T-wise combinatorial Model-Based testing – Part I

One of the strengths of Model-Based Testing is the ability to generate combinatorial inputs from a model. Say for example we are using a black-box testing approach on a scientific implementation of the inverse of a multi-dimensional matrix function:
The SUT is designed to compute the inverse of f(x,y,z) for any value of x, y, z. We may state the test invariant that the product of f(x,y,z) and its computed inverse should not differ from the identity matrix by more than some small residual error. The nice thing about the matrix inverse is the relative simplicity of verifying the correctness of the implementation, because direct matrix multiplication is simple.

The actual setup is somewhat contrived, but it serves as a good example. The point is that we are testing an implementation of a multi-dimensional function (in this case 3-dimensional). Keep in mind that the SUT could be any function that requires more than one input.
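The invariant itself is cheap to check. Here is a pure-Python sketch for the 2×2 case (the real SUT is multi-dimensional; this only illustrates the residual check, and the sample matrix is arbitrary):

```python
# Oracle sketch for a matrix inverse: the product of a matrix and its
# computed inverse should differ from the identity by at most a small
# residual.
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inverse_2x2(m):
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[ m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det,  m[0][0] / det]]

def residual(m):
    """Max absolute deviation of m * m^-1 from the identity matrix."""
    prod = mat_mul(m, inverse_2x2(m))
    ident = [[1.0, 0.0], [0.0, 1.0]]
    return max(abs(prod[i][j] - ident[i][j]) for i in range(2) for j in range(2))

print(residual([[4.0, 7.0], [2.0, 6.0]]) < 1e-9)   # True
```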

Monday, July 4, 2011

We Are Going To Berlin

We got some great news to share! Apparently the ETSI board liked our abstract on Model-Based Integration Testing so much they decided to grant us a 20 minute presentation slot at the 2011 Model-Based Testing User Conference in Berlin on October 18-20th (which I previously announced on the blog).

Seeing how this idea on Model-Based Integration Testing has been blue-stamped by an authority, I thought it would be a good idea to share some of the details of our framework over a series of upcoming blog posts. The plan is to reveal more details here than can be presented in 20 minutes. Then I will be able to refer participants to this blog for more details.

The basic idea of the framework is to generate model based tests that span multiple feature boundaries within the SUT. We obtain this behavior using a producer/consumer pattern on “Universally Recognized Artifacts” (URAs) which are cleverly chosen pieces of information inside the SUT that “resides on the boundaries of system features”.

For now, we are very excited about this, and looking forward to a great conference!

Sunday, June 26, 2011

Applying Model-Based Testing for Fault-Injection in cloud services – Part III

2-service model
Interestingly enough, when modeling more than one service, the model will also generate tests that verify that services continue functioning even when others go down. Essentially what we are checking here is that we don’t have any unnecessary dependencies between the services, and that a service won’t hang because another one is down.
When designing models that include more than one service, the state space can rapidly explode. Consider adding just one more service, a Payment Service, to the model, which we depend upon at check-out.
We extend our model by adding:
    public class PaymentService : ModelService
    {
        public override string Name { get { return "Payment"; } }

        public void ProcessPayment()
        {
            // body omitted in this excerpt
        }
    }

And changing the model to:
    static class WebshopModelProgram
    {
        static ShoppingCartService cartService = new ShoppingCartService();
        static PaymentService paymentService = new PaymentService();

        [Rule(Action = "AddToCart()")]
        static void AddToCart()
        {
            // body omitted in this excerpt
        }

        [Rule(Action = "ProceedToCheckOut()/result")]
        static bool ProceedToCheckOut()
        {
            if (cartService.Count() == 0)
                return false;

            // interaction with the payment service omitted in this excerpt
            return true;
        }
    }
Sunday, June 19, 2011

Applying Model-Based Testing for Fault-Injection in cloud services – Part II

Okay, so I have to admit I’m mixing things up a bit here. This week I wanted to go to a 2-service model, but I realized that would be jumping ahead of things. Instead I want to take the next steps in actually implementing the shopping cart service and the model interface to this service, as this has some interesting challenges.

First the service…

Shopping cart service
The shopping cart service is a Windows Azure WCF Service Web Role project. It implements a very crude interface. You can add an item to the cart, reset it (empty it), retrieve the number of stored items, and proceed to checkout (which also empties the cart). The interface is:
    public interface IShoppingCart
    {
        /// <summary>
        /// Empties the shopping cart
        /// </summary>
        void Reset();

        /// <summary>
        /// Adds the item with given name to the shopping cart
        /// </summary>
        /// <param name="item">Name of item to add</param>
        void AddItem(string item);

        /// <summary>
        /// Returns the number of items currently in the cart
        /// </summary>
        /// <returns>Number of items in the cart</returns>
        int GetCount();

        /// <summary>
        /// Continues to payment input, and empties the shopping cart
        /// </summary>
        /// <returns>False if no items are in the cart</returns>
        bool ProceedToCheckout();
    }

The service can be started from within Visual Studio where you can debug it, or you can generate an Azure ASP.NET Web Role that consumes it.

Writing tests against a service is a piece of cake. You simply add a service reference to the test project and create a static instance of the service client, which you can then call. The adapter layer of our model-based tests will simply call the appropriate service method for each action, with the exception of the “KillService” and “RestoreService” rules, which are special.
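Conceptually, that adapter layer is just a dispatch table from model actions to client calls, with the fault-injection rules special-cased. A Python sketch of the idea follows; the client class, rule names, and kill/restore hooks here are hypothetical stand-ins, not the real generated WCF proxy:

```python
# Adapter-layer sketch: most model actions map straight onto service-client
# calls; the fault-injection rules are routed to special handlers instead.
class FakeCartClient:                 # stand-in for the generated service proxy
    def __init__(self): self.items = 0
    def AddItem(self, name): self.items += 1
    def GetCount(self): return self.items

def make_adapter(client, kill, restore):
    direct = {"AddToCart": lambda: client.AddItem("item"),
              "GetCount": client.GetCount}
    special = {"KillService": kill, "RestoreService": restore}
    def step(action):
        handler = special.get(action) or direct[action]
        return handler()
    return step

log = []
step = make_adapter(FakeCartClient(),
                    kill=lambda: log.append("kill"),
                    restore=lambda: log.append("restore"))
step("AddToCart"); step("KillService")
print(step("GetCount"), log)   # 1 ['kill']
```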

The implementation of a shopping cart service can be found in the appendix [1]. 

These actions are a bit more tricky to implement...

Sunday, June 12, 2011

Applying Model-Based Testing for Fault-Injection in cloud services – Part I

I’ve been playing around with ideas on how to leverage Model-Based Testing for cloud-based services. Of course, you can create models that interface with your services the usual way, like testing any application, but can we get more out of it?

Well – what are some of the fundamental design differences when designing services in the cloud? Scale – it is all about scale, and with scale come fragmentation and connectivity issues. Instead of having a giant monolithic application maintaining state, the state must instead be tracked in requests, or some means of distributed state must exist. These points become interesting to attack from a testing perspective. You start asking yourself: what would happen if a service is suddenly unavailable? In a buggy implementation I could lose part of my state; can the services recover from that error condition? Are there services that are more vulnerable than others? What if the service dies just after I sent it a message; will my time-out recovery mechanism handle this correctly?

To make things more understandable I’ll give an example of a cloud-based application. Imagine we are testing an online web shop composed of a website and three supporting services (one for authentication, one for payments, and one for shopping carts) talking together to provide a fully functioning application. The underlying implementation could be fragmented like this.

Immediately we start asking questions like: What if the shopping cart service goes down; will my user lose the selection? What if the payment service does not respond to a payment request; is the error propagated to the website, or will it be swallowed by the shopping cart service?

Sunday, June 5, 2011

Model-Generated tests as part of regression

Any automated test case can be included in your regression suite – but does it make sense to include Model-Based tests in a regression suite? I’ve asked myself that question at work today, and I’d like to discuss some of the pros and cons here.

To put things into perspective, our regression suite has thousands of highly stable automated tests, and it is run after any minute change to the product has been made. If any single test from the regression suite fails the product change is reverted. Given this setting our four main priorities are reliability, efficiency, maintainability and ease of debugging.

Reliability of the automated test cases is of course the top priority; we cannot afford to reject developer changes due to test instabilities.

However, reliability is often governed by the underlying framework and not the tests themselves. From my experience, model-generated tests have the same reliability as any other functional test.

Execution speed of the tests is the main concern. We are time-constrained in how long we can test a developer's change, because we cannot slow the productivity of the organization to a grinding halt just because we want to run an unbounded number of tests.

In terms of execution speed, model-generated tests suffer because they often repeat a lot of steps between tests, effectively re-running the same piece of the test over and over again, whereas a manually written test case could leverage database backup/restore to avoid redoing steps, or simply be designed smarter with less redundancy.

But MBT also suffers from generating too many tests. The model exploration will generate all possible combinations, with no means of assigning a priority to individual tests. Effectively we cannot make a meaningful choice between model-generated tests, so we are forced to take all, none, or a random selection. Because we want to minimize the risk of bugs slipping into the product, making arbitrary decisions is inadvisable (I have some ideas on how we can do better at this, but that is for another blog post).

Any regression suite will sooner or later need to change, because the requirements for the product change. Thus some amount of work will be put into refactoring existing regression tests once in a while.

Model-generated tests are actually easier to maintain (given you have staff with the required competencies) than regular functional tests. This boils down to what I blogged about earlier – the essence being that we can more easily accommodate design changes, because changes to the model automatically propagate into the tests. The additional abstraction actually helps us in this case. Conceptually, it is also easier to understand what needs to be done to accommodate a change in requirements when you have a picture of the model state space to look at.

Ease of debugging
Congratulations, your test is failing! But is it a product bug or a defective test case? That is the ultimate question we need to answer – and we want an answer fast!

A good starting point is to understand the conditions that apply before the failing step is taken. Reading the test code is not very helpful in establishing this, because it is auto-generated gibberish with no comments to help you understand it. So at first glance we may conclude that this is a problem.

However, in my experience, debugging failing model based tests is harder but not impossible; rather, it changes the game. It is now a matter of understanding how the generated test case relates to the state space of the model. The state the test was in before the offending step was taken is easily read, and for well-structured models you can trace the steps and in that way build up an understanding of the conditions.

Once you get good at it, you start to read the model picture and realize important facts. For example, if you have one failing test making a state transition that another test takes without failing, then something is amiss. You need to start investigating why your model does not mimic the system under test; this could be either a limitation of your model that you need to handle, or an actual bug in the product.

Model generated tests for regression suites make sense on some points, and even come out ahead on maintainability, but unfortunately they fall flat on their face when we start considering efficiency. The lack of execution optimization, combined with a complete lack of test prioritization, makes for a poor pool to draw from when trying to establish what is relevant to run for regression.

Friday, May 27, 2011

Application of Model Based Testing to a Binary Search Tree - Part II

Okay, today I want to wrap up the model based testing of the binary search tree implementation from last time. Remember how we uncovered the problem that the model did not cover all of the code? Drawing on our experience from the Application of Code Coverage to Model Based Testing post, we understand that our model does not closely reflect the actual implementation, leaving us with the risk of a test hole.

Understanding the problem
Before we jump into adding more tests, let's try to understand what the problem really is. Remember I hinted that it has to do with our choice of container in the model. So let's explore this by building some trees from the model:

Notice that even though these three trees are very different constructs, the internal set representation of the model reads (0, 1, 2) for all cases.
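To make the point concrete, here is a minimal Python sketch (not the original model code; the helper names are mine) that builds three trees from different insertion orders and shows that a set-based model sees identical contents for all of them:

```python
class Node:
    def __init__(self, value):
        self.value, self.left, self.right = value, None, None

def insert(root, value):
    # Standard BST insertion: smaller values go left, others go right.
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

def shape(root):
    # Encode the structure as nested tuples so shapes can be compared.
    if root is None:
        return None
    return (root.value, shape(root.left), shape(root.right))

def contents(root):
    # What a set-based model sees: only the values, not the structure.
    if root is None:
        return set()
    return {root.value} | contents(root.left) | contents(root.right)

trees = []
for order in [(1, 0, 2), (0, 1, 2), (2, 1, 0)]:
    t = None
    for v in order:
        t = insert(t, v)
    trees.append(t)

print([contents(t) for t in trees])   # {0, 1, 2} every time
print(len({shape(t) for t in trees})) # 3 structurally distinct trees
```

Three distinct shapes, one indistinguishable set – so a model whose state is a set cannot drive the tree into (or detect) shape-dependent behavior in the implementation.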

Friday, May 20, 2011

Application of Model Based Testing to a Binary Search Tree - Part I

I wanted to post some real modeling examples for a change, where I show how to use model based testing to explore test case combinatorics. The obvious choice is of course the more than sufficiently modeled calculator. So I decided not to choose a calculator, but something a bit different. I thought to myself, why not a simple Binary Search Tree? Hmm, but does it have any potential?

BSTs are really nice in that you can write invariants for them: 
For all nodes n in T: every value in the left subtree of n < value(n) < every value in the right subtree of n
(Note that the purely local form, value(left(n)) < value(n) < value(right(n)), is weaker – it admits trees where a grandchild violates the ordering.)

However, in a normal functional testing paradigm this is not entirely sufficient to validate the tree. The problem is that any given sub-tree of the BST will pass the integrity check – thus if I were to introduce a bug that removed a whole sub-tree when deleting a node from the tree, the resulting tree would still be a valid BST, but it is not the expected one! Normally we would also need to check the node count and that the expected values are present in the tree; in a model based testing paradigm, however, this is no longer required, as we will see later on.
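A quick sketch of that failure mode, using the range-based formulation of the invariant (the code and names are mine, not the implementation under test):

```python
def is_valid_bst(node, lo=float("-inf"), hi=float("inf")):
    # Trees are nested tuples: (value, left, right); None is the empty tree.
    if node is None:
        return True
    value, left, right = node
    return (lo < value < hi
            and is_valid_bst(left, lo, value)
            and is_valid_bst(right, value, hi))

# Full tree holding {1, 2, 3, 4, 5}.
full = (3, (2, (1, None, None), None), (4, None, (5, None, None)))

# Buggy delete of node 4 that throws away its whole subtree (loses 5 too).
buggy = (3, (2, (1, None, None), None), None)

print(is_valid_bst(full))   # True
print(is_valid_bst(buggy))  # True -- the invariant holds, yet data was lost
```

Both trees satisfy the invariant, which is exactly why the ordering check alone cannot serve as the oracle for a delete operation.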

Monday, May 16, 2011

Flexibility of model based testing in practice

We all hear people arguing that model based testing is much better than traditional testing. "Why?" you ask. "Well, it's much more flexible" comes the answer. And you sit back and think, "hmm, that didn't really answer my question".

So let me try to answer your question – why are models more flexible? Let me give you a real life example of a case where model based testing proved to be flexible. I was working on a model for a new feature of the system under test, and we started out designing and implementing model based testing of the feature the way it was implemented. It so happens that the feature contains a list of items (we call it a journal; when you confirm your entries you post the journal), and each item has a set of associated attributes (we call these dimensions – they are generic and used for analysis later on). We had developed a model for testing the posting functionality of this journal, and the model created roughly 300 test cases. Now, the dimension attributes can be set to a blocked state that prevents posting the journal – which we had modeled.

Now somebody comes along and says: "You know what? This is simply used for forecasting, there is no need to prevent posting of blocked dimensions." At exactly this moment you take a deep breath and brace yourself for an argument about how the requirements were set up – before you realize, well... this is not a problem at all, we used model based testing. We went back to the model and literally changed one line of model code, changing the expected result of the PostJournal action to always be true. Voilà, we had fixed 300 test cases.
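For illustration only – the actual model was not written in Python and I am inventing the names here – the key point is that the expected outcome of the PostJournal action lives in exactly one place in the model:

```python
# Hypothetical sketch: the oracle for the PostJournal action is computed
# by a single model function, so relaxing the blocked-dimension rule is
# literally a one-line change that flows into every generated test.

def expected_post_result(journal):
    # Old rule: posting fails if any dimension on any line is blocked.
    # return not any(dim.blocked for line in journal.lines for dim in line.dimensions)
    # New rule: blocked dimensions no longer prevent posting.
    return True

print(expected_post_result(journal=None))  # True, regardless of blocked dimensions
```

Every generated test that exercises PostJournal picks up the new expectation on the next exploration, with no per-test edits – that is the one-line fix for 300 cases.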

In conclusion, model based testing allows for design changes at a late stage of development, as changes only need to be introduced at the model level. This is a huge benefit, especially because during the endgame the pressure to finish before the deadline starts rising, and it is exactly then that you do not have time to change 300 test cases because a requirement changed. Oh, and by the way, do requirements change late in the game? Of course they do – that is the whole reason we invented Scrum instead of the waterfall model (but that's for another post).