Thursday, May 10, 2012

Modeling Stochastic Systems in Spec Explorer

One of the more advanced – and powerful – features of Spec Explorer is its ability to handle stochastic SUTs. Stochasticity is common in systems with external dependencies such as the file system, the network interface, or external devices. What these dependencies have in common is that they sometimes fail for various reasons. You cannot predict when this will happen, and for the most part in software testing we strive to avoid such spurious behaviors at all costs.
Another source of stochasticity is the use of randomness in the SUT (which I will explore a bit more in this post) or stochastic behavior due to race conditions in multi-threaded applications (which I will investigate further in a later post).
First of all, what does a stochastic model look like? Take a look here:
This picture represents a model of a stochastic accumulator. It shows a slice where the accumulator is increased twice then reset. Each addition is stochastic in that it can add either 1 or 2 to the counter.
When working with stochastic models, in addition to the normal “round” states, Spec Explorer introduces diamond-shaped decision states. Each diamond represents a choice in the model where the SUT can behave in one of many ways. In the case of the stochastic accumulator we can see that the value of the accumulator can go from the initial value of 0 to either 1 or 2 in the first addition step.
Making choices in stochastic models
The fundamental component of stochastic modeling is the Choice class. This class provides an interface to generate a random choice in Spec Explorer. The following illustrates how to use it to choose a random enum value:
    public enum StochasticAction { A, B }

    switch (Choice.Some<StochasticAction>())
    {
        case StochasticAction.A:
            break; // ... behavior for outcome A
        case StochasticAction.B:
            break; // ... behavior for outcome B
    }

So why not use Random directly in the model?
        [Rule(Action = "Add()/result")]
        static int Random_AddRule()
        {
            int v = 1 + random.Next(2); // random is a System.Random field on the model class
            counter += v;
            return v;
        }

Well – if you do that, Spec Explorer will not know that the action is stochastic, and the resulting model will be a single random sample of the full stochastic model, changing each time you re-explore:
Instead, using the Choice class will make Spec Explorer aware that the action is stochastic and it will understand that there are different outcomes:
        [Rule(Action = "Add()/result")]
        static int Choice_AddRule()
        {
            int v = Choice.Some<int>(i => i > 0 && i < 3); // chooses either 1 or 2
            counter += v;
            return v;
        }

This will produce the stochastic model depicted at the top of the post.
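For completeness, a rule like this lives inside a model class together with the model state. The following is a minimal sketch of what that class could look like; the counter field matches the snippets above, but the class name and the Reset rule are my assumptions, since the post only shows the Add rule:

```csharp
using Microsoft.Modeling;

static class AccumulatorModelProgram
{
    // Model state: the current value of the accumulator.
    static int counter = 0;

    [Rule(Action = "Add()/result")]
    static int Choice_AddRule()
    {
        // Stochastic step: Spec Explorer explores both outcomes (1 and 2).
        int v = Choice.Some<int>(i => i > 0 && i < 3);
        counter += v;
        return v;
    }

    // Assumed Reset rule, matching the "increase twice, then reset" slice.
    [Rule(Action = "Reset()")]
    static void ResetRule()
    {
        counter = 0;
    }
}
```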
Generating tests from stochastic models
When generating test cases from a stochastic model you need to set the AllowUndeterminedCoverage = true flag for the construct:
machine StochasticTestSuite() : Main where ForExploration = true, TestEnabled = true
    construct test cases where strategy = "ShortTests", AllowUndeterminedCoverage = true
        for SlicedStochasticModelProgram()

The generated tests are no longer linear scenarios like we are used to; they become graphs themselves, visualizing how the test can go down different paths based on the stochastic behavior of the SUT:
The test will start out in the initial state 0. Then it will call Add. Depending on the return value of the Add call, it will either dive deeper into the left path (if Add returned 1) or the right path (if Add returned 2). Test exploration will then continue from whatever state it reached after the stochastic Add action.
An obvious question to ask is: why do some of the stochastic actions merge back into existing branches, while others go to new branches, even though the states are the same? It turns out there is a nice switch that will clean this up:
config Main
    switch EnableExplorationCleanup = true;

The resulting test graph then looks like:
Which is nice – but come on guys, why do I need to manually set a switch to produce a cleaned-up graph?
Auto-generated test code from stochastic models
While the feature is very powerful and a lot of effort has clearly gone into the visualization, so that the decision points in the models are clear, the generated test code lacks the same polish. I am aware that it is auto-generated, and that you don't put as much attention into it. But when one of your test cases fails and you need to debug what went wrong, having high-quality test code helps a lot in the debugging effort. Unfortunately, high quality is not a phrase I would use about the generated test code. It is _littered_ with labels and goto statements, nested blocks inside yet more nested blocks, and finally dummy exceptions like:
throw new InvalidOperationException("never reached");

thrown at multiple places in the code. This looks like a part of the product that could use some attention…
It’s a good thing it’s hidden away for the most part, and the verbosity of the test failure messages is quite good. For example, if the SUT does not behave according to one of the modeled behaviors, the following error is raised:
expected matching return, found return Add()/0. Diagnosis:
1. outputs do not match
   comment: checking step 'return Add/2'
   assert failed: expected '2', actual '0' (return of Add, state S1)
2. outputs do not match
   comment: checking step 'return Add/1'
   assert failed: expected '1', actual '0' (return of Add, state S1)
which is fairly descriptive. In this case the Add method returned 0, but the model only expects it to return either 1 or 2.
Uniqueness of action labels in stochastic models
In the above model for a stochastic accumulator, the Add action returns the value that was added to the counter. From a modeling perspective, this information is irrelevant, because I can simply inspect the counter and observe the change – actually it’s perfectly fine to do just that for a model:

If you compare this to the model at the top of the post, the only difference is that Add does not reveal what value was added. This works for the model, but once you start exploring test cases, Spec Explorer will complain with the following error:
Block(Restriction of exploration: some kinds of composite behaviors (including test cases construct,
requirement coverage construct, etc.) won't work when inner behavior has any state which has outgoing
steps with exactly the same label. If so, try adjusting your model to avoid it and re-explore.)
The essence of this convoluted error message is that “any state which has outgoing steps with exactly the same label” is prohibited. You can think of the return value of a stochastic action as a marker that lets Spec Explorer determine which branch the SUT actually took during run-time test execution. This implies that the return values for a given action must be unique; otherwise Spec Explorer would not be able to tell exactly which path was followed. This furthermore puts a constraint on the SUT, as the SUT (or adapter) must reveal at run-time which path the SUT took.
This is not always possible if you are doing black-box testing; for some SUTs the stochastic choices may be hidden from the outside by the design of the system. In these cases it is up to the adapter layer to probe the SUT after a stochastic choice has been made, to determine which test branch the SUT followed. Normally the effect of a stochastic choice is measurable in the state of the SUT, but in rare cases the effect may be invisible from the outside and only affect later decisions. If this is the case, then you will not be able to determine the run-time test branch until later in the test case, which means you cannot satisfy Spec Explorer's condition on unique labels. A way around this, though, is to create a coarser-grained action that consolidates all of the actions required to determine the stochastic choice made in the first action.
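As a sketch of that probing idea: suppose the SUT's Add does not report what it added, but the accumulated value is readable. An adapter method can then derive which branch was taken by inspecting the state around the call. The Sut type and its members here are hypothetical stand-ins for whatever interface the real SUT exposes:

```csharp
// Hypothetical adapter layer: infers the stochastic choice by probing SUT state.
static class AccumulatorAdapter
{
    // Returns the value the SUT actually added (1 or 2), making the
    // "Add()/result" action label unique for each branch.
    public static int Add()
    {
        int before = Sut.GetCounter(); // hypothetical query on the SUT
        Sut.Add();                     // the stochastic action itself returns nothing
        int after = Sut.GetCounter();
        return after - before;         // reveals which branch the SUT took
    }
}
```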
In more exotic cases, the behavior of the SUT could be affected by some truly random physical effect. Here we are out of luck, as measuring the random effect is likely to affect the outcome of the decision.
