Thursday, March 29, 2012

Collateral code coverage, ramifications on Model-Based Testing

My relationship to code coverage is one of love and hate. I love code coverage because it’s a quantitative metric that tells you something about the relation between your test suites and your product code. I hate code coverage because it must be the most widely abused metric in software testing. For more on this, check my previous post on ‘Application of Code Coverage to Model Based Testing’.

So why is code coverage so bad? There seems to be a strong drive from management to reach high coverage numbers (which is great; it reduces the risk of having test holes). But the coverage number is then often treated as a measure of product quality, sometimes as the only one. The critical problem with this approach is that code coverage tells you nothing about whether the code is correct, only that it was exercised.
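To make this concrete, a test can execute plenty of code without verifying anything; the component and method below are hypothetical, and the point is only that the coverage tool will happily count every block this test touches:
        [TestMethod]
        public void TestThatOnlyProducesCoverage()
        {
            // Exercises the code under test, so every block it reaches counts as
            // covered, but no assertion ever checks the result.
            ReportGenerator.GenerateMonthlyReport(2012, 3);
        }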
Let me coin a new term here: Collateral code coverage.
Definition: Additional code coverage from executing a test compared to the minimum possible code coverage required to validate the behavior being tested. In other words, the amount of code coverage where the result of exercising that code is never verified by the test.
Let me illustrate with an example. Consider a small program that converts an input value in kilometers to miles. The program consists of a Windows Forms application that calls a class library component that converts the value and displays it on the form. Say we want to test that this sample application converts the input value 1 km correctly to 0.62 miles. We may develop a unit test that calls the class library directly and asserts on the expected output:
        [TestMethod]
        public void TestConversionOfKilometersToMiles()
        {
            // Call the class library directly and verify the converted value.
            Assert.AreEqual(0.62, Converter.KilometersToMiles(1));
        }
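
For the unit test to compile, a Converter class along these lines can be assumed; the post does not show it, and the rounding to two decimals is an assumption so that the exact comparison in the assert holds:
    public static class Converter
    {
        // 1 kilometer is roughly 0.621371 miles; rounding to two decimals is assumed
        // here so that KilometersToMiles(1) returns exactly 0.62.
        public static double KilometersToMiles(double kilometers)
        {
            return Math.Round(kilometers * 0.621371, 2);
        }
    }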

But it’s equally common to use a UI-based test framework that enters the value, presses the convert button, and then reads the computed value from the output field (equally common, but far more complex…), which could look like:
        [TestMethod]
        public void TestConversionOfKilometersToMiles()
        {
            // Drive the UI: enter the kilometers, click Convert, and read back the
            // converted value from the output field.
            Form.Kilometer.SetValue(1);
            Form.ConvertButton.Click();
            Assert.AreEqual(0.62m, Form.Miles.GetValue<decimal>());
        }

The tests have the same purpose and the same level of validation, but the UI-based test will cover considerably more code than the unit test. Since the purpose of both tests is simply to validate that the converted value is correct, the additional coverage from the UI-based approach is considered collateral.
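If we wanted to put a number on it, collateral coverage is simply whatever a test covers beyond the minimum needed to validate the behavior; the helper below and the block counts in the comment are made up purely to illustrate the arithmetic:
        // Hypothetical helper: the blocks a test covers, minus the blocks the minimal
        // test needs, is the collateral coverage it carries.
        static int CollateralCoverage(int blocksCoveredByTest, int blocksNeededForValidation)
        {
            return blocksCoveredByTest - blocksNeededForValidation;
        }

        // Illustrative numbers only: if the unit test covers 12 blocks (the converter)
        // and the UI test covers 180 blocks (form code, event wiring, automation glue),
        // the UI test carries 180 - 12 = 168 blocks of collateral coverage.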

Friday, March 23, 2012

Complex types in immutable containers and ‘magic rules’ – TSM Part III

In part II we saw one approach to optimizing the growing algorithm: a more intelligent concept for extending graphs than the brute-force way of part I.
With the new approach implemented we can now lift our restriction on the input domain size. Effectively, there is no need to constrain the grid domain when the algorithm works on the edges instead of on grid input combinations.
The original implementation converted vertices and edges into integer representations (using index = y*size + x), but this approach is no longer applicable when the input domain is unbounded. The first step in fixing this is to refactor the model to store vertices and edges as structs instead:
    // Value type identifying a vertex by its grid coordinates.
    public struct VertexData
    {
        public int x, y;

        public VertexData(int x, int y)
        {
            this.x = x;
            this.y = y;
        }
    }

    // Value type identifying an edge by its two endpoint vertices.
    public struct EdgeData
    {
        public VertexData v1, v2;

        public EdgeData(VertexData from, VertexData to)
        {
            this.v1 = from;
            this.v2 = to;
        }
    }
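
For contrast, the integer encoding mentioned above boils down to a single formula that only works while the grid size is known and bounded; the helper name is hypothetical:
    // Old, bounded encoding: index = y * size + x. It requires a fixed grid size,
    // which is exactly what we no longer have.
    static int VertexToIndex(int x, int y, int size)
    {
        return y * size + x;   // e.g. (x = 3, y = 2) on a 10x10 grid gives index 23
    }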

A SetContainer can hold a struct, so our definition of active vertices in the model changes to:
        static SetContainer<VertexData> vertices = new SetContainer<VertexData>();

Because structs are value types rather than objects, the SetContainer compares the actual data, so it correctly identifies permutations of the same sequence as equivalent. Had we used the Vertex class directly, the SetContainer would store object references rather than the object data, and the sequence would matter.
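The SetContainer used by the model is an immutable set, but the value-versus-reference distinction it relies on can be demonstrated with a plain HashSet (from System.Collections.Generic) and the VertexData struct defined above; the Vertex class below is only a hypothetical stand-in for the model’s reference-type version:
    class Vertex
    {
        public int x, y;
        public Vertex(int x, int y) { this.x = x; this.y = y; }
    }

    static void EqualityDemo()
    {
        // Value semantics: two VertexData values with the same coordinates are equal,
        // so the second Add is a no-op and the set keeps a single element.
        var structSet = new HashSet<VertexData>();
        structSet.Add(new VertexData(1, 2));
        structSet.Add(new VertexData(1, 2));   // structSet.Count == 1

        // Reference semantics: two distinct Vertex objects with the same coordinates
        // are not equal (no Equals/GetHashCode override), so both end up in the set.
        var classSet = new HashSet<Vertex>();
        classSet.Add(new Vertex(1, 2));
        classSet.Add(new Vertex(1, 2));        // classSet.Count == 2
    }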