Designing for Testability

by Kevin Garwood

Limiting testing efforts

The first step in designing for testability is to limit the scope of our testing effort. Two general design decisions allow us to define what areas we'll test:
General Design-4 : Wherever possible, limit the paths of execution that are likely to occur.
General Design-7 : Encapsulate business concept and data storage layers of the architecture through service APIs. Do not allow clients to know which class is implementing the service interfaces.

Instead of trying to test every public method of every class, we will restrict testing to business classes and service interfaces.

Testability-1 : Test coverage will be limited to public methods of classes and service interfaces that are defined within the business concept layer.

Next, we should identify what kinds of test scenarios we want to cover. It is useful to divide test cases by the type of behaviour they exercise. The following categories are used to classify all test cases:

Testability-2 : Test scenarios will be divided based on the following categories: common, uncommon, error, and malicious.

Testing for uncommon behaviour greatly depends on the nature of the test data set that is managed in the RIF database. Currently, the data set is designed to demonstrate RIF features to scientists rather than to be rich enough to provide cases where zero or one result might be returned. Until the data set becomes more complex, we should defer testing this category of test scenario.

Testability-3 : Defer testing for uncommon behaviour until we build a more diverse test data set that could return unusual results.

Minimising test scenarios that consider the order in which service methods are called

For this section, we're going to develop an approach for creating test cases that test rifServices.businessConceptLayer.RIFStudySubmissionAPI and rifServices.businessConceptLayer.RIFStudyResultRetrievalAPI.

Accounting for the future, there are at least 35 methods in the study submission service and at least 32 methods in the study result retrieval service. At first it appears, then, that there are at least 67 methods that need to be tested. However, both of these service interfaces inherit from rifServices.businessConceptLayer.RIFStudyServiceAPI, which has 10 methods.

It is possible that completely different classes could implement each service, making it necessary to test the 10 methods of RIFStudyServiceAPI in each of the service classes. If we made no assumptions about the underlying classes that implement these services and tested the ten methods twice, we would be doing "black box testing". In this form of testing, we base test case development on what methods are advertised to the test code, not on how those methods are implemented.

However, we know that implementations of these services share a superclass that implements these methods. Therefore, if we test a shared method such as getGeoLevelSelectValues(...) in ProductionRIFStudySubmissionService, we do not need to test the same method in ProductionRIFStudyResultRetrieval. Here we are using an aspect of "white box testing": using intimate knowledge of the code base to create test cases. We now have to test at least 57 methods rather than 67, which reduces our testing effort.
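
The sketch below makes this relationship concrete. The method lists are abbreviated, and the exact signature of getGeoLevelSelectValues(...) is assumed here for illustration.

   import java.util.ArrayList;

   // Both service interfaces extend the shared interface, so the ten
   // shared methods only need to be tested through one implementation.
   public interface RIFStudyServiceAPI {
      // ...ten shared methods, for example (signature assumed):
      ArrayList<GeoLevelSelect> getGeoLevelSelectValues(
         User user,
         Geography geography)
         throws RIFServiceException;
   }

   public interface RIFStudySubmissionAPI extends RIFStudyServiceAPI {
      // ...at least 25 submission-specific methods
   }

   public interface RIFStudyResultRetrievalAPI extends RIFStudyServiceAPI {
      // ...at least 22 retrieval-specific methods
   }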

Testability-4 : For service interfaces that are likely to have a single implementation, use knowledge of code shared between services to reduce the number of test scenarios to consider.

Next, we have to consider how the order in which these methods are executed could affect the outcome of a test case. For example, if calling methodA() before methodB() has a different outcome than calling them in the opposite order, then our test suites should contain test cases that verify behaviour in both scenarios. These test cases could be very important, especially considering that the browser-based clients make asynchronous calls to the web services, which in turn call the Java-based service classes.

If we were to choose a sequence of any 2 methods from a possible 57, then we would have 57 choices for the first method and 56 choices for the second, giving us 57 x 56 = 3,192 scenarios. However, we will use the following assumption to simplify testing:

Testability-5 : Limit testing efforts by testing the effect of service methods in isolation rather than in combination with one another. Assume that the order in which the methods are called will not affect the outcome.

We can be more confident in making this assumption if we take the following steps:

Testability-6 : Isolate tests for service methods whose outcome would be influenced by the order in which they are called with respect to other methods. Then assume that for the remaining service methods, the order in which they are called will not affect their outcome.
Create one test suite for each service method. In each suite, only that service method will be called.
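
A minimal sketch of one such suite is shown below, assuming JUnit 4 and the service interfaces described above. The createValidUser() and createValidGeography() helpers, and the newInstance(...) factory calls inside them, are hypothetical stand-ins for whatever test data setup the RIF business classes actually require.

   import static org.junit.Assert.assertNotNull;
   import static org.junit.Assert.fail;

   import java.util.ArrayList;

   import org.junit.Test;

   // One suite per service method: only getGeoLevelSelectValues(...)
   // is called here, so no test depends on the order of other calls.
   public final class TestGetGeoLevelSelectValues {

      // White box shortcut (Testability-4): testing the shared method
      // through the submission service covers the retrieval service too.
      private final RIFStudySubmissionAPI service
         = new ProductionRIFStudySubmissionService();

      @Test
      public void getGeoLevelSelectValues_COMMON()
         throws RIFServiceException {

         // Common behaviour: all parameter values are feasible.
         ArrayList<GeoLevelSelect> results
            = service.getGeoLevelSelectValues(
               createValidUser(),
               createValidGeography());
         assertNotNull(results);
      }

      @Test
      public void getGeoLevelSelectValues_ERROR_nullGeography() {
         try {
            service.getGeoLevelSelectValues(createValidUser(), null);
            fail("A null geography should cause an exception");
         }
         catch (RIFServiceException rifServiceException) {
            // Expected outcome: exactly one infeasible parameter value.
         }
      }

      // Hypothetical helpers; the newInstance(...) factory calls are
      // assumptions, not necessarily the RIF API.
      private User createValidUser() {
         return User.newInstance("kgarwood", "127.0.0.1");
      }

      private Geography createValidGeography() {
         return Geography.newInstance("SAHSULAND", "test geography");
      }
   }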

Minimising test scenarios for a service method

Consider how we would develop test cases for the method getMapAreas(), which is an important method used by mapping functions in the Study Submission and Study Result Retrieval tools.
   ArrayList<MapArea> getMapAreas(
      User user,
      Geography geography,
      GeoLevelSelect geoLevelSelect,
      GeoLevelArea geoLevelArea,
      GeoLevelToMap geoLevelToMap)
      throws RIFServiceException;

In this method, each parameter object can have exactly one of the following five states: feasible, null, invalid (eg: a blank required field), non-existent, or malicious.

We could potentially have 5x5x5x5x5 = 3125 test scenarios. However, we will use the following assumption to greatly reduce this number:

Testability-7 : Assume that if the service generates an exception when any one parameter value is infeasible, it will also generate an exception when multiple parameter values are infeasible.
Testability-8 : Assume that in any test method call, at most one service parameter value will be infeasible. When one parameter value is infeasible, all the others will be feasible.
minimum test cases
   = 1 scenario where all parameter values are feasible
   + 5 null value scenarios
   + 5 invalid value scenarios (eg: blank required field)
   + 5 non-existent value scenarios
   + 5 malicious value scenarios
   = at least 21
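
To illustrate, here is a sketch of two of these 21 scenarios for getMapAreas(...), in the same style as the earlier suite. All helper methods, newInstance(...) calls and test values are hypothetical.

   import static org.junit.Assert.fail;

   import org.junit.Test;

   public final class TestGetMapAreas {

      private final RIFStudySubmissionAPI service
         = new ProductionRIFStudySubmissionService();

      @Test
      public void getMapAreas_NULL_geoLevelSelect() {
         try {
            service.getMapAreas(
               createValidUser(),
               createValidGeography(),
               null,                        // the one infeasible value
               createValidGeoLevelArea(),
               createValidGeoLevelToMap());
            fail("A null geo level select should cause an exception");
         }
         catch (RIFServiceException rifServiceException) {
            // Expected outcome (Testability-8: one infeasible value).
         }
      }

      @Test
      public void getMapAreas_MALICIOUS_user() {
         try {
            service.getMapAreas(
               createMaliciousUser(),
               createValidGeography(),
               createValidGeoLevelSelect(),
               createValidGeoLevelArea(),
               createValidGeoLevelToMap());
            fail("A malicious user value should cause an exception");
         }
         catch (RIFServiceException rifServiceException) {
            // Expected outcome.
         }
      }

      // Hypothetical helpers with assumed factory calls and values.
      private User createValidUser() {
         return User.newInstance("kgarwood", "127.0.0.1");
      }

      private User createMaliciousUser() {
         // A user ID carrying an SQL injection payload.
         return User.newInstance("kgarwood';DROP TABLE users;--", "127.0.0.1");
      }

      private Geography createValidGeography() {
         return Geography.newInstance("SAHSULAND", "test geography");
      }

      private GeoLevelSelect createValidGeoLevelSelect() {
         return GeoLevelSelect.newInstance("LEVEL2");
      }

      private GeoLevelArea createValidGeoLevelArea() {
         return GeoLevelArea.newInstance("LEVEL2.1");
      }

      private GeoLevelToMap createValidGeoLevelToMap() {
         return GeoLevelToMap.newInstance("LEVEL4");
      }
   }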

There are two other kinds of test scenarios we could consider: scenarios where the parameter values are feasible in isolation but do not make sense in combination, and scenarios where feasible parameter values produce valid but unusual results.

As an example of the first kind of scenario, consider a version of the RIF database that is loaded with geospatial data for both Ireland and UK geographies. A Geography value for one country combined with GeoLevelSelect, GeoLevelArea and GeoLevelToMap values for the other would be feasible on its own, but the combination would not make sense.

As an example of the second kind of scenario, consider parameter values which would result in either 0 or 1 area being returned. This would mark a valid but unusual outcome of calling the method.

Both of these scenarios depend on the nature of the data set that exists in the database. Currently SAHSULAND contains a small, very generic data set that is meant to help demonstrate features of the RIF Tool Suite. It lacks enough variety in its records to support either type of situation.

Testability-9 : Defer testing for cases where all of the method parameters are feasible in isolation but together result in an exception. We will add these test cases when we produce a more diverse test data set.

Organising test code into test suites and test cases

Create test packages that mirror the dataStorageLayer and businessConceptLayer packages.
Create one test class for each business class that can be used as a parameter for a service method.
Create one test class for each service method.
Cross-reference the results of running test suites for business concepts and service classes.
Within each test class, organise test cases by the main kinds of scenario we identified earlier: common, uncommon, error, and malicious.
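
For example, the mirrored layout might look something like this; the test package names are assumptions, and only a few representative classes are shown:

   rifServices.businessConceptLayer             business classes and service APIs
      Geography.java
      MapArea.java
      RIFStudySubmissionAPI.java
   rifServices.dataStorageLayer                 service implementations
      ProductionRIFStudySubmissionService.java
   rifServices.test.businessConceptLayer        one test class per business class
      TestGeography.java
      TestMapArea.java
   rifServices.test.dataStorageLayer            one test class per service method
      TestGetGeoLevelSelectValues.java
      TestGetMapAreas.java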