Wednesday, February 7, 2018

Object-oriented vs Requirement-oriented Testing

The "orientation" is related to the context and granularity in which you think about and develop a test, and with which you organize and structure a test.

Traditionally in software, based on principles that hold true for unit testing, each requirement results in a single test (or several) for verification.  However, at testing levels beyond unit testing (including manual testing), applying that approach results in ever-increasing, unwieldy test libraries under which many (and eventually all) software development organizations will topple.  The cost of test development and maintenance (even of manual tests) becomes prohibitive.  In fact, were we to apply this same approach to testing in any other industry, we would be laughed out of a job, or lead our companies into bankruptcy.

Using "crash testing" of vehicles as an analogy, it becomes obvious that a fundamental change in our approach to organizing, structuring, executing and reporting tests is due if we are ever to perform truly "Agile" testing.

In crash testing, were we to apply the approach commonly used in software testing, there would be an individual test for each requirement.  Say we had three requirements, each related to a driver sustaining injury in a head-on collision: one for the effect of whiplash (g-force sustained) on the driver's head, another for the damage sustained by the driver's face, and another for the damage sustained by the driver's chest.  Using the approach common in software integration testing, we would execute three tests, one for each requirement, each utilizing the appropriate sensor, each requiring the time and materials necessary to set up and execute, and each producing data for a single requirement at the cost of one vehicle per test.
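In code, the requirement-oriented approach looks something like this (a minimal Python sketch; prepare_vehicle, attach_sensor, and crash are hypothetical stand-ins for whatever expensive setup your system actually requires):

```python
# Requirement-oriented approach: one test, one full (expensive) setup,
# and one "crashed vehicle" per requirement.
# All helper names below are hypothetical illustrations.

MAX_HEAD_GFORCE = 80
MAX_FACE_DAMAGE = 10
MAX_CHEST_DAMAGE = 10

def test_req_101_head_gforce():
    vehicle = prepare_vehicle()                     # vehicle #1
    sensor = attach_sensor(vehicle, "head_gforce")  # one sensor per test
    crash(vehicle)                                  # destroys the vehicle
    assert sensor.reading() <= MAX_HEAD_GFORCE

def test_req_102_face_damage():
    vehicle = prepare_vehicle()                     # vehicle #2
    sensor = attach_sensor(vehicle, "face_damage")
    crash(vehicle)
    assert sensor.reading() <= MAX_FACE_DAMAGE

def test_req_103_chest_damage():
    vehicle = prepare_vehicle()                     # vehicle #3
    sensor = attach_sensor(vehicle, "chest_damage")
    crash(vehicle)
    assert sensor.reading() <= MAX_CHEST_DAMAGE
```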

Were we truly doing physical crash testing, we would wisely orient the test to the object being validated (a driver occupant in a vehicle), attach all sensors, one for each requirement to be verified, and crash a single vehicle, providing us all the necessary data.  In software, this same approach can be applied by implementing a Validator (which executes every verification to be made, regardless of how many fail) and improved logging of the object being tested and the requirements covered, to produce an Application and Requirements Traceability Report.
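Here is a minimal sketch of what such a Validator might look like in Python (the class, the verify/assert_all names, and the crash_single_vehicle_with_all_sensors helper are my own illustration, not a particular framework's API):

```python
class Validator:
    """Executes every verification and records the result, instead of
    stopping at the first failure (as a plain assert would)."""

    def __init__(self, object_under_test):
        self.object_under_test = object_under_test
        self.results = []   # (requirement_id, description, passed)

    def verify(self, requirement_id, description, condition):
        # Never raise here -- just record, so every check always runs.
        self.results.append((requirement_id, description, bool(condition)))

    def assert_all(self):
        # Log one traceability line per requirement covered, then fail
        # once (if needed) with the full picture.
        for req_id, desc, passed in self.results:
            status = "PASS" if passed else "FAIL"
            print(f"[{self.object_under_test}] {req_id}: {desc} ... {status}")
        failures = [r for r in self.results if not r[2]]
        assert not failures, f"{len(failures)} verification(s) failed"


def test_driver_in_head_on_collision():
    # One expensive setup ("one vehicle"), all requirements verified.
    data = crash_single_vehicle_with_all_sensors()   # hypothetical helper
    v = Validator("driver / head-on collision")
    v.verify("REQ-101", "head g-force within limit",
             data.head_gforce <= MAX_HEAD_GFORCE)
    v.verify("REQ-102", "face damage within limit",
             data.face_damage <= MAX_FACE_DAMAGE)
    v.verify("REQ-103", "chest damage within limit",
             data.chest_damage <= MAX_CHEST_DAMAGE)
    v.assert_all()
```

The key design choice is that verify() never raises; failures are only collected, so every requirement covered by the "crash" is verified and logged before the test is marked failed.  Those logged lines are exactly what feeds the Application and Requirements Traceability Report: each test execution declares the object it exercised and every requirement it covered.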

Couple this with encapsulating the various scenarios (e.g. multiple occupants) to be included in each execution of the test, and you now have a manageable library of tests whose size remains proportional to the size and complexity of the system under test, rather than growing unwieldy with the number of requirements ever written during the system's development.  Resource, time, and material costs become stable, predictable, and reliable.
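That scenario encapsulation can be layered on top with any data-driven test mechanism; the sketch below assumes pytest's parametrize, but the idea is the same in any framework:

```python
import pytest

# Each scenario encapsulates one configuration of the object under test.
SCENARIOS = {
    "driver_only": ["driver"],
    "driver_and_passenger": ["driver", "front_passenger"],
    "full_vehicle": ["driver", "front_passenger", "rear_left", "rear_right"],
}

@pytest.mark.parametrize("occupants", list(SCENARIOS.values()),
                         ids=list(SCENARIOS))
def test_head_on_collision(occupants):
    data = crash_single_vehicle_with_all_sensors(occupants)  # hypothetical
    v = Validator("head-on collision")
    for occupant in occupants:
        v.verify("REQ-101", f"{occupant} head g-force within limit",
                 data[occupant].head_gforce <= MAX_HEAD_GFORCE)
        v.verify("REQ-103", f"{occupant} chest damage within limit",
                 data[occupant].chest_damage <= MAX_CHEST_DAMAGE)
    v.assert_all()
```

Adding a new requirement adds one verify() line, and adding a new scenario adds one entry to the table; neither adds a whole new test to the library.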

As a side note (this came up during discussion): automated testing which is purely "workflow" testing (each test starting from the creation of data and working that data through various states and systems) is dramatically more costly than tests which validate specific state transitions.  Workflow tests are much more fragile, require much more investigation time, and are much more costly to maintain.  Some use-case or workflow testing may be necessary, but it should be pared down to the most critical scenarios so that it remains manageable, as illustrated below.
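To make the difference concrete, compare a workflow-style test with a state-transition test (the order states and the create_order_in_state seeding helper below are hypothetical illustrations):

```python
# Workflow style: every test replays the entire journey, so a failure
# anywhere upstream breaks (and obscures) everything downstream.
def test_refund_via_full_workflow():
    order = create_order()      # all helpers here are hypothetical
    pay(order)
    ship(order)
    deliver(order)
    refund(order)
    assert order.state == "REFUNDED"

# Transition style: seed the precondition state directly and validate
# only the transition under test -- cheap, focused, easy to diagnose.
def test_refund_transition():
    order = create_order_in_state("DELIVERED")   # hypothetical seeding helper
    refund(order)
    assert order.state == "REFUNDED"
```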


1 comment:

  1. Session Host:
    Craig A. Stockton
    Practice Architect
    TEKsystems Global Services QMS
    craig.a.stockton@gmail.com
