
Is software becoming more testable?

Software testing and validation haven't advanced much in recent years, although test-driven development (TDD) has made software more testable. However, one innovation, model-based testing, may finally be starting to gain traction.

A Conservative Past

Innovation in software testing and validation has been scarce - certainly in comparison to the constant stream of new languages, design approaches and frameworks that has characterized design and implementation. The techniques employed for the various test and QA activities - unit testing, integration testing, code inspection, UI testing, acceptance testing and so on - seem to have changed little over the years, largely unaffected even by the advent of object-oriented techniques (although classes provided a handy way of grouping unit tests). What about Test Driven Development (TDD) and JUnit? Well yes, unit testing has become more popular and pervasive, but each unit test still has to be written by hand in the familiar way, as in the sketch below. TDD should have a more fundamental impact, by forcing class design into test-amenable forms, and we'll come back to that.
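To make that concrete, here is a minimal, hand-written JUnit test of the familiar kind; the PriceCalculator class and its discount rule are purely illustrative.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// A minimal, hand-written unit test of the familiar kind. The
// PriceCalculator class and its discount rule are purely illustrative.
public class PriceCalculatorTest {

    // The class under test, inlined so the sketch is self-contained.
    static class PriceCalculator {
        private final double discount;

        PriceCalculator(double discount) {
            this.discount = discount;
        }

        double priceFor(double amount) {
            // Orders above 75 get the discount; smaller orders do not.
            return amount > 75.0 ? amount * (1.0 - discount) : amount;
        }
    }

    @Test
    public void discountIsAppliedToLargeOrders() {
        PriceCalculator calculator = new PriceCalculator(0.10); // 10% off
        assertEquals(90.0, calculator.priceFor(100.0), 0.001);
    }

    @Test
    public void smallOrdersAreNotDiscounted() {
        PriceCalculator calculator = new PriceCalculator(0.10);
        assertEquals(50.0, calculator.priceFor(50.0), 0.001);
    }
}
```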

Further downstream, integration testing still tends to dissolve into ad-hoc test harness development, or may even be largely ignored in favour of client-side testing. System-level testing remains fragmented and labour-intensive. It's a project manager's dream to be able to press a button (or even a few buttons) and just have the damn thing checked and tested, but we seem as far away as ever from that. The initial promise of automated UI test tools that use scripts (recorded and otherwise) to drive applications and check outputs has dissipated; these tools have a particularly chequered history, promising a lot but usually delivering little in the face of volatile real-world applications. A popular approach now is to make the UI as thin as possible and automate tests against the interface layer that the UI drives; again, testability is improved by changing the design and separating out a stable interface.
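As a rough sketch of that idea, the test below drives a stable service interface directly rather than the UI widgets on top of it; the LoginService interface and its in-memory implementation are invented for the example.

```java
import org.junit.Test;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.assertFalse;

// Testing one layer below a thin UI: the test drives the stable service
// interface directly rather than the widgets on top of it. LoginService
// and its in-memory implementation are invented for the example.
public class LoginServiceTest {

    interface LoginService {
        boolean authenticate(String user, String password);
    }

    // Trivial implementation so the sketch is self-contained and runnable.
    static class InMemoryLoginService implements LoginService {
        public boolean authenticate(String user, String password) {
            return "admin".equals(user) && "secret".equals(password);
        }
    }

    @Test
    public void validCredentialsAreAccepted() {
        LoginService service = new InMemoryLoginService();
        assertTrue(service.authenticate("admin", "secret"));
    }

    @Test
    public void invalidCredentialsAreRejected() {
        LoginService service = new InMemoryLoginService();
        assertFalse(service.authenticate("admin", "wrong"));
    }
}
```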

One innovative approach to client-side (frequently UI) testing that has been around for many years without making significant practical inroads is Model-Based Testing. It may just be starting to achieve some real-world usage, but it still features primarily in academic papers and small-scale examples. Model-based testing changes the source and mechanism of testing: Rather than scripting sets of tests and expected results, the idea is to create an abstract (client) model of the system under test - usually a state-machine based model. Tests are generated automatically from successive partial traversals of the model, and attached code fragments link model elements to actual classes and methods, allowing the creation of concrete tests.
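The toy sketch below gives the flavour: a tiny state-machine model of a login dialog, with a random partial traversal generating a test sequence. The states, actions and the printed "expect" step are all illustrative; a real tool would attach code fragments that drive and check the actual application at each step.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Random;

// A toy model-based testing sketch: a tiny state-machine model of a login
// dialog, with a random partial traversal generating a test sequence.
// States, actions and the printed "expect" step are all illustrative.
public class ModelBasedTestSketch {

    // The abstract client model: state -> (action -> next state).
    static final Map<String, Map<String, String>> MODEL = Map.of(
        "LoggedOut", Map.of("enterValidCredentials", "LoggedIn",
                            "enterBadCredentials",   "LoggedOut"),
        "LoggedIn",  Map.of("logout",                "LoggedOut"));

    public static void main(String[] args) {
        Random random = new Random(42);
        String state = "LoggedOut";

        // Each partial traversal of the model becomes one concrete test sequence.
        for (int step = 0; step < 5; step++) {
            List<String> actions = new ArrayList<>(MODEL.get(state).keySet());
            String action = actions.get(random.nextInt(actions.size()));
            String next = MODEL.get(state).get(action);

            // A real tool would run an attached code fragment here to drive
            // the actual application and check it reached the expected state.
            System.out.printf("step %d: in %s, do %s, expect %s%n",
                              step, state, action, next);
            state = next;
        }
    }
}
```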

This sounds more like it. Rather than creating and maintaining a set of procedural tests manually, all we have to do is create and update a client model and then generate tests from it. That must be easier, right? Well, not necessarily. Despite considerable advocacy of model-based testing, its uptake has been very limited. The reason seems to be the difficulty of constructing and maintaining complex models: The modelling and traversal approach is alien to most test teams, and the information defining the model still has to be prepared from scratch or extracted from program specifications.

In general, the search for new ways of improving testing and reliability for any form of software design has yielded meagre results. If an application design is wholly new then tests and test data have to be prepared from scratch, and testing is always going to be a lengthy process.

Clearly, part of the reason that software testing hasn't evolved much over the years is poor source material: Badly thought-out designs are obviously going to be hard to test, and for structurally diverse and constantly changing applications it's going to be next to impossible to develop useful, standard validation techniques.

Changing the Design

As the very many articles on TDD emphasise, thinking about how to test a system should improve the design as a side effect of making it more testable. That's good, but the testing process remains one of manually writing tests and checking output, and TDD primarily addresses unit testing. Has similar attention been paid to testing at more architectural levels of design? Until recently, the answer would have been no. Few architectural frameworks came into the world with ease of testing as a defining feature: J2EE in general, and EJB in particular, are well-known examples of frameworks that didn't. But recent design trends - the use of test-related design patterns such as dependency injection, and the incorporation of well-formed component lifecycles with easy configuration - have created frameworks that are inherently supportive of testing. Some early popular examples include the Spring Framework and other lightweight containers.
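A minimal sketch of why dependency injection helps: because the OrderService below receives its collaborator through its constructor, a unit test can hand it a stub in place of a real database, with no container involved. The class and method names are illustrative, not taken from any particular framework.

```java
import org.junit.Test;
import static org.junit.Assert.assertTrue;

// Constructor injection makes the class testable in isolation: the test
// supplies a stub repository instead of a real database. Names are
// illustrative, not taken from any particular framework.
public class OrderServiceTest {

    interface OrderRepository {
        int countOrdersFor(String customer);
    }

    static class OrderService {
        private final OrderRepository repository;

        OrderService(OrderRepository repository) {
            this.repository = repository;
        }

        boolean isFrequentBuyer(String customer) {
            return repository.countOrdersFor(customer) >= 10;
        }
    }

    @Test
    public void tenOrMoreOrdersMakesAFrequentBuyer() {
        // No container and no database: the stub is injected directly.
        OrderRepository stub = customer -> 12;
        OrderService service = new OrderService(stub);
        assertTrue(service.isFrequentBuyer("alice"));
    }
}
```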

In general, design patterns have made a big impact in standardizing forms of design and how these are communicated. Could the systematic use of standard design patterns allow automated validation? Unfortunately, few practical results have so far emerged from considering general designs in that way. However, patterns embodied in frameworks are a different matter: The increasingly widespread use of well-designed frameworks as the basis for applications should provide a practical basis for automating testing and validation, at least partially. What forms should that take?

Stored Testing Knowledge in Frameworks and Tools

Obviously, one possibility for increasing the testability of a framework is to provide built-in testing support: the ease with which framework components and services can be separated and executed independently, the ease with which framework operation can be recorded, and support for creating dummy parts. But perhaps an even more important factor for framework testability is its popularity: If you write your own framework incorporating testability support, that's useful, but the third-party market for design patterns, design analysis and supporting tools will be limited. For popular frameworks these aftermarket effects should dwarf the effect of built-in test support.

In order to provide additional automatic assistance, test and QA tools need to embody more knowledge of the systems they are testing. That requires designs that are standardized, stable, preferably popular, and preferably written according to test-friendly design patterns. And we are now starting to see frameworks that meet exactly these criteria.

An interesting related movement towards design standardization is Microsoft's recent advocacy of Domain Specific Languages (DSLs) and Software Factories. These may have the potential to make model-driven design work at a level other than "as sketch", and to provide a complementary source of domain-level abstract models to assist with automatic checking and test generation (although at present most of the DSLs described are aimed at the construction of generic tools). Again, the largest benefits should come once DSL-based models are standardized.

Automatic Checks and Tests

For certain types of programs, some limited areas of validation knowledge have already been encoded with useful effect. There are several systems that perform static language checks to detect dangerous or unmaintainable constructs in Java, C++ and other languages. There are also various thread-use and memory checkers, including Purify, which greatly increased the reliability of C++ programs by applying an exact model of memory use. So, in general, we are not looking for one grand solution for program validation - we can model and verify a range of program aspects, including resource usage, functional correctness and application structure.
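For illustration, the fragment below shows the kind of construct that static checkers for Java (tools such as FindBugs or PMD) typically flag: a reference comparison where a value comparison was intended. The class is a made-up example, not taken from any tool's documentation.

```java
// The kind of dangerous construct a static checker typically flags:
// == compares String references, not contents, so this check works only
// by accident when both strings happen to be interned.
public class StaticCheckExample {

    static boolean isAdmin(String role) {
        return role == "admin";          // flagged: use "admin".equals(role)
    }

    public static void main(String[] args) {
        System.out.println(isAdmin(new String("admin"))); // prints false
    }
}
```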

If we are dealing with standardized frameworks and designs, we can describe, at an abstract level, good and bad construction practices for applications based on the framework. Many books describe patterns and anti-patterns in the context of a particular framework, and for standard frameworks we should be able to apply much of this knowledge automatically. The fact that an application uses, say, a J2EE application server says a lot about how it has been (and should be) constructed. Some of this knowledge can be applied as simple static checks on class and method relationships, but other aspects, dependent on the extensive use of framework services and their configuration, are more readily detected by dynamic monitoring and modeling. An abstract dynamic model of the application can be built automatically, tracking components and services - such as transaction and database services - along with their states, attributes and relationships. Examining and checking the model as the application executes allows automatic checks on the quality of the application, and suggests natural ways of visualizing the application's dynamic abstract structure and usage that can themselves reveal problems.
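As a rough, hypothetical sketch of that kind of dynamic model: the monitor below tracks an abstract lifecycle state for each resource it is told about and records any use that violates a simple rule (using a resource after it has been closed). A real tool would populate such a model by instrumenting the framework rather than through explicit calls.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A toy sketch of dynamic construction checking: the monitor tracks the
// abstract state of each resource and reports uses that violate a simple
// lifecycle rule. Everything here is illustrative; a real tool would feed
// the model by instrumenting the framework.
public class LifecycleMonitor {

    enum State { OPEN, CLOSED }

    private final Map<String, State> resources = new HashMap<>();
    private final List<String> violations = new ArrayList<>();

    void opened(String resource) { resources.put(resource, State.OPEN); }
    void closed(String resource) { resources.put(resource, State.CLOSED); }

    void used(String resource) {
        if (resources.getOrDefault(resource, State.CLOSED) != State.OPEN) {
            violations.add("use of " + resource + " while not open");
        }
    }

    public static void main(String[] args) {
        LifecycleMonitor monitor = new LifecycleMonitor();
        monitor.opened("db-connection-1");
        monitor.used("db-connection-1");   // fine: connection is open
        monitor.closed("db-connection-1");
        monitor.used("db-connection-1");   // violates the lifecycle rule
        monitor.violations.forEach(System.out::println);
    }
}
```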

Both static and dynamic construction checking are effective in tracking down a range of potential problems automatically. We can also use the models generated to, at least partially, write tests: Systems can identify components as they are being used, record calling sequences, auto-generalize them (removing detailed dependencies on specific variable values and replacing them with required relationships between inputs, and between inputs and outputs) and store them as tests for later replay. It should similarly be possible to construct a partial client model that could then be completed manually and used to generate tests in a model-based approach, simplifying model construction by basing it iteratively on the running application.
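The toy sketch below shows the simplest form of this record-and-replay idea: calls to a component are captured with their observed outputs during a monitored run and then re-executed as a regression check. The Pricing component is invented for the example, and a real system would also generalize the recorded values into input/output relationships rather than replaying fixed constants.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Record-and-replay sketch: capture a component's inputs and outputs during
// a monitored run, then replay them later and check the outputs still hold.
public class RecordReplaySketch {

    interface Pricing {
        double quote(int quantity);
    }

    static class SimplePricing implements Pricing {
        public double quote(int quantity) {
            return quantity * 2.5;
        }
    }

    public static void main(String[] args) {
        Pricing component = new SimplePricing();

        // Recording phase: observe some real calls and remember their results.
        int[] observedInputs = {1, 4, 10};
        Map<Integer, Double> recorded = new LinkedHashMap<>();
        for (int quantity : observedInputs) {
            recorded.put(quantity, component.quote(quantity));
        }

        // Replay phase: re-run the recorded calls as a regression check.
        for (Map.Entry<Integer, Double> call : recorded.entrySet()) {
            double actual = component.quote(call.getKey());
            boolean ok = Math.abs(actual - call.getValue()) < 1e-9;
            System.out.printf("quote(%d) -> %.2f %s%n",
                              call.getKey(), actual, ok ? "OK" : "CHANGED");
        }
    }
}
```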

Summary

There is obviously a degree of tension between design and validation: The more standardized (and stable) a software design is, the more scope there is for test automation and innovation; but the more variation there is, the greater the possibility for design innovation. Obviously the situation is dynamic - new areas of software application naturally create a rash of competing approaches and frameworks. But in more established areas we could hope for greater standardization, allowing testing to become better understood and automated. However, the traditional lack of focus on testing in computer science has probably weighted the equation in the past towards continued fiddling and variation on designs, to the detriment of the finished product; it's more fun that way.

But the overall signs are clearly that testing is finally becoming an important driver for design and that the standardization of frameworks featuring consistent, easily monitored, separable designs will provide the basis for automating a great deal of application checking and testing. Hopefully, validation and testing will soon be much more effective and less of a drudge.

About the Author

Alan West is CTO of eoLogic (http://www.eologic.com), responsible for all product development and testing. He was previously a founder of Objective Software Technology Ltd., and has over 20 years' experience in software tool design and the construction of large software systems.

This article originally appeared on TheServerSide.com
