
Successful test-driven development (TDD) with external systems

It's best to think of test-driven development (TDD) as a flexible system that does not require you to run unit tests at every compile, says Grant Lammi. That's particularly true when testing against external systems like a Web service or a database.

There is growing sentiment that test-driven development (TDD), with its insistence on running unit tests with every compile, delivers higher quality than traditional methodologies. However, when introducing TDD, questions inevitably arise about how to write tests against an external system like a database or a Web service.

Generally, there is not a good answer to the question of TDD and external systems. If you use the actual external system, you write tests against code you do not control and that can change without your knowledge. You can still run the tests successfully with each compile, but it is difficult.

You can use mock objects to approximate the functionality of an external system, but they do not completely solve the problem. In fact, mocks can hinder TDD adoption because you have to write test code that may not map correctly to the real world.

The best solution is to think of TDD as a flexible system that does not require you to run tests at every single compile. You can write TDD integration tests but exclude them from the post-build step where they are traditionally run. This provides the benefits of TDD (better code and higher quality) while minimizing the setup. It also eliminates the need for mock objects and the maintenance associated with them.

Example: User authentication

I recently used Kerberos to add single sign-on authentication support to a Mac OS X application. The external system in this case is Microsoft Active Directory. The application creates all the various Kerberos tickets needed for user authentication and confirms the data with Active Directory.

Fixture setup
Prior to testing, I created the Xcode project for the application and set up the C++ unit testing framework UnitTest++. I wrote each unit test to fail first and then filled in code until the functionality passed.

Example 1 shows the fixture setup for this set of tests, which is the code executed before and after every test. It includes the following:

  • The unit testing code
  • The primary Kerberos authentication object (CClientAuthData)
  • Some relatively empty fixture build up and tear down routines
  • The m_sTarget data member, which is a string that holds the name of the Kerberos service the application accesses
   struct CClientAuthDataFixture
   {
      CClientAuthDataFixture()
      {
         m_sTarget = "";
      }

      ~CClientAuthDataFixture()
      {
      }

      CClientAuthData testData;
      CTTString m_sTarget;
   };

Example 1: Fixture setup

Example: Checking initialization

I wrote some simple tests to ensure that objects were created correctly. Then I created more advanced tests like example 2, which checks that the initialization fails when the target is empty. It also evaluates the error message.

   TEST_FIXTURE(CClientAuthDataFixture, InitializeWithEmptyTarget)
   {
      bool bReturn = testData.Initialize(m_sTarget);
      CHECK(bReturn == false);
      CTTString sErrorMsg = testData.GetLastError();
      CHECK(sErrorMsg.Compare("A service target must be given "
                              "in order to use Single Sign-On.") == 0);
   }

Example 2: Initialization test

Example: Invalid values

I wrote a test to ensure the system returns an error string when the target contains an invalid value. I wanted to be absolutely sure the expected failure was failing for the right reason. Notice that the code sends the error message to the console.

   TEST_FIXTURE(CClientAuthDataFixture, InitializeWithInvalidTarget)
   {
      m_sTarget = "invalid/invalid";
      bool bReturn = testData.Initialize(m_sTarget);
      CHECK(bReturn == false);
      CTTString sErrorMsg = testData.GetLastError();
      // This could be many different things.
      // Print it out for visual inspection.
      std::cout << "Error message to verify: "
                << sErrorMsg.GetStringPtr()
                << std::endl << std::endl;
   }

Example 3: Invalid values

The output sent to the console shows that Kerberos could not resolve the target. The test was indeed valid.

Error message to verify: Unable to initialize security context. 
GSSAPI major error[131072]: An invalid name was supplied  minor 
error[-1765328168]: Hostname cannot be canonicalized

Example 4: Checking an error from the external system

This is an example of how not executing every test after every compile is beneficial. Ugly error messages from an external system are tough to test. The error could be different based on the configuration, or it could change when the system is upgraded to a newer version.

Example: Successful authentication

The final example tests for a successful authentication, with the target variable initialized to a valid service name.

   TEST_FIXTURE(CClientAuthDataFixture, InitializeWithValidTarget)
   {
      m_sTarget = "service1/";
      bool bReturn = testData.Initialize(m_sTarget);
      CHECK(bReturn == true);
      CTTString sErrorMsg = testData.GetLastError();
      if (!sErrorMsg.IsEmpty())
      {
         std::cout << "Error message: "
                   << sErrorMsg.GetStringPtr()
                   << std::endl;
      }
   }

Example 5: Successful authentication

Again, the example shows how this kind of unit testing is valuable locally for design purposes. However, you can also see how fragile it would be to run it every single time against an external system that is out of your control.

TDD results: Only one bug

The most telling statistic for this project was the final bug count. When the QA department tested the application, they found a grand total of one defect. That bug was an obscure Kerberos interaction bug between very specific versions of Mac OS X and Active Directory.

Once the initial development was completed, I only had to run the unit tests against Active Directory at specific development milestones like the beginning of alpha or beta testing. The quality remained high, but I eliminated the hassle of setting up the Kerberos information every time.

In the end, by ignoring the TDD principle of running all tests every time, this project still achieved the most important goal of TDD: clean, high-quality code.

Grant Lammi

About the author
Grant Lammi is a technology strategist at Seapine Software who has over a decade of experience as a developer in the software industry.
