Friday, March 28, 2008

Test Fixture Strategies

In his book, xUnit Test Patterns, Gerard Meszaros provides an in-depth, analytical discourse on unit testing patterns. Before getting into the patterns themselves, Meszaros covers some prerequisite material, including a section on test fixtures that I found particularly useful. He defines a test fixture as everything that we need to exercise the system under test (SUT) - in other words, the pre-conditions of the test. Let's suppose we are testing an XML parser. Our test fixture will include an XML document that will be fed to the parser. The fixture setup is the part of the test logic that is executed to set up the test fixture. Continuing with our parser example, our fixture setup might require reading an XML document from the file system, or it may involve constructing a document in memory at runtime. After defining this terminology, Meszaros goes through common test fixture strategies. These strategies lay the groundwork for the patterns discussed in the book. In fact, it quickly becomes apparent that understanding these strategies plays a big role in getting the most out of the patterns.

Transient Fresh Fixture

A transient fresh fixture exists only in memory and only during the test in which it is used. It does not outlive the test. Fixture tear down is implicit (assuming a language that provides garbage collection). The fixture is created at the start of the test, and it is discarded at the end of the test. Each test creates its own fixture. In other words, the test creates the objects that it needs. Creation of those objects might be delegated to some helper object, but it is the test itself that initiates the creation. The test does not reuse any part of a pre-built fixture or a fixture from another test. If we elect to use a transient fixture for our XML parser, then the test must create the document that will be fed to the parser. The primary disadvantage of a transient fresh fixture is that it must be created for each and every test. In some situations this may lead to performance degradation. Despite this potential drawback, transient fresh fixtures offer the best avenue for keeping fixture logic clear and simple, and thus for producing tests that serve as documentation. The benefits of not having to deal with tear down logic simply cannot be overstated.
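
To make this concrete, here is a minimal sketch in Java (assuming JUnit 4, and using the JDK's built-in DOM parser as a stand-in for the parser under test; the order document is made up for illustration). Each test builds its own in-memory document and relies on garbage collection for tear down:

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.junit.Test;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;
import static org.junit.Assert.assertEquals;

public class TransientFreshFixtureTest {

    // Helper that does the grunt work, but each test still initiates
    // the creation of its own fixture.
    private Document parse(String xml) throws Exception {
        DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
        return builder.parse(new InputSource(new StringReader(xml)));
    }

    @Test
    public void parsesRootElement() throws Exception {
        // Fresh fixture: built here, in memory, for this test only.
        Document doc = parse("<order id=\"42\"/>");
        assertEquals("order", doc.getDocumentElement().getTagName());
    }

    @Test
    public void parsesIdAttribute() throws Exception {
        // A second test builds its own copy; no tear down is needed
        // because the garbage collector reclaims the document.
        Document doc = parse("<order id=\"42\"/>");
        assertEquals("42", doc.getDocumentElement().getAttribute("id"));
    }
}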

Persistent Fresh Fixture

A persistent fresh fixture lives beyond the test method in which it is used. It requires explicit tear down at the end of each test. We often wind up using this fixture when we are testing objects that are tightly coupled to a database. Let's revisit our parser example. Suppose we need to add a test that verifies that the parser can handle consuming documents from the file system. For the fixture setup, the test creates an XML document and then writes it to disk so that we can exercise our parser for this scenario. So far, our test is pretty similar to one that is using a transient fresh fixture. The difference, however, reveals itself with tear down. The test using a transient fresh fixture does not have to worry about doing any tear down - it is implicit. Our test, on the other hand, must explicitly tear down the fixture. We could implement this easily enough by deleting the document from the file system. It is worth mentioning that this is a pretty straightforward example of tearing down a persistent fresh fixture. Things can quickly get more complicated, particularly when dealing with a database. In these situations, we can easily wind up with obscure tests. Another test smell that is often encountered with persistent fresh fixtures is slow tests. This usually occurs as a result of the fixture having a high-latency dependency. For example, if we have to create our XML document on a remote file system over the network, we will likely experience high latency. High latency is commonly encountered when a database is involved.
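
Here is a sketch of that file-system scenario (again assuming JUnit 4 and the JDK's DOM parser; the file name is arbitrary). Notice how the explicit tear down creeps in:

import java.io.File;
import java.io.FileWriter;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.w3c.dom.Document;
import static org.junit.Assert.assertEquals;

public class PersistentFreshFixtureTest {

    private File xmlFile;

    @Before
    public void setUp() throws Exception {
        // Fixture setup: write the document to disk so the parser
        // can be exercised against the file system.
        xmlFile = new File("order-fixture.xml");
        FileWriter writer = new FileWriter(xmlFile);
        writer.write("<order id=\"42\"/>");
        writer.close();
    }

    @After
    public void tearDown() {
        // Explicit tear down: the fixture outlives the test unless
        // we delete it ourselves.
        xmlFile.delete();
    }

    @Test
    public void parsesDocumentFromFileSystem() throws Exception {
        DocumentBuilder parser =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = parser.parse(xmlFile);
        assertEquals("order", doc.getDocumentElement().getTagName());
    }
}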

Shared Fixture

A shared fixture is deliberately reused across different tests. Let's say that our parser has special requirements for handling very large documents. A shared fixture may seem like a logical approach in this situation. The advantage is improved test execution time, since we cut out a lot of the setup and tear down work. The primary disadvantage of this strategy is that it easily leads to interacting tests. Interacting tests is an anti-pattern in which there is an interdependency among tests. Let's suppose that our parser needs to support both reading from and writing to XML documents. We could very quickly wind up with interacting tests. One test modifies the document while another test reads the document. If the document is expected to be in a particular state, then the latter test could easily break as a result of the former test (which modifies the document) running first. When using a shared fixture, a couple of questions should be considered:
  • To what extent should the fixture be shared?
  • How often do we rebuild the fixture?
Should we reuse our XML document across multiple test cases? Across the entire test suite? In general, we want to minimize the extent to which we share our fixture. As for how often we should rebuild the fixture, that may depend on a number of factors. In the case of an immutable fixture, we might very well be able to forgo rebuilding the fixture altogether. Let's revisit the scenario in which we need to test both read and write operations for our parser. If we can guarantee the order of tests, then we can arrange for all of the read-only tests to run in sequence. Then, for those tests, we do not have to worry about rebuilding the fixture in between runs.
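
As a sketch of class-level sharing (assuming JUnit 4, whose @BeforeClass hook builds the fixture once per test case; the small document here stands in for an expensive-to-build large one):

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.junit.BeforeClass;
import org.junit.Test;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;
import static org.junit.Assert.assertEquals;

public class SharedFixtureTest {

    private static Document sharedDoc;

    @BeforeClass
    public static void buildSharedFixture() throws Exception {
        // Built once for the whole test case; every test below reuses it.
        // In practice this would be the expensive large document.
        sharedDoc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new InputSource(new StringReader(
                        "<orders><order id=\"42\"/></orders>")));
    }

    @Test
    public void readsRootElement() {
        // Read-only: safe to share, no rebuild needed between tests.
        assertEquals("orders", sharedDoc.getDocumentElement().getTagName());
    }

    @Test
    public void countsOrderElements() {
        // Another read-only test on the same shared document.
        assertEquals(1, sharedDoc.getDocumentElement()
                .getElementsByTagName("order").getLength());
    }
}

If a later test mutated sharedDoc, the read-only tests above could start failing depending on execution order - exactly the interacting tests problem described earlier.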

Conclusion
In most circumstances, a transient fresh fixture is the best strategy because it simply does not have to deal with the challenges presented by the other fixture strategies, namely fixture tear down. There are times when it is all but impossible to avoid using either a persistent fresh fixture or a shared fixture. Data access tests involving a database are the most prevalent example. Understanding the ramifications of the other fixture strategies is crucial to writing effective tests when they must be used; otherwise, we inevitably fall victim to the anti-patterns presented by Meszaros. Just as an understanding of the more mainstream patterns like the widely embraced GoF patterns leads to better designed software, an understanding of the sundry testing patterns leads to more effective tests, which in turn ultimately leads to better software.
