JUnit best practices: let the test improve the code
Writing unit tests often helps you write better code. The reason is simple: A test case is a user of your code. And, it is only when using code that you find its shortcomings. Thus, do not hesitate to listen to your tests and refactor your code so that it is easier to use. The practice of Test-Driven Development (TDD) relies on this principle. By writing the tests first, you develop your classes from the point of view of a user of your code. See chapter 4 for more about TDD.
But because the duplication hasn’t happened yet, let’s resist the urge to anticipate change, and let it stand. (“Don’t anticipate, navigator!” the captain barked.)
3.3.2 Testing for exceptions
During testing, you found that addHandler throws an undocumented RuntimeException if you try to register a request with a duplicate name. (By undocumented, we mean that it doesn’t appear in the signature.) Looking at the code, you see that getHandler throws a RuntimeException if the request hasn’t been registered.
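To make these behaviors concrete before testing them, here is a rough sketch of how the two DefaultController methods might raise their exceptions. This is an illustration only: the map field, the exact checks, and the message strings are assumptions, and the real implementation shown earlier in the chapter (section 3.1.1 defines the Request and RequestHandler interfaces it relies on) may differ in detail.

import java.util.HashMap;
import java.util.Map;

public class DefaultController
{
   // Maps a request name to the handler registered under that name
   private Map requestHandlers = new HashMap();

   public void addHandler(Request request, RequestHandler requestHandler)
   {
      if (this.requestHandlers.containsKey(request.getName()))
      {
         // Duplicate registration fails with an unchecked exception,
         // so nothing about it appears in the method signature
         throw new RuntimeException("A handler has already been registered for ["
            + request.getName() + "]");
      }
      this.requestHandlers.put(request.getName(), requestHandler);
   }

   protected RequestHandler getHandler(Request request)
   {
      if (!this.requestHandlers.containsKey(request.getName()))
      {
         // Unregistered request name: again an undocumented RuntimeException
         throw new RuntimeException("Cannot find handler for request name ["
            + request.getName() + "]");
      }
      return (RequestHandler) this.requestHandlers.get(request.getName());
   }

   // Other members of the real DefaultController (such as processRequest)
   // are omitted from this sketch.
}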
Whether you should throw undocumented RuntimeException exceptions is a larger design issue. (You can make that a to-do for later study.) For now, let’s write some tests that prove the methods will behave as designed.
Listing 3.14 shows two test methods that prove addHandler and getHandler will throw runtime exceptions when expected.
Listing 3.14 Testing methods that throw an exception
public class TestDefaultController extends TestCase
{
   [...]

   public void testGetHandlerNotDefined()                        // (b)
   {
      TestRequest request = new TestRequest("testNotDefined");   // (c)

      try
      {
         controller.getHandler(request);                         // (d)

         fail("An exception should be raised if the requested "  // (e)
            + "handler has not been registered");
      }
      catch (RuntimeException expected)                          // (f)
      {
         assertTrue(true);                                       // (g)
      }
   }

   public void testAddRequestDuplicateName()                     // (h)
   {
      TestRequest request = new TestRequest();
      TestHandler handler = new TestHandler();

      try
      {
         controller.addHandler(request, handler);                // (i)
         fail("An exception should be raised if the default "
            + "TestRequest has already been registered");
      }
      catch (RuntimeException expected)
      {
         assertTrue(true);
      }
   }
}
(b) Give the test an obvious name. Because this test represents an exceptional case, append NotDefined to the standard testGetHandler prefix. Doing so keeps all the getHandler tests together and documents the purpose of each derivation.
(c) You create the request object for the test, also giving it an obvious name.
(d) Pass the (unregistered) request to the default getHandler method.
(e) Introduce the fail method, inherited from the TestCase superclass. If a test ever reaches a fail statement, the test (unsurprisingly) will fail, just as if an assertion had failed (essentially, assertTrue(false)). If the getHandler statement throws an exception, as you expect it will, the fail statement will not be reached.
(f) Execution proceeds to the catch statement, and the test is deemed a success.
(g) You clearly state that this is the expected success condition. Although this line is not necessary (because it always evaluates to true), we have found that it makes the test easier to read. For the same reason, at (f) you name the exception variable expected.
(h) In the second test, you again use a descriptive name. (Also note that you do not combine tests, but write a separate test for each case.)
(i) You follow the same pattern as the first method:
1 Insert a statement that should throw an exception.
2 Follow it with a fail statement (in case the exception isn’t thrown).
3 Catch the exception you expect, naming the exception expected so the reader can easily guess that the exception is expected!
4 Proceed normally.
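Stripped to a skeleton, with purely hypothetical names standing in for real application code, the pattern looks like this:

public void testOperationThatShouldFail()
{
   try
   {
      objectUnderTest.operationThatShouldThrow();    // 1 statement expected to throw
      fail("An exception should have been raised");  // 2 reached only if nothing was thrown
   }
   catch (RuntimeException expected)                 // 3 catch exactly what you expect
   {
      assertTrue(true);                              //   and mark the success path explicitly
   }
   // 4 proceed normally
}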
JUnit best practices: make exception tests easy to read
Name the exception variable in the catch block expected. Doing so clearly tells readers that an exception is expected to make the test pass. It also helps to add an assertTrue(true) statement in the catch block to stress even further that this is the correct path.
The controller class is by no means done, but you have a respectable first iteration and a test suite proving that it works. Now you can commit the controller package, along with its tests, to the project’s code repository and move on to the next task on your list.
JUnit best practices: let the test improve the code
An easy way to identify exceptional paths is to examine the different branches in the code you’re testing. By branches, we mean the outcome of if clauses, switch statements, and try/catch blocks. When you start following these branches, sometimes you may find that testing each alternative is painful. If code is difficult to test, it is usually just as difficult to use. When testing indicates a poor design (called a code smell, http://c2.com/cgi/wiki?CodeSmell), you should stop and refactor the domain code.
In the case of too many branches, the solution is usually to split a larger method into several smaller methods.[5] Or, you may need to modify the class hierarchy to better represent the problem domain.[6] Other situations would call for different refactorings.
A test is your code’s first “customer,” and, as the maxim goes, “the customer is always right.”
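As a hypothetical illustration of the “too many branches” case (this example is not from the book’s sample application), consider a pricing method whose branches are hard to exercise one at a time, and the same logic after an Extract Method refactoring:

// Before: one method, several independent branches tangled together
public class PricerBefore
{
   public double price(int quantity, double unitPrice, boolean bulk, boolean overseas)
   {
      double total;
      if (bulk)
      {
         total = quantity * unitPrice * 0.9;   // bulk orders get a 10% discount
      }
      else
      {
         total = quantity * unitPrice;
      }
      if (overseas)
      {
         total = total + 25.0;                 // flat overseas shipping fee
      }
      return total;
   }
}

// After Extract Method: each branch lives in a small method a test can target directly
public class PricerAfter
{
   public double price(int quantity, double unitPrice, boolean bulk, boolean overseas)
   {
      return basePrice(quantity, unitPrice, bulk) + shippingCost(overseas);
   }

   protected double basePrice(int quantity, double unitPrice, boolean bulk)
   {
      double discount = bulk ? 0.9 : 1.0;
      return quantity * unitPrice * discount;
   }

   protected double shippingCost(boolean overseas)
   {
      return overseas ? 25.0 : 0.0;
   }
}

Because a test class in the same package can call the protected basePrice and shippingCost methods directly, each branch can now be verified without constructing every combination of flags.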
[5] Fowler, Refactoring, “Extract Method.”
[6] Ibid., “Extract Hierarchy.”
3.4 Setting up a project for testing
Because this chapter covers testing a fairly realistic component, let’s finish up by looking at how you set up the controller package as part of a larger project. In chapter 1, you kept all the Java domain code and test code in the same folder. They were introductory tests on an example class, so this approach seemed simplest for everyone. In this chapter, you’ve begun to build real classes with real tests, as you would for one of your own projects. Accordingly, you’ve set up the source code repository just like you would for a real project.
So far, you have only one test case. Mixing this in with the domain classes would not have been a big deal. But, experience tells us that soon you will have at least as many test classes as you have domain classes. Placing all of them in the same directory will begin to create file-management issues. It will become difficult to find the class you want to edit next.
Meanwhile, you want the test classes to be able to unit-test protected methods, so you want to keep everything in the same Java package. The solution? One package, two folders. Figure 3.2 shows a snapshot of how the directory structure looks in a popular integrated development environment (IDE).
This is the code for the “sampling” chapter, so we used sampling for the top-level project directory name (see appendix A). The IDE shows it as junitbooksampling, because this is how we named the project. Under the sampling directory we created separate java and test folders. Under each of these, the actual package structure begins.
In this case, all of the code falls under the junitbook.sampling package. The working interfaces and classes go under src/java/junitbook/sampling; the classes we write for testing only go under the src/test/junitbook/sampling directory.
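Laid out on disk, and using only class names from this chapter as examples (the exact nesting of the src folder relative to the sampling directory is inferred from the paths quoted above and may differ slightly from the book’s source distribution), the two parallel trees look something like this:

src/
   java/
      junitbook/
         sampling/
            Controller.java
            DefaultController.java
            ...
   test/
      junitbook/
         sampling/
            TestDefaultController.java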
Beyond eliminating clutter, a “separate but equal” directory structure yields several other benefits. Right now, the only test class has the convenient Test prefix. Later you may need other helper classes to create more sophisticated tests.
Figure 3.2 A “separate but equal” filing system keeps tests in the same package but in different directories.