One common metric of code quality is code coverage: the percentage of code exercised by tests.  While coverage shows that individual units of functionality have been tested, it says little about whether the system works as a whole.

The aim of many Java projects in the early 2000s was to get 100% code coverage in unit tests.  I have never seen this achieved.  While code coverage is useful, most of the development effort on these full-coverage projects was spent keeping JUnit tests working.  JUnit tests have a tendency to become irrelevant quickly as the solution evolves.  The more JUnit tests you have, the less appealing a refactor becomes, because refactoring tends to break the fragile, hard-coded unit tests rather than the solution itself.  Nobody wants to break the build (the penalty for which is often wearing a funny hat), so the codebase quickly stagnates and development slows.

How much code coverage is too much?  It depends on the scenario, but beyond roughly 80% the returns diminish quickly.  Tests should also be fast.  JUnit tests that take ten minutes to execute will slow development to a crawl unless those tests are only executed as part of a continuous integration build.
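
One way to keep the local build fast while still running the slow tests in CI is JUnit 4's Categories feature.  A minimal sketch, assuming JUnit 4.8+; the SlowTests marker interface and the test class here are illustrative, not from a real project:

// SlowTests.java - marker interface used only to tag long-running tests
public interface SlowTests {}

// RulesEngineRegressionTest.java
import org.junit.Test;
import org.junit.experimental.categories.Category;

public class RulesEngineRegressionTest
{
    @Category(SlowTests.class)
    @Test
    public void runsFullRuleSetAgainstSampleData() throws Exception
    {
        // long-running end-to-end check; excluded from the fast local build
        // and run only as part of the continuous integration build
    }
}

Maven's Surefire plugin, for example, can exclude that category by default (excludedGroups) and include it again in a CI profile, so developers never wait ten minutes for a local build.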

So keep JUnit tests only for the important, tricky pieces of code, such as rules engines, calculations and date manipulations, not for simple controllers or getters and setters.  Make sure at least the critical must-never-break functions are covered, and don't waste your time testing code that barely does anything.
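
For example, a test around a business-rule calculation earns its maintenance cost because the logic has edge cases that genuinely can regress.  This is only a sketch; LateFeeCalculator and its five-day grace period are made up for illustration:

@Test
public void testLateFeeNotChargedWithinGracePeriod() throws Exception
{
    // Hypothetical business rule: no fee is charged inside a 5-day grace period
    LateFeeCalculator calculator = new LateFeeCalculator();
    Assert.assertEquals(0.0, calculator.feeFor(3), 0.001);   // inside the grace period
    Assert.assertEquals(25.0, calculator.feeFor(6), 0.001);  // just past it
}

Compare that with this test: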

@Test
public void testComment() throws Exception
{
    Customer customer = new Customer();
    customer.setComment("Hello");
    Assert.assertEquals("Hello", customer.getComment());
}

Of course customer.getComment() returns exactly the value just set, so the test adds no value.  It was written by a junior developer to satisfy a team's 100% code coverage rule.
