Monday, September 20, 2010

JavaOne 2010: Unit Testing That's Not So Bad: Small Things That Make a Big Difference

I knew that the JavaOne 2010 presentation "Unit Testing That's Not So Bad: Small Things That Make a Big Difference" would be popular, so I signed up for it early and made sure I arrived during the period in which early access seating applied.  We were told that there was a line of at least seventy people waiting to get in when there appeared to be only about 15-20 seats left, so I was glad that I had signed up early.

Neal Ford began by calling his own presentation the "worst named presentation at JavaOne." He went on to say that he had wanted to give this updated version of his 2009 JavaOne presentation on unit testing the same name he used last year, Unit Testing That Sucks Less: Small Things Make a Big Difference, but the name was changed on him when the session was placed in the schedule.

Ford talked about how Hamcrest makes it easier to write fluent unit tests.  He also discussed Infinitest, stating that it attempts to provide near-instantaneous feedback in the IDE for unit tests, much like the feedback the IDE already provides for compile-time checking.  Infinitest is available as a plug-in for both Eclipse and IntelliJ IDEA, but apparently not for NetBeans or JDeveloper.  The main page for Infinitest refers to this tool as "a continuous test runner for Java."
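To give a flavor of the fluent style Ford was describing, here is a small JUnit 4 test that uses a few common Hamcrest matchers. This is my own minimal sketch (the class and the values are made up, not from the presentation) and assumes JUnit 4 and the full Hamcrest matcher library are on the classpath.

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.allOf;
import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.hasSize;
import static org.hamcrest.Matchers.is;

import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class GreetingTest
{
   @Test
   public void greetingReadsLikeASentence()
   {
      final String greeting = "Hello, JavaOne!";
      // The assertion reads left to right: "assert that greeting is 'Hello, JavaOne!'"
      assertThat(greeting, is("Hello, JavaOne!"));
      // Matchers compose, so the intent stays readable even as checks pile up.
      assertThat(greeting, allOf(containsString("Hello"), containsString("JavaOne")));

      final List<String> speakers = Arrays.asList("Ford", "Gosling");
      assertThat(speakers, hasSize(2));
   }
}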

Ford introduced Unitils and called it a "Swiss army chainsaw."  He discussed its support for various popular Java frameworks. The Unitils Summary page has a similar description: "Unitils provides general assertion utilities, support for database testing, support for testing with mock objects and offers integration with Spring, Hibernate and the Java Persistence API (JPA)."  Ford talked about Unitils's reflection assertions and lenient assertions.
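Here is roughly what those reflection and lenient assertions look like, based on my reading of the Unitils documentation; the Person class and the test values are mine, and the static imports come from Unitils's ReflectionAssert class.

import static org.unitils.reflectionassert.ReflectionAssert.assertLenientEquals;
import static org.unitils.reflectionassert.ReflectionAssert.assertReflectionEquals;
import static org.unitils.reflectionassert.ReflectionComparatorMode.LENIENT_ORDER;

import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class ReflectionAssertionTest
{
   // A simple value object with no equals() override; the reflection assertion
   // compares it field by field rather than relying on equals().
   static class Person
   {
      String firstName;
      String lastName;

      Person(final String firstName, final String lastName)
      {
         this.firstName = firstName;
         this.lastName = lastName;
      }
   }

   @Test
   public void comparesObjectsFieldByField()
   {
      assertReflectionEquals(new Person("Neal", "Ford"), new Person("Neal", "Ford"));
   }

   @Test
   public void comparesCollectionsLeniently()
   {
      final List<String> expected = Arrays.asList("Hamcrest", "Infinitest", "Unitils");
      final List<String> actual = Arrays.asList("Unitils", "Hamcrest", "Infinitest");
      // Lenient order: the same elements must be present, but order does not matter.
      assertReflectionEquals(expected, actual, LENIENT_ORDER);
      // assertLenientEquals applies the lenient comparison modes by default.
      assertLenientEquals(expected, actual);
   }
}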

Ford also discussed Unitils's dbUnit support for database testing and for managing data state appropriately during the tests. I agree with Ford's assertion that using a database in a unit test can be easy to start with, but the effort grows linearly as the tests get more involved.  Unitils's approach simplifies things, but there are still performance issues with getting and releasing connections that could slow tests down significantly.
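For a rough idea of how this looks, here is a hedged sketch of a Unitils/dbUnit test as I understand the library: the @DataSet annotation names a dbUnit XML file (PersonDaoTest.xml is a made-up name) that is loaded into the configured test database before each test method, and @TestDataSource injects that database's DataSource. The unitils.properties configuration, the data set file, and the person table are all assumed.

import static org.junit.Assert.assertEquals;

import java.sql.Connection;
import java.sql.ResultSet;

import javax.sql.DataSource;

import org.junit.Test;
import org.unitils.UnitilsJUnit4;
import org.unitils.database.annotations.TestDataSource;
import org.unitils.dbunit.annotation.DataSet;

// The data set is reloaded before each test method, so every test starts
// from the same known data state.
@DataSet("PersonDaoTest.xml")
public class PersonDaoTest extends UnitilsJUnit4
{
   // Unitils injects the test database's DataSource here.
   @TestDataSource
   private DataSource dataSource;

   @Test
   public void dataSetRowsAreLoadedBeforeEachTest() throws Exception
   {
      final Connection connection = dataSource.getConnection();
      try
      {
         final ResultSet resultSet =
            connection.createStatement().executeQuery("select count(*) from person");
         resultSet.next();
         // Two rows are assumed to be defined in PersonDaoTest.xml.
         assertEquals(2, resultSet.getLong(1));
      }
      finally
      {
         connection.close();
      }
   }
}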

Ford covered Unitils's mocking support. He stated that it's not necessarily better or worse than other Java mocking libraries, but might be of special interest to those using Unitils for other things anyway.
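As I understand the Unitils mock module, a field of type Mock<T> in a test that extends UnitilsJUnit4 is created automatically, behavior is defined with a fluent returns(...) chain, and invocations can be verified afterwards. TemperatureGateway below is a hypothetical interface invented for this sketch.

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.unitils.UnitilsJUnit4;
import org.unitils.mock.Mock;

public class WeatherReporterTest extends UnitilsJUnit4
{
   // Hypothetical collaborator, defined here only for illustration.
   interface TemperatureGateway
   {
      int currentTemperatureFor(String city);
   }

   // Unitils creates this mock before each test method.
   private Mock<TemperatureGateway> temperatureGatewayMock;

   @Test
   public void reportsTemperatureFromTheGateway()
   {
      // Behavior definition reads fluently: "returns 21 for San Francisco."
      temperatureGatewayMock.returns(21).currentTemperatureFor("San Francisco");

      final TemperatureGateway gateway = temperatureGatewayMock.getMock();
      assertEquals(21, gateway.currentTemperatureFor("San Francisco"));

      // Verification after the fact.
      temperatureGatewayMock.assertInvoked().currentTemperatureFor("San Francisco");
   }
}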

In introducing JtestR, Ford stated that one of the biggest points he was making in this presentation is that in 2010 a Java developer does not need to write tests for Java code in Java.  He specifically mentioned using JRuby and Groovy for the unit tests. Ford stated (and I agree) that Groovy is the better suited of the two languages for Java integration. However, Ford said that the best testing tools and frameworks "on Earth" are in the Ruby community, and JRuby gives access to those Ruby-based testing frameworks.  As Java + Ruby, JRuby is the obvious choice if one is testing Java code with Ruby tools.

JtestR is specifically designed for testing Java code with Ruby-based test tools.  The main JtestR page describes JtestR this way:
JtestR is a tool that will make it easier to test Java code with state of the art Ruby tools. The main project is a collection of Ruby libraries bundled together with JRuby integration so that running tests is totally painless to set up. The project also includes a background server so that the startup cost of JRuby can be avoided. Examples of Ruby libraries included are RSpec, dust, Test/Unit, mocha and ActiveSupport.
RSpec is a behavior-driven development framework modeled after JBehave.

As part of his discussion of behavior-driven development, Ford talked about the Ruby-based Cucumber. He said that Cucumber has reached a level of community popularity at which other projects are being built around it.

Ford broached the subject of testing private methods. He stated that people who argue that you should only test the public methods that call the private methods are assuming that you don't use test-driven development. One work-around is to make all methods public or package-private, but that has its own undesirable aspects. Another approach is to use reflection.  Ford covered some of the disadvantages of using reflection in this way (significant checked exception handling and other issues that occur when you "touch Java in a sensitive place") and used this as the segue into how Groovy makes the approach easier.
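For reference, this is roughly what the reflection work-around looks like in plain Java. PriceCalculator and its private applyDiscount method are made up for illustration, and the checked exceptions are simply declared away with throws Exception, which hints at the clutter Ford was describing.

import static org.junit.Assert.assertEquals;

import java.lang.reflect.Method;

import org.junit.Test;

public class PriceCalculatorTest
{
   // Hypothetical class under test with a private method.
   static class PriceCalculator
   {
      private int applyDiscount(final int cents, final int percent)
      {
         return cents - (cents * percent / 100);
      }
   }

   @Test
   public void discountIsAppliedToThePriceInCents() throws Exception
   {
      final PriceCalculator calculator = new PriceCalculator();

      // Reflection lets the test reach the private method, at the cost of a
      // string-based lookup, a setAccessible() call, and several checked
      // exceptions (NoSuchMethodException, IllegalAccessException,
      // InvocationTargetException) that must be handled or declared.
      final Method applyDiscount = PriceCalculator.class.getDeclaredMethod(
         "applyDiscount", int.class, int.class);
      applyDiscount.setAccessible(true);

      final Object result = applyDiscount.invoke(calculator, 1000, 10);
      assertEquals(Integer.valueOf(900), result);
   }
}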

Ford pointed out that Groovy essentially turns all exceptions into runtime/unchecked exceptions.  This makes for friendlier unit test code that uses reflection.  Ford also pointed out that Groovy's dirty little secret is that it ignores private, so you can access the private parts directly and avoid the reflection complexity altogether.  Ford stated that this is "technically a bug" that "they've been in no hurry to fix because it's insanely useful."

Ford showed a slide with JMock syntax and said, "This really sucks."  He then showed another slide with Groovy's Java-like syntax and its support for name/value map pairs and closure code blocks, which make mocking easier.

Ford ended with some "relatively bold statements."  He called it "professionally irresponsible to ignore software testing." He stated that it is "our [software engineering] professional rigor." He also stated that it is professionally irresponsible to use the most cumbersome testing tools.  He finished with the quip and slide that writing software without unit testing is like trying to barbecue and swim at the same time.

I quoted Ford directly several times because he is good at keeping his audience engaged.  This is more challenging with a large audience like this one, but he pulled it off with humor.  I also appreciated the bold statements.  Even if there may have been a little hyperbole, such enthusiasm and quotable assertions generally make for a more memorable presentation.  There weren't many people who left this session early.
