Can unit testing be a waste?


It is sometimes hard to start implementing a new feature (or fixing a bug) by writing a unit test, even for a very agile and experienced developer – that’s for sure. But it pays off, and it usually turns out to be the easiest and most efficient way. Besides this, developers have to decide what to test. “You should test everything!” some of you may say, but what does that mean?

Should I test all my JavaBeans (as Java is the language I use most, I will give my examples in it)? Should I test all my Data Access Objects? Should I test all the toString(), equals(), hashCode(), etc. methods? Or should I test only the places where I “integrate” all those “low-level” components/classes?

In this post I’ll try to answer the question of what level of abstraction your unit tests should target, and why some unit tests may be considered a waste.

Testing everything (literally) is a waste
You should start developing new features from unit tests. What’s more, you should start by writing tests at the highest possible level of abstraction. I mean that if you want to test an “add new users to the database” feature, you should start from the service class that takes input parameters, creates a user object and then stores it in the database using some kind of DAO object. You should not start by testing and implementing the User class or the UserDao class.
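
Purely as an illustration, here is a minimal sketch of what such a highest-level test might look like. All the names (UserService, UserDao, User, addUser) are hypothetical, and the tiny production classes are inlined only so the sketch compiles on its own – in a real project they would grow just-in-time behind the test:

```java
import static org.junit.Assert.*;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

public class UserServiceTest {

    // --- Hypothetical production classes, inlined for a self-contained sketch ---

    static class User {
        private final String name;
        User(String name) { this.name = name; }
        String getName() { return name; }
    }

    interface UserDao {
        void save(User user);
    }

    static class UserService {
        private final UserDao dao;
        UserService(UserDao dao) { this.dao = dao; }
        void addUser(String name) { dao.save(new User(name)); }
    }

    // --- The actual highest-level test ---

    @Test
    public void addUserCreatesAndStoresTheUser() {
        // Hand-rolled in-memory fake, so no real database is needed.
        final List<User> stored = new ArrayList<User>();
        UserService service = new UserService(new UserDao() {
            public void save(User user) { stored.add(user); }
        });

        service.addUser("John Doe");

        // This single test drives UserService, User and the DAO contract at once.
        assertEquals(1, stored.size());
        assertEquals("John Doe", stored.get(0).getName());
    }
}
```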

Why? Well, if you start by thinking of the objects you will need to implement your feature, you may invent an overkill architecture with many unnecessary classes that will eventually go unused. With this approach you will start by implementing the User class, then the UserDao class, and eventually the service class that uses both of them and exposes some public methods to the outer world.
With this approach, if you are still agile and start from writing a unit test, you will end up with at least three unit tests (one for each class). And assuming that User is a dumb JavaBean with only getters and setters, you will have to write unit tests for those methods too, right?

This is a waste (especially unit testing getters and setters), because you could have a single unit test class with less code and still reach 100% code coverage.
Agile methodologies focus on eliminating waste – that’s why you should not test everything if you want to be really agile.

The only way is to start from the highest possible level of abstraction and then add the necessary classes just in time. An example follows…

What should you test?
I will not give you the adding-users-to-the-database example; instead, here is an example of a Struts2 action that retrieves announcements from the database and creates an RssChannel object (which is later processed by the view layer and returned as application/rss+xml content readable by RSS readers). Take a look at the attached RssAction.java file. This action uses other classes, such as:

  • AnnouncementDao
  • RssChannel
  • Announcement

Now take a look at the test for this action: RssActionTest.java. At first sight this unit test tests only com.bielu.annoboard.action.rss.RssAction, but in reality it also tests all the classes mentioned above.
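
The attached RssActionTest.java is the authoritative test; the sketch below only shows the rough shape such a high-level action test might take. The class names come from this post, but every method name (setAnnouncementDao, findAll, getChannel, getItems) and the Announcement constructor are my guesses, and the DAO is stubbed inline so no database is needed:

```java
import static org.junit.Assert.*;

import java.util.Arrays;
import java.util.List;

import org.junit.Test;

import com.opensymphony.xwork2.Action;

public class RssActionTest {

    @Test
    public void executeBuildsRssChannelFromAnnouncements() throws Exception {
        RssAction action = new RssAction();

        // Stub the DAO inline so the test exercises RssAction, RssChannel
        // and Announcement together without touching a real database.
        action.setAnnouncementDao(new AnnouncementDao() {
            public List<Announcement> findAll() {
                return Arrays.asList(new Announcement("Title", "Body"));
            }
        });

        assertEquals(Action.SUCCESS, action.execute());

        // One high-level assertion covers the whole collaboration.
        RssChannel channel = action.getChannel();
        assertEquals(1, channel.getItems().size());
    }
}
```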

The effect? Cobertura tells me that:

  • RssAction is covered at 100%
  • RssChannelHelper is covered at 92%
  • RssChannel is covered at 95%
  • AnnouncementDaoImpl (which is an implementation of AnnouncementDao) is covered at 100%
  • Announcement is covered at 100%

Voila! No waste and everything (or almost everything) is tested!

Use your common sense
As always, there are some pitfalls lurking – the opinions above are not always true. You should certainly NOT test the getters and setters in your JavaBeans. But what if you implement, for example, a toString(), equals() or compareTo() method? Should you test them or not?

I generally test classes from the highest possible abstraction level, but I sometimes also test lower-level classes. Here are some pieces of advice you should consider following:

  • it is a very good idea to test the toString() method
  • it is an absolute MUST to test (to death) the equals() and hashCode() methods and the contract between them (refer to the book “Effective Java”) – if you use Sets, Maps and collections in general, this is critical; I still remember a very small bug there in one of my previous projects that kept two seasoned Java developers hunting for a couple of hours – it is my personal coding horror (a minimal sketch of such a contract test follows this list)
  • it is also a good idea to test compareTo() and its contract relation to equals()
  • check with your code coverage tool what is still untested after the high-level tests
  • consider removing the code that is not tested and see whether it was used at all (see this post)
  • if the code is necessary but not tested, test it from the lower level
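
As promised above, here is a minimal sketch of an equals()/hashCode() contract test. The Tag value class is hypothetical and inlined only so the sketch compiles on its own; a thorough test would also cover transitivity and consistency across repeated calls:

```java
import static org.junit.Assert.*;

import org.junit.Test;

public class EqualsHashCodeContractTest {

    // Hypothetical two-field value class, inlined for a self-contained sketch.
    static class Tag {
        private final int id;
        private final String name;

        Tag(int id, String name) {
            this.id = id;
            this.name = name;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Tag)) return false;
            Tag other = (Tag) o;
            return id == other.id && name.equals(other.name);
        }

        @Override
        public int hashCode() {
            return 31 * id + name.hashCode();
        }
    }

    @Test
    public void equalsAndHashCodeObeyTheirContract() {
        Tag a = new Tag(1, "news");
        Tag b = new Tag(1, "news");
        Tag c = new Tag(2, "sports");

        assertTrue(a.equals(a));                  // reflexive
        assertTrue(a.equals(b) && b.equals(a));   // symmetric
        assertEquals(a.hashCode(), b.hashCode()); // equal objects -> equal hash codes
        assertFalse(a.equals(c));                 // unequal by content
        assertFalse(a.equals(null));              // never equal to null
    }
}
```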

Use these pieces of advice, and your common sense, wisely, and remember to eliminate waste whenever you can (yes – writing unnecessary unit tests is a waste).

I’m really curious about your opinions on writing unnecessary unit tests. Do you think writing unnecessary tests can even be dangerous? Do you consider unnecessary and overlapping unit tests a waste? Share your opinions here.

15 thoughts on “Can unit testing be a waste?”

  1. I think that we should estimate the cost of 100% coverage compared to 80%.
    You certainly know the 80/20 rule 😉

    The biggest problem with testing is that if you do not have a good coverage tool, you are wasting your time blindly testing your code. This happens when a team does not differentiate between unit testing and integration testing…

    I think that good unit testing is not synonymous with 100% coverage, but with a good balance between coding time and testing time.

  2. Indeed, “good unit testing is not synonymous with 100% coverage”. But on the other hand, how would you know which parts of your code are not tested? A code coverage tool is there to help you see what you are missing. Still, 100% code coverage should not be the goal of the project – your goal is to deliver a quality product and to satisfy your customers. I showed the code coverage numbers to prove that you do not have to test everything. If you start implementing your stuff by testing from the appropriate level, you will definitely balance coding and testing time.

    As far as the difference between unit testing and integration testing is concerned, for me it’s just a matter of naming. In my example, testing RssAction is integration testing from the perspective of the Announcement and AnnouncementDao classes, but unit testing from the perspective of the RssAction class. It really doesn’t matter what you call it – it suffices that your code is tested.

  3. You should start developing new features from unit tests. What’s more, you should start by writing tests at the highest possible level of abstraction. I mean that if you want to test an “add new users to the database” feature, you should start from the service class that takes input parameters, creates a user object and then stores it in the database using some kind of DAO object. You should not start by testing and implementing the User class or the UserDao class.

    Amen to that.

  4. I enjoyed your article and believe you are right on the money. The approach is much the same at all levels of testing (at least in my practice); we think of it as identifying the highest-risk areas of the particular target and working down from that prioritized list.

    Here is my question, and ultimately a tangential topic (I think)… How do you sell this mentality to someone who has never done it? In many cases they see it as introducing latency into the process. It’s understood that the numbers speak for themselves once you start to reap the benefits of code coverage and the myriad of additional data that helps you grow quality… but how do you handle the situation where you are making the first sell, and the team or individuals can’t see how this would help them (never mind the statistics that prove it to be true)?

    In my experience, we have tried to barter with developers and asked them for (x) amount of time, normally at least two iterations, to try this new idea. If, once (x) time is over, the new process doesn’t work, we devise a new plan and move ahead.

    How do you sell this concept for the first time?

  5. I will tell you how my colleague and excellent engineer (my personal guru) Wojciech Seliga sold this idea to the best damn team you could imagine working with.
    Basically, Wojciech prepared, together with me, a Java 5 (that was my part) and JUnit training for the team and other guys from Intel (our employer in those days). On the projector, he showed us every step, from the business problem, through the concept in his head, to the unit tests written with JUnit. No blog post like this one, and not even the best books (even Kent Beck’s), could do better than such a short but extremely valuable live demo of unit testing.

    After this training (it lasted one hour at most – but it was for experienced developers), I just said, “AHA! This is how I should start implementing features: by writing unit tests!!!” And believe me, I already knew how to write unit tests and was a seasoned developer at that time.

    Try this. If you are located in Europe, maybe you could ask Wojciech (or me) to run such a training for your teams.

    1. Coverage is only a negative metric – it tells you what you are missing, not what you did correctly
    2. Code coverage is not a measure of the usefulness of testing – amen to that
    3. Code coverage, on the other hand, helps you see that high-level tests alone are sufficient to really test the majority of your code
    4. I suggested using “common sense”, i.e. if you feel you need lower-level unit testing – go ahead! It’s your code
    5. There is no “one best solution” and this article is only my opinion – I’m glad you disagree and shared your comment here. Thanks
  6. I agree with the other commenters that the only value of coverage is the negative metric it provides: what isn’t tested? But beyond that, code coverage metrics are noise to me.

    I consider the “real” coverage metric to be the defects found outside of unit testing – fewer defects means better tests and more appropriate coverage. Too many times I have seen an adequate coverage metric on a project where no thought was given to what actually needed to be tested (thank YOU, VS.NET 2008 auto-test-generation tool!).

    These projects spend weeks in QA because of basic problems that could have been avoided with simple parameter validation and object interaction testing; or worse, they reach the field with problems that could have been foreseen and tested with a little creativity.

    In the end, I consider unit testing to be a developer’s tool, not a QA or PM tool. My unit tests are there to bolster my confidence in my code, and to make me comfortable making changes while knowing what has been affected. In that sense, the only “unnecessary” unit tests are the ones that don’t provide that confidence.

  7. I think you have to be careful when talking about waste. Elimination of waste is a concept borrowed from the Lean world, but Lean itself recognizes two types of waste: pure waste and necessary waste.

    Testing, virtually as a whole, could be considered necessary waste. Hey, if we developers wrote perfect code all the time, nobody would insist on spending months on testing. Why? Because testing adds no value to the system.

    But we don’t write perfect code, so testing is necessary. Necessary waste.

    Then you have to consider WHY we write unit tests. Part of the benefit is psychological in nature. Does it matter that you have a few redundant unit tests if they give developers the confidence to aggressively refactor? It depends. What’s the maintenance cost of those redundant tests versus the benefit of eliminating technical debt via refactoring?

    I’m also not totally comfortable with the idea of using coverage to tell me when I’ve done enough unit testing. Coverage can’t really tell me when something is sufficiently covered; it can only tell me when something is NOT covered. I need to write exactly as many unit tests as are necessary to exercise the conditions my code needs to respond to. If that works out to 50% coverage, then maybe I’ve got some unnecessary cruft in the code. If it works out to 100%, then, MAYBE, I’m done.

    Finally, I have to ask myself what my primary goal is. Is it to produce quality software, or to avoid writing unnecessary unit tests? I guess that depends on what the definition of quality is for my particular application but, again, I tend to prefer erring on the side of too much, waste-elimination philosophy notwithstanding.

    Bottom line: I’ve only worked with a few teams that had this particular problem and, usually, by the time a team can be considered unit testing veterans, most of the developers already know most of the tips in this article. For new teams, I still tend to prefer “test everything” and worry about redundant testing later.

  8. Excellent remarks – thanks for your comment.

    I just want to disagree with the point that testing, virtually as a whole, is necessary waste – it is not a waste at all. Even if we had perfect developers who wrote bugless code, there will always be another developer (maybe an even more perfect one) who will change our perfect code in the future. Tests are there to PREVENT him from breaking the existing code. It is definitely not a waste to have a set of unit tests that will help him understand what is going on in the system.

    Remember that tests are there for developers to verify that they implemented what they really intended, and to PREVENT problems in the future. They are not for fixing bugs.

    And one more thing – tests CAN bring huge value to the product – e.g. they can serve as a user’s guide (well, only for developers) on how to use the system.

  9. While I agree in principle, I disagree in the messy world known as software development, with its varying degrees of developer skill.

    Because I think testing Data Transfer Objects is tedious but necessary, I developed a testing framework that sits on top of the unit testing framework. With about 15 lines of code and some very simple constructor parameter naming rules, I can perform all of the testing for a DTO with 12 properties, whereas reaching 100% coverage on that class took close to 100 lines without the framework.

    While some may say it is overkill… 15 lines to test an entire DTO class makes it a “why not?” rather than a “goodness, I’ve got to write all that junk?”
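
    The commenter’s framework itself is not shown, so the following is only a rough Java analogue of the idea: a reflection-based helper that round-trips every simple bean property, so one call covers all of a DTO’s getters and setters. The class name, method names and supported types here are my own assumptions:

    ```java
    import static org.junit.Assert.*;

    import java.beans.BeanInfo;
    import java.beans.Introspector;
    import java.beans.PropertyDescriptor;
    import java.lang.reflect.Method;

    public final class DtoTester {

        private DtoTester() {}

        // Writes a sample value into every writable bean property and reads it
        // back, so a single call covers all getters and setters of a DTO.
        public static void assertGettersAndSettersRoundTrip(Object dto) throws Exception {
            BeanInfo info = Introspector.getBeanInfo(dto.getClass(), Object.class);
            for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
                Method setter = pd.getWriteMethod();
                Method getter = pd.getReadMethod();
                if (setter == null || getter == null) {
                    continue; // not a simple read/write property
                }
                Object sample = sampleValueFor(pd.getPropertyType());
                if (sample == null) {
                    continue; // no sample value known for this type
                }
                setter.invoke(dto, sample);
                assertEquals(pd.getName(), sample, getter.invoke(dto));
            }
        }

        private static Object sampleValueFor(Class<?> type) {
            if (type == String.class) return "sample";
            if (type == int.class || type == Integer.class) return Integer.valueOf(42);
            if (type == long.class || type == Long.class) return Long.valueOf(42L);
            if (type == boolean.class || type == Boolean.class) return Boolean.TRUE;
            return null; // extend with more types as needed
        }
    }
    ```

    With something like this, a whole DTO test shrinks to a single call such as DtoTester.assertGettersAndSettersRoundTrip(new MyDto()); where MyDto stands for any bean-style class.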

  10. Assuming that automated unit tests are created gradually over the development life cycle (or over multiple Sprints or iterations), unit tests never go to waste, for two reasons: a) developers get an opportunity to prioritize and cover the critical test scenarios first, and cover the rest as they move on, and b) automation optimizes the overall effort and provides additional bandwidth to increase coverage. However, one needs to be sensitive to the thin line where the pursuit of quality destroys value. Writing assertions that are either redundant or unnecessary will definitely consume time and lead to waste. This thought needs to be applied on a case-by-case basis on projects.

    http://se-thoughtograph.blogspot.com

  11. Thanks for this nice post about using a unit testing approach for X code. I want to ask a question about the unit testing method: is it correct to use unit testing together with regression testing or integration testing? Does it give good results or not?
