Episode 167: The History of JUnit and the Future of Testing with Kent Beck

Filed in Episodes on September 26, 2010 | 21 Comments

Recording Venue: Skype
Guest: Kent Beck

Host: Martin

In this episode we talk with Kent Beck about a tiny little thing he created many years ago that has changed the daily work of many, many programmers in the world: automated unit testing and JUnit. We briefly revisit the history of JUnit, talk about how things began, and look at what has happened since. We discuss test-driven development (TDD), when to do TDD and when not to, and chat about experiences in the wild. The episode closes with some personal thoughts about the future of testing and software engineering in general.



Comments (21)


  1. Bubba says:

    Your site is totally unreadable in Firefox. OK in Chrome.

  2. marhe says:

    I missed a topic in your podcast: test coverage tools like Clover (every source line has to be tested at least once). I’ve worked on a project where this was a dogmatic issue. My feeling was that we wasted a lot of time on trivialities. Is there a golden mean, a common consensus on that problem?

    • Martin Lippert says:

      I don’t know what the common consensus on that problem is. My experience is: if you really do test-driven development, the test coverage of your code will be quite high (because you only implement code to get a test green). If no test is failing, you don’t work on the production code… :-)

      Having test coverage as a metric without any real meaning doesn’t make much sense to me. Doing test-driven development to automatically get a high test coverage makes a lot of sense to me.
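      The red/green cycle Martin describes can be sketched in a few lines. This is a made-up illustration, not from the episode, using Python’s stdlib `unittest` (the same xUnit family as JUnit, so the shape carries over to `@Test` methods and `assertEquals` in JUnit 4); `leap_year` is a hypothetical example function:

```python
import unittest

# Written test-first: the tests below existed (and failed) before
# leap_year was implemented, and the implementation was written just
# to make them pass. Every branch of the production code is therefore
# exercised, and high coverage falls out as a by-product.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    def test_divisible_by_four_is_leap(self):
        self.assertTrue(leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_every_400_years_is_leap(self):
        self.assertTrue(leap_year(2000))
```

      Run with `python -m unittest` in the file’s directory; a coverage tool would report 100% line coverage here without anyone having aimed for the number.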

  3. If you are interested in the ‘rules’ feature mentioned in the interview, you might be interested in these blog posts. (Obviously this is self-marketing, but hopefully of the useful kind):



  4. matt dawson says:

    At one point in the interview the question came up of why teams have tests they don’t run and/or tests that are permanently broken. This seemed like a complete mystery to Martin and Kent.

    This issue comes up all the time in practice, especially when your tests are not beautifully designed. You have some big, complicated test that is failing after a refactoring. In all probability the problem is with the test, not the code under test. You are under deadline pressure. You have other feedback that tells you everything is working. Do you stop and spend two days debugging the test, or do you just comment it out?

    • Martin Lippert says:

      Hey Matt,

      I know such situations from practice. And I know that some people tend to comment out the test if they are under time pressure. But my observation is that this is often not the real problem. Many times it’s just a symptom of another, often bigger problem: the team is not test-driven and doesn’t have an appropriate Definition of Done that includes tests. Skipping tests under time pressure demonstrates that tests aren’t an integral part of programming for them. And skipping tests produces technical debt, in my experience. You can do that, but you lose the value those tests produced in the past. And you create a little time bomb for the team… If I had to create that technical debt for whatever reason or pressure, I would hurry to fix it before implementing the next feature.
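      One way to keep that debt visible instead of hiding it: mark the test as skipped rather than commenting it out, so it shows up in every test report until it is fixed. JUnit 4 has the `@Ignore` annotation for this; the hypothetical sketch below (not from the episode, `PaymentTest` is invented) shows the equivalent with Python’s stdlib `unittest`:

```python
import unittest

class PaymentTest(unittest.TestCase):
    # Instead of commenting out the broken test, skip it with a reason.
    # The skip (and its reason) appears in every test run's output, so
    # the technical debt cannot be silently forgotten.
    @unittest.skip("broken after pricing refactoring -- fix before next feature")
    def test_discount_applied(self):
        self.fail("not reached while skipped")

    def test_total_is_sum_of_items(self):
        self.assertEqual(sum([10, 20]), 30)
```

      Running `python -m unittest -v` reports the suite as passing but lists the skipped test with its reason, which is exactly the reminder a commented-out test never gives you.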

      Just my 2 cents,

  5. John Leger says:

    Great episode. An unexpected bonus was hearing Kent Beck use Goldilocks and the Three Bears as a metaphor for when too many tests are involved. I am looking forward to a discussion on Continuous Deployment; I agree that it is a large topic for future concerns and knowledge. TDD adoption is social, and there still exists a mire of fear behind it. Fear of the unknown. Every professional who takes pride in their contribution to the craftsmanship of creating software needs to listen to this podcast. Thanks to the host and to Beck for this talk!


  6. A very interesting podcast for a young developer like me. Often I’m confronted with smaller development tasks where I’m not really sure how to solve them and work in a more exploratory way. I got confirmation that writing tests in that case isn’t necessary if it is not yet clear whether the code will be used in further development. Thanks, Mr. Beck and SE-Radio.


