My blog post the other day about giving up on test-first development attracted a lot of attention, not least from ‘Uncle Bob’ Martin, an agile pioneer who wrote an entertaining riposte to my comments on his ‘Clean Code’ blog. He correctly made the point that my experience of TDD is limited and that some of the problems that I encountered were typical of those starting out in TDD.
1. I said that TDD encouraged a conservative approach because developers (at least those who think the same way as me) were reluctant to make changes that break a large number of existing tests. Bob suggested that the problem here was that my tests were too tightly coupled with the code and that if tests are well designed then this shouldn’t be too much of a problem. Looking again at my tests, I reckon that they are indeed too tightly coupled to the code and that they could be redesigned to be more robust.
So, I think that Bob’s right here – this is a problem with my way of thinking and inexperience rather than something that’s inherent in TDD.
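To show the kind of coupling I mean, here is a small, hypothetical example (not taken from my actual code; the Basket class is invented for illustration). The first test pins down an internal representation, so any harmless refactoring breaks it; the second only checks observable behaviour through the public interface, so it survives refactoring.

```python
# A hypothetical illustration (not from my actual code) of test coupling.
# Prices are in pence to keep the arithmetic exact.

class Basket:
    def __init__(self):
        self._items = []                     # internal detail: list of (name, pence) tuples

    def add(self, name, pence):
        self._items.append((name, pence))

    def total(self):
        return sum(pence for _, pence in self._items)


def test_over_coupled():
    basket = Basket()
    basket.add("tea", 250)
    # Breaks as soon as the internal representation changes,
    # even though the observable behaviour is identical.
    assert basket._items == [("tea", 250)]


def test_behaviour_focused():
    basket = Basket()
    basket.add("tea", 250)
    basket.add("milk", 120)
    # Relies only on the public interface, so the internals can be
    # refactored (to a dict, a database, ...) without breaking it.
    assert basket.total() == 370
```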
2. I made the point that TDD encouraged a focus on detail because the aim was to write code that passed the tests. In fact, one of the things I read when getting started with TDD was ‘Uncle Bob’s three rules of TDD’:
- You are not allowed to write any production code unless it is to make a failing unit test pass.
- You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
- You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
If this isn’t advocating a focus on detail, then I don’t know what it’s saying.
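To make that concrete, here is a deliberately tiny, hypothetical example of what following the rules to the letter produces (the leap-year function is invented for illustration):

```python
# A deliberately tiny, hypothetical example of following the three rules
# to the letter.

# Rule 2: write no more of a test than is sufficient to fail.
def test_2024_is_a_leap_year():
    assert is_leap_year(2024)    # fails first: is_leap_year does not exist yet

# Rules 1 and 3: write no more production code than is sufficient to
# make that one failing test pass. This is all the rules license:
def is_leap_year(year):
    return True

# The century and 400-year rules only appear when later tests
# (for 1900, 2000, ...) force them in, one small step at a time.
```

Each step here is legitimate under the rules, and each step is about detail; the overall shape of the solution only emerges test by test.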
Bob says that ‘Code is about detail; But this doesn’t mean you aren’t thinking about the problem as a whole‘. But how do you think about the problem? Maybe Bob can keep it all in his head but I think about problems by developing abstractions and denoting these in some way. I don’t like notations like the UML so I do it as a program. So how do we think small AND think big when the only code we are allowed to write is code that makes a failing test pass? Or have you changed your mind since writing your three rules of TDD, Bob?
3. I made the point that TDD encouraged you to choose testable designs rather than the best designs for a particular problem. Bob was pretty scathing about this and stated unequivocally:
“Something that is hard to test is badly designed”
But we know that systems made up of distributed communicating processes or systems that use learning algorithms are hard to test because they can be non-deterministic – the same input does not always lead to the same output. So, according to Bob, system designs with parallelism or systems that learn are badly designed systems. Bob, I reckon you should take this up with the designers of AlphaGo!
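As a small, hypothetical illustration of what I mean (nothing to do with AlphaGo itself), consider two threads updating a shared counter without synchronisation: the same input can produce different outputs from one run to the next, so a simple assert-equals test is flaky by construction.

```python
# A small, hypothetical illustration of test-resistant non-determinism:
# two threads update a shared counter without synchronisation, so the
# same input can yield different outputs from one run to the next.
import threading

def count_in_parallel(increments):
    counter = {"value": 0}

    def worker():
        for _ in range(increments):
            current = counter["value"]        # read ...
            counter["value"] = current + 1    # ... write; another thread may
                                              # have updated the value in between

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter["value"]

# Often prints 200000, but there is no guarantee: a test that asserts an
# exact value here will pass on some runs and fail on others.
print(count_in_parallel(100_000))
```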
4. I said that TDD didn’t help in dealing with unexpected inputs from messy real data. I don’t think I expressed myself very well in my blog here – obviously, as Bob says, TDD doesn’t defend against things you didn’t anticipate. My problem is that proponents of TDD seem to suggest that TDD is all you need. Actually, if you want to write reliable systems, you can’t just rely on testing.
Bob suggests that there’s nothing you can do about unanticipated events except try to anticipate them. To use Bob’s own words, this is ‘the highest order of drivel’. We have been building critical systems for more than 30 years that cope with unexpected events and data every day and carry on working just fine.
It’s not cheap, but we do it by defining a ‘safe operating envelope’ for the software and then analysing the code to ensure that it will always operate within that envelope, irrespective of what events occur. We use informal or formal arguments, supported by tools such as static analysers and model checkers, to provide convincing evidence that the system cannot be driven into an unsafe state.
That’s how we can send systems to Mars that run for years longer than their design lifetime. Accidents still happen, but they are very rare indeed when we put our minds to building dependable systems.
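To make the ‘safe operating envelope’ idea a little more concrete, here is a toy sketch with made-up channel names and limits; it is nothing like real flight software, where the envelope is enforced and then verified with static analysis and model checking rather than illustrated with a print statement.

```python
# A toy sketch of the 'safe operating envelope' idea, with made-up channel
# names and limits; real systems enforce and verify this very differently.

SAFE_ENVELOPE = {
    "valve_open_percent": (0.0, 80.0),    # never command the valve fully open
    "heater_power_watts": (0.0, 150.0),
}

def clamp_to_envelope(channel, requested):
    """Return the value nearest to `requested` that lies inside the envelope."""
    low, high = SAFE_ENVELOPE[channel]
    return min(max(requested, low), high)

def command_actuator(channel, requested):
    """Pass a command to the hardware, but never one outside the envelope."""
    safe = clamp_to_envelope(channel, requested)
    if safe != requested:
        print(f"envelope violation: {channel} requested {requested}, sent {safe}")
    print(f"-> hardware: {channel} = {safe}")

# A confused controller asks for something unsafe; the envelope holds anyway.
command_actuator("valve_open_percent", 250.0)
```

The job of the analysis tools is to provide evidence that a check like this is applied on every path to the hardware, whatever sequence of events occurs.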
Just a final word about the space accidents that Bob quotes. I don’t know about the Apollo 1 fire or the Apollo 13 explosion but the Challenger and Columbia disasters were not unanticipated events. Engineering analysis had revealed a significant risk of a catastrophic accident and engineers recommended against the launch of Challenger in low temperatures. But NASA management overruled them and took the view that test results and operating experience meant that the chances of an accident were minimal. These were good examples of Dijkstra’s maxim that:
Testing shows the presence but not the absence of bugs
I think that TDD has contributed a great deal to software engineering. Automated regression testing is unequivocally a good thing that you can use whether or not you write tests before the code. Writing tests before the code can help clarify a specification and I’ll continue to use the approach when it’s appropriate to do so (e.g. testing APIs). I don’t intend to spend a lot more time learning more about it or consulting a coach because when it works for me, it works well enough to be useful. And, as a pragmatic engineer, when it doesn’t work for me, I’ll do things some other way.
Understandably, TDD experts promote the approach but they do themselves a disservice by failing to acknowledge that TDD isn’t perfect and by failing to discuss the classes of systems where TDD is less effective.
We can only advance software engineering if we understand the scope and limitations as well as the benefits of the methods that are used.