It’s usually uncouth to talk about test-driven development at the dinner table, but I’ve been on a tear with it recently, and my family and team members have heard enough – so I need to write it down. What have I been ranting about? How old TDD is.
The software industry has developed an interesting pattern. As an industry, we’re growing fast, and everyone is always happy to talk about how fast we’re growing. But what doesn’t get talked about is where we came from.
Software companies hire huge swathes of new graduates each year, and we always come up with new technologies and methodologies (new fashions, if you will). But most of these methodologies and technologies are actually old. Some are even ancient.
New graduates typically have no idea what came before, because they’re too busy learning what’s here now. And all the experienced programmers don’t have time to explain what came before because they’re too busy teaching the current stuff – or they’re in meetings. And so it is that our industry lies in a perpetual state of inexperience. The new not knowing the history, the old being too busy to teach it. In my view, this is the primary reason we continuously regurgitate old methods and tech ideas. One of those ideas is test-driven development. In my small effort to end the cycle, I’d like to write a bit about its history.
I’ve been searching for quite some time for the history of test-driven development, and one of the earliest sources I’ve found is a 2008 interview with Jerry Weinberg, who worked on Project Mercury (the U.S. program to put a man in Earth orbit before the Soviets). In it, Jerry recounts a co-worker teaching him to write tests first way back in 1957:
Interviewer: Computers don’t break down as they used to, so what’s the motivation for unit testing and test-first programming today?
Jerry: We didn’t call those things by those names back then, but if you look at my first book (Computer Programming Fundamentals, Leeds & Weinberg, first edition 1961 —MB) and many others since, you’ll see that was always the way we thought was the only logical way to do things. I learned it from Bernie Dimsdale, who learned it from von Neumann.
When I started in computing, I had nobody to teach me programming, so I read the manuals and taught myself. I thought I was pretty good, then I ran into Bernie (in 1957), who showed me how the really smart people did things. My ego was a bit shocked at first, but then I figured out that if von Neumann did things this way, I should.
So programmers have done test-first development since very near the beginning of software. Of course, things in those days were different, and these 1957 tests were manual. If you only had a few minutes with the computing machine, you’d write your program on punch cards and write your expected output elsewhere. When you got to see your program’s output, you could quickly compare the two and save everybody time. Jerry Weinberg said this was seen as “the only logical way to do things,” but there are deeper reasons as well. In my view, the best of these appears in a 1972 address by Edsger Dijkstra, who lays out the problem beautifully:
Today a usual technique is to make a program and then to test it. But: program testing can be a very effective way to show the presence of bugs, but is hopelessly inadequate for showing their absence. The only effective way to raise the confidence level of a program significantly is to give a convincing proof of its correctness. But one should not first make the program and then prove its correctness, because then the requirement of providing the proof would only increase the poor programmer’s burden. On the contrary: the programmer should let correctness proof and program grow hand in hand.
How often do programmers today allow a “convincing proof of correctness” to grow with their program? I fear it’s not often, but this is the goal of test-driven development. As Dijkstra said, a bug can easily be proven to exist by testing after programming, but can that sort of testing prove the absence of bugs? Not easily. When we write tests first, we provide, along with our program, evidence that each piece works as intended. This does not guarantee the absence of bugs caused by side effects, but it does guarantee (in theory) the absence of bugs in the intended behaviors. That is the primary argument for test-driven development: tests written before the program provide more convincing evidence of correctness.
There are more references to old-school test-driven development, but I won’t bog down this article with them. It will suffice to say that the practice was alive in many projects in the early days of computing. It was then in the ’90s that Kent Beck “rediscovered”, and re-popularized, test-driven development. He describes his discovery as follows:
The original description of TDD was in an ancient book about programming. It said you take the input tape, manually type in the output tape you expect, then program until the actual output tape matches the expected output. After I’d written the first xUnit framework in Smalltalk I remembered reading this and tried it out. That was the origin of TDD for me. When describing TDD to older programmers, I often hear, “Of course. How else could you program?” Therefore I refer to my role as “rediscovering” TDD.
Kent Beck’s TDD revival is why most programmers today are familiar with the practice. Even Robert Martin (one of TDD’s most ardent missionaries) learned it from Beck: the two pair programmed, and Martin was struck by the granularity of the practice. Beck would write a single line of test code, then a line of production code to make it pass. This, to my understanding, is the origin of the adage “red, green, refactor”. Eventually these steps were codified into rules, and The Three Laws of TDD were born:
- You must write a failing test before you write any production code.
- You must not write more of a test than is sufficient to fail, or fail to compile.
- You must not write more production code than is sufficient to make the currently failing test pass.
The Three Laws of TDD, in my view, are the gold standard of modern test-driven development. They put a programmer into a minute-by-minute cycle of writing a test, making it pass, and refactoring. In my experience, following these rules as prescribed by Robert Martin and Kent Beck increases productivity, enables the creation of some ingenious algorithms and designs, and improves code stability and developer confidence. I also think it’s more fun! I won’t be going back to test-after development, but I know many developers struggle to follow these rules.
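To make the cycle concrete, here’s a minimal sketch of what one red-green pass might leave behind. The `is_leap` function and its tests are my own hypothetical example, not from any of the sources above; the point is only that the test is written (and watched fail) before the production code that makes it pass.

```python
# A hypothetical red-green-refactor pass, compressed onto one page.
# Law 1: this test was written first and watched fail (red).
# Law 2: each assert was added one at a time, only far enough to fail.
def test_is_leap():
    assert is_leap(2024)        # divisible by 4
    assert not is_leap(2023)    # common year
    assert not is_leap(1900)    # century, not divisible by 400
    assert is_leap(2000)        # divisible by 400

# Law 3: only enough production code to make the failing test pass (green).
def is_leap(year):
    if year % 100 == 0:
        return year % 400 == 0
    return year % 4 == 0

test_is_leap()  # re-run; silence means green
```

The refactor step would then clean up any duplication while the test stays green, closing one loop of the minute-by-minute cycle.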
Many developers cite concerns with code quality, working at the edges of systems, and working with legacy code that doesn’t have tests. Indeed, this is where TDD gets difficult or even infeasible. How do we get around these barriers? There is a way, but that’s a topic for another article.