Many, if not most, within the software world have come to think of Test-Driven Development (TDD) narrowly as a process unique to the development of software. Many discussions and debates have centered around the use of specific tools or techniques designed to exercise every corner of code against a predefined list of requirements with the purpose of demonstrating that a given codebase performs as described. As happens all too frequently, it becomes difficult to separate the signal from the noise and the principle from the practice, and so we become lost in the implementation details and lose sight of the original intent. Therefore, let’s take a step back and aim to reconsider the conversation surrounding Test-Driven Development: specifically, what is it, and how can it help your business?
Within the context of a business, the purpose of software is to add value to that business. Because of this, one of the key challenges of any development effort is the mapping of business requirements and objectives into corresponding system requirements. Following this, the question becomes: how do you write software to fulfill these requirements, and as your project and its codebase grow, how do you ensure that your software continues to meet these requirements?
Test-Driven Development, or TDD, is an organizational methodology for writing software by writing tests that describe and exercise that software. At its core, TDD is a repetition of three steps: write a failing test, make it pass, refactor.
The notion is that you start with a test that describes a piece of functionality for whatever software feature is being developed. Running that test for the first time should produce a failure, because the code being tested hasn’t been written yet. Naturally, the next step is to go about implementing the feature, with the objective of making the original test pass. The third, and arguably most important, step of the cycle is to refactor the code.
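To make the cycle concrete, here is a minimal sketch using pytest-style tests in Python. The `apply_discount` function and its pricing rules are an invented example for illustration, not something prescribed by TDD itself:

```python
# test_pricing.py -- pytest-style tests; run with `pytest test_pricing.py`

def apply_discount(price: float, percent: float) -> float:
    """Step 2 (green): the simplest code that makes the tests pass.

    Step 3 (refactor): with the tests below as a safety net, this body
    can later be reworked (input validation, rounding rules, a pricing
    class) while the assertions keep guaranteeing the same observable
    behavior.
    """
    return price * (1 - percent / 100)


# Step 1 (red): these tests are written first, describing the desired
# behavior. Run before apply_discount exists, they fail; that failure
# is the starting point of the cycle.
def test_ten_percent_off():
    assert abs(apply_discount(100.0, 10) - 90.0) < 1e-9


def test_zero_discount_leaves_price_unchanged():
    assert abs(apply_discount(59.99, 0) - 59.99) < 1e-9
```

Running `pytest` once before the function exists shows red; running it again after the implementation shows green, and from then on every refactoring is checked against the same tests.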
Writing software is challenging not only because of its intrinsic qualities but also because of its dynamic nature. Business requirements can shift constantly, and as a result, the software requirements need to be able to adapt. The code that’s written today will suit today’s needs, but tomorrow, or a month from tomorrow, something is going to change, and as a result, today’s code may no longer be the best way to solve a particular problem. By introducing a mechanism for comparing what the software is doing to a record of what the software should be doing, we gain a powerful tool that allows us to fearlessly rip apart, rewrite, and otherwise rewire our code to best meet today’s requirements, because we can be confident that so long as all of our tests pass, the behavior of the system remains the same.
A 2014 review conducted by the University of Helsinki found that in many cases TDD yielded positive effects in terms of defects, quality, complexity, and maintainability, but that these improvements can come at the cost of increased development effort. This isn’t a surprising conclusion, given that writing additional tests requires additional effort, but what studies of this nature tend to leave out is the key benefit of TDD: feedback.
Closing the feedback loop means shortening the amount of time it takes to make a change and learn something. The quicker you can do this, the more agile you can be. Most metrics that try to quantify development productivity relate units of work to time, and by reducing the time it takes to receive actionable feedback, TDD enables more efficient use of that precious resource. It’s been said that time is money, so the economic benefit here is self-evident. If you’ve ever heard someone mention a “shift left” within the context of the software development lifecycle, this is exactly what they’re referring to.
An additional benefit to having a test suite for your application is that you can automate it. Not only can developers run these tests as part of the development lifecycle, but they can be run by other systems as part of smoke testing or a CI/CD pipeline. By automating the steps to deliver code to market it enables more rapid delivery of the product as a whole, which in turn expedites the feedback you can receive from the users, which can then be interpreted and applied to the next iteration.
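As one illustration of that automation, a test suite might be wired into a pipeline with a configuration along these lines, sketched here as a GitHub Actions workflow. The file path, Python version, and `pytest` command are illustrative assumptions, not the only way to do it:

```yaml
# .github/workflows/tests.yml
# Runs the test suite automatically on every push and pull request,
# so feedback arrives before code is merged or deployed.
name: tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pytest
      - run: pytest   # a single red test stops the pipeline here
```

The same suite the developer runs locally becomes the gate for delivery, which is what keeps automated releases safe.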
The synergistic effects of these two points are clear: with a test suite that enables developers to continuously adapt the codebase to the needs of the business, and the ability for the business to quickly solicit feedback from its customers, the result is a system that enables teams to deliver more value from less work.
Test-Driven Development is but one of many different approaches to testing software. The important part, however you choose to implement it, is that it delivers value to your organization. Like any other discipline, you should feel free to use it, adapt it as you see fit, or not use it at all, based entirely upon the needs of your organization. That being said, if you do choose to walk the TDD path, as the prophet Clapton foretold: “It’s in the way that you use it.”
Like any other tool, the effectiveness and value of TDD are dependent on its implementation. Too often in software development various tools or techniques are turned into scapegoats to explain the failures or shortcomings of a project, but it’s a poor craftsman that blames his tools. So let’s focus on how to get the most out of TDD.
The last, and most important, step of TDD is to refactor the code, but for TDD to work that’s likely not the only refactoring that needs to happen. Should you choose to implement some form of Test-Driven Development, it’s important that the surrounding culture also adapt to support this style of development philosophy.
In the case of TDD, the most common failure tends to be that the continuous cycle of writing tests, making them pass, and then refactoring is never actually completed, but rather is short-circuited by skipping over the refactoring step. More often than not the issue isn’t TDD, or any other specific technique, but rather the very culture that the software is developed within.
Not unlike documentation, refactoring seems to be one of the first things thrown overboard when deadlines or budgets constrict; after all, someone has to write all of those tests. And so it follows that during crunch time the refactoring, the tests, the documentation, and anything else that isn’t considered mission critical are jettisoned. What invariably follows is a steady decline in the overall quality of the codebase and, in its place, an accumulation of technical debt.
Refactoring is the step of the process that strives to transform code that simply works into a codebase that is cleaner and more maintainable. It’s the part that allows developers to take a step back, recognize patterns within the overall system, and make better-informed decisions about how those patterns might be adapted to be more versatile, or to recognize emerging anti-patterns and take steps to mitigate them. It may even be the difference between recognizing when it’s time to split the overall architecture into discrete microservices and missing that moment, among a myriad of other potential improvements. Without this step, you can miss opportunities for introspection and incremental improvement, and ultimately your codebase risks a gradual (or not so gradual) transformation into a big ball of mud.
To reap the benefits of Test-Driven Development, it’s important that the culture take the long view of the process and its potential, and invest the time to apply the complete “red, green, refactor” cycle, not just the bits that are expedient.
When it comes to system design as a whole, too much time is often spent planning the specifics of a system without actually writing any code. A “green field” project can be exciting, with all of the anticipatory potential associated with a clean sheet of paper. Engineers and architects alike fall victim to try-this-itis, advocating for a flavor-of-the-month technology and ignoring the warning signs that it might not be a good fit for this particular project in favor of the opportunity to try something new or cool. Fast forward a few months and the framework-du-jour is no longer meeting the needs of the team, the quality of the code is suffering, and we’re back to the part where the technology becomes the scapegoat for the poor outcome of the project.
At the end of the day what’s important is that the technology, methodology, or framework used to facilitate TDD works for the team and for the business. If you’re hamstrung by your tools, not only are they not adding value, but they’re actively working against you. Take care in thoroughly evaluating whatever framework you intend to use, and take the time to understand exactly how you’ll use it. This approach holds true for all design decisions, not just those related to TDD!
In a nutshell, the idea of something like TDD is to provide a framework for collecting feedback in a more expedient manner, not simply a technique for preventing bugs from appearing in your codebase. Whichever approach you choose, the key should be that it enables your business and its software to adapt more quickly to changing trends and business needs. Along the way, be mindful that TDD and its counterparts aren’t something you can simply ask your development team to implement and then forget about. To effectively leverage these techniques, care must be taken to embrace and enable the approach at a cultural level before diving into implementation details like frameworks and tools.