Good tools for testing matter because the temptation to skimp on testing is real even for true believers: anything that reduces the friction and pain associated with actually adding tests therefore has a disproportionate effect on adoption and implementation rates.
I think there are several reasons the temptation to forgo writing tests is so strong for most people.
The first is that code is sometimes implemented correctly without writing any tests. There's no real temptation to omit the mandatory boilerplate at the top of a Java program, because we know the program can never work without it. Tests are different: it is possible to produce correct programs (and correct analytical processes) without writing them. So even if you are intellectually convinced, and conditioned by experience, to believe that it is ultimately faster to produce reliable results by following a test-driven methodology, there's always the lingering memory of those rare occasions when things did just work first time, without the "extra" work of writing tests.
The second reason it's tempting to skimp on writing tests is that this is a support activity rather than the main task at hand, making the testing part seem less glamorous and more of a chore. (Everyone is familiar with Abraham Lincoln's "Give me six hours to chop down a tree and I will spend the first four sharpening the axe", but how many adopt his principle?)
Perhaps the final reason for skimping on test code is that large programs and processes typically start life as small programs that we may not intend to use repeatedly. This is especially true in data analysis. While no task is so simple that it cannot be botched, the benefits of and need for systematic testing grow with project size. The dynamic of knocking up a script to calculate something, then later finding yourself using and modifying that script regularly, gradually extending it to handle ever more cases, is very typical of some kinds of development and analysis, but carries a high risk. The frequent result is that by the time anyone realises the code really needs a test suite, it has grown complex, undocumented and hard to back-fill with tests.
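One low-friction way to back-fill tests onto code that has grown this way is a reference test (sometimes called a characterization test): capture the output the code currently produces, then assert that future runs reproduce it. Here is a minimal sketch using Python's built-in unittest, with a hypothetical `summarise` function and reference filename standing in for the organically grown script:

```python
import json
import os
import unittest

def summarise(values):
    # Hypothetical analysis function standing in for an
    # organically grown script.
    n = len(values)
    return {"n": n, "mean": sum(values) / n, "max": max(values)}

REFERENCE = "summary_reference.json"  # hypothetical reference file

class TestSummarise(unittest.TestCase):
    def test_against_reference(self):
        actual = summarise([3, 1, 4, 1, 5, 9])
        if not os.path.exists(REFERENCE):
            # First run: record current behaviour as the reference.
            with open(REFERENCE, "w") as f:
                json.dump(actual, f)
        with open(REFERENCE) as f:
            expected = json.load(f)
        self.assertEqual(actual, expected)
```

The first run records whatever the code currently does; subsequent runs flag any change in behaviour. This does not prove the output was ever right, but it makes refactoring and extension far safer, which is usually the pressing problem with organically grown analysis code.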
The approaches we propose for test-driven data analysis are deliberately compatible with retrofitting, because in practice so much code develops in this organic way. If we ignored this reality, we would automatically exclude a large proportion, perhaps a majority, of analytical processes actually deployed. Since we believe the test-driven approach to data analysis has much to offer, we are keen to spread its benefits as widely as possible, so we need to account for analytical processes developed in the real world without TDDA as a primary focus.
Understanding that there is a natural temptation not to develop tests makes one thing clear: anything that makes testing less painful to implement, and easier to retrofit where it has been neglected, is likely to help adoption.
With these thoughts in mind, over the coming weeks and months, we will
have further posts on specific aspects of tooling for TDDA. The broad
plan is to discuss ideas we have already implemented in our own
data analysis suite, some as extensions to Python's built-in
unittest framework, and to start to extract and publish core
functionality from there in forms that are applicable to the broader
Python (and perhaps, in some cases, R) data analysis toolsets.
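To give a flavour of the kind of unittest extension meant here (the class and assertion names below are illustrative inventions, not the API of any published library), a TestCase subclass can add domain-specific assertions, such as an approximate comparison for sequences of computed numbers:

```python
import unittest

class DataTestCase(unittest.TestCase):
    """Illustrative unittest extension adding an assertion suited to
    numeric analysis results. (Names here are hypothetical.)"""

    def assertSequencesClose(self, actual, expected, places=7):
        # Compare element-wise to the given number of decimal places,
        # reporting the index of the first mismatch.
        self.assertEqual(len(actual), len(expected))
        for i, (a, e) in enumerate(zip(actual, expected)):
            self.assertAlmostEqual(a, e, places=places,
                                   msg="mismatch at index %d" % i)

class TestNormalisation(DataTestCase):
    def test_normalised_values(self):
        raw = [1.0, 2.0, 4.0]
        total = sum(raw)
        normalised = [x / total for x in raw]
        self.assertSequencesClose(normalised,
                                  [0.142857, 0.285714, 0.571429],
                                  places=5)
```

Test classes inherit from `DataTestCase` instead of `unittest.TestCase` and gain the extra assertion; everything else (discovery, runners, fixtures) works exactly as in stock unittest.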