- Frequency: how rapidly do we want our feedback?
- Fidelity: how accurate do we want the red/green signal to be?
- Overhead: how much are we prepared to pay for it?
- Lifespan: how long is this software going to be around? This is a matter of probability as well as time.
Martin Fowler outlined three things we want feedback on:
- Is the software doing something useful for its users?
- Have I broken anything? I want to see every test fail at least once (see the sketch after this list).
- Is my codebase healthy, so that I can continue to build things quickly?
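To make "see every test fail at least once" concrete, here is a minimal red/green sketch using Python's built-in unittest. The `slugify` function and its behaviour are hypothetical examples chosen for illustration, not something from the discussion; the point is only the workflow of watching the test fail before making it pass.

```python
import unittest

class TestSlugify(unittest.TestCase):
    # Step 1 (red): this test was written before slugify existed, and
    # running it then failed with a NameError -- proof that the test
    # can actually detect a missing or broken implementation.
    def test_replaces_spaces_with_dashes(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2 (green): the simplest implementation that makes the test pass.
def slugify(text):
    return text.strip().lower().replace(" ", "-")

if __name__ == "__main__":
    unittest.main()
```

Seeing the red run first is what gives the green signal its fidelity: a test that has never failed might not be testing anything at all.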
The other issue is that to understand trade-offs you also have to understand the costs, and almost all the talk about TDD has been about its benefits. This neglect of costs is why people cannot comprehend that there is such a thing as test-induced damage. The same trade-off continuum applies elsewhere. Consider the cost of reliability: pushing from 99% to 99.999% costs exponentially more than getting to 99% in the first place. We must also consider criticality: high reliability matters for space shuttles and pacemakers, but it is the wrong target for an exploratory web site. The rule of never writing a line of production code without a test doesn't fit with trade-offs around criticality.
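To put numbers on the reliability trade-off, the sketch below is a back-of-the-envelope calculation (not from the discussion) that converts availability targets into allowed downtime per year: 99% permits roughly 3.65 days of downtime a year, while 99.999% permits only about 5.3 minutes.

```python
# Back-of-the-envelope: how much downtime per year each availability
# target allows. It illustrates why each extra "nine" is so much harder
# to buy than the previous one.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime_minutes = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} availability -> "
          f"{downtime_minutes:,.1f} minutes of downtime per year")
```

For a pacemaker the last line is the only acceptable one; for an exploratory web site the first line may be a perfectly sensible place to spend your effort.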
David agreed that it is good to mindfully trade off QA for initial speed, but some teams have taken programmer testing too far and no longer see the value of exploratory testing. Developers who think they can create high-enough quality software without QA are wrong: your tests may be green, but once the software is in production users do things you don't expect.
Putting programmers on call is also a feedback loop. Kent said: "The on-call is the feedback loop that teaches you what tests you didn't write."
References:
- Is TDD Dead? Video
- Is TDD Dead? Transcript