It has been one of my long-term sources of frustration that developers and their managers vigorously debate various positions on how to create software better without any shred of evidence or measurement. Usually, anecdotes and opinion masquerade as facts, and blog posts are cited as evidence. It’s as if the scientific method completely bypassed the software industry. So, when I found this presentation titled Bits of Evidence (What we actually know about software development and why we believe it’s true) by Greg Wilson, I was constantly nodding in agreement.
Contrasting the situation with medical research, where randomized double-blind studies are the norm, he writes:
… even the best of us aren’t doing what we expect the makers of acne creams to do.
While there may be a growing emphasis on empirical studies in academic publications, that emphasis has not reached practitioners in the field. Without a wider understanding of empirical evidence gathered through the scientific method, the industry’s state of the art will remain woefully inadequate.
I’ve written earlier about the Lutz Prechelt paper; why would you want to get into a debate about language productivity without considering the evidence it offers? Similarly, when you estimate, or ask others for estimates, shouldn’t you consider Aranda and Easterbrook’s paper on the effects of anchoring? Or, when structuring software teams, consider Nagappan’s papers from Microsoft Research showing that bug counts do not depend on physical distance within teams, but do depend on organizational distance.
As Wilson asks, shouldn’t our development practices be built around these facts? There is a better way. Should we call it “evidence based software development?”