The article doesn't tell you how; it's more a handful of thoughts on how to do so, without any real data (as the author admits).
>But I think you should take more away than a handful of application-wide metrics. You should take away a preference for statistical and empirical consideration. Figure out how to quantify trends you observe and form hypotheses about how they impact code quality. Then do your best to measure them in terms of outcomes with the application. We lack a laboratory and we lack non-proprietary data, but that doesn’t stop us from taking the lessons of the scientific method and applying them as best we can.
Someone has already done this. Read chapters 8, 9, and 23 of Oram and Wilson's *Making Software*.
Good talk, and I'm left wondering what the book's name is.
You can find examples of what I'm looking for in *Making Software*, which contains a meta-analysis of various studies, including studies about TDD.
These days, if a practitioner can't show that his practice works, his words should be dismissed as snake oil. If you don't have the book, there's a presentation on the topic from the author.
In this case I'd like to see data that Katas are effective for learning TDD (if that's your claim).
I think "most" is kind of a rough statement. Granted, there were some truly gifted people in the '70s who rocked our world.
I think we have a LOT of research papers to sift through, and the legacy papers tend to stay highlighted over time. I can imagine that if you sifted through everything that existed back then, you might be saying something similar.
I've found a few great ones, and if you are heavy into software engineering, you may share my enthusiasm for a book that came out several years ago: Greg Wilson - Making Software: What Really Works, and Why We Believe It
He also has a talk where he references some of those papers: Greg Wilson talk
He comes off strong, but he backs it up with research, which I appreciate. It's more about bringing data to the table if you have something you'd like to discuss. Somewhat heavy-handed, but there are good papers referenced there that I've read.
The continual re-evaluation of Conway's law, and the fact that its research still holds true today, is something I continually enjoy (although it originates from a 1967 study, reinforcing your point about seminal papers from that era).
http://www.amazon.com/Making-Software-Really-Works-Believe/dp/0596808321/ref=ntt_at_ep_dpt_2
Book written by the same person.
There are other ways to solve the problem you're talking about, but if you want to hold on to some simplistic beliefs about a silver bullet, then go right ahead; I won't bug you anymore.
Honestly, it's better that you keep believing all of this stuff -- it keeps my salary nice and high when I consistently lead highly productive teams at my org while other teams don't test their assumptions, believe in fairy tales like "strongly typed languages are always better," and end up killing their own output because of it!
If you want to open your mind, I suggest this book: https://www.amazon.com/Making-Software-Really-Works-Believe/dp/0596808321
It really makes a strong case for believing evidence-based arguments instead of just intuitions that people agree on (and that are often wrong). There's a whole chapter in there about this exact topic!