Process Stability and Software Engineering
I’m reading some of W. Edwards Deming’s writing, and I can’t help but draw parallels between manufacturing fundamentals and statistical process control on the one hand and the software development lifecycle on the other.
Deming showed scientific ways of managing hidden variables without making wild guesses about systems up-front. I have seen management systems come and go in Software Engineering, and I have seen backlash against things like the Capability Maturity Model and the various ISO certification schemes. We ended up coming back to simpler things like Agile. Why?
I think Agile has become a sort of cargo cult, but some parts of it fundamentally address the right questions about process when viewed through the lens of statistical process control. The concept of measuring velocity is a key part, as is tracking estimation errors. Where the cargo cult comes in is the assumption that the goal is to maximize velocity in a given sprint. I’d argue that the more valuable goal is a stable velocity over time, even if it seems slow.
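To make “stable velocity” concrete, here is a minimal sketch of an XmR (individuals) control chart, the classic SPC tool for individual measurements, applied to sprint velocity. The velocity numbers and the idea of treating sprints this way are illustrative assumptions, not a prescription:

```python
# A minimal XmR (individuals) control-chart sketch for sprint velocity.
# All velocity numbers below are invented for illustration.

def xmr_limits(baseline):
    """Compute individuals-chart control limits from a stable baseline period."""
    mean = sum(baseline) / len(baseline)
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    sigma = avg_mr / 1.128  # d2 constant for moving ranges of size 2
    return mean - 3 * sigma, mean + 3 * sigma

baseline = [21, 23, 19, 22, 24, 20]   # velocities from a stable stretch
lcl, ucl = xmr_limits(baseline)

for sprint, v in enumerate([22, 35, 20], start=len(baseline) + 1):
    status = "special cause?" if not (lcl <= v <= ucl) else "common variation"
    print(f"sprint {sprint}: velocity {v} -> {status}")
```

The point of the chart is Deming’s: a sprint inside the limits is common-cause variation and not worth reacting to, while a point outside suggests something structural changed (scope, staffing) that is worth investigating.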
Complicating matters a bit is that software development is a double-production process: it produces artifacts that in turn produce other behaviors and further artifacts. Now we are talking about another concept, reliability. How does reliability relate to quality and to process stability? I think there are statistical answers to this question.
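One illustrative statistical answer, under a strong assumption: if failures arrive roughly as a Poisson process, then the probability of failure-free operation over a window t follows the exponential model R(t) = exp(-t / MTBF), so quality work that raises MTBF translates directly into survival probability. The MTBF figures here are invented:

```python
# Exponential reliability model: assumes failures arrive as a Poisson
# process, which is a strong simplifying assumption. MTBF values invented.
import math

def reliability(t_hours, mtbf_hours):
    """Probability of surviving t_hours without failure under the model."""
    return math.exp(-t_hours / mtbf_hours)

# Halving the failure rate doubles MTBF and lifts week-long (168 h) reliability:
print(f"MTBF  500h: R(168h) = {reliability(168, 500):.3f}")
print(f"MTBF 1000h: R(168h) = {reliability(168, 1000):.3f}")
```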
I was listening to the CyberWire Daily podcast, where one of the hosts was discussing cybersecurity insurance. The thesis is that threat models are now so complex that they can’t be evaluated deterministically. Statistical models, however, are powerful enough to find very good approximate solutions to otherwise intractable problems.
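A toy sketch of what such a statistical model might look like: a Monte Carlo estimate of expected annual loss, the kind of quantity an insurer might price against. The frequency and severity distributions, and every parameter, are invented purely for illustration:

```python
# Toy Monte Carlo model of annual cyber-incident losses.
# All distributions and parameters are invented for illustration.
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def simulate_annual_loss():
    # Frequency: each of 250 working days carries a small incident probability.
    incidents = sum(random.random() < 0.01 for _ in range(250))
    # Severity: heavy-tailed cost per incident (lognormal).
    return sum(random.lognormvariate(10, 1.5) for _ in range(incidents))

losses = [simulate_annual_loss() for _ in range(10_000)]
expected_loss = sum(losses) / len(losses)
print(f"estimated expected annual loss: ${expected_loss:,.0f}")
```

No closed-form answer is needed here: simulation turns an intractable joint model into an average, which is exactly the power the podcast thesis points at.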
This is going to be a short post, but hopefully it sows some seeds for me to return to in depth in a later series of posts.