What we talk about when we talk about Waterfall vs Agile

  • Writer: Russell Andrews
  • Jul 25
  • 3 min read

“Either once only, or every day. If you do something once it’s exciting, and if you do it every day it’s exciting. But if you do it, say, twice or just almost every day, it’s not good any more.”

―Andy Warhol


For a long time now there has been a debate around waterfall vs agile. Usually this centres around the idea that waterfall is suited to projects where the outcome is certain and we can be confident of our designs, and agile to more free-flowing environments where the outcome is less certain.


While entertaining, these ‘framework wars’ are not particularly useful for management and executives trying to decide on a delivery model. Often we see this confusion leading to the so-called ‘dual delivery mode’, where management essentially hedges their bets and decides to do both. While the instinct behind this is understandable, it often leads to a ‘worst of both worlds’ approach, where we lose some of the diligence of waterfall but gain none of the benefits of small iterative delivery.


To help with this, it is probably useful to define exactly what we mean by ‘waterfall’ and ‘agile’. I’m going to propose a simple model which I think will clarify what we are talking about here. After all, if you look at scrum, it does look an awful lot like really small projects in two-week increments. Which it is.

So here is my suggested definition: Waterfall is the belief that change is hard, and therefore we should do it as infrequently as possible, whereas agile is the belief that change is hard, and therefore we should do it as often as possible.


With waterfall, we focus on analysis and preparation, adding additional layers of documentation, testing and risk management to ensure our large batch change is successful. With agile, we try to strip down the process so we can make the change frequently, and get better at it by doing it often.


I’ll be clear - I’m firmly in the agile camp. I believe that most of the risk management activities we perform for large releases are not only ineffective, but probably increase the risk of the change failing.


However, we have to be sensible: not every change can be executed frequently. Some things are simply naturally expensive in both time and money. A frequent observation in larger organisations using scrum is that, due to organisational constraints, they are simply unable to deliver value every two weeks.


So how do we address this? Firstly, we need to be pragmatic and start where we are. Donald Reinertsen, in his excellent book ‘The Principles of Product Development Flow’, describes the optimal batch size for a piece of change as a function of holding cost (the cost of keeping things as they are) and transaction cost (the cost per batch of making the change). Reinertsen suggests most organisations have a batch size around twice as large as it should be.
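Reinertsen’s trade-off can be sketched as a toy cost model (an illustrative EOQ-style sketch with made-up numbers, not his exact formulation): total cost is the sum of transaction costs, which fall as batches get bigger, and holding costs, which rise.

```python
import math

def total_cost(batch_size, transaction_cost, holding_rate, total_work=100):
    """Toy U-curve: smaller batches mean paying the (fixed) transaction
    cost more often; larger batches mean finished work waits longer."""
    num_batches = total_work / batch_size
    return num_batches * transaction_cost + holding_rate * batch_size

def optimal_batch(transaction_cost, holding_rate, total_work=100):
    """Minimising total_cost gives an EOQ-style optimum batch size."""
    return math.sqrt(transaction_cost * total_work / holding_rate)

# With a hypothetical transaction cost of 4 and holding rate of 1,
# the optimal batch is 20 - and a batch of 40 ("twice as large as it
# should be") costs noticeably more than the optimum.
print(optimal_batch(4, 1))    # 20.0
print(total_cost(20, 4, 1))   # 40.0 at the optimum
print(total_cost(40, 4, 1))   # 50.0 at double the optimal batch
```

The numbers are hypothetical; the point is the shape of the curve. Cutting transaction cost (the 4 above) pulls the optimal batch size down, which is exactly what working on delivery costs achieves over time.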


So we have two options:

  1. Shock therapy: Cut your batch size in half. Halve the size of your projects; release twice as often. Use this as a forcing function to cut transaction costs. This is like signing up for a half marathon to force yourself to lose weight and get some exercise.

  2. Easy does it: Work to reduce your transaction costs and, as you do, gradually deliver more frequently.


This framing of waterfall vs agile can also be useful to explain why some organisations struggle to get value out of agile - especially if they have taken (either deliberately or accidentally) a hybrid approach.


This usually takes the form of a large batch business case encapsulating all of the scope of work, which is then decomposed into smaller units of work delivered in small batches. These small batches are then aggregated back up into a large release, which goes through a large batch test and release cycle. In these instances, organisations gain the benefit of short, partial learning cycles, but not the full end-to-end, customer-centric learning cycles they might get otherwise. Better than nothing, though.


In this situation, the goal is to look at the large batches in the lifecycle of the work, understand the transaction costs driving that batch size, and reduce those costs to shrink the batch. This means: deployment pipelines, DevOps, automated testing, lean business cases, project-to-product, and so on.


If you’d like to learn more about how you can make the evolution from large batch to small batch and get the benefit of iterative delivery in a pragmatic way, reach out to russell@flowspring.nz

 
 