
Forecasting and Data Newsletter by Troy Magennis
Six Reasons Forecasting Fails

In this newsletter:

  1. Coming workshops and events you might be interested in.
  2. Article: Six Reasons Forecasting Fails
  3. Myth of the Month: You Need Thousands of Samples
  4. Tool of the Month: Throughput Forecaster
  5. About Focused Objective and Troy Magennis
If this newsletter was forwarded to you, consider Subscribing here. If you like this content, please forward it to a colleague. If you didn't like this content, please email me why: Troy Magennis.
Got Metric or Forecasting Questions? Contact Me

Coming Workshops and Events


Forecasting + Metrics Combo - San Francisco, Chicago, Toronto, London
San Francisco: 6th-7th Feb (next week!), Chicago: 9th-10th Mar,
Toronto: 25th-26th Mar, London: 30th Apr.

With partners: Stockholm (Crisp): 4th-5th May, Hamburg (it-Agile): 7th-8th May

Flight Levels Architecture Atlanta: 12-13th Mar

Lean Agile London Conference 27-28th April

If you want a quick call to discuss the content, or help making the case to your boss, just email me.
Free Monthly Call on Metrics
Training Schedule
Zoom Call: Ask Me Anything on Metrics & Forecasting
Upcoming Workshops on Agile Forecasting and Metrics

Article: Six Reasons Forecasting Fails

Over the years, I’ve had successes and failures forecasting IT work and software projects. I’ve also had the chance to observe other people’s disasters and successes. This list is my top six reasons for failure, collected from those stories. There are other reasons, but these are very common and worth understanding.

1. Imaginary Start Date

Planning new ideas means assuming a start date for that work. Often this start date conveniently aligns with the beginning of a month, or with the moment another project is scheduled to ship. Starting just because another project ends, or because the calendar rolls over to a new month or quarter, should raise suspicions.

Key points

  • Be suspicious of calendar-aligned start dates - what if the prior work runs long?

  • "Started" needs to satisfy, at a minimum, the following constraints:

    • Do we have a team in place, and do they have the equipment they need?

    • Do those team members have the skills and training needed for the new work?

    • Do those team members know what they need to build?

  • New teams take time to form and ramp up to full speed - you need to account for this

  • Existing teams might need recovery time and time to stabilize prior deliveries

  • Compare options using duration, not delivery date, UNTIL you start

The impact

  • Every day after the assumed start date is a day late

  • Once you go on the record with a delivery date, people make assumptions using that date no matter what caveats you give ("ballpark," "rough estimate," "about," for example).

The remedy

  • Delay assuming a start date by forecasting and comparing duration during planning.

  • Once you have started development, you can communicate the delivery date based on probabilistic forecasting discussed later in this book.

2. Scope Expansion or Explosive Growth

Planning undeveloped ideas means there are unknowns. Depending on how innovative a feature or project is (see Five Levels of Innovation Uncertainty later), the difficulty and learning involved can be estimated. If the new work is a continuation of known work, growth will be minimal; otherwise it could be significant. Growth also has a relationship to delivery frequency, with higher growth expected for longer release cycles (people try to squeeze something extra into this release because they have to wait too long for the next one).

Key points

  • More innovativeness means more discovered growth

  • Longer periods between deliveries mean more discovered growth

  • Vague or abstract feature descriptions mean more growth than concise descriptions

The impact

  • Some work is forecast better than others (due to innovativeness and newness)

  • Significant scope change and added unplanned work forced into “this” release

The remedy

  • Account for innovativeness by multiplying the backlog size:

    Complexity / "newness" (based on Liz Keogh's work, covered below)        Multiply
    Just about everyone in the world has done this.                          1 x
    Lots of people have done this, including someone on our team.            1.25 x
    Someone in our company has done this, or we have access to expertise.    1.5 x
    Someone in the world did this, but not in our organization.              2 x
    Nobody in the world has ever done this.                                  ?
  • Account for delivery frequency (and work towards delivering more frequently) by multiplying the backlog size (a small sketch applying both multipliers follows this table):

    Release Frequency             Technically Easy    Technically Hard
    Continuous - every 2 weeks    1 x                 1.25 x
    Every 3 to 6 weeks            1.25 x              1.5 x
    Every 7 to 12 weeks           1.5 x               1.75 x
    Every 13 to 26 weeks          1.75 x              2 x
    26+ weeks                     2 x                 4 x
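
To make the arithmetic concrete, here is a minimal sketch (in Python) of applying both multipliers to a planned backlog. It assumes the two factors stack multiplicatively, which is one reasonable reading of the tables; the 60-item backlog and the chosen rows are invented for illustration.

    # Sketch: adjust a planned backlog for expected scope growth using the
    # two tables above. Assumes the factors stack multiplicatively (an
    # assumption, not stated in the article); inputs are invented examples.

    INNOVATION_MULTIPLIER = {
        "everyone has done this": 1.0,
        "lots of people, including someone on our team": 1.25,
        "someone in our company, or we have expertise": 1.5,
        "someone in the world, but not our organization": 2.0,
    }

    FREQUENCY_MULTIPLIER = {  # release cadence -> (technically easy, technically hard)
        "every 2 weeks": (1.0, 1.25),
        "3 to 6 weeks": (1.25, 1.5),
        "7 to 12 weeks": (1.5, 1.75),
        "13 to 26 weeks": (1.75, 2.0),
        "26+ weeks": (2.0, 4.0),
    }

    def adjusted_backlog(items, innovation, frequency, technically_hard=False):
        """Scale the planned item count by both growth factors."""
        freq = FREQUENCY_MULTIPLIER[frequency][1 if technically_hard else 0]
        return items * INNOVATION_MULTIPLIER[innovation] * freq

    # Example: 60 planned items, only done elsewhere in the world,
    # released every 13 to 26 weeks, technically hard -> 60 * 2.0 * 2.0 = 240.
    print(adjusted_backlog(60, "someone in the world, but not our organization",
                           "13 to 26 weeks", technically_hard=True))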

 

3. Split Rate Adjustment

This scope growth isn't newly discovered work; it is a conversion between the level of detail we use to plan and talk about work and the level of detail we need to develop it. When teams pull work into a sprint, they often split that work into multiple smaller pieces that are convenient for building. When we plan work, those items haven’t yet been analyzed and split. Here is the problem: historical throughput or velocity data are calculated using post-split work, and using that data without correction makes it appear (when forecasting) that we are delivering faster than we are. I’ve seen this time and time again. Teams are in trouble before they begin.

Key points

  • Delivered work is often split to fit development sprints and manage dependencies better

  • Planned work isn’t split for development when we forecast

  • Using historical throughput without correction, it appears we deliver 2x to 3x faster than we actually do

  • Unless you KNOW backlog work is fine-grained and unlikely to split or have defects, start with a split rate range of 1 to 3x

  • This splitting is essential and healthy - don’t try to reduce it, but you do need to account for it

The impact

  • Appearing to be delivering 2x faster than you are

  • Low impact if the backlog of work is groomed ready for development by a team (not saying this is a good use of time; I’d adjust the rate rather than make teams prematurely refine and split)

The remedy

  • When forecasting using historical delivery rate (throughput or velocity), correct the amount of work assuming ⅓ of items don't split, ⅓ split into two, and ⅓ split into three (an average split rate of 2). 

  • A rough guide is to multiply scope by 2 (or if your forecasting tools allow a split rate range adjustment of 1 to 3).

  • Compute your actual work split rate adjustment using historical data when you can (see the sketch below).
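
A minimal sketch of both remedies: computing a split rate from historical counts, and applying the default 1x-3x range when you have no history. The planned/delivered counts and function names below are illustrative, not real team data.

    import random

    # Sketch: correcting forecasts for work splitting. Counts and names are
    # illustrative, not real data.

    def historical_split_rate(planned_items, delivered_items):
        """Post-split items delivered divided by the items originally planned."""
        return delivered_items / planned_items

    def split_adjusted_scope(backlog_items, split_low=1.0, split_high=3.0, trials=10_000):
        """Sample a split rate uniformly from [split_low, split_high] per trial
        and return the average adjusted backlog (about 2x for the 1x-3x default)."""
        draws = [backlog_items * random.uniform(split_low, split_high) for _ in range(trials)]
        return sum(draws) / len(draws)

    # Example: 40 items planned last quarter became 85 delivered items after splitting.
    print(round(historical_split_rate(40, 85), 2))   # 2.12 -> use ~2x going forward
    # With no history, start with the default 1x-3x range on a 50-item backlog.
    print(round(split_adjusted_scope(50)))           # ~100 items of post-split work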

4. Excessive Utilization (Congestion Collapse)

Work flowing through delivery systems has a lot in common with vehicle and internet traffic. At high utilization of the road or network, travel takes longer. The insidious thing about high utilization is that its impact is minimal right up to the point where it grows massively. 

High utilization alone isn't enough to cause significant delays; the uncertainty of the work arrival rate and cycle-time (development time) of work plays a supporting role. Higher levels of uncertainty multiply the impact of utilization even further. Having high utilization and high uncertainty is a recipe for disaster. During development, we step into this high impact zone reasonably often, making it one of the major causes of delay.  

Key points

  • Delays occur at ever-increasing severity as utilization grows.

  • Delays increase even more with uncertainty in arrival rate and cycle-time.

  • Even modest uncertainty at high utilization causes 2-10x increases in delay time.

  • Your historical data contains work subjected to these high delay conditions.

The remedy

  • It would be easy to say "don't allocate teams to 100% utilization," but that probably isn't going to stop companies from doing it. 

  • Help teams understand that the economics favor adding capacity to avoid high-utilization conditions.

  • If you can't add capacity, try to reduce the uncertainty in arrival rate and cycle-time.

    • Define and honor work in progress limits

    • If the team is pursuing highly innovative work, reserve some capacity

    • Find the reasons unplanned work hits a team and look for ways to make that work planned or reserve capacity for it.

The science: Kingman's formula is a good predictor of waiting (queue) time for different utilization and uncertainty values. Software delivery systems encounter higher delays at higher utilization and higher variability, causing items estimated to be of similar effort to produce very different lead-times. Explore and play with various utilization and uncertainty inputs using this calculator -

https://observablehq.com/@troymagennis/how-does-utilization-impact-lead-time-of-work
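
For readers who prefer code to a calculator, here is a small sketch of Kingman's approximation for waiting time in a single-server queue (G/G/1). The utilization values and the 3-day mean cycle-time are illustrative assumptions, not figures from the article.

    # Sketch of Kingman's approximation for mean queue (waiting) time in a
    # G/G/1 system: W ≈ (ρ / (1 - ρ)) * ((ca² + cs²) / 2) * mean_service_time.
    # The example numbers are illustrative.

    def kingman_wait(utilization, ca, cs, mean_service_time):
        """Approximate mean wait time before work starts.
        utilization: fraction of capacity in use (0 < utilization < 1)
        ca, cs: coefficients of variation of arrival and service (cycle) times
        mean_service_time: average time to complete one item"""
        if not 0 < utilization < 1:
            raise ValueError("utilization must be strictly between 0 and 1")
        return (utilization / (1 - utilization)) * ((ca**2 + cs**2) / 2) * mean_service_time

    # Same variability, different utilization: delay explodes near 100%.
    # With a 3-day mean item: 0.6 -> 4.5 days wait, 0.8 -> 12, 0.9 -> 27, 0.95 -> 57.
    for u in (0.60, 0.80, 0.90, 0.95):
        print(u, round(kingman_wait(u, ca=1.0, cs=1.0, mean_service_time=3.0), 1))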
 

5. Misunderstood Parallel Scaling (adding people or teams)

Life would be simple if doubling the number of teams doubled the amount of work delivered. We know from computing that doubling the number of processors doesn't double performance; the improvement doesn't scale linearly. The amount of integration work needed to combine multiple streams of effort limits the improvement. If very little integration work is necessary between parallel teams, the improvement is close to optimal. If a lot of integration is needed, the improvement is minimal. 

Key points

  • Doubling the number of teams doesn't double delivery throughput

  • The more integration work needed to combine the parallel teams' work, the less improvement.

The remedy

  • Don't assume doubling the number of teams doubles the delivery rate

  • Minimize the amount of integration work required to combine work from multiple teams (for example, add a continuous integration process).

The science: Amdahl's Law predicts the performance improvement from adding parallel computing power. It defines the relationship between the number of parallel work streams and the split between parallelizable work and sequential integration work. 

https://observablehq.com/@troymagennis/how-much-improvement-do-i-get-by-adding-more-teams-or-people?collection=@troymagennis/agile-software-development
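
A small sketch of Amdahl's Law applied to teams rather than processors. The assumption that 80% of the effort parallelizes (20% is integration) is an invented example, not a measured figure.

    # Sketch of Amdahl's Law applied to adding teams: the sequential
    # (integration) share of the work limits the overall speedup.
    # The 20% integration figure below is an illustrative assumption.

    def amdahl_speedup(parallel_fraction, teams):
        """Speedup over one team when a fraction of the work can run in parallel."""
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / teams)

    # If 20% of the effort is integration work that can't be parallelized,
    # going from 1 to 2 teams gives ~1.67x, and 8 teams only ~3.33x.
    for n in (1, 2, 4, 8):
        print(n, round(amdahl_speedup(0.8, n), 2))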

6. Risks and Consequences

Doing new stuff has risks. In forecasting terms, risks mean delays caused by rework, or by having to do more than anticipated to get something to work. Some risks are knowable in advance; others catch teams by surprise and are unknowable. The real world has taught me that 8 out of 10 risks are knowable in advance, known by someone on the delivery team who may or may not have raised that risk early in the planning process (often they just weren't listened to).

Key points

  • The more innovative and new the work is to the team, the higher the chance of something surprising delaying delivery

  • There are knowable and unknowable risks in every project

  • 80% of risks are knowable (and known by someone) before they occur

  • You won't hear these risks if you don't ask, or if it's unsafe to raise them.

The impact

  • For highly innovative work the impact of risks is often enormous (the number one reason for delays)

  • When a risk delays work, reacting irrationally and emotionally blaming others causes people to hide other risks.

  • Risks have a likelihood and an impact. The high likelihood and high impact risks need action.

The remedy

  • Ask, "what could go wrong and delay the smooth completion of this work?"

  • Capture both the likelihood of occurrence and the impact of each risk, for example, "80% chance it won't perform as needed and we'll need another sprint's worth of tuning work" (see the sketch after this list).

  • Discuss ways to reduce the likelihood of risks, starting with the highest-impact risks first

  • Reward bad news as well as good news to get risks raised earlier (create safety)

  • Risks are an accepted and essential part of doing innovative and valuable work.
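
One way to fold captured risks into a forecast is a simple Monte Carlo over whether each risk occurs, sketched below. The two risks and their numbers are invented examples; this is just one way to act on likelihood and impact, not a prescribed method.

    import random

    # Sketch: fold captured risks (likelihood + impact) into a scope forecast
    # with a Monte Carlo simulation. The risks and numbers are invented examples.

    def risk_adjusted_scope(base_items, risks, trials=10_000):
        """Return the 50th and 85th percentile backlog size with risks applied.
        Each risk is (probability_of_occurring, extra_items_if_it_does)."""
        totals = []
        for _ in range(trials):
            extra = sum(impact for prob, impact in risks if random.random() < prob)
            totals.append(base_items + extra)
        totals.sort()
        return totals[int(0.50 * trials)], totals[int(0.85 * trials)]

    risks = [
        (0.80, 10),  # 80% chance it won't perform as needed: ~a sprint of tuning work
        (0.30, 15),  # 30% chance a dependency changes underneath us: rework
    ]
    print(risk_adjusted_scope(60, risks))  # roughly (70, 85) items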

Myth of the Month: You Need Thousands of Samples To Forecast Properly

No. You need about 7 samples. You are balancing recent and relevant data versus enough data to be a reliable predictor of the future. Initially, we need to use the data we capture as a check on our cognitive biases and wishful thinking. Think of data as that nagging voice you should hear when making a significant decision.

Here is some general guidance:

  • Fewer than 3 samples: you are better off guessing (estimating the likely range).

  • 4 to 6 samples: use the data, but be skeptical; look at the data to confirm your range and widen it if necessary.

  • 7+ samples: use the data, but be cautious of system changes that might mean the data no longer applies.

When using prior data samples, you can never be sure that the next sample isn’t something “new.” The more samples you have, the lower the chance of something new surprising you, but it is ALWAYS possible, so look for, and guard against, decisions that a single outlier would flip. 

In traditional statistics, it's not uncommon to use thousands of samples to gain statistical significance and reduce risk. In the software world, the system generating the data (our development process) changes fundamentally, and reasonably often, so we need to take our chances with the most current and reliable data we have. I go with the most recent 7 samples. 

I'll cover the math behind this in an upcoming newsletter.

Tool of the Month - Throughput Forecaster

Monte Carlo forecast using range estimates, or team velocity or throughput data. It just needs a starting date, a range estimate of the number of stories, and an estimate (or historical data) of team throughput/velocity. Optional inputs are risk factors and work-splitting rates. Hundreds of teams have used it successfully, and yours could be another! A simplified sketch of the approach follows the links below. 
Download the throughput forecasting spreadsheet (free)
See all of the free tools and stuff
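
For the curious, here is a rough sketch of the Monte Carlo idea behind a throughput forecaster. It is not the spreadsheet's exact algorithm; the story range, throughput samples, and split-rate range below are illustrative inputs only.

    import random

    # Sketch of the Monte Carlo idea behind a throughput forecaster: sample the
    # story count, a split rate, and per-sprint throughput until the backlog is
    # empty, then report percentiles. Inputs are illustrative; this is not the
    # spreadsheet's exact algorithm.

    def forecast_sprints(stories_low, stories_high, throughput_samples,
                         split_low=1.0, split_high=3.0, trials=10_000):
        results = []
        for _ in range(trials):
            remaining = random.randint(stories_low, stories_high) \
                        * random.uniform(split_low, split_high)
            sprints = 0
            while remaining > 0:
                remaining -= random.choice(throughput_samples)  # resample history
                sprints += 1
            results.append(sprints)
        results.sort()
        return {p: results[int(p / 100 * trials)] for p in (50, 85, 95)}

    # Range estimate of 30-50 stories, the last 7 sprints of throughput,
    # and the default 1x-3x split-rate range.
    print(forecast_sprints(30, 50, [9, 12, 7, 14, 10, 8, 11]))
    # Prints something like {50: 8, 85: 11, 95: 13} sprints from the actual start date.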

About Focused Objective and Troy Magennis
I offer training and consulting on Forecasting and Metrics related to Agile planning. Come along to a training workshop or schedule a call to discuss how a little bit of mathematical and data magic might improve your product delivery flow.
See all of my workshops and free tools on the Focused Objective website.

Got Metric or Forecasting Questions? Contact Me
Copyright © 2020 Focused Objective LLC, All rights reserved.

