
February 2022 Agile Data Newsletter

article: Finding Leading Indicators | news: New On-Demand Trainings

Finding Leading Indicators

For all the talk about leading indicators, the dashboards I’ve seen show little evidence of them. That makes sense: dashboards plot the data we have, and the data we have is overwhelmingly lagging. All is not lost though; to find leading indicators, we actually need to start with lagging ones.

Turning a Lagging Indicator into a Leading One

To make this point, I’m going to use an example. Let’s start with a quality metric.

The lagging indicators we have: Production Rollbacks and Escaped Defects

The quality metrics most commonly put forward when asking “how will we measure quality” are defect counts and rollbacks. They are relatively easy to measure and obtain from tools and systems. But by their nature, they need failure to occur before they change: a defect has to escape, a rollback has to happen. If we want to improve future performance, metrics that measure quality BEFORE problems impact production or customers are obviously superior. These metrics aren’t bad, they are just post-event.

A leading indicator changes BEFORE the event. That gives us the ability to avoid the event, or to change its path and severity. A key skill for coaching teams with data is moving from these lagging measures to leading ones. It comes back to the questions you ask; here are a few that might help you.

“Do releases that have rollbacks or defects have anything in common?”
“Do releases that have rollbacks or defects skip any steps or processes?”
“Do releases that have rollbacks or defects add any steps or processes?”
“What conditions are present or absent when we have rollbacks or defects?”

What we are trying to find is something that leads to failure.

Some example responses might be -

“We skip integration testing.”
“We skipped code review.”
“We had no comments in the code review.”
“The changes are unusually big.”
“The changes are in legacy code with no unit test coverage.”

These responses give you some process changes to try, and some conditions to measure as indicators that an increased rate of rollbacks and defects is LIKELY.

What leading measures would you wrap around the responses given?

Some examples might be:
1. # of comments in code reviews per change/commit
2. # of files per change/commit
3. # of tests covering the code areas changed
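
If your tools can export per-change data, these measures are straightforward to compute. Here is a minimal sketch in Python, assuming a hypothetical CSV export (changes.csv) with one row per change/commit; the file name, column names, and the “unusually big” threshold of 20 files are illustrative assumptions, not a prescription.

    import csv
    from statistics import mean

    # Hypothetical export: one row per change/commit, with columns pulled
    # from your review tool and version control. All names are assumptions.
    # Columns: commit_id,review_comments,files_changed,tests_in_touched_area
    with open("changes.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    print(f"Avg review comments per change: {mean(int(r['review_comments']) for r in rows):.1f}")
    print(f"Avg files per change: {mean(int(r['files_changed']) for r in rows):.1f}")

    # Flag risky changes: silent reviews, unusually big diffs, or code
    # areas with no test coverage at all.
    for r in rows:
        risky = (
            int(r["review_comments"]) == 0
            or int(r["files_changed"]) > 20  # assumed threshold
            or int(r["tests_in_touched_area"]) == 0
        )
        if risky:
            print(f"Leading-indicator warning: {r['commit_id']}")

None of the flagged changes has failed yet, and that is the point: you get the warning before the rollback, not after.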

All of these metrics indicate an increased likelihood of rollbacks and defects, even when teams get lucky. That is the key point. Lagging indicators hide the fact that sometimes you are just lucky. Leading indicators help show that you dodged a bullet (which is why aviation tracks near misses as well as accidents; a near miss is still a failure).

So, go find metrics of near-misses. They will be leading indicators. And it starts by simply asking: what conditions are present, what do we skip, or what is common when these bad things have historically occurred? Measure those.
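
To test whether a candidate condition really leads to failure, compare how often it was present in releases that rolled back against releases that did not. A minimal sketch, using made-up data and “skipped code review” as the candidate condition:

    # Illustrative data only: tag each historical release with whether it
    # rolled back and whether the candidate condition was present.
    releases = [
        {"id": "r1", "rolled_back": True,  "skipped_review": True},
        {"id": "r2", "rolled_back": False, "skipped_review": False},
        {"id": "r3", "rolled_back": True,  "skipped_review": True},
        {"id": "r4", "rolled_back": False, "skipped_review": True},
        {"id": "r5", "rolled_back": False, "skipped_review": False},
    ]

    def condition_rate(group):
        """Share of releases in the group where the condition was present."""
        return sum(r["skipped_review"] for r in group) / len(group)

    failed = [r for r in releases if r["rolled_back"]]
    ok = [r for r in releases if not r["rolled_back"]]

    # A condition much more common among failed releases is a candidate
    # leading indicator worth tracking before the next release.
    print(f"Skipped review rate when rolled back: {condition_rate(failed):.0%}")
    print(f"Skipped review rate otherwise: {condition_rate(ok):.0%}")

If the rate is much higher alongside failures, track that condition before the next release instead of waiting for the rollback count to move.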

On-Demand Training Launched

I just launched the first couple of on-demand training offerings. I’ve prioritized on-demand versions of courses I want to run that are short and attract booms and busts of attendees. My power-sessions are a classic example: short bursts of targeted information on how to use the spreadsheet tools I offer for forecasting or team flow metric dashboards.

I took the experience of running them via Zoom and recorded the sections and exercises in a way that suits self-paced training. The result is better than I expected. My problem as a presenter is often wanting to over-explain irrelevant things; with these topics recorded and self-paced, you can skip those lessons :)

The first two I have are:

Using the Monte Carlo Forecasting Spreadsheets
Using the Team Dashboard Spreadsheet

As a subscriber to my newsletter, you can take 50% off the list price. Use the coupon code NEWSLETTER when purchasing.

Go to the on-demand training portal - learn.focusedobjective.com