How to harness time to deliver predictably to your customers.
If you believe in continuous improvement to strengthen your team’s agility outcomes, metrics will play an active role in your team’s agile practices. They provide feedback signals derived from your team’s delivery tools, giving you insight into your team’s predictability and progress as they deliver software.
It will come as no surprise that every team practises agile differently. Each team has its own areas of focus for improving agility outcomes and will want to track the metrics it decides are relevant to its own performance and outcomes. It’s common not only to see teams tracking different metrics, but also to see different behaviours on shared metrics across multiple teams.
Among the many agile metrics used by teams to improve agility outcomes, cycle time and lead time are often cited as two of the most important. This is because they provide insight into how efficiently and predictably a team delivers software to customers. Lead time and cycle time measure the flow of work into and through a company’s delivery system. An estimate of your team’s delivery speed helps the team plan better, create achievable delivery goals and set realistic expectations with stakeholders. Simply put, cycle time tells a team how quickly it can process a piece of work. It’s the gap between a ticket being moved to in progress (i.e. the time somebody starts working on the ticket) and the ticket being resolved (the time the work belonging to the ticket is completed). Ideally, a team’s average cycle time should be well under their sprint length; if not, it’s an indication that stories should be broken down into smaller tasks.
Lead time, on the other hand, tells how a team processes work relative to the requests coming into the system. It spans the full journey from a ticket’s creation to its resolution, and is a function of the efficiency of the system as a whole.
Wait Time + Cycle Time = Lead Time
* Wait time is the time between the creation of a ticket and the moment someone starts working on it
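To make the relationship concrete, all three measures fall out of a ticket’s timestamps. This is a minimal sketch assuming hypothetical `created`, `started` and `resolved` timestamps pulled from your delivery tool; the field names and dates are illustrative only:

```python
from datetime import datetime

def ticket_times(created, started, resolved):
    """Return (wait, cycle, lead) times in days for a single ticket."""
    wait = (started - created).total_seconds() / 86400   # creation -> work starts
    cycle = (resolved - started).total_seconds() / 86400  # work starts -> resolved
    lead = (resolved - created).total_seconds() / 86400   # creation -> resolved
    return wait, cycle, lead

# Example: ticket created Mar 1, picked up Mar 3, resolved Mar 8
wait, cycle, lead = ticket_times(
    created=datetime(2023, 3, 1, 9, 0),
    started=datetime(2023, 3, 3, 9, 0),
    resolved=datetime(2023, 3, 8, 9, 0),
)
# lead == wait + cycle by construction: 2 + 5 == 7 days
```

Note that lead time is never computed independently; it is always the sum of the wait and cycle components, matching the formula above.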
This conversation revolves around the patterns and behaviours we saw in our team’s cycle time, wait time and lead time when we analysed data from our own practices over a period of 2.5 years.
Case in point: The power of understanding the impact of Lead Time and Cycle Time for ourselves
We calculated average lead time, wait time and cycle time for each sprint per story point and per ticket regardless of their size, and considered multiple different factors to see whether there were any relationships between them and cycle time/lead time and most importantly, what changes or improvements we could make to our delivery system. Here’s what we found.
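The per-sprint averaging described above could be sketched roughly as follows. The ticket records here are made up for illustration; our real analysis used our delivery tool’s data and also grouped per story point:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical ticket records: (sprint, story_points, wait_days, cycle_days)
tickets = [
    ("S1", 3, 1.0, 2.0),
    ("S1", 5, 4.0, 6.0),
    ("S2", 3, 0.5, 2.5),
    ("S2", 8, 2.0, 5.0),
]

def averages_per_sprint(tickets):
    """Average wait, cycle and lead time per sprint, over tickets of all sizes."""
    groups = defaultdict(list)
    for sprint, _points, wait, cycle in tickets:
        groups[sprint].append((wait, cycle, wait + cycle))  # lead = wait + cycle
    return {
        sprint: tuple(mean(col) for col in zip(*rows))
        for sprint, rows in groups.items()
    }

result = averages_per_sprint(tickets)
# e.g. S1 averages: wait 2.5, cycle 4.0, lead 6.5 days
```

Grouping by `_points` as well (per story point, as we did) is a one-line change to the grouping key.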
Team insight: Cycle time increased with the number of story points assigned to a ticket, which was expected. Interestingly, though, tickets assigned 5 story points consistently had a higher cycle time than tickets assigned 8. We had noticed this pattern before: we tend to assign 5 points when we are unsure of the work a task involves.
Action taken to improve: Call out when we estimate a work item at 5 story points and break it down into subtasks so we can deliver more consistently.
Team insight: A ticket’s type didn’t have much impact on its cycle time or lead time, except for bugs. Bugs in general had a shorter wait time and a shorter cycle time, which indicated that our team was quick to address and fix defects. This was fine, but we did note that it came at the cost of feature development. Like everything in life, we were happy as long as it was a conscious, intentional trade-off.
Action taken to improve: Apply a bug triage system that allowed us to react more consistently to high-customer-impact bugs mid-sprint, and helped keep our feature development on track.
Team insight: Over the past 2 years, our team size has varied between 4 and 8 members. Interestingly, when looking at cycle time relative to team size, there was no significant relationship. But when we had a steady team for a longer period of time, our average cycle time halved, dropping to 3 days per work item.
Action taken to improve: Don’t underestimate the importance of keeping team composition stable, and be realistic in communicating the impact to stakeholders when change occurs.
Wait Time vs Cycle Time
Team insight: One of the significant factors we noticed was that when the wait time was less than a week, our cycle time halved to 3 days, compared with an average of 6 days per work item for older tickets. In other words, we tend to finish new tickets faster than same-sized tickets that have been sitting in the backlog for a while.
Action taken to improve: Re-emphasise the discipline of backlog grooming as a process for building a queue of relevant work items aligned to OKRs, items we ‘need’ to have, rather than letting the backlog become a parking lot of items that may be ‘nice’ to have.
Stable Cycle Time
One pattern we noticed was that there were periods of high variance in cycle time and periods of low variance, but what was underpinning our practice when we were stable? Low variance in cycle time meant we were consistent in completing tasks within a given time, which in turn meant we could plan accurately and ultimately be more predictable in delivering to our customers. We wanted to repeat those practices!
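One way to make “low variance” concrete is to compute the spread of cycle times per sprint and flag the unstable periods. This is a sketch with made-up sprint data and a hypothetical 2-day threshold; the right cut-off depends on your own history:

```python
from statistics import pstdev

def flag_unstable_sprints(cycle_times_by_sprint, threshold_days=2.0):
    """Return, per sprint, whether cycle-time spread exceeds the threshold."""
    return {
        sprint: pstdev(times) > threshold_days
        for sprint, times in cycle_times_by_sprint.items()
        if len(times) > 1  # a single ticket has no spread to measure
    }

flags = flag_unstable_sprints({
    "Sprint 12": [3.0, 3.0, 4.0],  # consistent cycle times: low variance
    "Sprint 13": [1.0, 8.0, 3.0],  # erratic cycle times: high variance
})
```

Plotting the standard deviation sprint-over-sprint makes the stable and unstable periods easy to spot.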
Team insight: In addition to team stability, two factors underpin our stable cycle time.
Backlog grooming is an important agile practice where team members discuss the work that goes into each ticket and its size. When looking at the data, it was evident that during the periods when we were consistently doing backlog grooming, we had steady cycle times.
Our sprint length had alternated between fortnightly and weekly sprints for a number of reasons; most of the time we chose weekly sprints when there were urgent deadlines. We had more consistent cycle times (i.e. cycle time varied less) when we were running weekly sprints.
We also considered other factors, such as the percentage of inherited tickets in a sprint, the effect of ticket carry-over on sprint stability, and story points allocated per sprint day versus the sprint’s average cycle time. We didn’t find any significant relationships.
So, when will it be done?
By understanding the patterns in our team’s behaviour, we identified that backlog grooming and weekly sprints drove our predictability. We then switched to weekly sprints, which led us to plan better, with achievable goals in each sprint.
Time is a critical marker of team success. Great teams set themselves apart by understanding the unique drivers that impact their time, harnessing these drivers to their advantage so they can predictably deliver value to customers, time and time again.
Written by Dr Thamali Lekamge, Data Scientist at Umano.
Umano is on a mission to help self-directed teams succeed by providing real-time feedback with data-driven insights that help agile teams to continually improve and stay ahead.
Sign up here to access your complimentary Umano account and see how your team’s agile sprint practices are tracking.