Plutora Blog - Deployment Management, Release Management

18 Key Release Management Metrics


Every business needs to ensure cost-efficient and effective allocation of its resources, and one way to accomplish that is through metrics — measurements that help you understand how your organization or department is performing. As you drive continuous improvement in release management, metrics are your compass, showing whether you are moving in the right direction to meet business goals.

It’s crucial that release managers understand the importance of context when evaluating performance through metrics. Metrics should not be viewed as absolute numbers in isolation; rather, discuss them with team members to understand the trends they illustrate. This way, you have a living, breathing view of release performance rather than a static number suspended in time.

What Makes a Metric Useful?

Businesses are inundated by all sorts of data, but it’s important to distinguish signals from noise by recognizing what is most important.


When examining a metric, you should consider the following questions:

  • Can it be accurately quantified?
  • Does it connect to goals and objectives?
  • Is it absolute and incorruptible (i.e. it cannot be gamed)?
  • Are we able to act on this metric?
  • Will this metric be relevant in the future? 

If you can safely answer “yes” to a majority of these questions, then it’s a good sign that the metric is worth measuring and could potentially yield transformative results. 

Metrics to Avoid

At the end of the day, the wrong metrics (or metrics that are badly applied!) can be detrimental to organizational goals. In An Appropriate Use of Metrics, Patrick Kua warns:

Strong incentives tied to strong metrics force people to concentrate on just one part of the work, neglecting other contributing factors that might make a goal more successful. Organizations must be wary of this actively destructive focus that leads people to neglect other important factors.

It’s equally important to filter out the metrics that fail to deliver actual value:

  • Outdated metrics that haven’t kept pace with your organization’s business goals
  • Competitive metrics that foster unhealthy rivalries 
  • Short-sighted vanity metrics 
  • Unattainable metrics that foster a sense of helplessness and defeatism

As always, keep in mind that metrics should change as the needs of your organization change. A metric that is useful now won’t always be useful later. It’s important to keep evaluating and to make sure that you’re not blindly adhering to outdated measurements.

18 Key Release Management Metrics

1. Release Success Rate

This metric simply shows the percentage of planned releases that are deployed on time. There are many factors that can prevent releases from happening on schedule, some of which are difficult to avoid (such as changing priorities). For example, the 2020 State of Code Review survey cites changing requirements as the single biggest reason for missing release deadlines. Another common cause of delays is miscommunication when handing off work between departments, which can be improved by streamlining release processes and strengthening collaboration via culture and tools.
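The calculation itself is simple. As a minimal sketch in Python (function and variable names are illustrative, not from any particular tool):

```python
def release_success_rate(planned_releases, on_time_releases):
    """Percentage of planned releases that were deployed on schedule."""
    if planned_releases == 0:
        raise ValueError("No planned releases in this period")
    return 100.0 * on_time_releases / planned_releases

# e.g. 18 of 20 planned releases shipped on time
print(release_success_rate(20, 18))  # 90.0
```

Tracked per month or per quarter, the trend in this number matters more than any single value.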

When pinpointing bottlenecks in your release process, it’s helpful to have a quick, all-inclusive view of where you are spending your time. Plutora’s dashboard below breaks down releases into phases and types, providing a granular view of the average phase and gate duration, bringing to light where constraints are in the release process.  

phase gate duration

2. Percentage of Escaped Defects

This measures the defects that slip past testing into production and are found by customers post-release. Naturally it’s impossible (and perhaps even undesirable) to catch all bugs, as that would involve laboriously checking and rechecking features at the cost of velocity. For example, NASA was able to achieve zero defects for its Space Shuttle software, but at great expense — thousands of dollars per line of code. Most projects won’t require that level of scrutiny; however, it’s still an important measure that directly impacts customer satisfaction and maintenance costs. Ultimately, it’s up to the business to find the “sweet spot” that balances quality and speed to best serve customers.

Percentage of escaped defects is generally used as one of the key indicators of QA performance, but keep in mind that not all bugs are equal — some are mission critical, while others are merely cosmetic. Since this metric does not weight defects by severity, it’s helpful to discuss it with your QA team for context.
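As a sketch (hypothetical names), the percentage comes from two counts: defects reported from production versus all defects found in the same period:

```python
def escaped_defect_percentage(escaped_defects, total_defects):
    """Share of all known defects that escaped to production."""
    if total_defects == 0:
        return 0.0  # no defects found at all this period
    return 100.0 * escaped_defects / total_defects

# 6 of the 120 defects logged this quarter were reported by customers
print(escaped_defect_percentage(6, 120))  # 5.0
```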

3. Defect Density

The number of known defects per module or per thousand lines of code (KLOC). Defect density is a useful indicator of quality, and 1 defect per 1,000 lines of code is typically seen as the benchmark for “good” quality.

As always, keep context in mind. If your development team is putting substantial effort into refactoring and shrinking the codebase, defect density could rise even while quality is improving behind the scenes, simply because the denominator — lines of code — is getting smaller.
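A quick sketch makes both the metric and the caveat concrete (the numbers below are invented for illustration):

```python
def defect_density(known_defects, lines_of_code):
    """Known defects per thousand lines of code (KLOC)."""
    return known_defects / (lines_of_code / 1000.0)

# 45 known defects in a 50,000-line codebase
print(defect_density(45, 50_000))  # 0.9 -- under the 1.0/KLOC benchmark

# If refactoring trims the codebase to 40,000 lines, the same 45
# defects yield a higher density even though nothing got worse:
print(defect_density(45, 40_000))  # 1.125
```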

4. Number of Releases

The number of releases over a given timespan. It gives you insight into whether your release frequency is increasing, inconsistent, or decreasing. As with the other metrics, context is helpful here. Perhaps in August there were several major releases, as opposed to minor ones in September. That’s why this data is best broken down by type or portfolio for additional detail.

Refer to this sample dashboard from Plutora, which gives a quick and simple view of the types of releases over time:

releases over time

5. Deployment Duration

Usually measured in hours, this is the time it takes to execute a deployment. Deployment duration is useful for understanding whether deployments are causing delays. Many businesses are now turning to automation, which takes the load off developers, reduces manual errors, and aids in pushing small changes to production.

6. Release Duration

Usually measured in days, this is the time it takes to execute a release. This number measures how quickly a business can serve its customers, and can be useful for roadmap planning. Like other release metrics, it can be broken down by type to provide better context.

For instance, have a look at Plutora’s release duration dashboard: 

release duration over time

7. Number of Release Backouts

Also known as a rollback, this counts the number of releases that fail to meet expectations and have to be reversed. If a workaround exists, a Request for Change (RFC) should be created to deploy the fix into production. Tracking the number of release backouts indicates the quality and thoroughness of development and testing.  

8. Proportion of Automatic Release Distribution

Usually a percentage, this metric refers to the proportion of new releases that are distributed automatically. A high number here indicates that your ITSM environment is operating effectively.

9. Downtime

Usually measured in minutes, downtime refers to the cumulative amount of time that users cannot access your software. Downtime is not always avoidable, but should be minimized as much as possible.

10. Number of Outages Caused by a Release

The number of outages, or interruptions to your service, that are directly caused by a release. Depending on the length of the outages, these can be very serious — virtually incapacitating businesses, causing financial loss, and eroding customer trust. This metric helps business stakeholders to get a clear picture of the costs associated with any given release.

11. Number of Incidents Caused by a Release

The number of incidents caused by a release. Incidents are also referred to as defects or bugs, and can be characterized as unexpected deviations in system behavior. This number is often used as a key metric in evaluating release team performance.

Take a look at Plutora’s breakdown of releases by risk level to understand the risk profile of your releases. This can then be correlated with incidents in production.

releases by risk level

12. Percentage of All Changes Causing Major Incidents

Similar to the number of incidents caused by a release, this measures whether the change management team is doing their job effectively. 

In the dashboard below, you can see the breakdown of different types of changes over time. Simply overlay incident data over this chart to get a quick sense of where and when incidents are occurring.

changes over time dashboard

13. On-time Delivery

This measures the timeliness of delivery. When deadlines are consistently missed, it doesn’t necessarily mean that your team is doing a bad job — perhaps the problem is a dearth of resources, or the scope of the project needs to be cut. 

14. Releases Delivered on Schedule by Application

Provides a breakdown of timely and successful releases by application. If an application is consistently behind schedule, that impacts the business when it comes to market share, security, and compliance concerns. 

For example, this dashboard shows the various releases by portfolio:

releases by portfolio

15. Mean Time to Repair (MTTR)

MTTR represents the average time required to repair a failed component or device. It counts the time from the very beginning of an incident until the moment it’s resolved, and provides a strong indication of how fast your organization responds to problems.

To get this number, divide the total time spent on unplanned maintenance for a specific asset by the number of failures of that asset over the same period. In general, an MTTR value of five hours or less is considered quite good.
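The calculation described above can be sketched in a few lines (names are illustrative):

```python
def mean_time_to_repair(total_repair_hours, failure_count):
    """Average hours from incident start to resolution for one asset."""
    if failure_count == 0:
        raise ValueError("No failures recorded for this asset")
    return total_repair_hours / failure_count

# 12 hours of unplanned maintenance across 4 failures
print(mean_time_to_repair(12.0, 4))  # 3.0 -- within the five-hour guideline
```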

16. Average Lead Time

Also referred to as Time to Value, this measures the period of time between accepting a work item and pushing it to production. It tells you how long it takes to deliver on customers’ requests. When examining lead time, it’s important to know exactly what you’re measuring. You probably don’t want to include low-priority tickets in this metric because they will lead to unreasonably high numbers; keep it limited to high- and medium-priority tickets.

Average lead time is an essential metric for ensuring business continuity and planning, as it enables PMs and the C-suite to plan accordingly.

lead time dashboard
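A minimal sketch of the calculation, including the priority filter recommended above (the data format and function name are assumptions for illustration):

```python
from datetime import date

def average_lead_time(work_items, priorities=("high", "medium")):
    """Mean days from acceptance to production deploy.

    work_items: iterable of (priority, accepted_on, deployed_on) tuples.
    Low-priority tickets are excluded by default, per the advice above.
    """
    spans = [(deployed - accepted).days
             for priority, accepted, deployed in work_items
             if priority in priorities]
    if not spans:
        raise ValueError("No matching work items")
    return sum(spans) / len(spans)

items = [
    ("high",   date(2024, 3, 1), date(2024, 3, 8)),  # 7 days
    ("medium", date(2024, 3, 2), date(2024, 3, 5)),  # 3 days
    ("low",    date(2024, 1, 1), date(2024, 4, 1)),  # excluded
]
print(average_lead_time(items))  # 5.0
```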

17. Average Cycle Time

Cycle time is how long it takes your team to deliver something after starting work on it. It provides a lot of insight to engineering leaders as to how their team is allocating its time. For instance, if features are sitting for days waiting for QA after developers are done coding, that’s a useful observation that can generate real change. A good dashboard here is invaluable, as it can help you to drill down deeper and see the exact distribution of how time is spent.

cycle time dashboard
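The "where does the time go" breakdown can be sketched from a work item's stage transitions (the stage names and data shape here are hypothetical):

```python
from datetime import datetime

def stage_durations(transitions):
    """Hours spent in each stage, from an ordered list of
    (stage, entered_at) transitions for a single work item."""
    durations = {}
    for (stage, start), (_next, end) in zip(transitions, transitions[1:]):
        hours = (end - start).total_seconds() / 3600.0
        durations[stage] = durations.get(stage, 0.0) + hours
    return durations

item = [
    ("coding",     datetime(2024, 3, 4, 9, 0)),
    ("waiting_qa", datetime(2024, 3, 5, 9, 0)),   # coding done after 24h
    ("qa",         datetime(2024, 3, 7, 9, 0)),   # sat in the queue for 48h
    ("done",       datetime(2024, 3, 7, 17, 0)),  # QA itself took 8h
]
print(stage_durations(item))
# {'coding': 24.0, 'waiting_qa': 48.0, 'qa': 8.0}
```

In this invented example, the queue ahead of QA dominates the cycle time — exactly the kind of observation that can generate real change.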

18. Average Cost Per Release

Calculate the person-hours and salary costs per release to arrive at this number. This gives you an idea of the ROI of each release, enabling better business decisions.
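As a rough sketch, assuming a single loaded hourly rate across the team (real costing would weight by role and salary):

```python
def average_cost_per_release(person_hours_per_release, loaded_hourly_rate):
    """Mean cost across releases, given person-hours spent on each."""
    total_cost = sum(h * loaded_hourly_rate for h in person_hours_per_release)
    return total_cost / len(person_hours_per_release)

# Three releases took 100, 140 and 120 person-hours at a $75/h loaded rate
print(average_cost_per_release([100, 140, 120], 75.0))  # 9000.0
```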

Are You Ready to Improve Software Factory Performance?

With a multitude of ways to ingest, organize, and present data, Plutora provides a 30,000-foot view of your organization. By investing in Plutora, businesses can transform their release processes and cut down on production time, leading to happier stakeholders, leaner deployments, and delighted users. For more information on the easiest way to get crucial insights and analytics, take Plutora for a spin today.

Dalibor Siroky

Dalibor is the Co-founder and Co-CEO of Plutora. He has 15 years of leadership, consulting, enterprise product, and operations experience across Australia, Asia and Europe. He has proven ability to build high performance teams, turn around situations, develop innovative products, and create lasting value. Prior to Plutora, Dalibor was founder and managing director of Finotaur, a leading provider of independent management consulting services. Before that he served as CIO of financial advisory software at Macquarie Bank, head of solution architecture at Commonwealth Bank of Australia, and management consultant at PricewaterhouseCoopers. Dalibor got his MBA from the University of Chicago Booth School of Business. Follow him on Twitter @DaliborSiroky.