
So, you started adopting DevOps in your company. But how can you tell if it’s improving your processes? You have to measure the success somehow—you can achieve this by monitoring some key DevOps metrics.

There are many ways to measure the quality of a system or application, but in this article, we will discuss what we can track to measure the quality of our process. This way, we will see where we have strong and weak points and better understand how we can improve our DevOps practices with the right tools and software.

Let’s go!

DORA Metrics

DORA stands for DevOps Research and Assessment, a research team acquired by Google in 2018. DORA uses data-driven insights to promote DevOps best practices, with a focus on helping organizations develop and deliver software faster and more effectively. DORA continues to work with the Google Cloud team on DevOps studies and reports that help enterprises improve software delivery.

Deployment Frequency

An important metric for DevOps success is the number of deployments in a given timeframe. A high deployment frequency indicates that new business value is delivered more frequently and in smaller increments.

Frequent deployments also reduce the errors associated with failed deployments, which in turn increases overall customer satisfaction.

In the 2021 State of DevOps report, DORA researchers revealed that elite teams deploy multiple times per day, high performers deploy between once per hour and once per day, and low performers deploy between once a week and once a month. If you are on the lower end, you might want to consider increasing your deployment frequency.

Related: 10 BEST DEVOPS DEPLOYMENT TOOLS FOR QA TEAMS

How to Measure Deployment Frequency

To measure deployment frequency, collect from your pipeline tool (Azure DevOps, Jenkins, etc.) the total number of builds and the number of builds successfully deployed to production, then divide: Deployments / Total Builds * 100. The higher the number, the better.
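
As a rough sketch of the math (the counts below are placeholders standing in for whatever your pipeline tool reports), the calculation might look like this:

```python
# Sketch: deployment metrics from data you've already exported from your
# CI/CD tool (Azure DevOps, Jenkins, etc.). The numbers are placeholders.

def deployment_success_ratio(successful_deployments: int, total_builds: int) -> float:
    """Percentage of builds that made it to production successfully."""
    if total_builds == 0:
        return 0.0
    return successful_deployments / total_builds * 100

def deployments_per_day(successful_deployments: int, days_in_period: int) -> float:
    """Raw deployment frequency: how often you ship in a given timeframe."""
    return successful_deployments / days_in_period

print(deployment_success_ratio(42, 50))   # 84.0
print(deployments_per_day(42, 30))        # 1.4 deployments per day
```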


Lead Time for Changes

Another DORA metric is the lead time for changes. One of the main advantages of using DevOps in software development is the ability to release quickly, so it’s a good idea to measure how long it takes for a work item to go from the start of implementation to being deployed in production. This covers the entire cycle of an item: development, testing, and delivery.

A shorter lead time is usually better, so the objective is to reduce the total deployment time. This can be done by improving test integration and automation, for example.

Related: WHAT IS DEVOPS RELEASE MANAGEMENT AND 4 BEST PRACTICES

How to Measure Lead Time

Like any other metric, lead time for changes brings little value until we have enough data to refer to. To compute it, average the lead time across multiple commits over a period of time. Because no two changes are the same, and lead times vary with the scope and kind of change, working with the mean rather than individual data points is essential.
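
Here’s a minimal sketch of that averaging, assuming you can export a commit timestamp and a deploy timestamp for each change from your version control and pipeline tools (the dates are made up for illustration):

```python
# Sketch: mean lead time for changes from (commit_time, deploy_time) pairs.
from datetime import datetime
from statistics import mean

changes = [
    (datetime(2023, 5, 1, 9, 0),  datetime(2023, 5, 2, 14, 0)),
    (datetime(2023, 5, 3, 10, 0), datetime(2023, 5, 3, 16, 30)),
    (datetime(2023, 5, 4, 8, 0),  datetime(2023, 5, 6, 11, 0)),
]

lead_times_hours = [(deployed - committed).total_seconds() / 3600
                    for committed, deployed in changes]

print(f"Mean lead time for changes: {mean(lead_times_hours):.1f} hours")
```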

Mean Time to Recovery (MTTR)

This metric refers to the time it takes for the organization to recover from a production failure. 

As much as we hate it, unplanned outages or failures are a natural part of any system’s life. And since they are inevitable, what matters is the amount of time it takes to restore the system or the application. 

The metric is significant because it helps DevOps engineers to create more reliable systems. 

How to Measure Mean Time to Recovery

MTTR can be calculated by keeping track of the average time between when a defect was reported and when the fix was deployed to production. It is done as part of the continuous monitoring activities performed by the DevOps teams.
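
A minimal sketch of that calculation, assuming your incident or bug records include when the failure was reported and when the fix reached production (the records below are purely illustrative):

```python
# Sketch: mean time to recovery from incident records.
from datetime import datetime
from statistics import mean

incidents = [
    {"reported": datetime(2023, 5, 1, 10, 0), "restored": datetime(2023, 5, 1, 11, 30)},
    {"reported": datetime(2023, 5, 7, 22, 0), "restored": datetime(2023, 5, 8, 1, 0)},
]

recovery_hours = [(i["restored"] - i["reported"]).total_seconds() / 3600
                  for i in incidents]

print(f"MTTR: {mean(recovery_hours):.1f} hours")
```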

According to DORA’s report, elite teams have a mean time to recovery of under an hour, high-performing teams less than a day, and medium and low-performing teams may take between a day and a month.

Change Failure Rate

Last but not least of the DORA metrics, the Change Failure Rate is the percentage of code changes that result in incidents, defects, rollbacks, or any other sort of production failure. In other words, it looks at how many deployments caused failures once released into production.

It shows how stable and efficient your DevOps processes are. Tracking the change failure rate also highlights where additional DevOps automation is needed: increased automation results in more consistent and dependable software that is more likely to succeed in production.

As opposed to the previous metrics, the Change Failure Rate truly measures the quality of the software. A lower change failure rate means that fewer failures are pushed to production, so customer satisfaction should increase as a result.

How to Measure Change Failure Rate

Of course, the aim is to keep the Change Failure Rate as low as possible. To calculate it, divide the number of failed deployments by the total number of deployments, and multiply by 100 to express it as a percentage.
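
A quick sketch of the formula, with placeholder counts standing in for your real deployment data:

```python
# Sketch: change failure rate over a period, given the total number of
# deployments and the number that caused incidents, rollbacks, or hotfixes.
def change_failure_rate(failed_deployments: int, total_deployments: int) -> float:
    if total_deployments == 0:
        return 0.0
    return failed_deployments / total_deployments * 100

print(f"Change failure rate: {change_failure_rate(3, 50):.1f}%")  # 6.0%
```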

Other Notable DevOps Metrics

Apart from the DORA metrics, there are other important metrics that can give insights into the performance of the DevOps team.

Passed Automated Tests

It’s good to strive for good test coverage, especially automated tests. And here I’m talking about unit, integration, UI, and end-to-end tests. But good coverage is not enough to ensure the quality of the software. What matters is the percentage of these tests that actually pass.

Of course, the goal is to have a percentage of passed tests as close to 100% as possible. Monitoring this metric can also reveal how often new developments break existing tests.

How to Measure the Percentage of Passed Automated Tests

The calculation is a simple percentage: multiply the number of passed tests by 100, then divide by the total number of tests. You can get this information from the pipeline tool that runs the builds (Jenkins, Azure DevOps, CircleCI, etc.).
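
For example, a minimal sketch with placeholder counts taken from a single pipeline run’s test summary:

```python
# Sketch: percentage of passed automated tests from a pipeline run's summary.
# The counts are placeholders; pull the real ones from your CI tool
# (Jenkins, Azure DevOps, CircleCI, etc.).
results = {"passed": 482, "failed": 18}

total = sum(results.values())
pass_rate = results["passed"] * 100 / total

print(f"Passed tests: {pass_rate:.1f}%")  # 96.4%
```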

The number can be a good indicator of the quality of the product, however, it can also be tricky if you have flaky or unreliable tests.

Defect Escape Rate

In a utopian world, all our apps would be defect-free. That’s rarely the case, of course, but ideally defects are caught during the development and testing phases of the DevOps process, not in production.

This metric helps to determine the efficacy of your testing processes as well as the overall quality of your program. A high defect escape rate suggests that procedures need to be improved and that more automation is needed, whereas a low rate (ideally near zero) implies a high-quality application.

How to Measure the Defect Escape Rate

To measure this, you can use your bug tracking tool and, for each open defect, track where it has been detected—whether the testing or the production environment (or any other environment you might be using, such as UAT).
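
Once you have that data, the escape rate is simply the share of defects detected in production. A minimal sketch, using an illustrative list in place of a real bug tracker export:

```python
# Sketch: defect escape rate, assuming each defect records the environment
# where it was detected. In practice, query your bug tracking tool instead.
defects = ["testing", "testing", "production", "uat", "testing", "production"]

escaped = sum(1 for env in defects if env == "production")
escape_rate = escaped / len(defects) * 100

print(f"Defect escape rate: {escape_rate:.1f}%")  # 33.3%
```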

Customer Tickets

Customer happiness is a driving factor for innovation, and with good reason: a flawless user experience is good customer service, and it typically corresponds to a rise in sales. As a result, customer tickets are a good indicator of how well your DevOps transition is going.

Customers should not be acting as quality control by reporting defects and bugs, so a decrease in customer tickets is a good sign that your application is performing well.

Final Thoughts

Like any other methodology, DevOps is only successful if it’s implemented correctly. And you can’t know the success unless you measure it. If you keep an eye on the metrics described above, and continuously work to improve them, then your application’s quality will always be in top shape.

And speaking of quality, if you want to stay up to date with news and articles on the subject, subscribe to the QA Lead newsletter!


Andreea Draniceanu

Hi there! My name is Andreea, I’m a software test engineer based in Romania. I’ve been in the software industry for over 10 years. Currently my main focus is UI test automation with C#, but I love exploring all QA-related areas 😊