For an organization of any size, DevOps is a journey of continuous improvement: shortening software development cycles while ensuring the quality of the software being delivered. It is often challenging to demonstrate how DevOps maturity is progressing within the organization and the value it brings to the application teams that embrace it. A well-chosen set of DevOps metrics can help identify areas for improvement and the changes needed to strengthen DevOps adoption.

Many DevOps metrics can be measured. It is the responsibility of the DevOps Center of Excellence (CoE) or Community of Practice (CoP) to define, track and report the metrics that are relevant to the organization's goals and objectives. The key lies in standardizing how the metrics are derived by collaborating with the stakeholders to whom they are reported: business leaders, senior technology executives and application teams, for example.

DORA Metrics 

The DORA (DevOps Research and Assessment) organization has simplified the definition to four key metrics that measure time to market and the quality of the software delivered. The following table defines the metrics and shows outcomes ranging from elite to low performers on the DevOps adoption scale. Participants were asked to base their responses on the primary application or service they work on.


In 2021, the DORA team added a fifth metric: Reliability. It measures how well the software DevOps teams develop meets user expectations, such as availability, latency, scalability and performance. The Reliability metric has no corresponding low, medium, high or elite ranking, but DevOps teams can use it in various ways, depending on their service-level indicators and service-level objectives (SLIs/SLOs).
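
For example, an availability SLI might be defined as the fraction of requests served successfully and compared against an SLO target. The following is a minimal sketch of that idea in Python; the request counts and the 99.9% objective are purely illustrative, and real SLIs are typically computed from monitoring data over a rolling window.

```python
# Minimal availability SLI/SLO check (illustrative values).

def availability_sli(successful: int, total: int) -> float:
    """SLI: fraction of requests served successfully."""
    return successful / total

SLO_TARGET = 0.999  # hypothetical objective: 99.9% availability

sli = availability_sli(successful=999_500, total=1_000_000)
# Share of the error budget (1 - SLO) still unspent in this window.
budget_remaining = (sli - SLO_TARGET) / (1 - SLO_TARGET)

print(f"SLI: {sli:.4%} | SLO met: {sli >= SLO_TARGET} | "
      f"error budget remaining: {budget_remaining:.0%}")
```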

Looking at the four original metrics:

  • The top two metrics, Deployment Frequency and Lead Time for Changes, help measure how fast upgrades and new features can be released. 
  • Mean Time to Restore (MTTR) measures the built-in resiliency of the software product being developed. 
  • Change Failure Rate measures the quality of the software delivered to production. (A minimal sketch of how these four metrics can be computed follows this list.)
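
To make the definitions concrete, here is a minimal Python sketch of how the four metrics might be derived once deployment and incident records have been collated. The data, field layout and simplified definitions (e.g., mean rather than median lead time) are illustrative assumptions, not a reference implementation.

```python
from datetime import datetime, timedelta

# Illustrative records only -- in practice this data is collated from the
# CI/CD and incident-management tools discussed later in this article.
# Each deployment: (first_commit_time, deploy_time, caused_a_failure)
deployments = [
    (datetime(2023, 5, 1, 9), datetime(2023, 5, 1, 14), False),
    (datetime(2023, 5, 2, 10), datetime(2023, 5, 3, 11), True),
    (datetime(2023, 5, 4, 8), datetime(2023, 5, 4, 16), False),
]
# Each incident: (outage_start, service_restored)
incidents = [(datetime(2023, 5, 3, 12), datetime(2023, 5, 3, 15))]

DAYS_OBSERVED = 30  # length of the reporting window

deployment_frequency = len(deployments) / DAYS_OBSERVED  # deploys per day
lead_time = sum((d - c for c, d, _ in deployments), timedelta()) / len(deployments)
mttr = sum((r - s for s, r in incidents), timedelta()) / len(incidents)
change_failure_rate = sum(f for _, _, f in deployments) / len(deployments)

print(f"Deployment Frequency: {deployment_frequency:.2f} per day")
print(f"Lead Time for Changes (mean): {lead_time}")
print(f"Mean Time to Restore: {mttr}")
print(f"Change Failure Rate: {change_failure_rate:.0%}")
```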

Although at the outset these metrics may seem simple to capture and report, it generally takes significant effort to gather data from different tools, collate it and build dashboards that deliver the metrics. A variety of tools and techniques can be used to collect and track DevOps metrics. Some of the most common include:

  • Build & Deployment tracking tools: These can be used to track the progress of builds and deployments and to identify any problems. Examples include Jenkins, Azure DevOps (ADO) and Octopus Deploy. (A sketch of pulling build data from Jenkins follows this list.)
  • Requirement & Incident management tools: Tools such as JIRA and ServiceNow can be used to track the progress of incident investigations and repairs.  
  • Application performance monitoring (APM) tools: These tools can be used to collect metrics such as response time, availability, and error rates. One example of an APM tool is Dynatrace. 
  • Infrastructure monitoring tools: Tools such as Nagios can be used to collect metrics such as CPU utilization, memory usage, and disk space utilization.  
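
As an example of collecting raw data from one of these tools, the following sketch pulls recent build outcomes from Jenkins' JSON API. The server URL and job name are hypothetical, and a real setup would typically also require an API token for authentication.

```python
import requests

JENKINS_URL = "https://jenkins.example.com"  # hypothetical instance
JOB_NAME = "payments-service-deploy"         # hypothetical job

# Jenkins exposes build history as JSON; the "tree" parameter limits the
# response to the fields we need for metrics.
resp = requests.get(
    f"{JENKINS_URL}/job/{JOB_NAME}/api/json",
    params={"tree": "builds[number,timestamp,result]"},
    timeout=30,
)
resp.raise_for_status()
builds = resp.json()["builds"]

# Jenkins reports timestamps in epoch milliseconds and result strings
# such as "SUCCESS" or "FAILURE" (None while a build is still running).
successes = [b for b in builds if b["result"] == "SUCCESS"]
print(f"{len(successes)}/{len(builds)} recent builds succeeded")
```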

Application teams need rigorous discipline to capture the progression of software artifacts through the application life cycle, from commit to post-production. For example, when a production incident occurs, the data captured should include:

  • Artifacts that caused the issue  
  • Time of recovery  
  • Actions or corrections needed to prevent such incidents in the future  
  • Tracking mechanisms to ensure those action items are closed. (One possible structure for such a record is sketched below.)
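
One possible way to structure such an incident record so it can be tracked consistently is sketched below. The field names and example values are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ProductionIncident:
    """Illustrative record of the data points listed above."""
    incident_id: str
    causing_artifacts: list[str]           # e.g., build or package versions
    occurred_at: datetime
    recovered_at: datetime
    corrective_actions: list[str] = field(default_factory=list)
    actions_closed: bool = False           # tracked until all actions are done

    @property
    def time_to_restore(self):
        return self.recovered_at - self.occurred_at

incident = ProductionIncident(
    incident_id="INC-1042",                # hypothetical ticket number
    causing_artifacts=["orders-service:2.3.1"],
    occurred_at=datetime(2023, 5, 3, 12, 0),
    recovered_at=datetime(2023, 5, 3, 15, 30),
    corrective_actions=["Add contract test for the orders API"],
)
print(incident.time_to_restore)  # 3:30:00
```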

Once the requisite data have been collected and tracked, they can be used to improve maturity and increase adoption across the organization. Here are some steps that can help:  

Socialize and Standardize: It is critical to collaborate with key stakeholders, such as leaders in product/app development, quality engineering, release management, infrastructure, SRE and other areas, to standardize the process of deriving the metrics so that they are computed the same way across the organization. For instance, one of our clients wanted to derive Lead Time for Changes from the time a development story is completed until the story is released in production. We needed to ensure that the change was socialized and accepted by all the stakeholders before we added the metric to our dashboard.
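
The sketch below illustrates why that standardization matters: the same story yields noticeably different Lead Time for Changes values depending on which start event the organization agrees on. All timestamps are illustrative.

```python
from datetime import datetime

first_commit   = datetime(2023, 5, 1, 9, 0)
story_complete = datetime(2023, 5, 2, 17, 0)   # development marked done
released       = datetime(2023, 5, 5, 11, 0)   # change live in production

lead_time_commit_based = released - first_commit    # commit -> production
lead_time_story_based  = released - story_complete  # story done -> production

print(f"Commit to production:     {lead_time_commit_based}")
print(f"Story done to production: {lead_time_story_based}")
```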

Setting goals and baselines: Application teams can baseline their maturity score, set goals to improve it and create a roadmap to achieve the improvement. Please refer to our white paper for help with determining the maturity score. Examples of goals in the roadmap are reducing lead time for changes or increasing the deployment frequency. The roadmap can be used to track progress and measure the effectiveness of changes.  
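
As a simple illustration of tracking a roadmap like this, the sketch below compares current metrics against a baseline and a goal; the metric names and values are invented for the example.

```python
# Progress toward roadmap goals, measured from the baseline (illustrative).
baseline = {"lead_time_days": 14.0, "deploys_per_week": 1.0}
goal     = {"lead_time_days": 2.0,  "deploys_per_week": 5.0}
current  = {"lead_time_days": 6.5,  "deploys_per_week": 3.0}

for metric in baseline:
    span = goal[metric] - baseline[metric]
    progress = (current[metric] - baseline[metric]) / span
    print(f"{metric}: {progress:.0%} of the way from baseline to goal")
```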

Broadcasting metrics with dashboards: The metrics that are tracked should be communicated on a regular cadence to application teams, technology leadership and business stakeholders. This increases the visibility of the effort the application teams put into improving DevOps adoption.

Metrics-based gamification: Concepts such as badges, awards and recognition can be rolled out to reward teams that show demonstrable improvement in their DevOps maturity and adoption. This information can be used to motivate teams and create friendly competition to improve DevOps maturity across the enterprise.

Meet The Author – Shri Nivas

Shri Nivas is an avid DevOps Evangelist and Practitioner helping clients drive transformation across their organizations. A passionate advocate for accelerating time-to-market while improving quality and predictability, Shri takes an automation-first approach that leverages unique tooling and methodologies to create an empowered culture of continuous improvement, essential for DevOps maturity and success.

Connect with him on LinkedIn here.

