
Measuring Testing: How to Know If Code Quality Is Improving


By Testim

When it comes to software quality, many people refer to it as a quasi-mythical, unmeasurable property. On the one hand, I see the truth in acknowledging the nuance and subjectivity in defining a complicated factor such as quality. On the other hand, there must be some objective way to assess the quality of code. Otherwise, we’re doomed to only talk about quality in fuzzy ways without any hope of improving it.

The good news is that there are ways to verify whether software quality is improving. You can leverage valuable metrics that tell you useful things about your development process without hurting the team's morale and culture. The bad news is that there are many useless or even harmful metrics out there, and separating the good ones from the bad isn't straightforward.

Lucky for you, we’re here to help you not throw the baby out with the bathwater. In this post, we’ll walk you through the importance of measuring testing so you can improve your QA strategy. By the end of the post, you’ll have learned about the most valuable metrics you should start tracking.

Measuring Testing and Why It Matters

As promised, let’s start by explaining what test metrics are and why you should care about them.

What Are Test Metrics?

As the name suggests, test metrics are measurements you can make related to your testing strategy—and, more specifically, your test automation strategy. Sure, manual testing is still a thing in many scenarios, and you can undoubtedly track metrics related to that as well.

But since we believe test automation is vital for DevOps and CI/CD, this post focuses only on automated rather than manual tests.

Why Is Measuring Testing Important?

Why is measuring testing so vital? I could take an ideological stance, saying that if you’re passionate about software quality, you must measure to understand how you’re doing. Or maybe I could take a cynical viewpoint, saying that you must measure, so you have nice graphs and charts to show at the next meeting with the C-suite. Finally, I could go on rambling about how “you can’t improve what you don’t measure” or another such truism.

At the end of the day, though, it’s all about money.

Test automation is an investment and should be treated as such. Like any other investment in your portfolio, you should care about and track its ROI.

By measuring testing through a collection of valuable metrics, you can verify whether the investment you're making in test automation is reaping the benefits you expected. If it isn't, why not? Maybe your tests are too fragile. Maybe your team spends too much effort on test maintenance.

Whatever the reason, you must be pragmatic about test automation. If it isn’t working, change your strategy. That’s what test metrics help you do.

What Should We Measure During Testing?

As we’ve mentioned, there are many test metrics out there, some more valuable than others. Let’s now walk you through some examples of useful metrics you can add to your portfolio.

Duration of Test Runs

Test execution should be as fast as possible. Of course, the meaning of “fast” is entirely contextual, depending on the types of tests and the resources they need. That’s why a mental model such as the test automation pyramid is so valuable to help us evaluate how to distribute our testing efforts throughout our app.

When test runs are slow, that creates many undesirable consequences:

  • Tests become a bottleneck in the CI/CD pipeline. Ironically enough, this could delay manual processes—e.g., exploratory testing—that need to wait for the pipeline to deploy the code to a QA environment.
  • Developers might not integrate as often. If it takes a long time for engineers to see the results of their work, they might start integrating in larger batches, creating larger, riskier integrations and negating the benefits of CI.
  • Feedback cycles become longer, creating inefficiencies and waste.

Slow tests aren’t only a cause of problems; they’re also a symptom of other problems. Why are the tests slow? Maybe the application has issues with its architecture.

The duration of test runs can give you valuable feedback regarding the health of your app. Improving this metric will bring benefits to your project.
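To make this metric concrete, here's a minimal Python sketch of how you might summarize per-test durations (say, scraped from a CI report) to track total run time and flag outliers. The test names and the 5-second threshold are made up for illustration:

```python
# Hypothetical per-test durations in seconds, e.g., parsed from a CI report.
durations = {
    "test_login": 0.8,
    "test_checkout": 12.4,  # suspiciously slow for this suite
    "test_search": 1.1,
}

# Total run time: useful to chart over weeks to spot creeping slowness.
total = sum(durations.values())

# Flag tests above an arbitrary threshold (5s here) for investigation.
slow = {name: t for name, t in durations.items() if t > 5.0}

print(f"total run: {total:.1f}s, slow tests: {sorted(slow)}")
```

Tracking the total over time tells you whether the suite is becoming a pipeline bottleneck; the per-test outliers tell you where to start fixing it.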

Percentage of Tests Passed

As the name suggests, the percentage of tests passed indicates the ratio of test cases that have passed over a given period.

What is the ideal number for this metric? This might surprise you, but it’s not 100%. Why is that?

Nobody writes perfect code. Yes, not even your engineers, despite being awesome. So, statistically speaking, developers are bound to make mistakes regularly. If your rate of tests passed is always 100%, it could mean either of the following:

  • Your engineers are perfect and never make mistakes
  • Or your test suite is faulty/incomplete and is letting defects slip by

Which of the two do you think is more likely?
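For concreteness, here's a trivial sketch of how this metric is computed; the numbers are invented:

```python
def pass_rate(passed, total):
    """Percentage of test cases that passed over a run or period."""
    if total == 0:
        raise ValueError("no tests executed")
    return 100.0 * passed / total

# Say 230 of 240 test runs passed this week.
print(round(pass_rate(230, 240), 1))  # 95.8
```

A consistently sub-100% rate like this is healthy, as long as the failures point at real defects rather than flaky tests.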

Code Coverage / Test Coverage

Code coverage and test coverage are two different things, but it makes sense to group them here.

As you’re probably aware, code coverage is a metric that indicates how much of your code is covered by automated unit tests. Sure, you can have code coverage for other types of tests, such as integration testing, but most people consider it a unit testing metric.

Code coverage is divided into several subtypes: line coverage, statement coverage, and branch coverage. Of those, the most valuable is certainly branch coverage: it gives you an accurate picture of how many of the logical branches inside your code are covered by tests. Because of that, code coverage ties closely to another very valuable QA metric: cyclomatic complexity.

Test coverage, on the other hand, is less focused on code and unit testing. Test coverage is a higher-level metric, measuring how automated testing covers the application as a whole.
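To see why branch coverage is stricter than line coverage, here's a hypothetical Python sketch. The discount rule and the hand-rolled branch tracking are invented purely for illustration; a real project would use a coverage tool instead:

```python
observed = set()  # which outcomes of the `if` condition have actually run


def apply_discount(price, is_member):
    observed.add(is_member)  # record the branch outcome taken
    if is_member:
        price *= 0.9         # 10% member discount (made-up rule)
    return round(price, 2)


# A single "member" test executes every line of apply_discount,
# so line coverage is 100%...
assert apply_discount(100, True) == 90.0

# ...but the `if` has two logical outcomes, and only one has run.
branch_coverage = len(observed) / 2
print(branch_coverage)  # 0.5: the non-member path is never tested
```

That untested non-member path is exactly the kind of gap branch coverage exposes and line coverage hides.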

Measure Your Testing Strategy So You Can Improve It

Measuring testing efforts is vital for many reasons, as you've seen throughout this post. Some of the metrics you use can give you valuable insights into the current quality of your application. For instance, the fact that your tests take forever to run is a sign there might be deeper problems. At the same time, speeding up the tests brings many benefits to the development team, which, in turn, makes it easier to keep execution times down, creating a virtuous cycle.

As any Google search will tell you, there are many more test metrics out there besides the ones we’ve covered. However, if you’re new to measuring testing, I think it’s essential to start small and simple and grow from there.

It’s also crucial to leverage tools that can make even a small, straightforward approach as efficient as possible. For instance, Testim’s Managerial Test Automation Reports provide a centralized view of test metrics for the current week, helping you see the number of tests authored or updated, how many of those are active, the percentage of tests passed, test coverage, and much more.

What to read next

Test Automation Metrics 101: A Manager’s Cheat Sheet