
Are you monitoring the right QA metrics?

When I started looking into metrics a few years ago, I didn't fully understand what they meant or how I needed to use them. But I was so excited that I tried to convince my then-colleagues to come up with things they wanted to measure so that we, as a team, could track that information and get some answers.

I was so sad that everybody failed to see what a great thing I wanted to do that I got mad: Why didn't they want to do it? After all, it would have helped us all, and in my head, I was contributing to the growth of our product. Time passed, and at some point, this situation popped back into my mind. I stopped for a second and realized that I had missed a few steps. Instead of explaining "why" I wanted to do it, I had jumped directly to the "what." The "why" was clear enough for me, but it seemed it was not as clear for my colleagues.

To answer "what" metrics you need, and whether you need them at all, you must first understand "why." Metrics are not our ultimate goal, and we shouldn't track them just because everybody else does. We have to use them to learn more about our product, our processes, the progress of our development activities, and how we can improve.

We can compare this situation with a medical check-up: The doctor recommends a set of tests, but not before knowing "why": maybe we have a stomachache, perhaps we've been in a car crash, or maybe it is just a regular check-up. Based on that, the doctor adjusts the "what": what exactly do we need to know? Blood counts, cholesterol levels, and so on. Then the "when" and "how" come into play: you need to do a blood test in the morning, and you can't have any food or drinks before that. Continuing the analogy, there is also a context. Before giving us any treatment, the doctor makes sure that we are not allergic to that particular medicine or already on a different treatment that would conflict. Just as we have a business goal for the products we work on, so does the doctor: for the patient to be healthy. It's the same in a software development context: we want software that satisfies our customers' needs and has the desired quality level.

Many professionals in this industry fail to realize that metrics should always support them in finding out if, in their context, they are taking the proper steps to achieve the specific business goal. Another essential thing to mention is that quality is everybody's responsibility. Even if the title says QA metrics, that does not mean that only the test team should develop and monitor such metrics; rather, the whole team is responsible for them.


In an Agile setup, together with the product owner, scrum master, and the development team, we all develop functionality that will bring value to the customers while balancing cost and quality. The critical part is making sure our vision matches the customers'; otherwise, regardless of how reliable our software is, we will not have achieved our mission. In this context, metrics should help us better understand our software and our customers, and provide the right answers, so we know whether we are heading in the right direction: whether we have improved or have problems.

Let's analyze a simple example of identifying what metrics we need to monitor, according to our context, in order to support our business goal.

Business goal

  • Improve customer satisfaction.

Context

  • The customer provided the flows they follow in the currently implemented version, along with the needs that led to some adjustments.

Why?

  • We need to see if the implementation is going according to plan, if we are making progress, and if we can deliver, on time, something that the customer will find stable, with no work-impacting defects, and that satisfies the initial need.

What?

  • Tasks and bugs in progress;

A task in progress for a long time might mean that there are blockers, that it was underestimated, that it depends on third parties, and so on.

  • Unassigned bugs;

If we have unassigned bugs, that might mean we haven't talked with the PO about them yet and need to do so, that we picked up something else more urgent, that the bug is not severe or a priority, and so on.

  • Passed/failed test cases from the nightly automated test run;

If we have failed tests, that might mean the currently implemented functionality broke existing code, and we need to fix it.

When?

We work from 8 to 5, so we need this information when we start the day, so we can adapt based on what we learn. We could also use it in retrospectives or to give updates to stakeholders.

How?

By using the charts integrated into the application lifecycle management system used by the company.
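
If the charts in your ALM tool don't already cover a particular metric, a few lines of scripting can usually produce the same numbers. Below is a minimal sketch, in Python, of counting passed and failed test cases from the nightly run mentioned above, assuming the test runner writes a JUnit-style XML report; the results/nightly.xml path is just a placeholder for illustration.

```python
# Minimal sketch: count passed/failed/skipped test cases from a JUnit-style
# XML report. The report path below is a hypothetical example.
import xml.etree.ElementTree as ET


def pass_fail_counts(report_path: str) -> dict:
    """Return passed/failed/skipped counts from a JUnit-style XML report."""
    root = ET.parse(report_path).getroot()
    counts = {"passed": 0, "failed": 0, "skipped": 0}
    # Works whether the root is <testsuites> or a single <testsuite>.
    for case in root.iter("testcase"):
        if case.find("failure") is not None or case.find("error") is not None:
            counts["failed"] += 1
        elif case.find("skipped") is not None:
            counts["skipped"] += 1
        else:
            counts["passed"] += 1
    return counts


if __name__ == "__main__":
    counts = pass_fail_counts("results/nightly.xml")  # placeholder path
    total = sum(counts.values())
    print(f"{counts['passed']}/{total} passed, "
          f"{counts['failed']} failed, {counts['skipped']} skipped")
```

The point is not the script itself but having the numbers in hand at the start of the day, in whatever form your team already trusts.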

The most important thing when it comes to QA metrics is to have the right questions. Determining exactly what to monitor is not the hard part. The hard part is to fully understand why you need it. Not that long ago, I started doing a regression report with all kinds of information in it: test executions, bugs found, etc. I had in mind that the entire team must work towards improving customer satisfaction. Our context? Well, the team composition had changed very often over the years, including the product owner. I wanted to know if that had any impact on the way we developed the application. Of course, I wanted to know if we were efficient. Did we have a suitable dev/test process?

I grabbed the things that were the most accessible to me at that point: the bugs we logged. Soon, the metrics told me that for one of our application modules, it took a lot more time to fix issues that were low in severity. Why was that? Probably the code was less maintainable or harder to understand and needed refactoring. Of course, the context component also came into play here: was that a module we usually made adjustments to? When were those bugs introduced? And so on.
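
To make that concrete, here is a minimal sketch of the kind of analysis I'm describing: average time to fix bugs, grouped by module and severity. The CSV export and its column names (module, severity, opened, closed) are assumptions for illustration; any bug tracker export with similar fields would work.

```python
# Minimal sketch: average days to fix, grouped by (module, severity),
# from a hypothetical CSV export of the bug tracker with columns
# module, severity, opened, closed (dates in ISO format, e.g. 2023-04-01).
import csv
from collections import defaultdict
from datetime import date
from statistics import mean


def avg_days_to_fix(csv_path: str) -> dict:
    """Return {(module, severity): average days between opened and closed}."""
    durations = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            opened = date.fromisoformat(row["opened"])
            closed = date.fromisoformat(row["closed"])
            durations[(row["module"], row["severity"])].append((closed - opened).days)
    return {key: mean(days) for key, days in durations.items()}


if __name__ == "__main__":
    for (module, severity), days in sorted(avg_days_to_fix("bugs.csv").items()):
        print(f"{module:<20} {severity:<8} {days:5.1f} days on average")
```

A table like this won't tell you why a module is slow to fix, but it points you at the module worth asking questions about.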

Also, what we do after we discover some information is essential. Merely adopting a metric and not following up will not bring additional value.

Sometimes metrics will tell us things that we did not suspect; other times they will confirm our theories and push us towards confident decisions. If you and your team don't yet know the right questions to ask, then I suggest tracking your process and observing:

  • whether your work is going according to plan,
  • the quality state of your product before a customer release (issues, performance, security),
  • and of course, the aftermath of a release: are your customers happy? Does the software satisfy their needs?