
White Paper: Are You Using the Right Testing Metrics?

It is not enough to perform testing on every project. Firms need to make sure they are doing it effectively and efficiently. The way to ensure this happens is by using and tracking the right metrics. However, most testing leaders do not spend much time considering which metrics are most relevant and useful, focusing their efforts instead on steps like test design, budget and lead time. Neglecting the right metrics can hamstring the value of the entire testing effort.

Many testing activities utilize a basic pass or fail assessment. This works for some simple tasks and applications but is clearly unsuitable when performance and user experience matter. For example, the difference between a great and a poor mobile app is much more than whether it works or not. In our client work and research, we have seen that an app that does not load within a few seconds across multiple devices and platforms can produce a poor user experience, lost sales or permanent user abandonment. Unfortunately, less attention is paid to granular system performance and experiential considerations. This is not surprising, considering testing is often shortchanged in the software development life-cycle because of lack of attention, budget or time.

Fortunately, getting the right metrics is as much an organizational and test strategy fix as it is about securing more resources and time. Below are some of our best practices around selecting the right metrics:

1. Align to business needs

Identifying the right metrics requires test managers to first align with the ‘business’ on their user experience, brand and financial needs. These needs should link with testing metrics that measure the relevant product or application features and specifications. This alignment should be codified in an agreed and detailed set of requirements.

2. Test like an end user

Whether a feature or an application works or not is just the beginning. The testing team needs to evaluate the product as if they were actual users. They need to consider other vital issues like performance, response times, data transfer and stability under a variety of everyday conditions and use.

3. Test across the full usage continuum

It is not enough to test ‘speeds and feeds’ of the application during a given action or command. You need to test the entire user’s journey (i.e. from log in to confirmation) to make sure the desired experience or transaction consistently performs to expectations and requirements. For example, an e-commerce application can be problematic if it takes too long to process a payment, even if it initially loads quickly and accurately.
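One way to operationalize journey-wide testing is to time each step against its own budget rather than only the total. The sketch below is a minimal, hypothetical Python harness: the step functions and their sleep-based timings stand in for real user actions (log in, add to cart, pay) and are purely illustrative, not a specific test framework.

```python
import time

# Hypothetical journey steps; each function stands in for a real user
# action. The sleep() calls simulate server round trips (made-up values).
def step_login():
    time.sleep(0.01)       # simulate the login round trip

def step_add_to_cart():
    time.sleep(0.01)       # simulate adding an item to the cart

def step_pay():
    time.sleep(0.05)       # simulate the (slower) payment call

def time_journey(steps):
    """Time each named step and return a dict of per-step durations."""
    timings = {}
    for name, step in steps:
        start = time.perf_counter()
        step()
        timings[name] = time.perf_counter() - start
    return timings

timings = time_journey([("login", step_login),
                        ("add_to_cart", step_add_to_cart),
                        ("pay", step_pay)])

# Check every step against its own budget, not just the journey total:
# a fast initial load can still hide a slow payment step.
budgets = {"login": 0.5, "add_to_cart": 0.5, "pay": 0.5}
for name, elapsed in timings.items():
    assert elapsed <= budgets[name], f"{name} exceeded budget: {elapsed:.3f}s"
```

Because each step is budgeted separately, a slow payment call fails the test even when the overall journey time looks acceptable.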

4. Choose metrics that matter

To get beyond pass/fail assessments, testing leaders need to consider metrics that reflect the user’s actual experience of the application, such as:

  • Application response time
  • Application speed
  • Data usage
  • API performance and integration
  • CPU usage
  • Interface factors like sounds, microphone, display (graphic presentation)
  • Location usage for GPS functionality
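Metrics like these can be evaluated against explicit thresholds rather than a single pass/fail verdict. The sketch below assumes hypothetical metric names and limits (response time, data usage, CPU) chosen for illustration; they are not a standard.

```python
# Illustrative thresholds for a few of the metrics above. The names and
# limits are assumptions for this sketch, not an industry standard.
THRESHOLDS = {
    "response_time_s": 2.0,   # full response within 2 seconds
    "data_usage_mb":   5.0,   # no more than 5 MB transferred
    "cpu_percent":    30.0,   # stay under 30% CPU on the device
}

def evaluate(samples):
    """Compare each measured sample with its threshold; return a report
    showing the value, the limit, and whether the metric passed."""
    report = {}
    for metric, value in samples.items():
        limit = THRESHOLDS[metric]
        report[metric] = {"value": value, "limit": limit, "ok": value <= limit}
    return report

# Hypothetical measurements from one test run:
report = evaluate({"response_time_s": 1.4,
                   "data_usage_mb": 6.2,
                   "cpu_percent": 22.0})
```

Here the run "passes" on speed and CPU but fails on data usage, a nuance a single pass/fail verdict would flatten.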

The special case of system performance

Looking at CPU and API performance metrics cannot tell the entire application or product story. Testers have to look at how the entire system performs as well. To do this, they need to look at system response metrics. Some metrics to consider include client compute time (the time it takes for the application to render client-facing features like HTML and scripts) and concurrency (the number of simultaneous requests an application will make for resources).
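Concurrency behaviour can be probed with a small load harness. In this Python sketch, `fake_request` is a stand-in for a real client call (an assumption, not an actual HTTP client); the harness issues the same batch of requests serially and then with ten in flight at once, recording each response time.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(i):
    """Stand-in for one request to the system under test; a real harness
    would issue an actual client call here. Returns the elapsed time."""
    start = time.perf_counter()
    time.sleep(0.02)              # simulate server work (made-up value)
    return time.perf_counter() - start

def measure_under_concurrency(n_concurrent, n_requests):
    """Issue n_requests with at most n_concurrent in flight at once,
    returning the individual response times."""
    with ThreadPoolExecutor(max_workers=n_concurrent) as pool:
        return list(pool.map(fake_request, range(n_requests)))

serial = measure_under_concurrency(1, 10)     # one request at a time
parallel = measure_under_concurrency(10, 10)  # ten simultaneous requests
```

Comparing the `serial` and `parallel` distributions shows whether response times degrade as simultaneous requests pile up, which per-request metrics alone cannot reveal.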

Careful thought should also be given to what kind of data the system performance metrics collect. To get a true reading on system performance, the metrics need to be sufficiently granular and accurately collected. For example, an average response time tells you much less than a breakdown of response times by percentile, which gives a truer picture of what users are experiencing at what speeds. Moreover, not all system performance measures are equal, nor should they be aggregated. For example, response time at log in and initial load time are not the same thing and can affect the user experience differently. Lumping these different times together can hide problems and their distinct causes.
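The average-versus-percentile point can be made concrete with a few lines of Python using the standard library. The response times below are made-up data: 95% of requests are fast, but 5% take three seconds.

```python
import statistics

# Illustrative response times in seconds (fabricated for the sketch):
# 90 fast requests, 5 slightly slower, and 5 slow outliers.
samples = [0.2] * 90 + [0.3] * 5 + [3.0] * 5

mean = statistics.mean(samples)              # 0.345 s -- looks healthy
cuts = statistics.quantiles(samples, n=100)  # 99 percentile cut points
p50, p95, p99 = cuts[49], cuts[94], cuts[98]
# The median (p50) is still 0.2 s, but p95 and p99 expose the
# three-second tail that the average conceals.
```

A user hitting the 99th percentile waits roughly nine times longer than the average suggests, which is exactly the kind of problem an aggregated mean would hide.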

It is often said in business that you cannot manage what you cannot measure. When it comes to improving quality assurance and application performance testing, testing managers need to heed this axiom. However, we are not suggesting testing everything the user experiences or the product does. That would not be realistic or cost effective. Test managers should always adopt a risk-focused approach. Aspects of the application that are not real-world focused or do not impact business outcomes (e.g., increase revenue risk, hurt the brand) should not be prioritized highly. Finally, a line often attributed to Einstein sums up the issue of testing metrics nicely: “Not everything that can be counted counts, and not everything that counts can be counted.”