There are many varying definitions for the different types of non-functional test. While we are not attempting to be definitive, Testing Performance would like to set out what we mean by each of them.

The Performance Test

This is a test that measures, or determines, the performance of an application or an application component. Of all non-functional testing, this is probably the most commonly executed type of test.

The overall purpose of a performance test is to determine whether the application will perform acceptably, and remain functionally correct, even at high workloads.

The objectives of a performance test would typically be something along the lines of:

  • Determine if the application can support the expected workload.
  • Find and resolve any bottlenecks.

It is very difficult (i.e., time-consuming and expensive) to build and replicate in a test environment an exact simulation of the workload that the application will be expected to process in production. It is much easier (i.e., quicker and cheaper) to build an approximation of the workload. Often the 80:20 rule is used to persuade project managers that an approximation makes more sense: 80% of the workload is generated by 20% of the functionality. Of course, no two applications are the same; in some we can easily achieve 90:10, in others it is more like 70:30. Careful analysis by the performance tester will help determine the volumetrics for the application and therefore which functions should be included in the performance test (a simple sketch of this kind of analysis follows the list below).

Using the 80:20 rule is, in essence, a compromise of the testing effort. While some or most performance issues will be detected, performance issues associated with functionality not included in the performance test could still cause problems on release to production. Further steps can be taken to minimise this possibility, including:

  • Manually exercise functions that are not included in the automated workload while a performance test is executing.
  • Observe and measure performance, especially database performance, in functional test environments.
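
As a simple illustration of the kind of volumetric analysis mentioned above, the sketch below takes a set of hypothetical production transaction counts (the function names and figures are invented for the example) and identifies the smallest group of functions that accounts for roughly 80% of the workload.

```python
# Hypothetical illustration of a volumetric analysis: the function names and
# daily transaction counts below are invented purely to demonstrate the idea.
daily_transaction_counts = {
    "login": 120_000,
    "search_customer": 95_000,
    "view_account": 60_000,
    "update_contact_details": 9_000,
    "create_account": 4_000,
    "close_account": 1_500,
    "run_month_end_report": 300,
}

total = sum(daily_transaction_counts.values())
cumulative = 0.0
shortlist = []

# Walk the functions from busiest to quietest until ~80% of volume is covered.
for function, count in sorted(daily_transaction_counts.items(),
                              key=lambda item: item[1], reverse=True):
    shortlist.append(function)
    cumulative += count / total
    if cumulative >= 0.80:
        break

print(f"{len(shortlist)} of {len(daily_transaction_counts)} functions "
      f"cover {cumulative:.0%} of the daily workload: {shortlist}")
```

If the cumulative figure only reaches 80% after most of the functions have been included, the application sits closer to the 70:30 end of the scale and the scripting effort will be correspondingly larger.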

Once an approximation of the production workload has been determined and agreed, the performance tester builds the automated scripts into a workload that can be executed in an orderly and controlled fashion. The work done early in the performance testing process becomes a good foundation on which to analyse and publish results, ultimately determining whether or not the application can meet the specified objectives.

Performance tests usually need to be run multiple times as part of a series of test-tune cycles. Where a performance bottleneck is detected, further tests are run with an ever-increasing amount of tracing, logging or monitoring in place. When the cause of the problem is identified, a solution is devised and implemented, and the performance test is re-run to confirm the bottleneck has been removed. It is, of course, quite difficult to predict how many performance issues will be detected as part of a performance testing exercise.
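
To make the idea of an orderly, controlled workload a little more concrete, here is a minimal sketch rather than a real load-testing tool: it assumes the individual transaction scripts already exist as functions (short sleeps stand in for the real work here), runs them concurrently in proportions taken from the volumetric analysis, and reports simple response-time figures.

```python
# Minimal, hypothetical sketch of an orchestrated workload: scripted
# transactions are run concurrently in fixed proportions and their
# response times collected.  Sleeps stand in for the real transactions.
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def login():
    time.sleep(random.uniform(0.05, 0.15))   # stand-in for the real transaction

def search_customer():
    time.sleep(random.uniform(0.10, 0.30))

def view_account():
    time.sleep(random.uniform(0.05, 0.20))

# Transaction mix taken from the volumetric analysis (assumed figures).
workload_mix = [(login, 0.45), (search_customer, 0.35), (view_account, 0.20)]

def one_iteration():
    """Pick a transaction according to the mix, run it, return its response time."""
    transaction = random.choices(
        [t for t, _ in workload_mix], weights=[w for _, w in workload_mix])[0]
    start = time.perf_counter()
    transaction()
    return transaction.__name__, time.perf_counter() - start

# 20 concurrent virtual users, 200 iterations in total.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(lambda _: one_iteration(), range(200)))

for name in {n for n, _ in results}:
    times = [t for n, t in results if n == name]
    print(f"{name}: {len(times)} calls, "
          f"mean {statistics.mean(times):.3f}s, max {max(times):.3f}s")
```

In practice a dedicated load-testing tool would play this role, but the principle is the same: a fixed transaction mix, a controlled level of concurrency and consistent measurement from run to run.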

The table below is a simplistic guide to the number of test-tune performance testing cycles that may be executed, for the first release and for subsequent maintenance drops, depending on the origins of the application.

Application origins                            First release   Maintenance drop (first 6 months)   Maintenance drop (after 6 months)
Off-the-shelf package, minimal customisation   4               3                                   2
Off-the-shelf package, heavily customised      6               3                                   2
Bespoke application                            10              6                                   3

From time to time this article will be updated and enhanced. Should you be interested in learning more, or would like Testing Performance Ltd (http://www.testingperformance.org/index.php) to undertake performance testing of your Contact Centre, please feel free to contact us.





Source: Sascha McDonald