Performance Done Properly

 9 Aug 2016 

System performance is often in the limelight. Without knowing it, everyone is aware of the consequences of poor performance. Some of these, like the Myer Boxing Day Sale and the ABS Census Online failure, are high profile and attract media attention, but they are just the tip of the iceberg for performance failures. This raises two questions: ‘Why do we still see high profile performance failures?’ and ‘Why do so many performance test engagements fail?’

[Image: icons representing Risk, Requirements, Results and Response]

In many instances the answer is simple: testing wasn’t considered, or it was executed poorly. In many high profile cases, however, the answer is not as cut-and-dried, and we need to look deeper at the performance lifecycle to identify where the issues occur. Much like any software development lifecycle, issues can be introduced at any stage of the performance process, and issues introduced at different stages have different consequences.

Performance testing engagements can fail for myriad reasons. This article considers the 4 R’s of performance testing and outlines issues that can be introduced at each stage:

  • Risk – Do we need to worry about performance?
  • Requirements – What performance testing do we need?
  • Results – What happens when the system is under load?
  • Response – What are we going to do about it?
Risk – Do We Need to Worry About Performance?

This may seem like a strange place for an article about performance testing to start, but is performance testing really required? The answer is: it depends. It depends on the level of risk associated with the project and the user profile of the system. If I am building a system to be used by one or two internal users, my approach to performance should be very different from that for an application to be used by the entire Australian (or world) population.

At a recent conference, a question was asked about how to define the Return on Investment (ROI) for performance testing. After much discussion it was concluded that performance testing was not necessarily about an ROI, but more about risk reduction and the power of knowledge. Can you afford not to know what is going to happen when this system goes live? What are the consequences of the system not performing as expected?

By establishing the risk profile of a project accurately and quantitatively, informed decisions can be made about the level of performance testing required. If there is no risk, there is no test. A low-risk project will require less comprehensive performance testing than a major website or application launch where failure would be catastrophic.

The key things to consider for the project are:

  • The number of users
  • The type of users (internal, external, public)
  • The technology being used
  • The type of system
  • Is this a new system or a change to an existing system?
  • What will be the impact of failure, who will be affected and what will it do to the brand?

If a project gets this wrong, it may invest heavily in performance testing a system that carries little risk, or worse, under-invest in a high-risk application.
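
To make this assessment repeatable rather than a matter of gut feel, the considerations listed above can be folded into a simple scoring model. The sketch below is purely illustrative: the categories, weightings and thresholds are assumptions for the sake of the example, not an established framework.

```python
# Hypothetical sketch: turning the risk considerations above into a rough,
# repeatable score. Weightings and categories are illustrative only.

def risk_score(users: int, user_type: str, new_system: bool, failure_impact: str) -> int:
    """Return a rough 0-10 performance risk score used to scope testing."""
    score = 0

    # Scale: more users means more performance risk.
    if users > 100_000:
        score += 4
    elif users > 1_000:
        score += 2
    elif users > 10:
        score += 1

    # Audience: public-facing systems carry brand and revenue risk.
    score += {"internal": 0, "external": 1, "public": 2}.get(user_type, 1)

    # A new build carries more unknowns than a change to a proven system.
    score += 2 if new_system else 1

    # Consequence of failure, as judged by the business.
    score += {"minor": 0, "moderate": 1, "catastrophic": 2}[failure_impact]

    return score


if __name__ == "__main__":
    # A public launch to millions of users scores high; a two-user internal tool scores low.
    print(risk_score(5_000_000, "public", True, "catastrophic"))  # 10
    print(risk_score(2, "internal", False, "minor"))              # 1
```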

Requirements – What Performance Testing Do We Need?

Following on from our risk assessment, we need to tailor the performance engagement to meet the specific requirements of the project. We have established that we need to do some performance testing, but what precisely do we need to test?

Requirements, specifically Non-Functional Requirements (NFRs), play an important role here, and they are almost universally poorly defined or untestable. As with any requirement, a project must review its NFRs to make sure they meet the following criteria:

  • Correct
  • Unambiguous
  • Consistent
  • Feasible
  • Testable

As performance engineers, we must take responsibility for guarding the quality of these NFRs. With our wealth of knowledge and prior experience, it can be obvious to us that the workload models defined in the NFRs do not accurately reflect how the system will be used in production. We may build our tests against these NFRs, and execute them exceptionally well, but if the NFRs are not accurate and don’t match what will happen in the real world, then what is the value of our testing? We may be able to prove we can support X million submissions over 8 hours, but if all of those submissions will actually occur in 1 hour, then our testing is of little value. Making sure our testing is relevant to the real world is essential to achieving quality performance outcomes.
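
To see why the time window in a workload model matters so much, it helps to look at the arrival rate it implies. The volume below is hypothetical, but the arithmetic is the point: the same total over a shorter window means a much higher sustained load to test against.

```python
# Illustrative arithmetic only: the same volume over different windows implies
# very different target throughput for the workload model.

submissions = 2_000_000  # hypothetical total volume stated in an NFR

rate_over_8_hours = submissions / (8 * 3600)   # ~69 submissions per second
rate_over_1_hour = submissions / (1 * 3600)    # ~556 submissions per second

print(f"Spread over 8 hours: {rate_over_8_hours:.0f}/s")
print(f"Concentrated in 1 hour: {rate_over_1_hour:.0f}/s")
```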

Results – What Happens When the System Is Under Load?

Most projects spend too much time on this stage because of under-investment in establishing the risk profile and requirements. Our results should give us a binary outcome for each requirement – pass or fail. We shouldn’t need to invest hours of effort to interpret and understand the results. If we establish proper non-functional requirements, then each one is either met or not met, and as a result our exit criteria for the performance test execution phase are either met or not met. That isn’t to say we don’t need to spend time diagnosing failures, but we shouldn’t need to spend our time determining whether a requirement was met.
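
As a minimal sketch of this idea, if each NFR is expressed as a measurable threshold, comparing it against a measured result reduces to a pass or fail check. The requirement names and numbers below are hypothetical.

```python
# NFRs expressed as testable thresholds (hypothetical values)
nfrs = {
    "search_p90_response_ms": 2000,   # 90th percentile response time under 2 s
    "checkout_error_rate_pct": 1.0,   # error rate under 1%
}

# Measured outcomes from a test run (hypothetical values)
measured = {
    "search_p90_response_ms": 1650,
    "checkout_error_rate_pct": 2.4,
}

# Each requirement is either met or not met; no interpretation required.
for name, limit in nfrs.items():
    verdict = "PASS" if measured[name] <= limit else "FAIL"
    print(f"{name}: measured {measured[name]} vs limit {limit} -> {verdict}")
```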

Information, Not Data

What is communicated from the performance consultants to the project team must be useful, meaningful information, not raw data. Performance tests generate gigabytes of data, but that data isn’t the output of a performance test. The output is something meaningful to the business stakeholders, so that they understand exactly what test was run and what happened. The information must be managed, tailored and communicated to each key stakeholder in a manner appropriate to their role and technical knowledge. Again, this should be linked back to the requirements.
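
As a rough illustration of the difference between data and information, the snippet below collapses raw response-time samples into a short per-transaction summary that a stakeholder can act on. The transactions and timings are invented for the example.

```python
from statistics import quantiles

# Raw samples in milliseconds, as a load tool might record them (illustrative values)
samples = {
    "Login":    [310, 290, 450, 380, 520, 300, 410, 330, 470, 360],
    "Checkout": [900, 1200, 850, 1100, 2300, 950, 1000, 1800, 970, 1050],
}

# Collapse the raw data into a per-transaction summary.
for transaction, times in samples.items():
    p90 = quantiles(times, n=10)[-1]  # estimate of the 90th percentile
    print(f"{transaction}: {len(times)} samples, "
          f"avg {sum(times) / len(times):.0f} ms, 90th percentile {p90:.0f} ms")
```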

Response – What Are We Going to Do About the Results?

This is the most important part of the process. If the project is not going to do anything about the performance test results, why run the tests at all? The project’s response to the performance results is critical to getting value from the performance test process. There are too many instances of projects deciding to progress to production in the full knowledge that performance is poor, and then dealing with the consequences: bad publicity and lost revenue.

In instances where all the performance requirements are met on the first cycle of executions, the acceptable response is to do nothing further. However, if there are deviations from the requirements, then something must be done.

There are a number of potential responses to poor performance test results:

  1. Resolve the issues – either by enhancing code or leveraging additional infrastructure, the bottlenecks identified can either be removed or reduced.
  2. Reduce the rollout – limit the scope for the launch, which may mean focussing on a smaller pilot launch, or doing a phased rollout.
  3. Manage the issue – if it can’t or won’t be fixed, consider implementing Application Performance Monitoring (APM) tools in production so that issues can be spotted before they affect users.

Each of the responses listed is valid, along with a number of others. The only response which should not be considered is “do nothing”. If a project is going to invest money in performance testing, it should make sure it uses the results it has paid for to realise the benefits.

Conclusion

While in most instances undertaking performance testing is better than doing nothing, in order to realise the true benefit we have to consider 4 key things:

  1. What is the risk of this system not performing?
  2. What are the requirements for my performance testing?
  3. What are the results of the performance testing?
  4. What is the project response to these results?

By asking these questions, the project can better position itself to get value from the investment in performance testing. By ignoring the questions, or focussing only on the results, the project is leaving the door open for an expensive and embarrassing failure.

Speed as a Key Asset

At Planit, we can help you make performance an asset, not a liability. Our expert consultants can provide testing, assessments and advice to mitigate performance risks and achieve peak results.
 
Find out how we can help you navigate these challenges, achieve your performance goals, and deliver a rapid, responsive, and reliable experience that delights your customers.

 
