
INSIGHTS / Articles

Performance Matters: Boosting the Bottom Line by Decreasing Response Times

 24 Oct 2019 

Successful business transformation is defined by an organisation’s ability to rapidly deliver high-quality, affordable products that customers love. Users also expect an excellent experience when interacting with an application, and this means a slick, responsive and stable interface.

Poorly optimised applications reduce revenue for businesses: the longer a page takes to load, the less likely users are to progress further with the application. Higher bounce rates have also been shown to directly reduce sales.

Consider that the average bounce rate for pages loading in less than 2 seconds is 6.5%. This almost doubles when load times increase to 3 seconds and blows out to 38% at 5 seconds.

In many cases, poor performance can lead to a lost customer. Once they’re gone, the cost of attracting a new customer will be several times higher than if the existing one had been retained.

Inefficient systems also cost more to run, mainly due to additional compute costs in the cloud, larger instance sizes of customised SaaS products, and longer usage of pay-per-execution serverless functions. Tuned applications have lower compute requirements and can therefore be hosted more cost-effectively.

Traditionally, businesses have mitigated these risks via performance testing at the end of the lifecycle. However, this was an expensive and time-consuming exercise, with little time to react to any findings. It meant that issues were discovered just before or in production, where the cost of fixing them was orders of magnitude higher than earlier in the lifecycle.

So, how can mitigating performance risks contribute to an organisation’s ability to succeed in business transformation programs?

The need for speed

Previous articles in this series have discussed how to implement quality engineering, continuous testing and DevOps as mechanisms to ensure quality applications are delivered, and can be changed at speed – key contributors to successful business transformation.

However, businesses often struggle with implementing continuous automated testing, especially in the area of performance testing, with less than one quarter reporting that their performance tests are automated.

To compound this issue, the rise of multi-cloud, SaaS, containerisation, serverless and micro-service architectures means application complexity is increasing. Most transformations have a digital component, which typically brings in modern frontend frameworks such as React and Angular. With increased complexity, the risk of failures or of performance being impacted unexpectedly also increases.

A recent survey found that 74% of CIOs are now concerned that such complexity will soon make them unable to effectively manage the performance of their systems. A further 44% believe that this is a real threat to their business.

So how do we ensure that applications meet high user expectations for performance, both before deployment and then continuously in production? In addition, how do we ensure that performance risks are mitigated when the release cadence is high?

Benefits of performance

Performance Engineering is the process of managing application performance risks throughout the entire delivery lifecycle. This ensures software products perform and scale in line with user demands and expectations.

Businesses are also able to boost profits by:

  • Maximising conversion during peak periods.
  • Increasing satisfaction by creating a slick user experience.
  • Minimising operational costs.
  • Eliminating system failure due to load.

The last point is particularly notable, as applications that are not designed with scalability and performance in mind often fail under load at busy periods. This not only represents lost sales, but also pushes your customers into the hands of your competitors.

It has been estimated that the average cost of application downtime for Fortune 1000 companies is approximately $100,000 per hour. Although each application will be different, every business must take steps to understand the risk and potential cost of downtime and failure for their applications.
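A simple model makes this risk concrete. The sketch below estimates direct downtime cost from revenue lost per hour plus any recovery effort; the figures are illustrative placeholders, not benchmarks, and should be replaced with your own application's numbers.

```python
def downtime_cost(revenue_per_hour: float, hours_down: float,
                  recovery_cost: float = 0.0) -> float:
    """Rough estimate of direct downtime cost: lost revenue plus recovery effort.

    Indirect costs (churn, reputation, SLA penalties) are excluded here and
    often dominate, so treat this as a lower bound.
    """
    return revenue_per_hour * hours_down + recovery_cost

# At the oft-quoted $100,000/hour, a 90-minute outage costs at least:
estimate = downtime_cost(100_000, 1.5)
print(f"${estimate:,.0f}")
```

Even this lower-bound figure is usually enough to justify investing in performance engineering well before an outage occurs.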

Using Performance Engineering techniques, alongside Quality Engineering and Continuous Testing, organisations can manage performance risk without impacting speed of delivery. This is achieved by:

  • Creating non-functional requirements tailored to your users and business.
  • Understanding and managing performance risks.
  • Designing architectures for performance.
  • Integrating automated performance tests into Continuous Integration and Continuous Delivery (CI/CD) pipelines to actively highlight bottlenecks as they are introduced.
  • Implementing effective Application Performance Monitoring (APM) in production and non-production environments to enable fast feedback for product teams.

[Diagram: the role of performance in development, test, and production]

Creating Non-Functional Requirements

All applications are unique and serve different purposes for their business. But what are the business risks of slow performance and downtime to an organisation?

Not all applications need to scale to thousands of users or need to have sub-second response times. As stated in the Quality Engineering article, it’s best to start with clear, concise and testable non-functional requirements during the planning phase, with testable non-functional acceptance criteria stated within each user story.
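A non-functional requirement is only useful if it can be checked automatically. The sketch below turns a hypothetical acceptance criterion ("95% of search requests complete within 800 ms") into a testable assertion over sampled response times; the target, metric, and sample data are all assumptions for illustration.

```python
import statistics

# Hypothetical acceptance criterion from a user story:
# "95% of search requests complete within 800 ms under normal load."
P95_TARGET_MS = 800.0

def p95(latencies_ms: list) -> float:
    """95th-percentile latency from a sample of measurements (ms)."""
    # quantiles(n=20) returns 19 cut points; index 18 is the 95% boundary.
    return statistics.quantiles(sorted(latencies_ms), n=20)[18]

def meets_criterion(latencies_ms: list, target_ms: float = P95_TARGET_MS) -> bool:
    """True if the sampled latencies satisfy the acceptance criterion."""
    return p95(latencies_ms) <= target_ms

# Sampled response times (ms) from a hypothetical test run:
sample = [120, 180, 210, 250, 300, 310, 330, 400, 420, 450,
          470, 500, 520, 560, 600, 640, 700, 720, 760, 790]
print(meets_criterion(sample))
```

Expressed this way, the criterion can run as an ordinary automated test, which is what makes it "testable" in the sense described above.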

Designing for Performance

Once performance targets are specified as acceptance criteria at a user story level, business workflows can be designed with those criteria in mind. With the average web or mobile transaction now spanning 37 technologies, platforms or components, if key transactions are not architected efficiently from the start, no amount of expensive tuning will bring them back within their targets.

Integrating Performance Tests into Continuous Delivery Pipelines

The cadence of application releases is constantly increasing: the proportion of businesses releasing at least once per day grew from 11% to 26% in the last year alone, yet only 35% conduct performance testing as part of each deployment. If companies want to effectively mitigate against releasing poorly performing software to customers, they need to ensure that performance testing is embedded into their delivery pipeline and executed regularly and automatically.

Performance tests should be broken down into manageable components and embedded into CI/CD processes, just as functional automation tests are. These tests should be targeted at providing teams with enough information on the performance characteristics of each build relative to previous versions, allowing them to decide whether it is safe to release.

By testing frequently and narrowing the scope of performance tests to specific components, APIs or subsystems, teams can isolate the point at which any degradation occurs, making it much easier to identify and fix performance bottlenecks.
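The release decision described above can be reduced to a simple gate: compare this build's latency percentiles against the previous build's baseline and fail the pipeline on a significant regression. The sketch below is a minimal illustration; the metric names, sample values, and 10% tolerance are assumptions, not recommendations.

```python
REGRESSION_TOLERANCE = 0.10  # fail if any percentile degrades by more than 10%

def check_regression(baseline: dict, current: dict) -> list:
    """Return a list of regressed metrics; an empty list means safe to release."""
    failures = []
    for metric, base_value in baseline.items():
        cur = current.get(metric)
        if cur is not None and cur > base_value * (1 + REGRESSION_TOLERANCE):
            failures.append(f"{metric}: {base_value:.0f} ms -> {cur:.0f} ms")
    return failures

# Percentiles (ms) gathered by the pipeline's performance test stage:
baseline = {"checkout_p50": 180, "checkout_p95": 420}  # previous build
current = {"checkout_p50": 185, "checkout_p95": 440}   # this build
failures = check_regression(baseline, current)
if failures:
    raise SystemExit("Performance regression: " + "; ".join(failures))
print("No significant regression; safe to release.")
```

Because each run is scoped to a specific transaction or component, a failure here points directly at the build and the subsystem that introduced the degradation.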

Effective APM in Production and Non-Production Environments

It’s simple: you cannot improve what you cannot or do not measure. Therefore, organisations must implement tools and processes that provide observability into their systems.

This needs to be done across the full user experience. For example, modern frameworks like Angular and React shift more compute away from the backend to the client side, so backend metrics alone no longer tell the whole story.

Teams also need to have tools and processes in place to monitor the entire flow. This includes defining and gathering metrics that allow teams to understand the impact, both positive and negative, of system changes on performance.

Data gathered from production monitoring can also be reused to validate feature usage and help with value stream mapping. Making this data visible allows product teams and senior management to see the impact of performance on end users.

Ensure you have observability in place in both test and production environments to allow your teams to quickly identify where issues exist before they affect customers. This reduces MTTR (Mean Time to Resolve) and any downtime that may otherwise occur from performance problems.
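Observability starts with measurement. The sketch below shows one minimal way to instrument code in-process: a decorator records each call's wall-clock duration, and a summary aggregates the samples into the kind of metrics a team might track. In a real deployment these measurements would be exported to an APM or metrics backend rather than held in a dictionary; the function and metric names are illustrative.

```python
import statistics
import time
from collections import defaultdict
from functools import wraps

# In-memory store of samples; a real system would export these to an APM tool.
_timings = defaultdict(list)

def timed(name: str):
    """Decorator that records the wall-clock duration (ms) of each call."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                _timings[name].append((time.perf_counter() - start) * 1000)
        return wrapper
    return decorator

def summary(name: str) -> dict:
    """Aggregate recorded samples into metrics a team might monitor."""
    samples = sorted(_timings[name])
    return {
        "count": len(samples),
        "p50_ms": statistics.median(samples),
        "max_ms": samples[-1],
    }

@timed("render_page")
def render_page():
    time.sleep(0.01)  # stand-in for real work

for _ in range(5):
    render_page()
print(summary("render_page"))
```

The same instrumentation can run in test and production environments alike, giving teams the "before it affects customers" visibility that shortens MTTR.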

Prepare for your transformation

Whether you are at the start of your digital transformation journey or in the midst of one, the insights above should provide a high-level view of the needs, challenges and activities involved in embedding performance, agility, quality engineering and continuous improvement throughout the delivery lifecycle.

If you are facing some of the challenges mentioned above, contact us to discuss a delivery optimisation review and see how we can help streamline your performance testing pipeline.

Phil Johnson

Practice Director - Performance Engineering

Speed as a Key Asset

At Planit, we can help you make performance an asset, not a liability. Our expert consultants can provide testing, assessments and advice to mitigate performance risks and achieve peak results.
Find out how we can help you navigate these challenges, achieve your performance goals, and deliver a rapid, responsive, and reliable experience that delights your customers.

