
INSIGHTS / Articles

How Different is Testing AI Compared to Traditional Software?

 6 Feb 2024 

While AI offers great potential value, it also raises many questions and concerns. It is therefore important to understand how we can build sufficient quality into AI systems, and how the value of AI can be harnessed while ensuring the technology remains safe, stays within the confines of what it is supposed to do, and remains within legal, ethical, and moral boundaries.

The test design and execution process for traditional software has typically been very linear. With a constantly evolving and learning AI system it is distinctly non-linear, and there are many additional quality considerations that you would not commonly need to address when testing traditional software, including accountability, impartiality, and transparency.

An article in Forbes magazine suggests the following six elements need to be considered as part of the process of assuring quality in AI systems:

  • Accountability
  • Impartiality
  • Resilience
  • Transparency
  • Security
  • Governance

In addition to these six elements, there are many other aspects that might need to be included, depending on the individual context.

When testing AI systems, the training and testing data input phases form an important part of the initial process. During this training and teaching phase, testing and quality work is critical, and should take place well before the AI system is deployed into production.

Training data is provided to build the AI model, and QE practices at this stage need to ensure that the data, together with the algorithm, will produce the intended results. During subsequent testing, QE practices again need to ensure that a suitable range of representative data, both positive and negative, is used.

Quality engineers need to carefully evaluate the training data set, and select the right testing data set, with the quality of both sets of data heavily impacting the quality of the output of each system. During this phase, it is critically important to evaluate and eliminate any potential biases, in both the data and within the resulting system behaviours, that you do not want the AI to demonstrate.
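As a concrete illustration, one simple check quality engineers can run on a training data set is class balance: a label that barely appears is a common source of the biases described above. The following is a minimal Python sketch under assumed conditions (the function names and the "approved"/"declined" labels are hypothetical, not from the article), not a complete bias audit:

```python
from collections import Counter

def class_balance(labels):
    """Return each label's share of the data set."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

def flag_underrepresented(labels, threshold=0.10):
    """Flag labels whose share falls below a minimum threshold —
    a crude proxy for under-represented outcomes in training data."""
    shares = class_balance(labels)
    return [label for label, share in shares.items() if share < threshold]

# Hypothetical training labels for a loan-decision model:
# "approved" heavily outweighs "declined".
training_labels = ["approved"] * 95 + ["declined"] * 5
print(flag_underrepresented(training_labels))  # ['declined']
```

A real bias evaluation would look beyond raw label counts at outcomes across protected attributes, but even a check this simple catches skewed data before it shapes system behaviour.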

"Labelling" is a key concept when doing unsupervised training. This is where a user gives an expected answer to a question, so that when the AI tries to answer a question, it generates an answer then compares to the labelled answer and will correct itself.

This process is core to what model a Generative AI produces. All of this needs testing.
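The predict-compare-correct cycle described above can be sketched in a few lines. This is a deliberately tiny, hypothetical one-parameter model (the function and variable names are illustrative, not from any real framework): it answers, measures the error against the label, and nudges itself toward the labelled answer:

```python
def train(examples, lr=0.01, epochs=200):
    """Fit a one-parameter model y = w * x to labelled examples
    by repeatedly comparing predictions against labels."""
    w = 0.0
    for _ in range(epochs):
        for x, label in examples:
            prediction = w * x          # the model "answers the question"
            error = prediction - label  # compare against the labelled answer
            w -= lr * error * x         # correct itself toward the label
    return w

# Labelled examples consistent with the rule y = 2x
examples = [(1, 2), (2, 4), (3, 6)]
w = train(examples)
print(round(w, 2))  # ~2.0
```

Testing this loop means checking not just that it converges, but that it converges to behaviour the labels actually intended — which is why the quality of the labelled data matters so much.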

Despite rigorous training and testing phases, the AI may encounter situations in the real world that never occurred in the training and testing environment. Continuous monitoring of the AI system is therefore necessary to catch any behavioural anomalies in production.
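One common form such monitoring takes is drift detection: comparing the distribution of the system's live outputs against the distribution observed at deployment time. A minimal Python sketch, assuming a classifier whose "ok"/"fraud" labels and tolerance value are hypothetical:

```python
from collections import Counter

def label_distribution(predictions):
    """Return each predicted label's share of the output stream."""
    total = len(predictions)
    counts = Counter(predictions)
    return {label: count / total for label, count in counts.items()}

def drift_detected(baseline, live, tolerance=0.15):
    """Flag drift when any label's live share moves more than
    `tolerance` away from its share at deployment time."""
    labels = set(baseline) | set(live)
    return any(abs(baseline.get(l, 0.0) - live.get(l, 0.0)) > tolerance
               for l in labels)

# Distribution recorded when the model shipped vs. one seen in production
baseline = label_distribution(["ok"] * 90 + ["fraud"] * 10)
live = label_distribution(["ok"] * 60 + ["fraud"] * 40)
print(drift_detected(baseline, live))  # True: the "fraud" rate jumped
```

Production monitoring tools offer far richer statistics, but the principle is the same: an anomaly in the output distribution is often the first visible symptom of an AI behaving outside its tested envelope.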

When AI is implemented, it must be assured to always work reliably and responsibly, as the consequences of AI failure can be extensive. When quality is part of the process from the start and throughout the course of development and delivery, businesses can greatly reduce the risk of failure.

Given that company brands can be so heavily impacted by AI system failure, organisations must invest in delivering quality.

Want to know more about how the value of AI systems can be balanced against risk with built-in quality? Then download our complete e-book for further insights into how you can start delivering better quality AI systems faster.

About the author

As a Gartner analyst, Susanne owned the testing services Magic Quadrant for over 6 years, advising hundreds of CIOs, Application Leaders and Digital Transformation executives on software quality engineering related topics. At Planit, Susanne ensures our offering roadmap addresses current and future customer requirements, whilst incorporating emerging technologies and approaches.

Susanne Matson

Practice Director - Customer Insight & Advisory
