Best Practices for Agile Testing

 20 Nov 2018 

Since the inception and publication of the Agile Manifesto in 2001, Agile development has risen to prominence, with various iterative and incremental methods being collectively referred to as Agile development methodologies. From an implementation point of view, however, each Agile methodology or framework has its own practices and methods.

So as organisations start to adopt Agile, development teams implement the ways of working and practices that best suit them. The most common challenge teams face is completing testing within the same iteration and release cycle, especially when test automation is part of the iteration.

Having engaged in a number of Agile teams, I will share my experiences and best practices for Agile testing. These include working in single and multiple Agile teams, as well as other team combinations. The common observation around testing is that the mindset has not shifted to being truly Agile; that is, teams do not engage with or think about testing early enough.

So, even when organisations adopt Agile development, testing still follows the traditional Waterfall or V-model approach: it is mostly confined to “finished” User Stories instead of, for example, using the three amigos principle to engage early in testing and every aspect of development. Some other examples include:

  • Testing is focussed on finished User Stories from either the current Sprint or the backlog (Sprint N stories tested in Sprint N+1, and so on).
  • In some cases, testers (mostly the automation folks) are excluded from Agile meetings (Sprint planning/PI planning/User Story discussions) so that they focus on building automation regression tests/scripts.
  • In a TDD approach with no testers, teams focus on writing basic unit tests and continue developing the features/stories (without much emphasis or thought given to other aspects of testing or quality).

Where teams or organisations follow the above practices, there is a high risk of defects being found at later stages, with integration-level and user-scenario defects being the most common. Some questions to ponder:

  • Estimation: Do testers get involved? Do story estimation points include required levels and types of testing?
  • Requirements: Does the team think through non-functional testing and user experience (UX) for the User Stories being developed?
  • Acceptance Criteria / Definition of Done: Do your stories include both of these before proceeding with the development?
  • Backlog: How often do you write tests (manual or automated) against finished user stories or from the backlog?

Embed an Agile test culture and introduce best practices as part of the test strategy:

The Agile Tester Mindset

Quality should not be an afterthought addressed only through testing, nor the sole responsibility of testers. Every stakeholder on the project team should own quality.

Remember that Agile is a mindset and mature Agile teams engage in all aspects of testing early in their Agile development cycle, be it:

  • engaging with product owners to derive test requirements from the business requirements
  • converting test requirements into a set of Acceptance Criteria (AC)
  • defining the Definition of Done (DoD)
  • repeating all the above as the user story evolves, updating them as a routine practice.

A testing-aware approach needs to be adopted and absorbed from the beginning of the development cycle.

Agile Test Process

Develop and implement an Agile testing process that leads to consistent practices, increasing the quality of delivered products. Irrespective of the methodology teams adopt (Agile/Scrum/Kanban/Scrumban etc.), having a consistent set of processes, such as “Visibility”, “Traceability”, and “Communication”, within the development cycle is beneficial:

  • Visibility provides transparency and the ability to view progress against goals.
  • Traceability is sometimes perceived as heavyweight, but it is increasingly important in larger, distributed, and sometimes safety-critical projects. Traceability between requirements, design, code, and tests is vital.
  • Communication is a fundamental requirement, and helps collaboration between the teams.

There are several tools available (Jira/Rally/VersionOne), with Jira being the most popular and widely used. Integrating a Test Management tool (e.g. qTest/Xray/Zephyr) helps embed a consistent set of test processes into the development cycle.
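
For teams without such tooling, even a lightweight script can surface traceability gaps. The sketch below (with hypothetical requirement IDs and test names) maps tests to the requirements they cover and flags any requirement left uncovered:

```python
# Minimal traceability sketch: map requirements to the tests that cover them,
# then report any requirement with no covering test.
# Requirement IDs and test names are hypothetical placeholders.
requirements = {
    "REQ-101": "User can register",
    "REQ-102": "User can log in",
    "REQ-103": "User can reset password",
}

coverage = {
    "test_registration_happy_path": ["REQ-101"],
    "test_login_valid_credentials": ["REQ-102"],
    "test_login_invalid_password": ["REQ-102"],
}

covered = {req for reqs in coverage.values() for req in reqs}
for req_id, description in requirements.items():
    status = "covered" if req_id in covered else "NOT COVERED"
    print(f"{req_id} ({description}): {status}")
```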

Some other best practices to consider (and integrate into your tools/processes, if applicable) are:

Acceptance Criteria (AC) – Product related

The members of the team working on a specific User Story should take responsibility for working with Business Analysts (BA) or Product Owners (PO) to agree and create ACs, making them mandatory for every User Story. Set a dedicated time (which some call a User Story Kick-Off or Grooming session), ideally 15-30 minutes, to discuss each individual User Story.

Apart from discussing the details and specifics of the functionality, this is also an ideal time to draft an initial set of ACs (which will evolve later on). Dedicate 5-10 minutes of this meeting to ACs per user story. Some good techniques to use are BDD (Given/When/Then), the Specification-by-Example approach, and Decision Tables; also agree on the specific testable data, boundary values, etc.

Note: Write ACs before developing the User Story; otherwise, you may end up describing what the functionality does rather than verifying that the functionality meets the business or user requirements. These ACs don’t need to be concrete, as they evolve over time; but at the same time, they should not be empty either! They are the start of a conversation.
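
To make this concrete, here is a minimal sketch of how a hypothetical AC (“orders of $100 or more receive a 10% discount”) could be expressed as Given/When/Then steps over a decision table of boundary values, using pytest parametrisation. The discount rule and function are invented for illustration:

```python
import pytest


def apply_discount(order_total: float) -> float:
    """Hypothetical rule under test: 10% off orders of $100 or more."""
    return order_total * 0.9 if order_total >= 100 else order_total


# Decision table with boundary values: (order_total, expected_payable)
@pytest.mark.parametrize("order_total, expected", [
    (99.99, 99.99),    # just below the boundary: no discount
    (100.00, 90.00),   # on the boundary: discount applies
    (150.00, 135.00),  # above the boundary: discount applies
])
def test_discount_acceptance_criteria(order_total, expected):
    # Given an order with a known total
    # When the discount rule is applied
    payable = apply_discount(order_total)
    # Then the payable amount matches the agreed AC
    assert payable == pytest.approx(expected)
```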

Definition of Done (DoD) – Process related

DoD is more process related: identify all the tasks required to successfully transition a User Story through to Done. Examples include verifying that ACs exist, unit tests are written and passing, the code is peer reviewed, the story is manually tested or has an automation script (if a test exists, mapped to the User Story and/or Business Requirement), and finally that the story is accepted by the BA/PO.

DoD should be made mandatory for iteration and release cycles, but can be optional at the individual story level. Some of these DoD checks can be automated where possible.
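
Where your tracker exposes a REST API, some DoD items can be checked by a script, for example as a CI gate. A minimal sketch, assuming a Jira-style endpoint and hypothetical custom field names and labels:

```python
import requests

JIRA_BASE = "https://example.atlassian.net"  # hypothetical instance URL


def dod_checks(issue_key: str, auth) -> dict:
    """Run automated Definition of Done checks for one story.

    The field names below (e.g. 'customfield_10100' for Acceptance
    Criteria, the 'peer-reviewed' label) are hypothetical; look up the
    IDs and conventions your own instance actually uses.
    """
    resp = requests.get(f"{JIRA_BASE}/rest/api/2/issue/{issue_key}", auth=auth)
    resp.raise_for_status()
    fields = resp.json()["fields"]
    return {
        "has_acceptance_criteria": bool(fields.get("customfield_10100")),
        "is_peer_reviewed": "peer-reviewed" in (fields.get("labels") or []),
        "accepted_by_po": fields["status"]["name"] == "Done",
    }
```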

Estimation

When estimating stories (irrespective of the technique used - Points/T-shirt sizing/Relative sizing/etc.), ensure the testing effort required to complete the User Story is considered. This helps testers not only to engage early with each User Story, but also to consider the number and type of tests required, along with planning and design. It also helps boost the consistency of the team’s velocity for stories being completed and accepted, if irregular or inconsistent sizing and velocity are challenges the team faces.

User Story Testing

Apart from writing unit-level tests and/or BDD features, create tests based on the ACs and keep updating them as the Story evolves. These will be your various user scenarios and acceptance tests, as well as non-functional tests, for example for responsiveness and latency.
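
As a simple example of the non-functional side, a latency check can sit alongside the acceptance tests. The endpoint and latency budget below are hypothetical:

```python
import time

import requests

ENDPOINT = "https://example.com/api/checkout"  # hypothetical endpoint
LATENCY_BUDGET_SECONDS = 0.5  # assumed budget agreed in the story's AC


def test_checkout_latency():
    # When the endpoint is called
    start = time.perf_counter()
    response = requests.get(ENDPOINT, timeout=5)
    elapsed = time.perf_counter() - start
    # Then it responds successfully within the agreed latency budget
    assert response.status_code == 200
    assert elapsed < LATENCY_BUDGET_SECONDS
```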

Time should be spent on designing tests correctly. Some aspects to consider are:

  • Test Data: gathering the data by either setting up new data or using real-world customer data (which meets privacy regulations, such as GDPR, if applicable).
  • Test Conditions/Ideas: specific conditions based on the business rules for the functionality to work. E.g. Use workflow diagrams for Yes/No conditions, etc.
  • Environment: setting up or choosing an environment (Dev/Test/Pre-Production/etc.) to test (and any pre-condition/pre-requisites to execute a test).
  • Test Scenarios: End-to-End User workflows consisting of happy path and other combinations (edge cases, boundary values, false positives).
  • Requirement Mapping & Risk Assessment: some test/project management tools allow you to map tests to your requirements, and also to perform a risk assessment using Impact vs. Likelihood or Damage vs. Frequency. For smaller or simpler projects, a simple Excel spreadsheet or a whiteboard with sticky labels is good enough for tracking. This helps to identify coverage and the most important tests to run when short on time (see the sketch after this list).
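
Here is the promised sketch of Impact vs. Likelihood scoring: it ranks hypothetical tests by a simple risk score so the highest-risk tests run first when time is short:

```python
# Rank tests by risk = impact x likelihood (both on a 1-5 scale).
# Test names and scores are hypothetical placeholders.
tests = [
    {"name": "test_payment_processing", "impact": 5, "likelihood": 4},
    {"name": "test_profile_avatar_upload", "impact": 2, "likelihood": 2},
    {"name": "test_order_confirmation_email", "impact": 4, "likelihood": 3},
]

# Highest-risk tests first: run from the top of this list when time is short
for t in sorted(tests, key=lambda t: t["impact"] * t["likelihood"], reverse=True):
    print(f"risk={t['impact'] * t['likelihood']:>2}  {t['name']}")
```
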
Exploratory Testing

A practice used by high-performing teams, where all members and stakeholders of the team engage in testing. It complements other testing methods and helps find defects at the system and integration levels, mimic end-user scenarios, and uncover UI/UX issues.

Dedicate an hour or two to exploratory testing. Team members can pick up stories they have not worked on, helping them gain familiarity with other user stories and encouraging knowledge sharing across other areas of the product. As good practice, add exploratory testing as a recurring task on the Scrum board and perform it on all user stories (which helps in finding variances and fixing them early in the cycle).

Session-Based Testing

This can be thought of as structured exploratory testing, or a method to measure and manage your exploratory testing, where you create sessions and run session-based tests targeting specific areas or modules of the functionality, e.g. the checkout process, registration process, order process, security, or web testing that requires different combinations of OS/browser platforms.

The idea is to track each session’s start and end times, the members involved, the time spent in each session, and, more importantly, the coverage of different areas of the product. Collate the findings/defects and use them with product owners to take the necessary actions.
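
To illustrate that tracking, here is a minimal sketch of a session record and a per-charter summary (the charters, testers, and timings are invented):

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class TestSession:
    charter: str  # area/module targeted, e.g. "checkout"
    tester: str
    start: datetime
    end: datetime
    defects_found: int

    @property
    def duration(self) -> timedelta:
        return self.end - self.start


# Hypothetical sessions from one exploratory-testing block
sessions = [
    TestSession("checkout", "Asha", datetime(2018, 11, 20, 10), datetime(2018, 11, 20, 11), 2),
    TestSession("registration", "Ben", datetime(2018, 11, 20, 10), datetime(2018, 11, 20, 10, 45), 0),
    TestSession("checkout", "Ben", datetime(2018, 11, 21, 14), datetime(2018, 11, 21, 15), 1),
]

# Summarise time spent and defects per charter to see coverage at a glance
summary = defaultdict(lambda: {"time": timedelta(), "defects": 0})
for s in sessions:
    summary[s.charter]["time"] += s.duration
    summary[s.charter]["defects"] += s.defects_found

for charter, data in summary.items():
    print(f"{charter}: {data['time']} spent, {data['defects']} defect(s) found")
```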

Repeat the process with different themes, functional areas, or concurrent sessions for as long as you find it beneficial: while you are finding defects and learning about the product’s user workflows/journeys, behaviour patterns, reliability, performance, and so on. Again, high-performing Agile teams use this as a common form of testing, and team members can choose a session they were not previously associated with to become familiar with different functional areas of the product.

Note: extend these sessions to a wider audience (other Agile, sales, and support teams, if applicable). This helps uncover defects that were missed during development before the product is released.

In Summary

Develop an Agile test culture and embrace it as part of the test plan and strategy. Quality is not limited to a few individual heroes on the Agile team; it is the whole team’s responsibility, and everyone should be working towards it.

Think about these key items for your next iteration:

  • The Agile Test Mindset - start early, think about the different aspects and levels of testing, and include all testing in estimation
  • Acceptance Criteria and Definition of Done - make them mandatory
  • Evolve tests - don’t just write unit-level tests. Use Specification-by-Example, Decision Tables, and use-case scenarios with happy paths and boundary values

Achieving Business Agility is not easy, nor something you can do overnight. Contact us today to find out how our expertise can help transform your Agile journey.
