Test automation solutions have been around for more than 25 years, and yet, at a time when automation is no longer a nice-to-have, many organisations’ test automation projects are far less mature than one might expect.
Just because a test automation solution has been around for 25 years does not mean it will bring you coffee; in other words, longevity does not equal maturity.
This is why many test automation solutions deliver the same value they did 20 years ago: automating a static regression pack that keeps growing as the functionality of the application under test grows.
We say “static” because old test cases are rarely revisited and are only changed when a code change makes them fail. We also say “static” because teams keep running the full barrage of tests every time a change is made, just in case that change affected some process nobody realised would be affected.
But with the adoption of Agile/DevOps principles, changes are mostly small but a lot more frequent, and it is here that old testing approaches fall short and cannot keep up with delivery pipelines.
Risk-based testing has long been touted as the way to ensure that the areas of the system carrying the most risk are prioritised for testing. That seems like a great idea, but in practice few organisations adopt it; most instead aim for 100% coverage of their regression packs. The main reason is a lack of traceability and understanding of where the risk stems from, what should therefore be tested, what has been tested, what has passed or failed, and finally what the actual coverage and failures mean for the organisation’s risk. And with bespoke tools for different test types, such as thick-client, Web, native mobile and API testing, the challenge is even greater: organisations try to retain skills in each of these disciplines in separate teams and ultimately struggle to stitch the different results back into a cohesive picture of where quality assurance is heading.
Fortunately, there are modern automation solutions on the market, like Eggplant from Keysight, that remedy these problems through AI-assisted, model-driven, image-based testing.
Image-based testing makes test automation technology agnostic. Besides enabling all test types to be run from one tool using one simple scripting language, making more efficient use of automation skills, it also enables omnichannel testing and testing of end-to-end processes regardless of the technology in use. This applies equally to testing a closed car infotainment system, where a test tool agent cannot be installed, as it does to testing a process flow that starts in a Web app, requires a one-time PIN from a mobile device to log in, and then requires validation against a processed record on the mainframe.
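To make the “technology agnostic” point concrete, here is a minimal, hypothetical sketch of the core idea behind image-based testing: the tool only sees pixels, so it makes no difference whether the screen belongs to a Web app, a mobile device or an infotainment unit. This is an illustrative toy matcher, not Eggplant’s engine; real image-based tools use far more sophisticated fuzzy and OCR-assisted matching.

```python
# Toy illustration of image-based matching: scan a "screenshot" (a 2-D grid
# of pixel values) for a smaller template image. Exact comparison is enough
# to show the principle; production tools tolerate rendering differences.

def find_template(screen, template):
    """Return (row, col) of the template's top-left corner, or None."""
    th, tw = len(template), len(template[0])
    for r in range(len(screen) - th + 1):
        for c in range(len(screen[0]) - tw + 1):
            if all(screen[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                return (r, c)
    return None

# A 4x5 "screenshot" containing a 2x2 "button" made of 9s.
screen = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 0, 0, 0, 0],
]
button = [[9, 9], [9, 9]]

print(find_template(screen, button))  # → (1, 1)
```

Because the matcher never talks to the application’s internals, the same “find the button, then act on it” step works against any rendered interface.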
The model-driven approach provides the visibility and traceability that have been sorely lacking in test automation solutions. The model, also known as a digital twin of the application under test (AuT), is a visual representation of the AuT, which makes it easy for business and technical teams alike to know exactly what is included in tests before they are run, which tests were ultimately run and how thoroughly, and what passed or failed. The usual log file of the same information is still available but, as they say, “a picture is worth a thousand words”. A further benefit of the model-driven approach is significantly reduced script maintenance. Because automation scripts are associated with states and actions in the model, they are modular by nature. So, if a change is made to a component affecting multiple process flows, one no longer has to figure out which test scripts are affected and amend each of them. Instead, the snippet of code behind that action or state in the model is changed, and all process flows using that step are automatically up to date again.
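The maintenance benefit can be sketched in a few lines. In this hypothetical example (the names and structure are our own, not Eggplant’s API), each action is defined once in the model and every process flow merely references it, so editing one action updates every flow that uses it:

```python
# Each action's code snippet lives in the model exactly once.
actions = {
    "open_login": lambda log: log.append("open /login"),
    "submit_credentials": lambda log: log.append("enter credentials, click Submit"),
    "open_orders": lambda log: log.append("open /orders"),
}

# Process flows are just ordered references to those shared actions.
flows = {
    "view_orders": ["open_login", "submit_credentials", "open_orders"],
    "smoke_login": ["open_login", "submit_credentials"],
}

def run_flow(name):
    log = []
    for step in flows[name]:
        actions[step](log)  # change the action once; every flow picks it up
    return log

print(run_flow("view_orders"))
```

If the login screen changes, only the `submit_credentials` snippet is edited; both `view_orders` and `smoke_login` are current again without touching any flow definition.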
Eggplant Digital Automation Intelligence is the tool of choice for modern test automation environments
Even with all the above-mentioned ease of use, minimised script maintenance and better traceability, test automation still cannot keep up when everything is tested instead of what really matters. This is where AI can be put to good use: by “remembering” what was tested in each release and what passed or failed each time, and then prioritising the next test run accordingly. For AI to work well, however, test configurations should no longer dictate what will be tested; instead they should allow the AI to perform automated exploratory testing.
If the AI engine knows of a failure in a previous test run, it will prioritise that area of the code to ensure it receives higher coverage than areas of the application that have passed over many prior runs. Another area the AI prioritises is where changes were made because, again, that code should be tested more thoroughly. Any new code also gets a higher priority in the next test run.
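The behaviour described above can be sketched as a simple scoring function. The weights below are illustrative assumptions of ours, not Eggplant’s actual algorithm, but they capture the idea: recent failures, changed code and new code push a test up the queue, while a long streak of passes pushes it down.

```python
# Hedged sketch of failure/change-aware test prioritisation.

def priority(test):
    score = 0.0
    if test.get("failed_last_run"):
        score += 3.0  # recent failures get the most attention
    if test.get("covers_changed_code"):
        score += 2.0  # changed code should be retested thoroughly
    if test.get("is_new"):
        score += 2.0  # new code has no pass history yet
    # a long streak of passes lowers the urgency of re-running a test
    score -= min(test.get("consecutive_passes", 0), 10) * 0.2
    return score

tests = [
    {"name": "checkout", "failed_last_run": True},
    {"name": "search", "covers_changed_code": True},
    {"name": "legacy_report", "consecutive_passes": 25},
]

# highest-risk tests run first
ordered = sorted(tests, key=priority, reverse=True)
print([t["name"] for t in ordered])  # → ['checkout', 'search', 'legacy_report']
```

Under a tight delivery pipeline, the run is then cut off at whatever time budget remains, so the riskiest tests are always the ones that actually execute.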
The above value proposition, coupled with flexible DevOps toolchain integrations, makes Eggplant Digital Automation Intelligence the tool of choice for modern test automation environments.
For more in-depth information on the value of this new approach to test automation, please feel free to contact IT Ecology.
About IT Ecology
Founded in 2004, IT Ecology made it its mission to provide technical testing and monitoring competencies to the sub-Saharan African market and to excel at delivering against unique customer requirements. Customers call upon IT Ecology as solution thinkers and advisers, and with a can-do attitude our team has delighted customers again and again, exceeding expectations.
For more information, visit www.itecology.co.za.
- This promoted content was paid for by the party concerned