Qualitest just acquired AlgoTrace, an advanced data science company that focuses on building AI and automated machine learning tools.
What role, some have asked, does AI play in improving quality and quality engineering, and why exactly is it needed now? Well, we believe that AI is changing the quality world, for good.
As Marc Andreessen famously quipped, “Software is eating the world.” Companies are increasingly rebuilding their entire value chains with software. Who would have thought that McDonald’s, for example, would be as much a technology company as any startup in Silicon Valley? Its new apps and digital kiosks are changing every aspect of how it operates. Similarly, every company in every industry is replacing much of its value stack with powerful software.
But with great power comes great responsibility and that means having the ability to make sure this increasingly complex software doesn’t backfire and create new and possibly existential threats to the business models of brands.
So in this changing landscape, what are the biggest issues for every CTO? First, with increasing complexity comes increasing risk – in many cases catastrophic risk – for the business and for the executive. Second, in a multi-device, multi-platform world there is an increasing number of failure points and aspects to test, making broad test coverage prohibitively expensive. And third, beyond cost, testing can be incredibly time consuming and the enemy of release velocity in a world where time to market is a business imperative.
There’s only one tool in the testing arsenal that can effectively deal with these complications for software executives: AI and Machine Learning. Companies that are not using AI in their approach to quality assurance will be at a significant, debilitating disadvantage.
There aren’t many professions in which people put their careers at risk every day simply by doing their job. When it comes to software development, leaders in roles like CTO or Head of Development are putting their heads on the line every single time they release code, which must be done at a rapid – and ever-increasing – pace.
System complexity, platform dependency and interconnectivity are increasing exponentially, and it doesn’t take much for software to fail. With the rise of Agile and DevOps and the mounting pressure to launch new functionality and launch it “now,” the chances of bad code debilitating a product or service rise sharply – and with them the chances of brand damage and blowback from customers.
A buggy app launch or update also gives other companies a competitive edge, a toehold from which to steal market share. (Just look at the drop rate data of any buggy release – customers are ruthless.) The reputational, cost and revenue risks that come with software that doesn’t work properly are real and material.
So, the digital leaders of today put their career at risk every day they release new code. Only AI can help.
Nearly every brand we work with is undergoing a digital transformation and they are facing similar challenges. More complexity and interdependence between systems. More and more functionality to test. More UX to get right. They simply cannot test 100% of what they release. It’s just not possible.
So, the question we are often asked to solve is where to intelligently allocate testing resources. Brands want to know where they should focus the time and effort of their quality engineers to ensure reliable, functional, load tested, secure, usable, intuitive end-to-end customer journeys all driven by software.
But knowing exactly where and what to test is increasingly beyond what humans can figure out, even humans with years of quality experience. The only answer is to augment our expert quality engineers with machine learning and AI.
So how do code developers and executives achieve a level of confidence that they are not risking their careers when they can’t test everything and when the market dynamics are demanding that they release more and more rapidly? They must come up with a way to test less (take less time) while providing sufficient confidence in increasingly complex releases – seemingly conflicting requirements.
That is precisely where machine learning and artificial intelligence come in. With Qualisense test predictor (Qualitest’s AI tool powered by AlgoTrace’s machine learning engine), quality executives can achieve high confidence levels in far less time. Using structured data, such as test results from prior releases, and unstructured data, such as the Agile stories or release notes engineers write as they make changes, Qualisense tells us not only which tests to conduct (and which to skip) but also in what order to conduct them. As a result, Qualisense can provide launch assurance in a fraction of the time, even when running with distributed teams.
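To make the idea concrete: ranking tests by predicted risk can be illustrated with a deliberately tiny sketch. This is not Qualisense’s actual algorithm – a real engine learns from far richer structured and unstructured signals – but it shows the core pattern of blending historical failure rates with relevance to the areas a release touches. All data and names here are hypothetical.

```python
# Hypothetical historical data: (test name, product areas it covers, passed?)
# Illustrative only -- a production engine would learn from far richer signals.
history = [
    ("test_checkout", {"payments", "cart"}, False),
    ("test_checkout", {"payments"},         False),
    ("test_login",    {"auth"},             True),
    ("test_login",    {"auth"},             True),
    ("test_search",   {"catalog"},          True),
]

def failure_rate(test):
    """Fraction of past runs of this test that failed."""
    runs = [ok for name, _, ok in history if name == test]
    return runs.count(False) / len(runs) if runs else 0.0

def relevance(test, release_areas):
    """Overlap between the areas a test covers and the areas this release touches."""
    covered = set().union(*([a for name, a, _ in history if name == test] or [set()]))
    return len(covered & release_areas) / (len(covered) or 1)

def prioritize(tests, release_areas):
    # Rank tests by a simple blend of past failures and relevance to the change.
    score = lambda t: 0.6 * failure_rate(t) + 0.4 * relevance(t, release_areas)
    return sorted(tests, key=score, reverse=True)

# Release notes mention payment changes, so checkout tests jump to the front.
print(prioritize(["test_login", "test_search", "test_checkout"], {"payments"}))
```

Under a time budget, a team would then run tests from the top of this ranking until the budget is spent, which is the essence of a risk-based, prioritized approach.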
In other words, AI-driven testing yields stunning results. Our clients are able to focus their efforts on the areas most prone to failure, launching with the same level of confidence in quality at 25%-50% of the effort. This solves the big issues for software executives: a prescriptive, risk-based approach that materially lowers career risk while significantly increasing release velocity and reducing launch errors.
In summary, AI is changing quality assurance forever. The increase in complexity simply makes it impossible for humans to fathom all the potential parameters and permutations involved in testing such complex systems. Machine learning, on the other hand, can predict where problems are likely and point expert testers to them, keeping a handle on quality in spite of the complexity.
It’s important to say that AI will not replace quality engineers; it enhances their capabilities and makes them more effective in what they do. It’s the true marriage of machine learning and human competence.
In order to shine a light on – or apply the medicine to (choose your favorite metaphor) – a given problem in the fastest and most efficient way, we need AI and machine learning to quickly ascertain where the issues are. Then, under the guiding hand of quality engineers, that feedback goes back to the developers, who make the code changes that solve issues early in the development cycle – saving time and money.
Companies attempting to develop and launch code without the aid of machine learning to optimize quality will be at a significant disadvantage. Their quality engineers will simply not be able to keep up.
As a quick postlude to how AI will change quality assurance forever, AI will also be used at increasing rates by companies to enhance their own system functionality. This imposes new risks. Untested AI engines driving business outcomes can and do produce incorrect or suboptimal results. One thing we are very excited about is that the AlgoTrace acquisition deepens our expertise in the new art of testing AI.
Testing the unknown of AI powered software will require new methods, new roles, and more collaboration between data scientists, developers, and quality engineers. Qualitest now has a new practice area to help clients who want to use AI in their software offering to be assured that AI results are valid and unbiased.
Many companies are infusing AI into new and existing software at breathtaking speed. But, according to a recent report from Forrester, only one-third of developers claim to know how to test AI-based systems. In our experience, the number is lower. The problem our clients face when launching AI apps is that they don’t know whether the AI is actually producing the appropriate outcome. According to Forrester:
“True autonomic AI components behave nondeterministically, so upfront test cases cannot be defined, which is like testing the unknown.”
AI-infused software raises many questions and risks around data bias and unintended consequences.
With this new practice, our goal at Qualitest is to give our clients confidence that when they use an AI engine, it produces accurate, unbiased outcomes. Instead of leaving AI as a mysterious black box that might generate problems rather than solve them, we are able to put AI to the test, too.
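How do you test a system whose exact outputs cannot be specified in advance? One widely used technique in this space (offered here as an illustration, not necessarily the method Qualitest employs) is metamorphic testing: instead of asserting exact outputs, you assert relations that must hold between outputs. The sketch below uses a hypothetical, stand-in scoring model to show two such relations – a fairness check (flipping a protected attribute must not change the score) and a monotonicity check (raising income, all else equal, must not lower the score).

```python
# A toy stand-in for an AI model under test -- hypothetical, illustrative only.
# In practice this would be an opaque ML model; we only observe its outputs.
def score_applicant(income, debt, gender):
    return max(0.0, min(1.0, 0.5 + 0.002 * income - 0.004 * debt))

def check_fairness(model, cases):
    """Metamorphic relation: flipping a protected attribute must not change the score."""
    return all(model(inc, debt, "F") == model(inc, debt, "M") for inc, debt in cases)

def check_monotonicity(model, cases):
    """Metamorphic relation: raising income (all else equal) must not lower the score."""
    return all(model(inc + 100, debt, g) >= model(inc, debt, g)
               for inc, debt, g in cases)

cases = [(100, 50), (200, 80), (50, 200)]
print(check_fairness(score_applicant, cases))
print(check_monotonicity(score_applicant, [(inc, debt, "F") for inc, debt in cases]))
```

The value of the pattern is that neither check requires knowing the “correct” score for any applicant, which is exactly the situation Forrester describes when upfront test cases cannot be defined.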