Even with the most advanced test tools and techniques in the world, the fastest processors, the greatest test architects, the highest level of parallelism in execution and the biggest data sets – you’ll never test everything. There, I said it.

This isn’t really a huge surprise to any of us who work on the thin edge of the quality wedge; the maxim that you can’t test it all isn’t a new one. Sometimes, however, it helps to be reminded of it in this age of ever-more automation, autonomous system intelligence and hyper-connectivity.

We can talk to a palm-sized pile of silicon, metal, plastic and glass and it will tell us the world’s fourth-highest mountain if we want to know – it’s Lhotse, in case you’re wondering. We can turn on the TV and catch news that broke on the other side of the planet a few seconds ago. We can secure a new loan worth thousands of pounds from a website without ever conversing with a bank manager.

We have at our disposal a huge amount of power for trivia and for the trivial, and yet we cannot objectively guarantee it is 100% reliable in every dimension, because we cannot exhaustively exercise it. Even turning that automation, intelligence and connectedness on itself to do the exercising doesn’t solve the problem – and even if it did, the time and resources required for absolute coverage would be astronomical.

The Iron Triangle

The concept of the iron triangle has long applied to delivered products and services. It predates IT by a long way, but it still holds for those of us who work in that field. The triangle states that the time to produce, the cost of production and the delivered quality together drive the planning and the delivered output of a product or service. The catch is that you can only pick two to go in your favour: if you want it fast and cheap, it’ll be low quality; if you want top quality right now, it’ll cost a lot to get it done.

The triangle bothers me for two reasons. Firstly, I dislike telling people at the outset that there are three key aspects but they can’t have them all; it feels a little unfair. The other problem is that if the triangle really is an immutable law of project delivery, then who really wants to compromise on quality? There’s another old saying: if it’s worth doing at all, it’s worth doing right. So why is there even a question of quality being the point of compromise? Surely quality should be our target.

How to break the triangle?

This leads us to an interesting idea. Suppose, just for a moment, that we ignore cost and time in order to focus on building the product to the right meaningful quality for our customers. By meaningful quality we mean maximizing the capability of our product to meet not only customer wants but also their needs – and these aren’t always the same. What would focusing primarily on this get us?

Well, according to the laws of supply and demand, if our quality goes up, demand goes up; and if demand goes up, our ROI goes up – so we can see that we’re starting to solve the cost issue. But what’s the difference between that and the iron triangle? Surely that’s just another expression of choosing two of the factors, leaving a product that takes forever to deliver? Not necessarily.

AI to the Rescue

Maybe there’s a way we can achieve maximized meaningful quality in less time. This is where AI comes in. AI brings us unprecedented power in decision support and in identifying patterns in our applications and data. Historically, it has taken entire teams of business analysts, customer ambassadors, product owners and quality professionals to understand what customer needs and wants are and how they’re expressed in delivered products, because there are a huge number of factors to consider.

We must think about functional and non-functional requirements, requested features, and architectural and development capability, as well as our ability to actually evaluate conformance – and that’s before the considerations and exploratory checks around unexpected behaviours, emergent complexity and so on.

In the past few years, however, we’ve gained a new tool in the box that can help: commoditized AI and machine learning. Current AI and ML systems are fantastic at taking diverse data and finding sense in the noise. We can take all the data that our diverse teams usually need to juggle in their heads and use the AI to show us the way to get the product out the door with its meaningful quality intact. This is, in fact, just an expression of quality engineering: letting the AI help us engineer quality into the product.

What would AI actually do?

An AI that could support an entire product team sounds like a daydream, and for good reason: it seems unlikely that any one AI could single-handedly take on the insights of an entire analysis, design, implementation and quality team, and it will almost certainly stay that way for a while.

However, there is a case for multiple smaller AIs supporting that behaviour between them. Consider, for instance, Qualitest’s own Qualisense TestPredictor. This tool focuses on one part of our original statement: we can’t test everything, and even executing all the tests we do have may be infeasible due to time, money or technical constraints – which reinforces the triangle’s restrictions. Qualisense can, however, take data about the quality delivered in the application and use it to identify where we should test, focusing our efforts to make sure that we at least test enough to be confident that our meaningful quality is good.
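To make that idea concrete, here is a minimal, hypothetical sketch of risk-based test selection in Python. This is emphatically not Qualisense’s actual algorithm – just the general shape of the technique: score each test by its historical failure rate and by recent churn in the code it covers, then run the highest-risk tests that fit the execution budget. All the names, weights and numbers below are illustrative assumptions.

```python
# Hypothetical sketch of risk-based test selection - not Qualisense's
# actual algorithm. Tests are ranked by a weighted blend of historical
# failure rate and recent churn in the code they cover.
from dataclasses import dataclass

@dataclass
class TestRecord:
    name: str
    runs: int            # how many times the test has executed
    failures: int        # how many of those runs failed
    covered_churn: int   # recent lines changed in code this test covers

def risk_score(t: TestRecord, w_fail: float = 0.7, w_churn: float = 0.3) -> float:
    """Blend failure history with code churn; the weights are illustrative."""
    fail_rate = t.failures / t.runs if t.runs else 0.0
    churn = min(t.covered_churn / 100.0, 1.0)  # crude normalisation to [0, 1]
    return w_fail * fail_rate + w_churn * churn

def select_tests(tests: list[TestRecord], budget: int) -> list[TestRecord]:
    """Pick the highest-risk tests that fit the execution budget."""
    return sorted(tests, key=risk_score, reverse=True)[:budget]

suite = [
    TestRecord("checkout_flow", runs=200, failures=18, covered_churn=340),
    TestRecord("login_page", runs=200, failures=1, covered_churn=5),
    TestRecord("report_export", runs=150, failures=9, covered_churn=120),
]
for t in select_tests(suite, budget=2):
    print(t.name, round(risk_score(t), 3))
```

A real system would of course learn its weights from defect data rather than hard-code them, but even this toy version shows how quality data can focus a constrained test budget.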

This is not the only example of AI supporting operational decisions. For instance, we could use AI to help us rank the features we want to build into our product, making sure we hit the key needs and wants of our customers in the right order – so that they keep using our product between releases, and so that our feature backlog isn’t too long at any one point.
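As an illustration only, a feature-ranking heuristic of this kind might look like the sketch below. A real AI would learn its weights from usage and feedback data; here they are hand-set, and the feature names, scores and weights are all hypothetical. The point is simply the shape of the decision: value delivered per unit of effort, with needs weighted above wants.

```python
# Hypothetical feature-ranking sketch. A real model would learn these
# weights from data; the hand-set values here just illustrate the idea.
features = [
    {"name": "dark mode",  "want": 0.9, "need": 0.2, "effort": 3},
    {"name": "2FA login",  "want": 0.4, "need": 0.9, "effort": 5},
    {"name": "CSV export", "want": 0.6, "need": 0.7, "effort": 2},
]

def priority(f: dict, w_need: float = 0.7, w_want: float = 0.3) -> float:
    """Needs outweigh wants; divide by effort to favour quick wins."""
    return (w_need * f["need"] + w_want * f["want"]) / f["effort"]

for f in sorted(features, key=priority, reverse=True):
    print(f["name"], round(priority(f), 2))
```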

We could also perform sentiment analysis on users’ feedback about our product to understand, from a user-experience perspective, whether there’s anything we missed in our quality checks that users aren’t keen on, and use that to further improve and target our checking next time round.
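As a small, concrete example, the sketch below uses NLTK’s off-the-shelf VADER sentiment analyser – a simple rule-based scorer standing in for whatever model a real pipeline would use – to flag negative feedback worth investigating. It assumes nltk is installed and the VADER lexicon has been downloaded; the threshold and sample comments are illustrative.

```python
# Minimal sentiment-triage sketch using NLTK's VADER analyser.
# Assumes: pip install nltk, then nltk.download("vader_lexicon") once.
from nltk.sentiment.vader import SentimentIntensityAnalyzer

feedback = [
    "Love the new dashboard, so much faster!",
    "Export keeps timing out on large reports, really frustrating.",
    "Setup was fine, nothing special.",
]

analyser = SentimentIntensityAnalyzer()
for comment in feedback:
    score = analyser.polarity_scores(comment)["compound"]  # -1 (negative) to +1 (positive)
    if score < -0.3:  # illustrative threshold; tune per product
        print(f"Investigate: {comment!r} (sentiment {score:.2f})")
```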

This is to say nothing of the myriad AI-powered tools already on the market helping our development teams write and test their code to high standards and speeding up the detection of issues. There is certainly still a time and monetary cost in getting these tools trained up and implemented for our teams, but that tends to be an up-front investment, rather than an ongoing challenge the triangle will continually impose on us between releases and products.

AIs in and of themselves are no silver bullet – there aren’t any of those – but the power of AI to weigh more information in a single decision than we ourselves can is a huge boon. We can apply laser precision to our decisions about what we build for our customers and how we build it, making sure the quality is highest for those who really need it – the end users.

As users’ satisfaction with our quality goes up, so does their demand for our product – and, of course, our profitability.

AI lets us start to solve the time problem; solving that lets us, as providers, solve the quality problem; and our consumers, in turn, help us solve the cost problem. With AI supporting us in all of this, we can finally break the iron triangle and stop compromising on quality.