Quality assurance (QA) in general, and testing in particular, plays a vital role in AI platform adoption. AI platform testing is complex for the following reasons:

  1. Testing an AI platform demands intelligent processes, virtualized cloud resources, specialized skills and AI-enabled tools.
  2. AI platform vendors typically push rapid innovation and automatic product updates; enterprise testing must be equally fast to accept those updates.
  3. AI platform products usually lack transparency and interpretability. Because their behavior is not easily explained, test results are harder to trust.

Modern QA shifts the role of testing from defect detection to defect prevention. Moreover, the quality of an AI solution depends heavily on the quality of the training models and the data used for training. Therefore, unlike conventional SaaS testing models that focus only on cloud resources, logic, interfaces and user configurations, AI testing should additionally cover areas such as training, learning, reasoning, perception and manipulation, depending on the AI solution context.
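As a minimal illustration of testing the training area, the sketch below gates a release on held-out accuracy. It assumes a scikit-learn-style workflow; the dataset, model and threshold are hypothetical stand-ins for an enterprise's own.

```python
# Minimal sketch of a training-quality gate, assuming a scikit-learn-style
# workflow. The dataset, model and threshold are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for the enterprise's training data.
X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def test_training_quality():
    """Fail the build if held-out accuracy drops below the agreed threshold."""
    accuracy = accuracy_score(y_test, model.predict(X_test))
    assert accuracy >= 0.80, f"Held-out accuracy {accuracy:.2f} below threshold"
```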

In an AI-as-a-Service model, the AI algorithm is provided by a platform vendor; IT enterprises configure it by developing interfaces and supplying training data, so that end customers can trust the resulting AI-based intelligent applications. Therefore, AI testing should address four basic components: data, algorithm, integration and user experience.
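Of these components, integration is the most mechanical to automate. The sketch below probes a vendor prediction endpoint and checks the response contract the enterprise integration depends on; the URL, payload and response fields are hypothetical placeholders for the actual vendor API.

```python
# Minimal sketch of an integration check against an AI-as-a-Service endpoint.
# The URL, payload shape and response fields are hypothetical placeholders;
# substitute your vendor's actual API contract.
import requests

ENDPOINT = "https://api.example-ai-vendor.com/v1/predict"  # hypothetical

def test_prediction_contract():
    """Verify the service responds and honors the agreed response contract."""
    response = requests.post(ENDPOINT, json={"text": "sample input"}, timeout=10)
    assert response.status_code == 200
    body = response.json()
    # Contract checks: the fields the enterprise integration depends on.
    assert "label" in body
    assert 0.0 <= body.get("confidence", -1) <= 1.0
```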

Secondly, testing should validate the functional fitment of the solution within enterprise IT. It should certify the configuration of the AI platform product in the business ecosystem, so that the business purpose of AI adoption is met. It should also verify the training model used to build the solution within the organization's context.

Thirdly, the approach that the AI algorithm adopts – such as statistical methods, computational intelligence, soft computing or symbolic AI – must be addressed in the algorithm validation process. Though the solution is a black box, necessary coverage should be established to certify its validity.
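One common way to build such black-box coverage is metamorphic testing: asserting that label-preserving changes to the input do not change the output, without ever inspecting the algorithm. A minimal sketch, with a stand-in for the opaque vendor call:

```python
# Minimal sketch of black-box (metamorphic) validation: without inspecting
# the algorithm, assert that label-preserving input changes do not flip the
# output. `classify` is a stand-in for an opaque vendor prediction call.
def classify(text: str) -> str:
    # Placeholder for the vendor's black-box API; assumed, not real.
    return "positive" if "good" in text.lower() else "negative"

def test_case_invariance():
    """Upper/lower casing should not change the predicted label."""
    assert classify("The product is good") == classify("THE PRODUCT IS GOOD")

def test_padding_invariance():
    """Leading/trailing whitespace should not change the predicted label."""
    assert classify("The product is good") == classify("  The product is good  ")
```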

Finally, the tools that the AI logic applies – such as search, optimization, probability and economic models – should be covered in the functional validation process.
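For instance, when the solution exposes probabilistic outputs, functional validation can at least assert that they form valid distributions. A small sketch, assuming a model that returns class probabilities per sample:

```python
# Minimal sketch for validating probabilistic outputs, one of the "tools"
# an AI solution may apply. The input array is a stand-in for any model
# that returns a class-probability distribution per sample.
import numpy as np

def validate_probabilities(probabilities: np.ndarray) -> None:
    """Check that each row is a valid probability distribution."""
    assert np.all(probabilities >= 0.0) and np.all(probabilities <= 1.0)
    np.testing.assert_allclose(probabilities.sum(axis=1), 1.0, atol=1e-6)

# Example usage with a stand-in output for two samples over three classes.
validate_probabilities(np.array([[0.7, 0.2, 0.1], [0.1, 0.1, 0.8]]))
```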

It is critical to apply the nuances of AI to each test element. For example, a data assurance strategy should address the complexities that an AI solution introduces in terms of the volume, velocity, variability, variety and value of data. The figure below presents a practical list of test attributes for test design considerations.

[Figure: test attributes for test design considerations]
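To make the data dimension concrete, the sketch below encodes a few of these attributes as automated checks over a pandas DataFrame. The thresholds and schema are illustrative assumptions; velocity and variety typically need pipeline-level monitoring rather than static checks.

```python
# Minimal sketch of data-assurance checks mapped to the attributes above.
# Thresholds and the DataFrame schema are illustrative assumptions.
import pandas as pd

def assure_training_data(df: pd.DataFrame) -> list:
    """Return data-quality findings; an empty list means all checks passed."""
    findings = []
    if len(df) < 10_000:                       # volume: enough rows to train on
        findings.append("insufficient volume")
    if df.duplicated().mean() > 0.05:          # variability: excessive duplicates
        findings.append("too many duplicate records")
    if df.isna().mean().max() > 0.10:          # value: too many missing entries
        findings.append("a column exceeds 10% missing values")
    # Velocity and variety are usually monitored at the pipeline level.
    return findings

# Example usage with a tiny stand-in frame (fails the volume check).
print(assure_training_data(pd.DataFrame({"feature": [1, 2, 2], "label": [0, 1, 1]})))
```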

QA maturity is critically important for an organization choosing an AI platform solution. A high degree of test automation is key to success; without it, enterprises cannot keep up with frequent releases of the AI platform (and the products within it) alongside ongoing internal application releases.

The only way to cope with the system changes happening inside your organization – on the interfacing applications and data – and externally, through the changes your AI vendor makes, is to establish a continuous integration and delivery environment; the methodology matters. Hence, check whether your organization has the required level of QA maturity. If it doesn't, fix that first, before you even think of AI.
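A minimal sketch of such a continuous regression gate: pin the platform's behavior on a curated "golden" dataset and re-run it on every build, so a vendor update that silently shifts results fails the pipeline instead of reaching production. All names are illustrative, and the baseline would normally live in a versioned file rather than inline.

```python
# Minimal sketch of a continuous regression gate for a CI pipeline. The
# baseline would normally be a versioned "golden" file; it is inlined here
# to keep the sketch self-contained. `classify` stands in for the vendor's
# black-box prediction API.
def classify(text: str) -> str:
    # Placeholder for the platform call; assumed, not real.
    return "positive" if "good" in text.lower() else "negative"

# Curated inputs with expected outputs, agreed with business stakeholders.
GOLDEN_BASELINE = {
    "The product is good": "positive",
    "The service was slow": "negative",
}

def test_golden_regression():
    """Re-run the golden set on every build and diff against the baseline."""
    drifted = {
        text: (expected, classify(text))
        for text, expected in GOLDEN_BASELINE.items()
        if classify(text) != expected
    }
    assert not drifted, f"Vendor/model drift detected: {drifted}"
```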