It has been a long time since the first computer system was introduced to the world. The earliest prototypes had far less computational power than our modern-day technology and were mainly used for basic number operations. Once we understood that those operations could be combined into far more advanced logic, we began driving to put more power and capability into our systems year on year, with great success! No longer are modern-day machines limited to one-dimensional learning, offering one-size-fits-all answers to complex scenarios. Today, they can process vast amounts of data, assessing millions of combinations and probabilities in seconds. The power of machines has taken huge leaps and continues to do so.
AI has been an academic subject for decades, but we quickly came to understand that realising its potential demanded far more data and processing power than most businesses had access to. In recent years, as cloud processing and comparatively cheap storage unlocked the mining of big data, AI became something industry could focus on; and industry did just that! We have seen a huge uptake in active AI investments, projects and companies since the millennium, and the trend is continuing.
However, one of the big questions everyone is asking is this: Are we actually ready for this level of machine progress and the capabilities of AI, or do we have a gift of fire on our hands? This is what I call the AI conundrum.
This question is highly pertinent to my own industry, software quality assurance and engineering. AI presents us with unparalleled opportunities to deliver faster, better-informed, value-adding results for our customers. Through the use of AI techniques, we can now predict system usage 3, 6, 9, or 12 months down the line by looking at past patterns. We can draw on lakes of data to consider thousands of data points in a single calculation, removing the need to rely solely on the experience of a few gurus. With enough training and good enough data, we have seen AIs surpass human ability in critical areas such as interpreting medical scan results.
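To make that forecasting idea concrete, here is a minimal sketch of the principle: fit a simple trend to historical monthly usage and project it 3, 6, 9, and 12 months ahead. The usage figures, units and the linear model are all invented for illustration; a production forecast would use richer models (seasonality, external regressors) and real telemetry.

```python
# Minimal illustration: project future system usage from past monthly patterns.
# The usage figures below are invented purely for demonstration.
import numpy as np

# Hypothetical monthly request volumes (millions of requests per month).
monthly_usage = np.array([4.1, 4.3, 4.2, 4.6, 4.8, 4.7,
                          5.0, 5.2, 5.1, 5.5, 5.6, 5.8])
months = np.arange(len(monthly_usage))

# Fit a simple linear trend; the principle generalises to richer models.
slope, intercept = np.polyfit(months, monthly_usage, deg=1)

for horizon in (3, 6, 9, 12):
    future_month = len(monthly_usage) - 1 + horizon
    forecast = slope * future_month + intercept
    print(f"{horizon:2d} months ahead: ~{forecast:.1f}M requests/month")
```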
At present, approximately 1.25 million people worldwide die each year in car accidents, with 90 percent of these caused by human error: speeding, driving under the influence and distracted driving, according to the World Health Organisation. With these figures in mind, the further roll-out of AI and driverless cars could save many lives and avoid countless injuries, some of them life-changing. And that is in just one industry. To truly harness this power, though, we also need to make sure that the AI is making its decisions on the right data and only the right data; we have seen high-profile cases where AIs picked up inherent social biases from bad input data, undermining one of industry's key needs from AI: the ability to make fair decisions.
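On the "right data" point, some of the most useful checks are also the simplest and can run before any model is trained. The sketch below, with invented records and an invented 20-percentage-point disparity threshold, screens training labels for skew across groups, since a model trained on skewed labels will tend to reproduce that skew in its decisions.

```python
# Simple pre-training screen for skewed labels across groups.
# The records and the disparity threshold are invented for illustration.
from collections import defaultdict

records = [
    # (group, favourable_outcome)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

totals = defaultdict(int)
favourable = defaultdict(int)
for group, outcome in records:
    totals[group] += 1
    favourable[group] += outcome  # True counts as 1

rates = {g: favourable[g] / totals[g] for g in totals}
print("Favourable-outcome rate by group:", rates)

# Flag the dataset if any two groups differ by more than 20 percentage points.
if max(rates.values()) - min(rates.values()) > 0.20:
    print("Warning: training labels look skewed; investigate before training.")
```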
So, where does that leave the human element? How do we apply all this raw processing power ethically, to ensure that the "factually accurate thing" and the "socially right thing" stay in step as we rely more and more on AI for decision-making? Technology must enable us to progress beyond our limitations, not just replace us in reaching them, and that means correctly tempering decisions made by artificial intelligence with the rationale of artificial emotion, whilst keeping those emotions free from unfair bias.
AI is a challenge to test, in no small part, because when we think of intelligence, we often understand it as the ability to acquire and retain new skills. In many cases, we want AIs to acquire skills that take humans years of experience or training to master, for example, medical review and diagnosis. In other cases, we want AI to develop skills that humans may never acquire, for instance making decisions based on the varied inputs from all the systems of a car faster than the human brain can process them.
To trust and embrace AIs in our day-to-day lives, we need to find ways to exercise them with huge volumes of data and/or representations of massive input combinations, to show that they truly have reasonable intelligence. The techniques for this are forming from a marriage of Data Science, Mathematics and Software Quality Assurance.
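One concrete technique from that marriage is combinatorial test generation: enumerating structured combinations of input factors and exercising the AI against every one, checking safety invariants rather than exact outputs. The factors, levels and stubbed system under test below are invented purely to show the shape of such a harness.

```python
# Sketch of combinatorial test generation for exercising an AI system.
# Factor names, levels and the system stub are invented for illustration.
from itertools import product

# Input factors a driving model might face (hypothetical levels).
factors = {
    "weather": ["clear", "rain", "fog"],
    "light": ["day", "night"],
    "traffic": ["light", "heavy"],
    "speed_kph": [30, 60, 100],
}

def system_under_test(scenario):
    # Stand-in for the real AI; a real harness would invoke the model here.
    return "brake" if scenario["weather"] == "fog" and scenario["speed_kph"] >= 100 else "proceed"

names = list(factors)
scenarios = [dict(zip(names, values)) for values in product(*factors.values())]
print(f"{len(scenarios)} combinations to exercise")  # 3 * 2 * 2 * 3 = 36

for scenario in scenarios:
    decision = system_under_test(scenario)
    # Assert a safety invariant rather than an exact expected output.
    if scenario["weather"] == "fog" and scenario["speed_kph"] >= 100:
        assert decision == "brake", scenario
```

Even four small factors already yield 36 scenarios; real systems multiply far beyond what manual test design can cover, which is exactly why this kind of tooling matters.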
For true day-to-day integration with AI, which is surely coming, we also need to assess the emotional intelligence of our systems and consider their social and societal readiness. We are just scratching the surface of how we do that and what it truly means. Ethics and emotional decisions are hugely subjective and vary widely with locality and culture; the "right" ethics in one society are not the same in another, and the techniques to establish and assess this will continue to grow and evolve.
There is no doubt that AI can provide immense benefits, but we must also be aware of the risks and downsides. We owe it to our customers to deliver the best possible results, and those results must be meaningful and contextualised for them. We also owe it to society to be progressive and constantly open to innovation and new technologies, especially when it comes to matters of life and death.
That said, opinions differ widely on whether, within the next 20 years, AI will be able to eliminate the human element in areas of application requiring nuance, sensitivity and intuition. So, the machines are surely coming, but it could well be a long time yet before they take over!