Banking, financial services and insurance (BFSI) firms are looking to big data, artificial intelligence (AI) and machine learning (ML) to increase productivity, reduce costs and provide enhanced products and services for customers. But these new technologies are raising urgent questions about data governance, AI ethics, legality and trust.

New legislation from the EU seeks to answer some of these questions by identifying and flagging AI systems that are high risk. This includes AI systems used by BFSI firms, such as those involved in credit checks.

Complying with these new rules is going to be tricky, so sitting tight is not an option. You need to be ready. Read on to find out more.

How is big data related to AI and ML?

Big data, AI and ML are becoming inextricably linked as organizations use big data to train AI algorithms and, in turn, lean on AI to understand big data. But there are also clear distinctions.

Big data should meet the requirements of GDPR and be used lawfully, fairly and responsibly; AI and ML also need to be trustworthy. In short, can those who use or whose lives are affected by AI or ML-powered systems trust the decisions that those systems make?

When big data, AI and ML get it right

When big data, AI and ML align to make the best use of information, magic happens.

Take the pressure BFSI firms are under to support customers through hikes in the cost of living and soaring energy, food and fuel bills. One of our banking clients is using AI and big data to automate, personalize and speed up the resolution of customers’ debt issues.

Matching big data with AI and ML also offers banks opportunities to help customers make their money go further. Internal big data, for example, tells a bank when a customer pays their electricity bill, who they pay it to and how much they pay. If banks couple this information with external big data and AI, they could start offering customers personal recommendations to switch electricity suppliers to get a better deal.

Banks would, however, need to be absolutely clear about what those recommendations are based on: the best price in the market, or the best price from a selected list of preferred suppliers?
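
To make that transparency concrete, here is a minimal sketch of how such a recommendation could label its own basis. The names (Tariff, recommend_switch) and figures are illustrative assumptions, not any client’s actual implementation.

```python
# Minimal sketch of a supplier-switch recommendation that labels its basis.
# All names and figures are hypothetical; a real system would draw on the
# bank's transaction data and an external price feed.
from dataclasses import dataclass

@dataclass
class Tariff:
    supplier: str
    est_annual_cost: float  # estimated annual cost for this customer's usage

def recommend_switch(current_annual_spend: float,
                     tariffs: list[Tariff],
                     whole_market: bool) -> dict:
    """Suggest the cheapest tariff and state what the comparison covers."""
    best = min(tariffs, key=lambda t: t.est_annual_cost)
    saving = current_annual_spend - best.est_annual_cost
    return {
        "recommended_supplier": best.supplier,
        "estimated_annual_saving": round(saving, 2),
        # Transparency: best price in the whole market, or only the best
        # price from a selected list of preferred suppliers?
        "comparison_basis": "whole market" if whole_market else "preferred suppliers only",
    }

# Example: a customer currently paying ~1,200 a year for electricity
offers = [Tariff("Supplier A", 1040.0), Tariff("Supplier B", 1115.0)]
print(recommend_switch(1200.0, offers, whole_market=False))
```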

Confronting the risks of big data, AI and ML

Of course, there are also instances when an AI/ML model, or the underlying data it was built on, has driven decisions that could damage people’s lives.

Take the AI recruiting tool developed by Amazon that showed bias against women. Or the recent announcement from the UK’s data watchdog, the Information Commissioner’s Office (ICO), that it will be investigating whether AI systems are showing bias against neurodiverse people and ethnic minorities when dealing with job applications. The problem? The ICO is concerned that neurodiverse people and ethnic minorities weren’t part of the testing for this software.

This underlines the need for AI rules that keep people safe, maintain their privacy, protect them from the likes of fraud and avoid unfair bias.
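
As a rough illustration of the kind of testing that concern points to, the sketch below compares screening pass rates across groups on synthetic data. The function names, the data and the threshold used are illustrative assumptions, not a regulatory requirement.

```python
# Illustrative sketch of a simple group-outcome check for a screening model.
# The data here is synthetic; real testing would use representative applicant
# data, including groups that are often missing from it.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group_label, passed_screening) tuples."""
    totals, passed = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        passed[group] += int(ok)
    return {g: passed[g] / totals[g] for g in totals}

results = [("group_a", True), ("group_a", True), ("group_a", False),
           ("group_b", True), ("group_b", False), ("group_b", False)]
rates = selection_rates(results)

# A common rule of thumb flags disparity when one group's rate falls below
# roughly 80% of the highest group's rate (the "four-fifths" rule).
worst, best = min(rates.values()), max(rates.values())
print(rates, "disparity ratio:", round(worst / best, 2))
```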

Current AI, ML, and big data regulations

Right now, regulation isn’t straightforward, as globally there’s no big data or AI-specific legislation. GDPR, for example, addresses many of the concerns over the handling of personal data, which is fundamental to the development of many AI systems. But other existing regulations expose overlaps, gaps and inconsistencies. These can confuse the rules and challenge the confidence of businesses and the public in AI.

What are the new EU rules for trustworthy AI?

This is set to change. In April 2021, the EU published its draft AI Act, which proposes a legislative framework that controls the use of AI while also increasing public trust in it. This Act will apply to organizations providing or using AI systems in the EU. It will also have huge implications for BFSI firms, as:

  1. It will impact AI providers outside the EU’s jurisdiction, including those in the UK, if their AI systems affect individuals in the EU.
  2. The EU Commission is taking a broad definition of AI and wants to protect individuals and society from a wider set of harms than those resulting from the misuse of personal data.
  3. The Commission is also taking a risk-based approach, and AI systems used to evaluate a person’s creditworthiness, check and track work performance and behavior, and recruit staff are set to be classified as high risk. (There is also a small number of prohibited AI systems, but these are likely to be of very limited application to BFSI organizations.)
  4. Firms that market and/or use AI high-risk systems will have to follow stringent rules. They will also be subject to conformity assessments and registration requirements.
  5. These rules will be complex, addressing risk management, data quality, technical documentation, human oversight, transparency, robustness, accuracy and security. And, like GDPR, they will come with hefty fines for non-compliance – up to €30 million or 6% of an organization’s global turnover. Ouch!

And while the rules may not be finalized until at least 2023/4, BFSI firms need to be ready.

How to prepare for the new EU AI Act

The impact of this new legislation is expected to be as far-reaching as that felt when the EU’s GDPR came into force. So, at the very least, you should conduct a risk assessment, identifying which of your AI-powered systems are likely to be high risk. This will give you an idea of how much time and effort implementing the requirements will entail.
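
As a starting point, that risk assessment can be as simple as tagging an inventory of AI systems against the use cases the draft Act singles out. The sketch below assumes a simplified keyword mapping; it is not an authoritative reading of the legislation.

```python
# A minimal sketch of triaging an AI-system inventory against draft AI Act
# risk categories. The use-case keywords and the mapping are simplified
# assumptions, not legal advice.
HIGH_RISK_USES = {"creditworthiness", "recruitment", "worker monitoring"}

def classify(system: dict) -> str:
    """system: {"name": ..., "use_case": ...} -> indicative risk tier."""
    if system["use_case"] in HIGH_RISK_USES:
        return "high risk: conformity assessment and registration likely"
    return "lower risk: monitor guidance as the Act is finalised"

inventory = [
    {"name": "loan-decisioning model", "use_case": "creditworthiness"},
    {"name": "CV-screening assistant", "use_case": "recruitment"},
    {"name": "marketing segmentation", "use_case": "customer segmentation"},
]
for s in inventory:
    print(s["name"], "->", classify(s))
```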

But little guidance is currently available, so the approach will need to be one of self-regulation, ensuring any AI in use is properly governed. During the build and testing of AI systems, for example, you will need to identify who internally has the knowledge to verify the decisions made using AI.
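
One lightweight way to support that verification is to record each AI-assisted decision together with the person accountable for checking it. The sketch below assumes a hypothetical review log; the field names, system name and addresses are illustrative only.

```python
# A minimal sketch of recording AI decisions for human verification.
# It only illustrates the idea of named oversight; it is not a prescribed format.
import json
from datetime import datetime, timezone

def log_decision(system: str, inputs: dict, decision: str, reviewer: str) -> str:
    """Return a JSON record linking a decision to an accountable reviewer."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "inputs": inputs,            # features the model actually used
        "decision": decision,
        "human_reviewer": reviewer,  # who can verify this decision
    }
    return json.dumps(record)

print(log_decision("credit-scoring-v2",
                   {"income": 42000, "existing_debt": 5500},
                   "declined",
                   "j.smith@bank.example"))
```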

UK diverges from EU AI Act

To complicate matters, it’s becoming ever plainer that UK policymakers are not likely to follow the EU’s regulatory lead. Indeed, in July, the UK government rejected the EU’s centralized regulatory approach to AI in favor of giving existing sector regulators AI oversight instead.

And in the UK, the BFSI sector looks to two regulatory bodies. The first is the ICO, mentioned above, which protects individuals’ information rights under GDPR. It ensures that organizations – including banks, insurers and financial services firms – use personal data lawfully, fairly and responsibly.

The second is the Financial Conduct Authority (FCA), which is the conduct regulator for around 51,000 financial services firms and financial markets in the UK. Part of the FCA’s responsibilities is ensuring that consumers can access financial products and services that are fit for purpose and represent fair value.

UK AI proposals call for a named ‘legal person’

Clearly, it’s a point of competitive advantage to be the country or jurisdiction that develops the AI policies and regulations that become the internationally accepted standards and norms. So, it will be interesting to see how the different approaches of the UK and EU play out.

But for now, the aim of the UK government is to encourage innovation by enabling sector-specific regulators, such as the ICO and FCA, to consider “lighter-touch options.” These lighter touches include guidance, voluntary measures and sandboxes: trial environments where businesses can check the safety and reliability of AI tech before introducing it to the market.

The government will also be asking regulators to interpret and put in place the core principles, which require developers and users to:

  • Ensure that AI is used safely
  • Ensure that AI is technically secure, and functions as designed
  • Make sure that AI is appropriately transparent and explainable
  • Consider fairness
  • Identify a legal person to be responsible for AI
  • Clarify routes to redress or contestability

And again, what jumps out here is the principle that a named legal person will have to be identified and held responsible for any problems an AI system causes.
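
One way to operationalize these principles is a per-system governance record that names the responsible legal person alongside evidence for the other requirements. The field names, system name and contact details below are assumptions for illustration, not a prescribed schema.

```python
# A hedged sketch of what a per-system governance record might capture to
# address the UK principles listed above; field names are assumptions.
ai_system_register = {
    "system": "credit-scoring-v2",            # hypothetical system name
    "safety_tested": True,                     # AI is used safely
    "security_reviewed": True,                 # technically secure, functions as designed
    "transparency_notice_published": True,     # appropriately transparent and explainable
    "fairness_assessment_done": True,          # fairness considered
    "responsible_legal_person": "Example Bank plc",
    "redress_route": "complaints@bank.example",
}

# Flag any principle without supporting evidence or a named owner.
unmet = [k for k, v in ai_system_register.items() if v in (False, None, "")]
print("Unmet principles:", unmet or "none")
```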

Key takeaways

Technological advances in big data, AI and ML can benefit banks, financial services firms and insurers – and their customers – by supporting financial stability, market integrity and competition. But there’s a need to ensure that the use of this new tech remains ethical, responsible and trustworthy.

The extent to which existing laws apply to big data and AI can be hard for BFSI firms to navigate. Overlaps, inconsistencies and gaps in the current approaches by regulators can also confuse the rules, making it hard for businesses and the public to have confidence in AI.

But new legislation is coming in the shape of the EU AI Act. Responsibility for overseeing compliance with this Act will likely fall to the UK’s regulators, such as the ICO and FCA. And, as with GDPR, quality engineering teams will in most instances be tasked with verifying internal compliance. So, understanding the legislation will be paramount.

BFSI firms using AI should start planning to meet requirements as soon as possible, ensuring they have resources with the required skills available when needed.
