How is Artificial Intelligence (AI) used in financial services?

Artificial intelligence (AI) has become an essential resource for financial institutions. Certain core operations couldn't function without process automation, and AI can enable faster, more personalised financial services. But alongside these advantages in a competitive market are a range of legal challenges, including data protection concerns, algorithmic bias and the systemic risks of greater reliance on automation. Whilst no large financial institution can afford not to integrate AI into its business, it's important to make the parameters of AI deployment transparent and open to scrutiny.


What is AI used for in financial services?

AI uses probabilistic, rather than deterministic, decision-making logic. This means that it draws inferences from the presence of data points it has been programmed to identify within a sample of material, rather than following the rules of causation that humans often use. If a large mass of existing data is structured so that a computer can 'learn' from it, AI can spot patterns across millions of data inputs. The distinguishing feature of AI, compared with other kinds of automation, is that it can alter its analysis of the data if given feedback on whether its inferences were correct. In a financial services context, this means identifying processes that can be improved with AI technology to create faster, more accurate services, such as credit decisions or risk management solutions.
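That feedback loop can be illustrated with a deliberately simplified sketch. This is not any institution's actual system; the feature names and figures are invented for illustration. The model produces a probabilistic score from weighted data points, then nudges its weights whenever it is told if an inference was correct:

```python
import math

def predict(weights, features):
    """Probabilistic inference: a weighted sum squashed into a score between 0 and 1."""
    z = sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

def learn_from_feedback(weights, features, correct_label, rate=0.1):
    """Adjust each weight towards the correct answer (a small gradient step)."""
    error = correct_label - predict(weights, features)
    return [w + rate * error * x for w, x in zip(weights, features)]

# Hypothetical credit-decision inputs: [income band, missed payments, account age]
weights = [0.0, 0.0, 0.0]
feedback_history = [
    ([1.0, 0.0, 1.0], 1),  # applicant repaid: feedback label 1
    ([0.2, 1.0, 0.1], 0),  # applicant defaulted: feedback label 0
] * 200

# The model's analysis of the same data changes as feedback accumulates
for features, label in feedback_history:
    weights = learn_from_feedback(weights, features, label)

print(predict(weights, [1.0, 0.0, 1.0]))  # high score: likely approved
print(predict(weights, [0.2, 1.0, 0.1]))  # low score: likely declined
```

The point of the sketch is the contrast with deterministic automation: no rule says "decline applicants with missed payments"; the weighting emerges from the data and feedback, which is also why the resulting logic can be hard to explain.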


What are the legal consequences of AI in financial services?

Data is the fuel of AI, but the drive to accumulate, structure and analyse information in 'data pools' presents the ever-increasing risk of large-scale data theft. The damage, reputational and otherwise, of such incidents may be compensable when the information concerns passwords, but an increasing proportion of the data which banks store about their customers can be described as 'inherent'. This includes fingerprint ID, voice samples and other data which a customer cannot alter should their information fall into the wrong hands.

Firms need to treat customers fairly when using AI, but how to do this isn't always clear. There may be a responsibility to disclose when a 'chatbot' is fielding a customer's queries rather than a human. Similarly, data protection law requires that consumers are told how their personal data will be used. How do you explain AI processes and continuous learning to members of the public?

The most serious concerns have arisen from the realisation that 'algorithmic bias' can occur when AI draws inferences from data which result in unequal treatment of people from different races, genders, nationalities etc. This could affect banks' money-laundering screening or insurers' pricing of policies. Because of the probabilistic nature of AI, it is harder for a financial institution to spot biased reasoning in the coding which underpins decision-making; the responsibility to 'audit' AI to prevent bias must be an ongoing one as the data pool expands.
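One form such an ongoing 'audit' can take is a simple statistical check comparing a model's outcomes across groups. The sketch below is illustrative only: the group names and decisions are invented, and the 80% threshold is borrowed from the US 'four-fifths' rule of thumb rather than any UK or EU legal standard.

```python
def approval_rate(decisions):
    """Share of decisions in a group that were approvals (1s)."""
    return sum(decisions) / len(decisions)

def disparate_impact_flags(decisions_by_group, threshold=0.8):
    """Flag any group whose approval rate falls below `threshold`
    times the best-treated group's rate."""
    rates = {group: approval_rate(d) for group, d in decisions_by_group.items()}
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}

# Hypothetical decision logs, re-checked as the data pool expands
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 1],  # 7 of 8 approved
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0],  # 3 of 8 approved
}
print(disparate_impact_flags(decisions))
```

Here group_b's approval rate (0.375) is well below 80% of group_a's (0.875), so it would be flagged for investigation. A real audit would be far richer, but the design point stands: because bias emerges from the data rather than explicit rules, it is found by repeatedly measuring outcomes, not by reading the code once.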

Finally, there is the threat of systemic risk. This may arise from 'procyclicality' (where humans and machines mirror existing financial behaviour and amplify each other's excesses) and from uniformity of thinking, which leaves us all with the same blind spots and risky tendencies. Firms need decision-making diversity of all kinds, whether human or machine.


Financial service providers look to create an ethical AI framework

A recent industry report recommended that financial service providers develop an ethical AI framework with which to guide their decision-making, whether human or AI-driven. As firms, including Pinsent Masons, see the value in being 'purpose-led', the need to apply the same standards of transparency, accountability and ethics to AI becomes clearer. The opportunities to foster innovation and enable growth are clear, but so too is the need for leadership and responsibility.


About the author:

Rory is a solicitor in the Financial Regulation team at Pinsent Masons LLP, with personal academic interests in private, public and international law approaches to emerging financial technologies.