Artificial intelligence (AI) can increase the efficiency and improve the security of financial technology (fintech) services, but it must be applied in a controlled way to avoid conflict with customers and regulators.
AI can help automate and support an array of processes, including decision-making on lending, insurance applications and fund management, as well as customer support and credit risk assessment. Cutting-edge fintech companies introduce AI to improve efficiency and accuracy, and to enable quicker query resolution, by analysing and managing data from multiple new, non-traditional sources.
Consider, for example, a financial model that predicts the likelihood of default on a loan. A traditional assessment model might assume that an increase in income always reduces an applicant's likelihood of default. By contrast, an AI model might suggest that, based on other non-traditional data such as shopping habits or social media behaviour, a pay increase can in fact signal a greater risk of default.
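The sketch below illustrates this behaviour with a toy logistic regression. The feature names, data and coefficients are all invented for illustration; production credit-scoring models are substantially more sophisticated.

```python
# Toy default-risk model mixing a traditional feature (income) with
# invented non-traditional ones. All names and data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

income_k = rng.normal(40, 10, n)          # traditional: income in GBP thousands
spend_volatility = rng.uniform(0, 1, n)   # non-traditional: shopping habits
night_posting = rng.uniform(0, 1, n)      # non-traditional: social media behaviour

# Synthetic ground truth: risk falls with income but rises with erratic
# spending, so a pay rise alone does not guarantee lower default risk.
logit = -0.05 * income_k + 2.0 * spend_volatility + 0.5 * night_posting
default = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([income_k, spend_volatility, night_posting])
model = LogisticRegression().fit(X, default)

# An applicant whose income rises but whose spending becomes more
# erratic can come out riskier than before the pay rise.
before = model.predict_proba([[35, 0.2, 0.3]])[0, 1]
after = model.predict_proba([[45, 0.9, 0.3]])[0, 1]
print(f"P(default) before pay rise: {before:.2f}, after: {after:.2f}")
```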
Importantly, AI also plays a major role in safeguarding network systems, adding a further welcome layer of protection to client data. Biometric authentication (facial, fingerprint or voice recognition, which is more difficult to bypass than traditional passwords) significantly reduces the likelihood of data breaches, providing a greater degree of comfort to fintechs and their customers alike.
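As a minimal illustration of the matching step behind such authentication, the sketch below compares a stored biometric embedding with a fresh capture against a similarity threshold. The embeddings and threshold are stand-ins; real systems derive embeddings from trained recognition models and tune thresholds per modality.

```python
# Minimal sketch of biometric matching: a stored embedding is compared
# with a fresh capture, and access is granted only above a similarity
# threshold. Random vectors stand in for real recognition-model outputs.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.85  # illustrative; real thresholds are tuned per modality

rng = np.random.default_rng(1)
enrolled = rng.random(128)                        # stored at enrolment
capture = enrolled + rng.normal(0, 0.05, 128)     # fresh scan, slight noise

if cosine_similarity(enrolled, capture) >= THRESHOLD:
    print("Authenticated")
else:
    print("Rejected: fall back to step-up verification")
```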
Bias and the need for transparency
AI-driven decisions are only as good as the data on which they are trained, and increasing vigilance is being brought to bear on the appropriateness of input data sets, particularly regarding any potential bias in the data. Concerns revolve around the following (a simple illustrative check for one of these appears after the list):
Sample bias (a data set that is not large or representative enough)
Exclusion bias (inappropriately excluding a data set feature, often during 'cleaning')
Observer bias (the influence of a researcher's own prejudices)
Prejudice bias (use of prejudiced data in an input data set, producing a prejudiced outcome)
Measurement bias (use of an inaccurate measurement tool).
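As one illustration of this kind of pre-release scrutiny, the sketch below computes a simple demographic parity gap between two groups' approval rates. The group labels, data and 0.05 tolerance are illustrative assumptions, not a regulatory standard.

```python
# Simple pre-release check for biased outcomes: compare a model's
# approval rates across two groups (demographic parity difference).
import numpy as np

approvals = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])  # model decisions
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = approvals[group == "A"].mean()
rate_b = approvals[group == "B"].mean()
gap = abs(rate_a - rate_b)

print(f"Approval rate A: {rate_a:.2f}, B: {rate_b:.2f}, gap: {gap:.2f}")
if gap > 0.05:  # illustrative tolerance
    print("Flag for review: outcomes differ materially between groups")
```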
In short, fintech AI products need to be closely scrutinised before their outputs are put in front of customers or the wider public.
Reputational harm and the responsible use of AI
A failure to manage the use of AI responsibly can erode public trust. The 2020 Ofqual incident, in which a controversial algorithm was used to assess students' performance and calculate grades during the pandemic, shows the damage that inappropriate use of AI can cause.
Businesses are increasingly alert to the risks of using AI unfairly.
Sustainability
Sustainability issues are on boardroom agendas, as stakeholders seek to bring ESG principles to life and avoid claims of 'greenwashing'. Sustainability takes account of a wide range of ethical values, including respect, solidarity, individual and community wellbeing, and social justice.
Ongoing attention to how AI systems are designed and deployed is critical to managing these concerns.
Regulatory scrutiny
Alongside their existing regulatory obligations, financial services companies need to keep abreast of the AI regulatory landscape to ensure that their AI systems comply. Existing laws and regulations relevant to the use of AI in UK financial services include:
The FCA Handbook (e.g., avoidance of market abuse)
The Prudential Regulation Authority Rulebook (adequacy of prudential risk management practices)
Equality law (avoidance of unlawful discrimination)
Competition law (avoidance of collusion and other anti-competitive practices)
Data protection law (adherence to relevant principles for managing and processing personal data).
Elsewhere, the EU unveiled a draft regulatory framework for AI in April 2021 in the form of the Artificial Intelligence Act, a legal and ethical structure that seeks to provide a 'product safety framework' around the use of AI. The regulation will have far-reaching geographical application, extending to non-EU organisations that supply AI systems into the EU.
We can expect that legislative frameworks will emerge globally to manage the exponential rise in the use of AI.
Insurance implications
The potential exposures arising from the use of AI, both from a liability and from a first-party perspective, are far-reaching. Consider the following examples:
Data and privacy: regulatory and liability issues can arise if data is managed incorrectly and/or unethically. The EU's draft regulatory framework contains strict protocols around documentation, record keeping and the safeguarding of data.
Extortion risk: AI technologies and data sets are increasingly subject to the threat of theft or extortion.
Directors and officers: potential exposure around officers' liability for failing to manage AI-related threats appropriately, including the impact of ESG principles (e.g. social issues surrounding the responsible, transparent and ethical use of AI, and governance issues around failure to manage exposures appropriately).
Professional indemnity: the negligent use of AI (perhaps through a poorly configured algorithm) in the provision of professional services (either directly or via vendor-provided software), as well as potential regulatory fines for non-compliant use of AI.
Intellectual property: consider the need to protect AI algorithms with IP insurance.
Product liability: consider this in the context of an AI supplier whose product or algorithm does not operate as marketed.
While AI in fintech is no doubt here to stay, offering considerable benefits to businesses and customers alike, its increasing use may cause new exposures to emerge over time, necessitating a closer look at the various insurances available to mitigate these risks.
For more information, visit our Cyber and Technology page.