The effects of artificial intelligence (AI) are controversial, and the EU is introducing a regulatory framework that sets legal standards for the technology’s application. This will have wider implications beyond the EU’s borders.
As with most things in life, the benefits do not come without some risk. While the advantages of AI in everyday life are tangible (from a Spotify recommendation to advanced disease mapping), there is also the potential for damage.
Stephen Hawking prophesied: “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilisation.” In a similar vein, Elon Musk opined last year, “Mark my words, AI is far more dangerous than nukes” - all the more poignant considering the recent death of two men in a Tesla. Preliminary investigations revealed “no one was driving the car”.
Regulators are now taking action to rein in the potentially damaging effects of AI.
Liability and the EU regulatory framework
The European Union is looking at issues surrounding liability and has published its draft regulation on AI. The regulation will have far-reaching geographical application, including to non-EU organisations that supply AI systems into the EU. It takes a risk-based approach, proposing that all AI falls into one of the following levels:
Unacceptable risk and therefore prohibited. This bans, for example, the use of AI that deploys subliminal techniques.
High-risk AI systems (HRAIS). The regulation centres on HRAIS, introducing a raft of new mandatory requirements. Biometric identification is a particular focus, as are systems relating to critical infrastructure (e.g. water supply), safety componentry (e.g. robotic surgery), and other processes that necessitate significant risk management.
Limited-risk AI. These systems are subject to enhanced transparency obligations: providers of AI that interacts with humans, such as chatbots, must ensure that individuals are aware they are dealing with an AI system.
Minimal risk. Voluntary codes of conduct are available.
The draft European Commission framework is proceeding through the legislative process and is not expected to become binding for up to two years, potentially followed by a grace period of another two years.
Of particular note, the draft includes fines of up to 6% of revenue for non-compliance with prohibited AI uses and with the data and governance measures for HRAIS. Other fines of up to 4% are proposed.
The insurance implications
It is critical that all businesses are aware of the legal risks to their organisations. The proposal requires providers and users of high-risk AI systems to comply with rules on various aspects of data and data governance, documentation and record keeping, transparency and provision of information, robustness, accuracy and security, as well as human oversight.
If implemented, the rules will also necessitate an assessment that HRAIS meet these standards before they can be offered on the market or put into service. Further, the rules mandate a post-market monitoring system to detect and mitigate problems.
The regulation brings with it possible new legal liabilities including:
Increased exposure for failing to safeguard data in accordance with the new AI regulatory requirements referred to above (assuming they are implemented).
The potential for poorly designed machine learning to operate unethically and/or breach anti-discrimination laws, for example by acting in contravention of the EU Charter of Fundamental Rights. Lemonade Inc., an internet-based insurer, recently came under fire for using AI to “pick up non-verbal cues that traditional insurers can’t” when analysing videos submitted as part of the business’s claims procedure.
Issues surrounding malfunctioning AI causing damage (technological damage), as well as flaws in decisions made by machine-learning systems. Consider the legal liability where a system operates independently of its operator and where the designers could not anticipate a particular outcome; essentially, where there is no human to blame.
Additional risk of extortion. Nation states in particular will be keen to get their hands on attractive AI technologies and data sets.
AI-based cyber-attacks will also become another part of the cyber criminals’ arsenal, upping the ante for all businesses.
Simply put, in addition to possible severe financial penalties from regulators, AI significantly increases potential liabilities and can cause devastating damage to a business’s reputation and commercial standing.
It is likely that certain insurances will see a rise in popularity. Market-leading cyber policies, for example, cover both regulatory issues and third party liability arising from privacy breaches. As outlined above, the AI threat creates regulatory liabilities far beyond the scope of the privacy breach regulations, which have been the main focus of businesses to date.
Combined with the recent torrent of ransomware incidents, we may see more and more businesses accepting that a cyber policy is no longer a discretionary spend. For those businesses using AI that already have cyber coverage, a recalculation of cyber coverage limits may be necessary.
A rapidly changing cyber risk landscape requires companies to reassess defence levels regularly to minimise potential damage to their balance sheets and reputations. While cyber insurance underwriters have become very selective in their approaches, this process is having a positive effect in helping companies to identify and address their weaknesses.
Rising Claims
The world has experienced a rise in the severity and frequency of cyber-attacks, resulting in increased costs for businesses and increased claims for insurers.
A sharp increase in cyber incidents in the US, particularly ransomware, has led to higher insurance claim counts and loss severity over the past two years, according to a recent Fitch Ratings report.
While US data on cyber incidents is generally broader due to mandatory disclosure requirements in all 50 states, there is evidence to show that the rest of the world is experiencing a similar trend.
Information from the Lockton Cyber Claims Team in London covering the last five years shows that:
Claims frequency has been increasing at an average of 13% year-on-year since 2017, and total losses have increased at an average of 80%.
Claims arising from external actors (such as data theft, malware, and social engineering) increased by 59% as a proportion of all claims seen by the team between 2019 and 2020.
The share of claims caused specifically by ransomware grew from 5% of claims notified to Lockton in 2018 to 17% in 2020. Ransomware-driven claims also accounted for 10% of the total cost incurred in 2018, rising to 80% in 2020.
A Tough Cyber Market
Cyber insurance premiums have been on the rise as underwriters adjust pricing to reflect claims history. The shift has also resulted in some insurance buyers facing reduced limits, ransomware sub-limits and co-insurance restrictions. Additionally, insureds are finding that meeting stricter insurer minimum standards requires more time and resources. These tightening market conditions and increased time and resource commitments, set against declining market capacity, might cause some companies to wonder whether purchasing cyber insurance still makes financial sense.
Notwithstanding all this, even in a tense cyber risk environment, transferring cyber risk to the insurance market, as opposed to retaining it, is still likely to make commercial sense.
Survival of the Fittest
Insurers are raising the floor for minimum controls for businesses and seeking greater assurances around cyber security before submitting a quote. Some cyber hygiene standards that were merely recommended two years ago are now considered ‘mandatory’.
While additional underwriter scrutiny may add further complexity and necessitate greater internal resources to provide the requisite degree of comfort to insurers, it also offers an opportunity to strengthen a company’s cyber defences. As the frequency and severity of attacks continue to grow and companies expand their digital footprints, the greater focus on cyber hygiene protocols may be viewed as a welcome opportunity to increase resilience.
Lockton have observed several instances where internal IT departments have leveraged insurer minimum requirements to secure approval for internal cyber security projects or improvements, a win for both insureds and insurers.
Managing Vulnerabilities
Add into the mix a 24/7 ‘cyber hotline’. A market-leading cyber policy typically includes a breach response team, providing immediate access to legal advisers, IT forensic consultants, specialist ransomware negotiators, and public relations and crisis management personnel. Having an experienced response team on call, ready to deal with the consequences of a cyber event, is a welcome benefit, particularly when staff may be feeling vulnerable and time is of the essence. It maximises an insured’s ability to get back ‘up and running’ as quickly as possible.
A significant increase in claims in the last 24 months has led to breach response teams being engaged more than ever. Lockton have observed insureds benefiting from the intelligence these teams obtain. By way of example, a breach response team may be dealing with the same threat actor across a number of claims over a one-month period. Expertise in how particular threat actors operate and negotiate can be priceless and has led to better outcomes on claims.
Comprehensive Cover
There can be some misunderstanding around what cyber insurance is and, indeed, what ‘cyber cover’ a company has actually purchased. Anecdotally, we are aware of businesses which thought they had purchased ‘cyber cover’ only to discover that their cover was a component part of another policy. Historically, some more traditional policies, such as professional indemnity (PI) insurance, have extended to include limited cyber cover; however, this is often restricted to third-party liability (with little cover for first-party costs, such as the breach response).
Relying on cyber cover in these more ‘general’ (i.e., not standalone cyber) policies can be risky, particularly considering the recent Lloyd’s ‘silent cyber’ mandate, which has seen cyber-related losses excluded from these policies.
A standalone cyber policy is designed specifically to respond to events involving privacy breaches (as they often occur in the ‘cyber space’) and network security breaches (e.g., the classic ransomware attack or phishing event).
Cover generally extends to both third-party liabilities and first-party costs.
Added Extras
Many companies choose to complete a full cyber risk analysis as part of purchasing a cyber insurance policy (often using third-party consultants who specialise in this area). This should ensure that the cyber threat is appropriately (and accurately) identified, mitigated, managed, and then transferred.
Insurers now also provide significant added value through information sharing, vulnerability alerts and applications that assist organisations in improving their broader risk posture.
Openly and transparently addressing a company’s cyber strengths and weaknesses can limit potential exposure to directors and officers (D&O) claims based on the proposition that management failed in its duties to protect the organisation appropriately.
Addressing deficiencies, having assessments performed by independent third parties and transferring the risk to insurance all help to demonstrate serious consideration, understanding and management of a critical business risk, mitigating directors’ and officers’ exposure.
Furthermore, the process may mitigate claims of ‘greenwashing’ of environmental, social, and governance (ESG) principles, showing commitment to the S (e.g., data protection) and the G (management leadership).
For more information, visit our Cyber and Technology page.