Opportunities and risks of AI in healthcare

Artificial intelligence (AI) is playing an increasingly influential role in the healthcare sector, helping to improve diagnosis, streamline processes, enhance patient care, create efficiencies in data sharing and, critically, save more lives. As the technology continues to advance, the opportunities are vast: from analysing lab results and supporting diagnoses, to assisting with patient surgeries and correcting potential errors in drug administration.

At the same time, record inflation and ongoing labour shortages have placed healthcare services under mounting pressure, a fact reflected in the long waiting lists in the UK’s National Health Service (NHS) and other public sector healthcare services globally. With a growing need to redefine and remodel healthcare provision, AI offers an attractive avenue towards reducing costs.

Yet adopting innovative technology of any kind is not without risk. To identify and mitigate those risks, it’s vital to understand the potential applications of AI within the healthcare sector. What’s more, by comprehensively stress-testing their insurance programmes, businesses can ensure they are adequately protected should any liability arise.

AI application opportunities in healthcare

Healthcare providers are already exploring the use of AI in a wide array of areas. For many, AI tools exist to equip medical professionals with additional information or analysis, which can be used to inform diagnosis or treatment decisions. Others are deploying AI applications in surgery: in 2020, neurosurgeons at Johns Hopkins University performed the first augmented reality (AR) assisted spinal fusion surgery. More recently, surgeons at hospitals trialling robot-assisted surgery have reported performing three operations a day instead of one, with some patients going home the same day.

AI is particularly well suited to tasks involving large data sets, such as interpreting images from X-ray, MRI, and CT scans to detect abnormalities. In this way, AI can help radiologists make more accurate treatment decisions. Similarly, researchers are exploring potential applications for AI in fields ranging from rare disease diagnostics to risk assessments and treatments for cancer.
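
To make the pattern concrete, here is a minimal, purely illustrative sketch of how an imaging model might triage scans for radiologist review. It uses synthetic data in place of real scan images, and the model, features, and review threshold are hypothetical stand-ins rather than any specific product:

```python
# Minimal sketch of imaging triage: score a scan, route likely
# abnormalities to a radiologist. Purely illustrative: synthetic
# "images" stand in for real data; the threshold is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training set: 200 flattened 8x8 "scans", label 1 = abnormal.
X_train = rng.normal(size=(200, 64))
y_train = (X_train.mean(axis=1) > 0).astype(int)  # toy labelling rule

model = LogisticRegression().fit(X_train, y_train)

# Score a new scan and route high-probability cases for human review.
new_scan = rng.normal(size=(1, 64))
p_abnormal = model.predict_proba(new_scan)[0, 1]
REVIEW_THRESHOLD = 0.5  # hypothetical triage cut-off
if p_abnormal >= REVIEW_THRESHOLD:
    print(f"Flag for radiologist review (p={p_abnormal:.2f})")
else:
    print(f"Routine queue (p={p_abnormal:.2f})")
```

The design point is that the model routes cases rather than replacing clinical judgement: a flagged scan still goes to a radiologist for the final decision.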

Other potential applications include: 

  • Virtual nursing assistance 

  • Fraud detection 

  • Cybersecurity 

  • Gene editing 

  • Personalised healthcare plans 

  • Dosage error reduction 

  • Medical diagnosis 

  • Drug development 

The global AI healthcare market is predicted to grow to $187 billion by 2030, up from $11 billion in 2021. But the benefits of AI, if realised, aren’t exclusively medical; the technology promises financial opportunities for the healthcare sector too. According to Harvard’s School of Public Health, using AI to make diagnoses may reduce treatment costs by up to 50%. Tools such as voice-powered assistants and machine learning are already available to help make the treatment of patients more efficient and reduce waiting lists. The technology can also help connect disparate healthcare data and enable better health monitoring and preventive care.
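
For context, the cited figures imply a very steep growth rate; a quick back-of-the-envelope calculation, using only the numbers above, shows roughly 37% compound annual growth:

```python
# Implied compound annual growth rate (CAGR) behind the cited market
# figures: $11bn in 2021 growing to $187bn by 2030.
start_usd_bn, end_usd_bn = 11, 187
years = 2030 - 2021
cagr = (end_usd_bn / start_usd_bn) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~37% per year
```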

Understanding the risk

Despite these potential advantages, the deployment of AI also introduces new risks. There are concerns around the potential for bias in the underlying algorithms of AI tools, which means AI-driven decisions may produce inaccurate diagnoses or treatment recommendations, with potentially devastating consequences for patients.

Due to the nature of healthcare data, there are also heightened privacy risks, even if the data is anonymised prior to processing. AI technology may not only be able to re-identify an individual, but also to make sophisticated guesses about the individual’s non-health data. Healthcare entities and their third-party vendors are therefore particularly vulnerable to data breaches and potential violations of privacy laws and regulations.
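
A simple, invented example illustrates why anonymisation alone may not be enough: if an “anonymised” health extract retains quasi-identifiers such as postcode, date of birth, and sex, joining it against an outside data set can re-attach names to diagnoses. All records below are fabricated:

```python
# Illustrative linkage attack: an "anonymised" health extract is
# re-identified by joining on quasi-identifiers that also appear in
# a public data set. Every record here is invented.
import pandas as pd

health = pd.DataFrame({
    "postcode": ["SW1A 1AA", "M1 1AE"],
    "birth_date": ["1980-03-14", "1975-07-02"],
    "sex": ["F", "M"],
    "diagnosis": ["type 2 diabetes", "hypertension"],  # sensitive field
})

public = pd.DataFrame({  # e.g. an electoral roll or marketing list
    "name": ["A. Patient", "B. Example"],
    "postcode": ["SW1A 1AA", "M1 1AE"],
    "birth_date": ["1980-03-14", "1975-07-02"],
    "sex": ["F", "M"],
})

# A plain join re-attaches names to diagnoses.
reidentified = health.merge(public, on=["postcode", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```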

The key for insurers is to see a fully joined-up approach to the risk assessment. This means full disclosure of the use of AI tools, and recognition that fault may lie with the developers behind those tools rather than (or as well as) the treating physician. Insurers are likely to require detailed information about AI tools and how they are being deployed. Questions may cover how frequently a tool is deployed, the experience and qualifications of the physicians using it, how it is serviced and maintained (in-house or by a third party), and whether there is contractual recourse against the developer. Underwriters are adjusting their underwriting approach to reflect their risk appetite.

The liability perspective

Liability coverage in healthcare is being diluted with respect to cyber and privacy. New exclusions have emerged in recent years around computer systems, data, and viruses, as insurers seek to remove any coverage more suitably placed in a separate cyber policy. Where cyber exclusions apply under a general liability policy, write-backs for bodily injury may be negotiable, but they may only cover a limited definition of injury, or mental anguish only when it results from physical injury. As the cyber market currently doesn’t offer coverage for bodily injury, a gap may be developing between the two programmes that needs to be acknowledged and addressed.

Furthermore, policy wordings may be becoming outdated, inadequately defining data or the technology products and services being deployed by insureds. For example, an all-encompassing definition can inadvertently reduce cover, e.g. a definition of “computer systems” that includes electronic devices such as smartphones, laptops, tablets, or wearable devices. Where AI is used for remote monitoring on a wearable device, the device could fall within the definition of a computer system and so be excluded under the general liability policy. However, such a device is part of the care being offered to patients, and coverage for any injury arising from such treatment should not be reduced or limited. Broad exclusions and definitions around technology therefore have the potential to inadvertently limit cover for medical care and should be carefully reviewed and amended as necessary.

Crucially, medical malpractice cover must recognise the increasing use of AI by treating physicians and be affirmative in its coverage. A surgeon using augmented reality to guide a less experienced colleague in a remote location will require comfort that the medical indemnity insurance not only covers the use of such technology but also provides affirmative bodily injury coverage under the relevant products and liability coverages for the AI software being deployed and the coding behind it. Given the relative youth of AI within healthcare, data is scarce in some areas. Using statistics to demonstrate the efficiencies and lower injury rates created by AI is critical to addressing insurers’ concerns about processes that don’t include human intervention.

The cyber insurance perspective

To assess whether an AI risk is or can be covered through cyber insurance policies, it is crucial to distinguish between the types of exposure. For example, if a healthcare provider is using AI to process data for research and development (R&D), clinical trials, or patient records, these risks are likely to be covered by a cyber insurance policy, as the damage caused would be "limited" to violation of privacy and financial loss. However, if the AI technology is used in smart products that support surgery or manage pacemakers, for example, these risks are unlikely to be covered by a cyber policy. This is because the damages would fall under the bodily injury or property damage categories, both traditionally excluded from a cyber policy.

In addition, it is important to note that AI is becoming extremely valuable for defending organisations from cyber-attacks. The vast majority of latest-generation antivirus and defence software uses AI components to detect and respond to threats, so the use of AI could also have a positive impact on exposure, if managed correctly. Because the term AI is used very flexibly, underwriters have started to ask very specific questions before making underwriting decisions:

  • Is your organisation using AI to streamline and accelerate processes or in critical patient care? 

    • If the former, what controls do you have in place to make sure AI is deployed within its legal boundaries?

    • If the latter, what procedures do you have in place to monitor the accuracy of AI? 

  • If AI is used to treat patients, have you spoken to your medical malpractice insurer to make sure there are no exclusions for the damage caused by the AI? 

  • How is the technology or software managed (by internal departments or outsourced to third parties)? How much experience do they have in managing AI?  Do you have any manual workarounds if the technology malfunctions or is interrupted for whatever reason? 

  • What are your contractual provisions with technology providers? This is particularly important as the majority of technology companies limit their liability under contract to a multiple of the contract value, which is often a small portion of the potential exposure (see the illustrative calculation after this list). 

  • Do you seek customers’ authorisation for: 

    • Being treated with the assistance of machines? 

    • Processing their data by AI? This is particularly important where medical records are involved, given tightening privacy regulations.
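
On the contractual point raised above, a quick hypothetical calculation shows how far a liability cap can fall short of a realistic claim. All figures here are invented for illustration:

```python
# Hypothetical contractual liability cap: the vendor caps its
# liability at a multiple of the annual contract value, which can be
# far below a realistic bodily-injury claim. All figures are invented.
contract_value_gbp = 250_000      # annual contract value
cap_multiple = 2                  # cap set at 2x contract value
potential_claim_gbp = 10_000_000  # plausible severity for a serious claim

vendor_cap = cap_multiple * contract_value_gbp
shortfall = max(potential_claim_gbp - vendor_cap, 0)
print(f"Vendor recovery capped at £{vendor_cap:,}")
print(f"Shortfall borne by the provider and its insurers: £{shortfall:,}")
```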

Recommendations

It’s crucial that an insurance buyer shares an in-depth overview of the array of processes and services where they are deploying AI today and where they plan to deploy it in the future. Once this is understood, the challenges and risks facing healthcare providers can be assessed, and the extent to which risk transfer can help mitigate exposures can be identified. The insurance coverage will likely need to be tailored to the specific requirements of each individual healthcare provider.

One of the fundamental outputs is to interlink and dovetail cyber and liability policies to ensure there are no gaps or duplications in cover. This can be achieved through:  

  • In-depth analysis of AI’s utilisation across services  

  • Loss scenario workshops to stress test existing cover and identify gaps or duplications  

  • In-depth reviews of definitions and restrictive language  

  • Working closely with underwriters to ensure broad write-backs and breadth of cover are maintained  

Conversely, the medical malpractice, products, and liability covers need to provide affirmative language around AI. Consideration should also be given to limiting cross-class disputes by combining these covers, potentially including cyber, with one carrier, at least in the interim. Lockton sees an opportunity for the insurance market to be at the forefront of the use of AI in the healthcare sector and is encouraging the creation of an insurance product which removes any ambiguity or gaps in cover. We strive toward a single policy encompassing medical malpractice, public & products liability, and cyber. Watch this space.

For further information, please access the Lockton Healthcare page, or contact:

Sara Baker, Head of London Market Casualty

E. sara.baker@lockton.com

Carlo Ramadoro, Broker Cyber & Technology

E. carlo.ramadoro@lockton.com

Jacob Bedi, Account Executive P&C

E. jacob.bedi@lockton.com

Kevin Culliney, Partner, Head of Healthcare

E. kevin.culliney@lockton.com

Jennifer Berridge, Account Executive

E. jennifer.berridge@lockton.com
