Artificial Intelligence (AI) is already being used to increase efficiency and accuracy across a wide range of healthcare areas, and healthcare service providers are exploring many more applications for the technology. Insurers need to be kept informed from the earliest stages of developing new tools to ensure that the healthcare provider is protected against the risk of a negative outcome triggering a claim.
AI is being used for a wide range of tasks to improve patient care, streamline operations, and enhance medical research. In diagnostics and imaging, AI can assist in interpreting medical images such as X-rays, magnetic resonance imaging (MRI), and computed tomography (CT) scans to detect abnormalities, enabling radiologists to make more accurate diagnoses. The technology can also help analyse patient data, allowing researchers and healthcare providers to predict disease outbreaks and patient readmissions. As outlined in a presentation at the recent CFC Summit, ‘Incisions, instruments…internet?’, some practitioners are also using AI to monitor patient data in real time to identify signs of deterioration and send alerts so that clinicians can intervene early.
Each area of healthcare presents unique challenges, and the pace at which AI applications can be developed will inevitably vary. But in the short-to-medium term, AI will be deployed more widely, particularly in electronic health records management and to increase administrative/operational efficiency. Natural language processing tools can extract and structure information from unstructured clinical notes, making it easier for healthcare providers to access relevant patient data. Billing and claims processing can also be automated using AI, reducing errors. Both are already showing signs of freeing healthcare providers from the burden of paperwork.
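To make the clinical-notes example concrete, the sketch below shows the principle of turning unstructured text into structured fields. It is a deliberately simplified illustration using pattern matching; real clinical NLP systems rely on trained language models, and the note text, field names, and patterns here are all hypothetical.

```python
import re

# Hypothetical free-text clinical note (illustrative only).
NOTE = "Pt presented with BP 142/91. Prescribed metformin 500 mg twice daily."

def extract_fields(note: str) -> dict:
    """Return structured values found in an unstructured note."""
    fields = {}
    # Blood pressure written as "BP systolic/diastolic".
    bp = re.search(r"BP (\d{2,3})/(\d{2,3})", note)
    if bp:
        fields["systolic"] = int(bp.group(1))
        fields["diastolic"] = int(bp.group(2))
    # A prescription written as "Prescribed <drug> <dose> mg".
    rx = re.search(r"Prescribed (\w+) (\d+) mg", note)
    if rx:
        fields["drug"] = rx.group(1)
        fields["dose_mg"] = int(rx.group(2))
    return fields

print(extract_fields(NOTE))
# {'systolic': 142, 'diastolic': 91, 'drug': 'metformin', 'dose_mg': 500}
```

Once notes are reduced to structured fields like these, downstream systems (records management, billing, analytics) can query them directly rather than re-reading free text.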
AI-powered opportunities in healthcare
Early and more accurate detection of diseases
Cognitive technology can help unlock vast amounts of health data and power diagnosis
Predictive analytics can support clinical decision-making and actions
Clinicians can take a more comprehensive approach to disease management
Robots have the potential to revolutionise end-of-life care
Streamline the drug discovery and drug repurposing processes
Naturalistic simulations for training purposes
Technology applications and apps can encourage healthier patient behaviour, enable proactive lifestyle management, and capture data to improve understanding of patients’ needs
But where there are opportunities there are also risks. AI is known to be susceptible to bias. The algorithms that underpin AI-based technologies have a tendency to reflect human biases in the data on which they are trained. As such, AI technologies have been known to produce systematically erroneous results, which could negatively affect patients from particular groups.
AI-driven tools may also expose businesses to privacy and cyber security risks. In addition, a lack of human-like creativity and empathy may negatively impact the deployment of AI in a sensitive field like healthcare.
From an underwriter’s perspective, concerns around AI can vary depending on the specific use case, the size of the client concerned, and the regulatory environment.
Areas of less concern are likely to include administrative improvements, deployment of AI for clinical validation studies, data quality and governance, staff training and collaboration with healthcare professionals, and compliance with regulations. By contrast, direct-to-consumer chatbots that diagnose conditions, and secondary AI/machine learning tools used to detect cancer, will likely require more detailed information.
If AI is being used in a clinical setting, it is important to understand if the tool’s algorithms have been clinically validated for efficacy and accuracy, to prevent misdiagnoses or incorrect treatment recommendations. Healthcare providers also need to be able to explain the ethical considerations and mitigating measures taken, specifically in relation to bias and fairness. Patients, meanwhile, typically need to be informed before AI is used in their care, and will need to provide consent.
Determining liability in cases of AI-related errors or adverse events poses a particular challenge to the healthcare sector. Healthcare providers, insurance brokers, and insurers need to work closely together to ensure that coverage is designed in a fashion which meets the healthcare provider’s needs and contractual obligations.
Although the liability landscape for healthcare providers utilising AI is relatively untested, there are anonymised claims analytics and trends reports that can help to better understand the risks.
Risk mitigation measures for AI use in healthcare
1. Protect patients’ privacy
Implement robust data security measures
Anonymise patient data and adopt strict access controls
Ensure compliance with relevant data protection regulations
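One common building block behind the anonymisation and access-control measures above is pseudonymisation: replacing direct identifiers with opaque tokens before data is shared or analysed. The sketch below is illustrative only; the key, identifier format, and record shape are assumptions, and genuine de-identification under regulations such as HIPAA or UK GDPR also requires handling quasi-identifiers, not just direct ones.

```python
import hashlib
import hmac

# Hypothetical secret key; in practice this would live in a key vault
# and be rotated, never hard-coded.
SECRET_KEY = b"store-me-in-a-key-vault"

def pseudonymise(patient_id: str) -> str:
    """Deterministically map a patient identifier to an opaque token."""
    digest = hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"patient_id": "NHS-123456", "diagnosis": "type 2 diabetes"}
# The shared copy carries the token instead of the real identifier.
safe_record = {**record, "patient_id": pseudonymise(record["patient_id"])}
```

Because the mapping is keyed and deterministic, the same patient links consistently across datasets, while anyone without the key cannot recover the original identifier.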
2. Ensure algorithmic bias mitigation
Employ rigorous data pre-processing techniques to identify and remove bias from training data
Regularly audit and test AI models to ensure fairness and transparency
Involve diverse stakeholders and subject matter experts in the development and validation process
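A minimal example of the "regularly audit and test AI models" step above is comparing a model's positive-prediction rate across patient groups, a demographic-parity-style check. The data, group labels, and any threshold for acting on the disparity are hypothetical; real audits use larger samples and multiple fairness metrics.

```python
from collections import defaultdict

# Hypothetical model outputs: whether each patient was flagged for
# follow-up, tagged with an illustrative demographic group.
predictions = [
    {"group": "A", "flagged": True},  {"group": "A", "flagged": False},
    {"group": "A", "flagged": True},  {"group": "B", "flagged": False},
    {"group": "B", "flagged": False}, {"group": "B", "flagged": True},
]

def positive_rates(preds: list[dict]) -> dict:
    """Share of patients flagged positive, per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for p in preds:
        totals[p["group"]] += 1
        positives[p["group"]] += int(p["flagged"])
    return {g: positives[g] / totals[g] for g in totals}

rates = positive_rates(predictions)
disparity = max(rates.values()) - min(rates.values())
# A large disparity between groups would prompt a review of the
# training data and the model's features.
```

Run routinely (for example on each retraining), a check like this turns the bias-audit bullet above into a measurable, repeatable test rather than a one-off review.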
3. Maintain ethical use of AI-generated content
Develop clear guidelines and standards for the use of AI-generated content
Train healthcare professionals to critically evaluate and validate AI-generated outputs before making decisions based on them
Maintain human oversight and accountability in the use of generative AI
For further information, please contact:
Tom Hester – Senior Vice President Healthcare