De-risking AI in precision medicine

Artificial intelligence is the backbone of precision medicine, underpinning efforts to provide effective treatments for patients based on their genetic, environmental, and lifestyle factors. But if deployed without safeguards, AI systems can amplify bias or falter in real-world clinical settings – with direct consequences for patient safety, trust, and business resilience. For healthcare organisations, the question is how to de-risk the deployment of AI so that innovation translates into reliable and equitable outcomes for patients.

Rewards and risks from AI-enabled healthcare

Forms of precision medicine have long been deployed in traditional healthcare practice, including blood transfusions, organ donation, and allergen therapy. However, recent advances in AI are accelerating research and development in the field. By drawing on the computational power of AI, healthcare organisations are breaking new ground in precision medicine: establishing genomic data repositories, pioneering early cancer detection and treatment, and tackling health inequalities among minority populations.

But the growing influence of AI within precision medicine also raises challenges for the sector. Research has identified several forms of bias in precision medicine, particularly racial, ethnic, sex-specific, and ancestral disparities. These biases can degrade prediction accuracy, distort therapeutic responses, and limit the generalisability of treatments, especially for populations under-represented in clinical datasets.

Examples of bias include:

  • Treatment assignment biases and unbalanced observational data may distort machine learning models, affecting biomarker identification and personalised treatment decisions.

  • Data gaps in genomic datasets may result in limited understanding and poorer outcomes for minority ethnic populations.

  • Sex-specific and racial bias in AI algorithms can skew clinical risk assessments and treatment recommendations, leading to negative impacts for marginalised groups.

  • Lack of representative genomic data may reduce the effectiveness of disease prediction and personalised therapy in certain populations.

Regulatory obstacles pose a further challenge. The rapid development of AI has outpaced the legal frameworks designed to regulate it. For organisations looking to deploy AI within precision medicine, this creates considerable uncertainty. For example, AI-generated drugs may face legal challenges relating to intellectual property and liability, as well as potential delays to approval. As regulations evolve, they may also introduce new requirements that an existing AI-driven development process fails to meet – resulting in approval being withdrawn for the drugs it produces.

Even where compliant AI delivers the intended healthcare outcomes, deploying precision medicine is not without risk to the organisations involved. Training AI models effectively relies on processing large volumes of sensitive data, including medical histories and genomic profiles. As such, healthcare organisations have become an increasingly valuable target for cyber criminals seeking to steal data and hold it to ransom.

Are healthcare organisations liable for AI?

The question of liability as it relates to AI is complex and uncertain. AI systems can make opaque decisions – the so-called ‘black box’ problem – that complicate efforts to establish fault. If such decisions result in injury or damage, liability may – or may not – fall on the healthcare provider, the owner of the AI, or its developer (if separate). It may also extend to individuals, including board members, directors, and officers, if they are found to have committed wrongful acts in the design, procurement, or deployment of AI solutions.

In this context, robust contractual relationships between the various parties engaged in the development, supply, and use of AI are essential to ensure the liability of each party is clearly defined and understood. While healthcare organisations may seek to reduce their exposure to these supply-chain risks by developing proprietary AI tools, doing so may increase the likelihood of loss should an incident occur.

Even if healthcare organisations are not directly liable for AI, outcomes that are unreliable or lack efficacy can still pose a substantial risk to organisational resilience. With any AI-enabled tool or drug, bad outcomes may emerge only late in the development stage. Worse still, they may surface only after the product or service has been deployed. Given the long-tail nature of research and development, such failures – or a change in regulation – could result in significant sunk costs for the organisations involved, as well as substantial financial and reputational loss.

Like many scalable technology solutions, AI-enabled precision medicine has a high propensity for systemic claims, in which a minor miscalibration or reliability failure leads to widespread loss. To protect themselves in the event of a loss, healthcare organisations will need to work closely with brokers and experts to ensure they set appropriate limits.

"Healthcare leaders cannot afford complacency; rigorous validation, diverse data mandates, and hybrid clinician–AI oversight are non-negotiable to de-risk innovation. Without them, precision medicine risks becoming precision peril, shattering public confidence and inviting legal reckoning."

– Sam Shah, Professor of Digital Health, CoMD and visiting faculty

Key actions to mitigate AI risks

To prioritise outcomes for patients and clinicians, healthcare organisations must take deliberate steps to ensure AI models are safe, reliable, and trustworthy.

Practical actions for organisations include:

  • Validate and monitor AI models rigorously – conduct stress testing across diverse clinical scenarios, establish real-time monitoring post-deployment, and maintain clinician oversight to ensure accountability.

  • Embed fairness from the outset – train models on diverse datasets and deploy adversarial debiasing techniques to modify the learning process (a minimal subgroup-audit sketch follows this list). Involve patient advocates and community representatives in AI deployment.

  • Align with evolving regulation – adopt existing regulatory frameworks and comply with emerging standards. To ensure cross-border compliance, prepare for alignment with forthcoming international frameworks (e.g. EU AI Act).

  • Adapt tools to local contexts – customise AI systems to local clinical environments, accounting for infrastructure and equipment limitations. Hybrid models – which combine AI-enabled healthcare with traditional diagnostic methods – can be more effective in low-resource settings. Train local clinicians to use tools effectively.

  • Prioritise data protection – train AI models using decentralised approaches, such as federated learning, so that sensitive data never leaves its source (see the sketch below). Support this with transparent consent processes and strict, role-based access controls, and treat privacy as a core objective alongside diagnostic accuracy.
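
To make the validation and fairness points concrete, the sketch below shows one way to audit a trained model's performance across demographic subgroups and flag any group that falls materially behind the overall population. It is a minimal illustration rather than a prescribed method: the function name, the choice of AUC as the metric, and the 0.05 tolerance are all assumptions for this example.

```python
# Minimal subgroup performance audit (illustrative names and thresholds).
# Assumes scikit-learn is available; any per-group metric could be swapped in.
from collections import defaultdict
from sklearn.metrics import roc_auc_score

def audit_subgroups(y_true, y_score, groups, tolerance=0.05):
    """Flag subgroups whose AUC trails the overall AUC by more than `tolerance`."""
    overall = roc_auc_score(y_true, y_score)
    by_group = defaultdict(list)
    for label, score, group in zip(y_true, y_score, groups):
        by_group[group].append((label, score))
    findings = {}
    for group, pairs in by_group.items():
        labels, scores = zip(*pairs)
        if len(set(labels)) < 2:        # AUC is undefined with only one class
            findings[group] = "insufficient data to assess"
            continue
        auc = roc_auc_score(labels, scores)
        if overall - auc > tolerance:
            findings[group] = f"AUC {auc:.3f} vs overall {overall:.3f}"
    return findings
```

In practice, an audit like this would run both before release and on live data post-deployment, with any flagged subgroup escalated for clinician review.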
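
The ‘decentralised training’ recommendation above typically means federated learning: each hospital trains on its own records, and only model parameters – never patient data – are shared for aggregation. The sketch below shows a single federated-averaging round for a simple logistic-regression model; the function names and the equal-weight averaging are simplifying assumptions.

```python
# Minimal federated-averaging (FedAvg-style) round: sites exchange weights, not data.
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=5):
    """One site's local training: logistic regression via gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # gradient of the log-loss
        w -= lr * grad
    return w                                   # only weights leave the site

def federated_round(global_weights, site_data):
    """Aggregate locally trained weights; patient records stay on site."""
    local_weights = [local_update(global_weights, X, y) for X, y in site_data]
    return np.mean(local_weights, axis=0)      # equal-weight averaging step
```

A production system would weight sites by sample size and add safeguards such as secure aggregation or differential privacy, but the core principle is the same: the data never moves.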

By embedding risk management into each stage of AI adoption, healthcare providers can reduce liability exposure, safeguard sensitive patient data, and maintain trust. Ultimately, this de-risking will help to realise the potential of precision medicine and deliver lasting value for patients and organisations alike.

Talk to us

The growing use of AI in precision medicine means that a single clinical interaction can simultaneously trigger technology failures, data breaches, algorithmic errors, and traditional malpractice exposures. This makes it increasingly artificial to separate “cyber” risk from “professional liability” risk. To keep pace and to avoid gaps in cover, healthcare organisations need insurance protection where cyber and liability covers are intentionally dovetailed.

Our Healthcare Practice specialists work closely with healthcare providers to understand their exposures and identify practical risk and insurance solutions. By combining sector knowledge with insight into emerging technologies, we help organisations to adopt AI with greater confidence and resilience.

For more information, reach out to a member of our team.