Law firm AI: insurance considerations

Many law firms are exploring the use of AI technology, including generative AI (GenAI), to provide legal services, with some firms developing their own in-house AI tools. The uptake of AI is outstripping efforts to create guidelines and establish best practice, while new regulation is also set to create compliance risks. Law firms using AI software may therefore incur significant liability risks, with potential implications for their professional indemnity insurance (PII) and directors’ and officers’ (D&O) insurance.

This article outlines how the ways in which law firms are using AI may affect their liability and insurance. It explains the current law and regulation on AI, and how these diverge across jurisdictions. The article also sets out the approaches insurers may take to AI risk and how law firms can manage this risk effectively.

How law firms are using AI

The last year has seen a significant uptake of AI tools across the professional services sector, in tandem with growing public awareness of GenAI.

Law firms are no exception, with many firms taking advantage of the benefits of AI to optimise processes and drive cost efficiencies at scale. The Solicitors Regulation Authority (SRA) has reported that, in the UK:

  • Three-quarters of the largest law firms had begun using AI by the end of 2022, twice as many as two years before.

  • Over 60% of large law firms were at least exploring the potential of using AI systems, as were one-third of small firms.

(SRA: Risk Outlook report: The use of artificial intelligence in the legal market (2023).)

There are various use cases for AI within legal services, including:

  • Administration, for example, answering legal queries with AI-enabled chatbots.

  • Drafting text using GenAI tools.

  • Proofreading, for example, checking legal documents for drafting errors and suggesting modifications to improve client communications.

  • Legal research, for example, identifying relevant case law.

  • Identifying and predicting risk, for example, automating routine tasks in a disclosure or anti-money laundering exercise.

As the capability of existing AI tools expands and new tools enter the market, the legal sector is expected to increase the volume and range of its AI use. To distinguish themselves in a competitive market, some firms are already creating their own AI tools specifically for legal work, in partnership with internal or third-party developers.

Risks and liabilities of AI use in the legal sector

Although AI presents many positive opportunities for firms, its use in legal services also threatens to create additional liability risk, for example, where outputs result in unfair or incorrect outcomes.

Potential risks that apply to all organisations using AI include failure to properly:

  • Train or implement the AI system.

  • Monitor and check the outputs.

  • Train staff to use and understand AI tools.

  • Carry out adequate risk assessments.

  • Have appropriate internal policies and frameworks in place to govern use of AI tools, to monitor the AI’s outputs and to rectify issues with the AI models.

However, the use of AI to deliver legal services also creates more specific risks for law firms, such as:

  • Errors and inaccuracies. When drafting legal arguments, AI “hallucinations” may create fictitious cases (as occurred in the US case of Mata v. Avianca, Inc., 22-cv-1461 (PKC) (S.D.N.Y. Jun. 22, 2023)). This may be exacerbated by a lack of human oversight and review of the outputs.

  • Breach of confidentiality, for example, where:

    • Confidential information about a client’s case is entered into an AI tool in order to answer a question;

    • Personal data is disclosed when information is transferred to a potential third-party vendor; or

    • Systems holding confidential information are subject to a data leak or security threat.

  • Failure to obtain informed consent before using AI to process client data.

  • Infringement of intellectual property (IP) rights, including copyright, for example, when using AI to draft legal briefs or conduct research.

  • Breach of contractual obligations.

Exposure to these risks also depends on whether the firm is using its own AI tool or a third-party tool. Firms are likely to have a greater understanding of the function and implementation of tools they have developed themselves, making any issues easier to resolve and risk management and governance procedures easier to document. In contrast, while third-party tools are potentially a quicker, more practical and more cost-effective solution, they may lack transparency, which could prevent or complicate efforts to identify risks ahead of time. Integrating third-party AI tools into a firm’s operations also poses a counterparty risk (for example, if that tool is withdrawn or ceases to operate) and security and privacy risks.

Legal and regulatory divergence

For law firms, the challenge of navigating AI risks is further complicated by a diverging legal and regulatory landscape for AI, which depends heavily on factors such as the end users’ physical location. This may trip up any firm that mistakenly assumes that the regulations and legislation of other jurisdictions do not apply to its work because it is not physically based there. If the output of a firm’s AI use will be sent to, or used in, other jurisdictions, the firm will need to meet the requirements of those jurisdictions.

Geographical regulatory differences

So far, the UK has adopted what it calls a “pro-innovation approach”, establishing a principles-based framework for regulators to interpret and apply to AI use within their regulatory remits (Department for Science, Innovation and Technology: A pro-innovation approach to AI regulation (March 2023)).

In the case of law firms, this places responsibility for AI regulation with the SRA. It contrasts with the EU approach under the AI Act, which classifies AI applications into one of four levels of risk, each carrying a corresponding level of regulatory intervention. Specifically, the use of AI in the administration of justice would fall into the high-risk category, potentially imposing stringent regulation on law firms that either develop their own AI tools or use third-party AI tools.

In the US, no federal legislation on AI regulation has yet been passed, although certain states have begun to introduce guidelines or legal precedents relating to AI use. In November 2023, the State Bar of California approved guidance on the use of GenAI in the practice of law, stating (among other requirements) that lawyers must anonymise client information when inputting data into GenAI tools and review all outputs before submission to court.

Each of these approaches is likely to place different obligations on law firms’ AI use. Firms need to understand which legislation and regulations are relevant to their business and operations and consider these as part of their risk mitigation strategies.

Existing law and regulation

Many of the principles of AI governance may also fall under existing law, for example, the General Data Protection Regulation ((EU) 2016/679) (EU GDPR) in the EU, the retained EU law version of the EU GDPR (UK GDPR) in the UK, and laws governing equality or consumer rights. In many cases, precisely how these laws will apply to the use of AI, and therefore the extent of liability they will create, is not yet clear.

Insurance approaches to AI risk

Insurer attitudes

Inevitably, underwriters in the law firm insurance market are taking a keen interest in how AI is impacting firms’ ways of working. In considering law firm insurance applications, many insurers will expect to see evidence of how firms are adapting to the changes and preparing for the future.

This does not mean that firms are expected to lead the way in implementing AI tools, but nor should they be overly averse to their potential benefits. Insurers recognise that a sensible approach for law firms is to proceed with change while being aware of the risks and managing or mitigating them as far as possible.

Law firm insurances

AI incidents can present a wide range of risks. They may impact one or more of the law firm’s insurance policies, as follows:

Professional Indemnity Insurance – PII indemnifies law firms against claims for loss or damage made by clients or third parties arising from negligent service or advice on the part of the firm. In the UK, solicitors’ PII policies must:

  • Comply with the SRA Minimum Terms and Conditions of Professional Indemnity Insurance (MTC) (contained in Annex 1 of the SRA Indemnity Insurance Rules); and

  • Provide coverage for claims based on “any civil liability” arising from private practice.

PII policies should therefore respond where AI is used to perform legal duties and a claim against the insured later arises in relation to an alleged breach of those duties.

Cyber insurance – this may respond to, for example, claims relating to personal data breaches.

D&O insurance – these policies may respond to claims arising out of management failings, such as the negligent deployment of AI, as well as related regulatory claims.

Employment Practices Liability Insurance (EPLI) – this insurance may respond to claims of discrimination or bias against current or prospective employees arising out of, for example, the use of AI in recruitment or internal HR systems.

Impact on insurance applications

As insurers’ awareness of AI risks for their insureds deepens, they are likely to seek further information and assurances from firms as part of their application and renewal processes.

Duty of fair presentation

The use of AI in the provision of legal services can increase a law firm’s risk profile. The extent of its use may therefore be disclosable to insurers under the pre-contractual duty of fair presentation in the Insurance Act 2015. Insurers may adapt their proposal forms to include questions specifically targeted at discovering and understanding the firm’s use of AI, such as:

  • What forms of AI is the firm using?

  • What mechanisms does the firm have in place to monitor the use of AI?

  • How, if at all, does the firm obtain clients’ approval and consent to use AI in the delivery of legal services?

Warranties

Insurers may seek to manage risk by imposing warranties stipulating, for example, that the firm has:

  • Not used AI.

  • Obtained appropriate limitations of liability from clients.

  • Only used certified AI tools (if a suitable certification system develops).

AI risk management

AI technology presents law firms with many positive opportunities to develop their business. However, to make the most of the technology, firms should take proactive steps to identify and manage the risks that AI use presents, so as to avoid issues, breaches and potential claims. Doing so also helps the firm address any concerns or information requests from insurers, and may therefore help it secure insurance on the best possible terms.

Examples of best practice measures to consider include:

  • Creating internal AI policies and risk frameworks that are detailed and adhered to. The firm should ensure that these are regularly updated as the AI technology and its use within the firm evolve. This should include identifying accountable persons and making sure they are aware of their roles in governing the use of AI tools within the firm.

  • Conducting ongoing monitoring of the algorithms underpinning the AI systems that the firm uses. Where AI tools have been developed and supplied by third parties, the firm should seek evidence that the supplier undertakes equivalent monitoring.

  • Ensuring staff are properly trained in the use and potential risks of AI, as well as in methods for checking its outputs. Similarly, the firm should ensure that leadership teams and the board have sufficient knowledge and understanding of the tools, as certain legislative and regulatory responsibilities ultimately lie with them.

  • Ensuring that all relevant personnel are aware of the additional risks to their department arising from the implementation of AI. For example, a legal department may need to be more aware of any IP threats where GenAI tools are used in the performance of their legal work. Data security teams should also be alert to the increased threat of data breaches.

Firms may wish to discuss these issues with their insurance brokers, who will be familiar with insurer concerns, attitudes and requirements. Brokers may be able to support the firm in designing its AI risk management programme and in presenting the firm’s AI risk profile to insurers as positively as possible. The nature and extent of the risks arising from the use of AI are still evolving. As insurers’ understanding of these risks deepens, they are likely to ask additional questions, and insurance products are likely to evolve in turn.


Contact us for further information, or visit our Solicitors page.

This article was authored by Lockton for Practical Law. Reproduced from Practical Law with the permission of the publishers. For further information visit www.practicallaw.com.
