Artificial intelligence (AI) tools are increasingly embedded within professional services, with uses ranging from automating routine tasks to conducting in-depth data analysis. Firms are deploying both third‑party tools and internally developed solutions at pace, with adoption growing quickly, particularly among junior staff.
But this embrace of AI within the professional services sector isn’t risk-free. A survey of underwriters conducted by the Lloyd’s Market Association (LMA) identifies Professional Indemnity (PI) as the insurance line most likely to experience AI-related losses, driven by the potential for erroneous or hallucinated outputs. Despite the clear operational benefits of this technology, the risks associated with AI use are attracting increased regulatory scrutiny and a heightened need to demonstrate responsible implementation.
Against this backdrop, we’ve set out some key areas of risk that professional services firms may wish to keep in mind when developing and implementing AI tools. Any roll-out of AI must be conducted in a manner that is both responsible and aligned with professional, ethical, regulatory, and insurer expectations.
Key risk areas to consider
Governance and firmwide policies
As a starting point, all firms should have firmwide policies in place covering responsible use of AI tools. Employees at all levels of the firm should know when an AI tool can be used and when it should not. Different tools may be used for different purposes, and the data that can be entered into a given tool may depend on its nature and security parameters. The limitations of AI should be explained, including the potential for bias, the generation of inaccurate information (hallucinations), and confidentiality and privacy concerns. Procedures for managing adverse incidents relating to the use of AI, including escalation processes, should be established. Firms might also wish to prepare a guidance note to accompany any firmwide policy, which could include examples of how the policy applies in practice and can be periodically updated as the relevant technologies develop. All such documents should be clearly articulated and made easily accessible to all employees.
Where a firm is considering developing its own AI tools in-house, it may wish to document how that development process took place. This might include, for example, a description of the tool developed and its function, decisions made during its development, the appropriateness of testing and evaluation processes, the oversight provided in the design process, as well as any controls regarding the use of the tool. The document should consider the likelihood of risk events materialising and any potential impacts on the firm, clearly showing the thought process taken from the tool’s development to implementation.
Professionals are trusted to exercise their judgement and provide services with a high degree of competence. Such qualities must not be undermined by undue reliance on technology. Practitioners need to remember that professional judgement remains crucial, and should be exercised through independent evaluation of any AI outputs.
Confidentiality
One of the major risks with unchecked AI usage is the potential for compromising confidential client data.
Firms need to consider carefully how data (and particularly confidential data) might be used when framing prompts for AI tools, how Generative AI models are trained, and ultimately how any data entered into a tool can subsequently be disseminated, especially in the case of open-source AI tools. How such data is treated might also have an impact on material which would otherwise be subject to legal professional privilege.
Similar considerations apply when firms are considering purchasing AI tools. When contracting with a new vendor supplying an AI tool, it is important to understand exactly what data the tool can collect, where that data is stored, how long it is retained, and whether customer or client data is used to train models. A vendor should also be able to provide detailed material on encryption standards, model information, and the use of external data. If a vendor is unable to answer these questions clearly, firms should think carefully before making a purchase.
Considerations in this area can also overlap with legal obligations in respect of privacy, data protection, and intellectual property. Firms should be aware that PI exposure is significantly heightened where there is an indication that confidentiality obligations might have been breached.
Client knowledge of AI use
When engaging a prospective client, the firm should ideally make the client aware, from the outset, of any AI tools that will be used while working on their file.
One way to do this is through the firm’s letter of engagement, which can set out the extent of any reliance the firm places on a tool’s output, including whether a disclaimer is required.
This is particularly important where a firm’s advice or work product might end up containing inaccurate or hallucinated material as a result of the use of Generative AI, and the client then relies on that advice. Firms may risk being accused of negligence or misleading the client, and it would certainly be prudent to consider these issues well in advance of any adverse events occurring.
Training
Many professionals, especially those in regulated sectors, are required to exercise due care in service delivery and to consistently maintain good levels of professional knowledge and skill. With the rapid adoption of AI across the professional services sector, staying abreast of technological developments may now be considered necessary to ensure that clients continue to receive a competent professional service.
Firms should therefore review their current training/CPD programmes and consider whether training on AI usage needs to be further embedded. This will help ensure that employees remain capable of making informed decisions when using AI tools and stay up to date on important technological developments – including a working knowledge of a tool’s capabilities as well as its limitations. Additional training on prompt construction and on recognising red‑flag outputs can also be useful.
Conclusion
AI tools have the potential to dramatically improve efficiency and provide significant opportunities for innovation, especially in professional services. But with usage particularly high among younger professionals and adoption increasing across every service line, firms must urgently ensure that AI is being used responsibly at all levels. Robust governance and oversight processes, careful consideration of confidentiality and legal privilege obligations, transparency with clients, and appropriate training are key – and these factors now all play a material role in how PI underwriters assess firms’ emerging risk profiles.
Ultimately, regardless of the nature of the AI tool, professional judgement and human input cannot be outsourced or delegated, and will remain essential to meeting ethical, regulatory, and insurer expectations.
For more information, reach out to a member of our team.
Relevant links
Thomson Reuters AI in Professional Services Report
Law Society publishes new guide warning over AI risks
RICS Artificial Intelligence in construction report 2025

