Accountants: Key considerations for implementing AI

The adoption of digital tools and artificial intelligence (AI) in the accountancy sector has grown exponentially in recent years. According to the Thomson Reuters Generative AI in Professional Services Report 2025, 21 percent of accountancy firms were using AI in 2025, a year-on-year increase of 13 percentage points and the highest increase of any professional services sector surveyed. Firms of all sizes are now embracing third-party AI tools or expending significant resources to develop their own.

This wave of innovation, however, has not been without incident. Firms have been censured in court judgments and by governments alike for the apparently irresponsible use of AI tools. Accountancy regulators are likewise increasingly alert to the need to implement such tools ethically.

For that reason, we have set out below some key areas of risk that accounting firms may wish to keep in mind when developing and implementing AI tools, so that any roll-out is conducted in a manner that is both responsible and compliant with professional and regulatory expectations.

Key risk areas to consider

Governance and firmwide processes

As a starting point, all firms should have firmwide policies in place that cover the responsible use of AI tools. Employees at all levels of the firm should know when an AI tool can be used and when it should not. Different tools may be used for different purposes, and the data that may be input into a given tool will depend on its nature and security parameters. The limitations of AI use should be explained, including the potential for bias, the generation of inaccurate information (hallucinations), and confidentiality and privacy concerns. Procedures for managing adverse incidents relating to the use of AI, including escalation processes, should also be established. All these policies should be clearly articulated and made easily accessible to all employees.

Where a firm is considering developing its own AI tools in-house, it may wish to document how that development process took place. This might include, for example, a description of the tool developed and its function, decisions made during its development, the appropriateness of testing and evaluation processes, the oversight provided in the design process, as well as any controls regarding the use of the tool.

In preparing any such documents and policies, firms and individuals regulated by the ACCA should continue to bear in mind the five fundamental principles in the Code of Ethics, as well as the potential for AI tools to pose threats to these fundamental principles. Professional judgment, objectivity, and competence must not be undermined by undue reliance on technology, and accountants need to remember that professional scepticism and human judgment remain crucial, especially when engaging in independent evaluation of any AI outputs.

Confidentiality

One of the major risks with unchecked AI usage is the potential for compromising confidential client data.

Firms need to consider carefully how data (and particularly confidential data) might be used when framing prompts for AI tools, how Generative AI models are trained, and ultimately how any data inputted into the tool can be subsequently disseminated, especially in the case of open-source AI tools.

Relatedly, ACCA members should bear in mind they are required to comply with the fundamental principle of professional behaviour, which includes complying with relevant laws and regulations. Accordingly, due regard should be given to how misuse of client data might overlap with other legal and regulatory obligations regarding privacy, intellectual property, and data protection.

Similar considerations apply when firms are considering purchasing AI tools. When contracting with a new vendor supplying an AI tool, it is important to understand exactly what data the system can collect, where that data is stored, how long it is retained, and whether customer data is used to train models. A vendor should also be able to provide detailed information on encryption standards, the model itself, and any use of external data. If a vendor cannot answer these questions clearly, firms should think carefully before making a purchase.

Client knowledge of AI use

From the outset of an engagement, a prospective client should ideally be made aware of any AI tools the firm will use while working on their file.

The firm's letter of engagement is one natural place to do this. There, a firm might wish to set out the extent of any reliance it places on the output of AI tools, including whether a disclaimer is required.

This is particularly important where a firm's advice or work product might contain inaccurate or hallucinated material as a result of the use of Generative AI, and the client relies on that advice. Firms risk being accused of negligence or of misleading the client, and it would certainly be prudent to consider these issues well in advance of any adverse event occurring.

Training

The fundamental principle of professional competence and due care requires ACCA members to maintain their professional knowledge and skill levels. Given the rapid adoption of AI across the accountancy sector, staying abreast of technological developments may be considered necessary to ensure that clients continue to receive a competent professional service in 2026.

Firms should therefore review their current training and CPD programmes and consider whether training on AI usage can be embedded, to ensure that employees remain capable of making informed decisions when using AI tools and stay up to date on important technological developments. The ACCA itself offers a number of programmes covering topics such as AI applications in accounting and regulation and risk compliance. Additional training on prompt usage, identifying red-flag outputs, and the limitations of AI tools can also be valuable for firm employees.

Conclusion

AI tools have the potential to drastically improve efficiency and provide unparalleled opportunities for innovation. Moreover, with 83 percent of 18- to 24-year-old chartered accountants making use of AI at least once a week, firms must urgently grapple with how AI is being used responsibly at all levels. Such responsible use will require careful and ongoing consideration of key risk areas, including governance processes, confidentiality, client communications, and training structures.

Ultimately, and regardless of the nature of the AI tool, ACCA members will always need to bear in mind the importance of complying with the five fundamental ethical principles. Members cannot abdicate or outsource their responsibility for professional scepticism and judgment to technology.

Kingsley Napley
