AI risks: what directors and officers need to know

All new technologies carry risks when introduced on an enterprise level. Artificial intelligence (AI) is no exception, and directors and officers may find themselves in the crosshairs should negative repercussions arise from the use of such tools.

To be prepared for the potential regulatory scrutiny or claims activity that comes with a new technology, it is imperative that boards carefully consider the adoption of AI and ensure sufficient risk mitigation measures are in place.

AI benefits and challenges

Across every sector, AI tools are redefining businesses’ ways of working, streamlining processes and increasing productivity. The technology promises to improve customer service and is driving the creation of new products and employment opportunities.

But despite its many potential benefits, AI also brings new challenges for businesses, and risks to be managed. Although these risks will vary from sector to sector and depend on where the tools are being deployed, they can include harm to a business’s customers, or financial losses incurred directly by the business itself.

Companies’ disclosure of their AI usage is another potential source of exposure. Amid surging investor interest in AI, companies and their boards may be tempted to overstate the extent of their AI capabilities and investments. This practice, known as ‘AI washing’, recently led one plaintiff to file a securities class-action lawsuit in the US against an AI-enabled software platform company, arguing that investors had been misled.

Just as disclosures may overstate AI capabilities, companies may also understate their exposure to AI-related disruption or fail to disclose that their competitors are adopting AI tools more rapidly and effectively. Cybersecurity risks or flawed algorithms leading to reputational harm, competitive harm or legal liability are all potential consequences of poorly implemented AI. Alongside this, growing regulatory activity, such as the EU AI Act, approved by the European Parliament earlier this year, is pushing for transparency and amplifying the scrutiny of businesses’ AI use.

An emerging risk for directors and officers

Ultimately, responsibility for some of these risks may rest with the C-suite: directors and officers are increasingly expected not only to manage the implementation of AI, but also to understand the risks such tools pose and take steps to mitigate potential damage. Doing the latter will also help demonstrate to directors’ and officers’ (D&O) liability insurers that specific AI risks are well understood and managed at the board level.

Directors and officers may face allegations of poor AI governance procedures, AI technology failure, or misrepresentation, framed as a breach of directors’ duties. Such claims could damage a company’s reputation and result in a D&O class action.

Key AI risks for directors and officers include:  

  • Legal liability – where AI takes a greater role in corporate decision-making, including undisclosed use of AI or inadequate disclosure of its risks 

  • Negligence claims – including allegations of discrimination, bias, invasion of privacy, and redress for victims who have suffered damage as a result of AI failure 

  • Product liability/breach of contract claims – including failure to ensure that an AI product that caused harm was free from defects 

  • Misrepresentation claims – if AI is used to generate reports, such as financial disclosures, directors may be held personally liable for misrepresentations or inaccuracies 

  • Competition claims – if AI is used to recommend transactions in price-sensitive securities, or to set the price of goods or services sold by a business, boards must make sure that the AI is not relying upon inside information, or causing the company to coordinate its prices with competitors in an anticompetitive manner 

  • Insurance risk – if a business suffers loss due to an AI failure, and it does not have adequate insurance, this may lead to claims, with directors potentially liable for a breach of their duties for failing to arrange adequate insurance coverage 

Looking at this list, it is perhaps not surprising that AI-related questions are top of mind for D&O insurers. In December 2023, Allianz Commercial specifically warned about potential AI-related threats, including cybersecurity exposure, increased regulatory risk, unrealistic investor expectations about AI capabilities, and the challenge of managing misinformation.

As AI systems gain traction across a wide range of industries, insurers will increasingly inquire about AI applications in business operations and in connection with the management of the entity.

Board-level considerations for AI

Despite its challenges, AI can be a manageable risk. An organization must set up best practices and keep governance, compliance protocols, and legal frameworks up to date as AI technology evolves. Likely considerations for directors and officers include: 

  • What is the decision-making process for adopting new technologies?  

  • What is the right amount of capital investment to make in AI, recognizing that such investments are costly? 

  • How will the company track use of AI and any resultant cost efficiencies? 

  • How are customer attitudes towards AI and automation evolving? 

  • Are adequate cybersecurity measures in place to protect against AI-related vulnerabilities? 

  • Are transparent procedures in place to respond to AI issues and mistakes? 

  • Have staff been appropriately trained to use and manage AI, and are they equipped with the necessary resources to do so effectively? 

  • Has appropriate insurance cover been purchased to protect against AI-related losses? 

Boards, in consultation with in-house and outside counsel, may consider setting up an AI ethics committee to advise on the implementation and management of AI tools. This committee may also help monitor emerging policies and legislation in respect of AI. If a business doesn’t have the internal expertise to develop, use, and maintain AI, this capability may be sourced from a third party. An AI ethics committee may also discuss the management processes for AI bias, intellectual property, cyber risks, and data privacy.

As AI continues to evolve, it is essential for companies and their boards of directors to have a strong grasp of the risks attached to this technology. With the appropriate action taken, AI’s exciting potential can be harnessed, and risk can be minimized.

For more information, visit the Lockton Management Liability page.