AI Errors and Accountability

The widespread availability of OpenAI's tools has led businesses to integrate them into their daily operations. While these tools are intended to aid people and improve work efficiency, a Deloitte AI ethics survey suggests that machine learning is prone to inherent biases and imperfections. A machine learning algorithm learns from the examples it is given, optimizing toward a desired outcome. Over many attempts, it predicts outcomes and is rewarded or penalized accordingly to maximize results, particularly in complex tasks such as stock trading. The Wall Street Journal suggests that companies using AI for content creation, decision-making, or influencing others may be held accountable for their AI's actions. Although there is currently no specific legal framework governing AI usage, various cases have emerged highlighting both misuse and the tendency of AI tools to generate false information.
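
To make the reward-and-penalty idea above concrete, the short Python sketch below shows a purely illustrative learning loop: an agent repeatedly chooses between two actions, receives a simulated reward or penalty, and nudges its estimate of each action's value toward what it observed. Every name, reward value, and parameter here is invented for illustration; this is a minimal sketch of reward-driven learning, not a model of any real trading or vendor system.

import random

# Illustrative sketch of reward-driven learning (an epsilon-greedy bandit).
# All actions, rewards, and parameters below are hypothetical.

actions = ["buy", "hold"]                      # hypothetical choices the agent can make
value_estimates = {a: 0.0 for a in actions}    # learned estimate of each action's payoff
counts = {a: 0 for a in actions}               # how often each action has been tried
epsilon = 0.1                                  # fraction of the time the agent explores

def simulated_reward(action):
    """Stand-in for the environment: returns a reward (or a penalty if negative)."""
    return random.gauss(0.5, 1.0) if action == "buy" else random.gauss(0.1, 0.2)

for step in range(1000):
    # Mostly exploit the best-looking action; occasionally explore at random.
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: value_estimates[a])

    reward = simulated_reward(action)          # reward or penalty from the environment
    counts[action] += 1
    # Move the running estimate toward the observed reward (incremental average).
    value_estimates[action] += (reward - value_estimates[action]) / counts[action]

print(value_estimates)  # after many attempts, the estimates approach the true average payoffs

Over enough repetitions the agent's estimates converge toward the true average payoffs, which is the optimization behavior described above. The same mechanism is also why the biases matter: the agent faithfully optimizes whatever signal it is given, flawed or not.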

Examples of AI Misuse and Accountability

1. Fraudulent Cases Generated by OpenAI

In May 2023, a lawyer in New York reportedly referenced fraudulent cases generated by OpenAI in a court filing. Six of the cases submitted were found to contain fictitious judicial decisions, quotes, and internal citations. The plaintiff's counsel admitted to consulting OpenAI as an additional resource for legal research. Despite the lawyer's assertion that the cited cases were legitimate and could be found in reputable legal databases, legal experts criticized the failure to authenticate the research.

2. Misinformation by an AI Chatbot

As early as 2019, a Canadian airline had integrated AI into its operations, including an AI chatbot intended to enhance customer service. Earlier this year, the airline lost a small claims case over misinformation the chatbot provided about bereavement fares. Contrary to the airline's policy, the chatbot inaccurately claimed that passengers could apply for such fares retroactively. Despite the airline's argument that passengers could verify the correct information through a provided link, the court ruled in favor of the passenger, highlighting the airline's responsibility for the accuracy of its chatbot.

3. Safety Concerns with a Self-Driving Car

Just last year, a self-driving car company in the US recalled 950 driverless cars following a collision involving one of its autonomous vehicles. The accident occurred when a pedestrian was struck by another vehicle and thrown into the path of the autonomous vehicle. The vehicle initially stopped after striking the pedestrian, but then attempted to veer to the right to clear traffic, dragging the pedestrian approximately 20 feet. As a result, the pedestrian sustained severe injuries and was trapped under one of the vehicle's tires. The incident led regulators to suspend the company's driverless operations over safety concerns.

Determining responsibility for errors made by AI systems can be complex. In some instances the fault lies mainly with the AI system itself, while in others the humans involved in its creation or operation may share some or all of the blame. This determination often requires legal experts to assess liability on a case-by-case basis. Inadequate training or oversight may point to the responsibility of those who built or deployed the system, while deliberate misuse places accountability on the end user. Without clear structures for accountability, organizations face operational risks, legal challenges, and reputational damage. It is essential to address accountability concerns and effectively manage the responsibilities associated with AI technology.

For further questions or assistance in understanding how to manage these risks, feel free to reach out to Lockton Philippines at info.philippines@lockton.com, or Lockton Asia at enquiry.asia@lockton.com.