Snapshot
Whilst artificial intelligence (AI) is undeniably a useful tool, the pace at which it continues to develop, together with the variety of platforms available, could leave firms exposed.
Misinformation and disinformation pose serious business and ethical risks to companies, especially those utilising AI technology for consulting and advisory services.
Feeding mass data into AI tools, including confidential and sensitive personal information, will need to be carefully considered, as AI can be manipulated by threat actors. Firms are also exposed to any weaknesses in the security controls AI platforms have in place. Heavy fines and regulatory action are possible, and, in some scenarios, prosecution could follow.
Following the Privacy Act reforms, firms will need to revisit their existing policies and procedures to ensure privacy data obligations are being considered and met when using AI.
Misinformation and disinformation are risky business
AI-based models can be helpful and have been proven to enhance some business outcomes. However, they pose a serious threat of misinformation and disinformation as they cannot be relied upon to produce factual content.
These tools are informed by data they are trained on, and if that data is biased or unreliable, it will produce biased or unreliable results. Additionally, AI technology can be manipulated to introduce false data.
Professional services companies using AI tools face the risk that the information and advice they provide could be inaccurate. This has the potential to harm an organisation's brand and reputation, creating mistrust and disengagement among potential or existing clients. Such consequences can take a toll on a company's bottom line as it struggles to regain client confidence and rebuild professional relationships.
In an article from the New York Times, disinformation researchers used an AI tool as an experiment to see if it would produce content on conspiracy theories and misleading narratives. The results were alarming and concluded that the relevant AI platform could be manipulated to spread false and ambiguous information.
In an Australian Financial Review article, an ASX-listed company's procurement expert, who receives and reviews masses of professional service firms' proposals each year, said:
“We read every document that is in a proposal, so I don’t want to be reading garbage. We evaluate proposals based on the value they can communicate. But if the proposal is truly bespoke, then I don’t see the value add of using [a bot].”
The article went on to say that the expert strongly feels that professional service proposals should be transparent and disclose the use of AI and how it has been used.
Third-party and IP data vulnerabilities
The rapid advancement of AI tools creates more opportunity for threat actors to harm organisations, through sophisticated phishing attacks built on synthetic profiles and smarter malware.
Whilst outsourcing services to AI providers is beneficial, and an increasingly essential part of doing business in the modern economy, it carries risks, including whether those third parties are disclosing their own exposures.
The full extent of the consequences that could arise from the interception of data held by AI technology is yet to be seen. Businesses that hold masses of sensitive information and personal details are prime targets and should understand how their AI provider stores and shares their data.
Professional services firms hold highly sensitive information about individuals, as well as companies' IP, making them a particularly attractive target for malicious parties.
Given AI is not always reliable, its use in decision-making should be approached with great care, and human review should always be applied before any action or proceedings.
Increased likelihood of regulatory action and fines
The Privacy and Other Legislation Amendment Bill 2024 (the Bill) marks the beginning of crucial reforms to the Privacy Act 1988 (Cth) and aims to enhance protections for individuals’ personal information.
The Bill introduces key reforms designed to strengthen privacy protections and prepare the ground for further legislative changes anticipated throughout the year. The Bill also introduces legal avenues for addressing serious privacy infringements, including penalties for malicious release of personal information.
Additionally, entities must clearly disclose the types of personal data used in automated decision-making processes that affect individuals' rights or interests. For serious violations, organisations could face fines of up to the greater of $50 million, three times the value of any benefit obtained, or 30% of the company's adjusted turnover for the relevant period.
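The penalty structure described above works as a simple "greater of" calculation. The sketch below is an illustrative assumption about how that cap is commonly described, not legal advice; the function name and figures are hypothetical:

```python
# Hedged illustration only: the maximum penalty for a serious privacy
# breach by a body corporate is commonly described as the greater of
# a flat cap and a benefit- or turnover-based amount. Not legal advice.
from typing import Optional

def max_penalty_cap(benefit_obtained: Optional[float], adjusted_turnover: float) -> float:
    """Return an indicative statutory maximum penalty cap in AUD.

    benefit_obtained: value of the benefit derived from the breach,
        if it can be determined (None if it cannot).
    adjusted_turnover: adjusted turnover for the relevant period.
    """
    FLAT_CAP = 50_000_000  # AUD 50 million
    if benefit_obtained is not None:
        alternative = 3 * benefit_obtained      # three times the benefit
    else:
        alternative = 0.30 * adjusted_turnover  # 30% of adjusted turnover
    return max(FLAT_CAP, alternative)

# Example: no determinable benefit, AUD 400m adjusted turnover
print(max_penalty_cap(None, 400_000_000))  # prints 120000000.0
```

Because the cap scales with turnover once the turnover-based amount exceeds $50 million, large firms face materially higher maximum exposure than the headline figure suggests.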
Digital strategy risk considerations and insurance
Presently, the majority of professional indemnity policies do not explicitly address AI-related risks. Because policy wordings are silent, determining liability could be complex, potentially leading to disputes over coverage and where liability lies.
Apportioning liability is further exacerbated where professionals have amended the AI software, or where the software requires input from the professional or a third party to produce the final work product. In this situation, any number of insurers may be implicated and, at a minimum, may be required to meet defence costs.
For example, if there were a failure in the design or advice produced by the AI, the question would arise as to what exactly caused the claim: was it the negligence of the AI developer or of the insured?
Was the insured negligent in failing to verify the information provided by the AI, or did the AI simply fail to perform correctly? It is therefore unclear where responsibility originates and how liability should be apportioned.
Further, AI and data science challenge established norms of privacy. Boards should reflect on these challenges when adopting AI techniques, and on how they may affect customers/clients, employees, and other stakeholders.
Companies should:
Explore AI security to understand how to protect AI systems from exploitation by malicious actors.
Use data encryption and segmented systems to isolate attacks. These are key methods a company can use to protect itself when using AI services.
Consider the benefits and exposures associated with AI, ensuring they address both appropriately with insurers.
Conscientiously manage data analytics to comply with the Office of the Australian Information Commissioner's (OAIC) Guide to Data Analytics.
Take a proactive approach to monitoring and validating the data used by their AI-based tools.
Establish clear communication channels and blend AI intelligence with human knowledge and insight. This can help professional service organisations learn how AI is making recommendations and determine if the information is correct. Additionally, human insight enables researchers to provide feedback to the AI tool to help improve its accuracy over time.
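As a concrete illustration of the encryption and data-minimisation points above, a minimal sketch might pseudonymise sensitive fields before a client record is ever shared with an external AI provider. The field names, salt, and record below are hypothetical, and a production system would pair this with proper key management and encryption in transit:

```python
# A minimal data-minimisation sketch: sensitive fields are replaced with
# opaque tokens so a third-party AI provider never sees raw personal
# information. Field names and values are hypothetical examples.
import hashlib

SENSITIVE_FIELDS = {"name", "email", "tax_file_number"}

def pseudonymise(record: dict, secret_salt: str) -> dict:
    """Replace sensitive values with stable, non-reversible tokens."""
    out = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            digest = hashlib.sha256((secret_salt + str(value)).encode()).hexdigest()
            out[key] = f"TOKEN_{digest[:12]}"  # short, non-reversible token
        else:
            out[key] = value  # non-sensitive business context passes through
    return out

client_record = {
    "name": "Jane Citizen",
    "email": "jane@example.com",
    "matter": "lease review",
    "tax_file_number": "123456789",
}
safe_record = pseudonymise(client_record, secret_salt="rotate-this-salt")
# safe_record keeps the business context ("lease review") but no raw identifiers
```

Because the same salted value always maps to the same token, the firm can still correlate AI outputs back to the original client internally without disclosing identifiers to the provider.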
The future holds many unknowns
AI-based tools clearly have their place in professional services, and many organisations are already reaping the benefits and enhancing their offerings.
Nonetheless, there are many developing challenges and unknowns in respect of how these applications will evolve over time and their associated risks. That does not mean companies should omit AI-based tools from the workplace. The focus should instead be on how they can be adopted into an organisation’s digital strategy.
For businesses to fully contemplate whether AI is right for them, they should consider the influence it will have on their business outcomes and ensure they understand the risks and vulnerabilities, so that preventative measures can be put in place to reduce the impact.
The contents of this publication are provided for general information only. Lockton arranges the insurance and is not the insurer. While the content contributors have taken reasonable care in compiling the information presented, we do not warrant that the information is correct. The contents of this publication are not intended as a legal commentary or advice and should not be relied on in that way. It is not intended to be interpreted as advice on which you should rely and may not necessarily be suitable for you. You must obtain professional or specialist advice before taking, or refraining from, any action on the basis of the content in this publication.
© 2025 Lockton Companies Australia Pty Ltd.