Deepfake and the risk of vendor fraud: challenges and solutions for solicitors

Advances in artificial intelligence (AI) technology are increasing the threat to solicitors of deepfake-enabled vendor fraud, with conveyancing and property transactions a particular target. Where successful, these frauds can inflict significant financial and reputational harm, both on a firm and its clients.

To protect themselves, solicitors must adopt proactive measures to bolster their verification procedures and minimise their exposure.

The deepfake revolution

In an age in which the technology landscape is changing rapidly, the rise of deepfakes stands as a testament to the capabilities of AI.

In simple terms, a deepfake is an artificial video created through a machine learning technique called ‘deep learning’. Deepfakes can transform existing source content, for example, by swapping one person for another. They can also create entirely original content, where a person appears to say or do something they did not. Deepfake videos can be indistinguishable from real videos.

Driven by advanced algorithms, deepfakes blur the boundaries between reality and fabrication. As these manipulations become ever more sophisticated, they present potential pitfalls across various sectors, from the legal sector to national security.

Vendor fraud and deepfakes – a looming threat

Conveyancing and property transactions, where the verification of identities and documents is paramount, are particularly exposed to fraudulent activity. Deepfake technology can convincingly impersonate sellers or agents, deceiving solicitors into unwittingly facilitating fraudulent transactions, a practice known as 'vendor fraud'. The conveyancing process is an attractive target for fraudsters because it provides both the method of committing the fraud and the means of laundering the proceeds. Vendor fraud can, of course, expose both clients and practitioners to financial and reputational harm.

Law firms and their staff need to be aware of the potential for deepfakes to be used to commit vendor fraud, particularly where FaceTime, Zoom or other video technology is used to verify potential clients as an alternative to meeting them face to face. At present, we are not aware of any insurer that has experienced a deepfake-related claim, and in many ways the ability to communicate with clients by video call is, of course, an advantage. Nevertheless, firms may need to reassess the robustness of their electronic due diligence processes in light of the emerging risk posed by deepfake technology, and AI advances generally.

Mitigating deepfake fraud risks

Helpful guidance can be found in both the Law Society of Scotland's AML/CF Sectoral Risk Assessment and the Legal Sector Affinity Group (LSAG) AML Guidance, both of which can be found in the Law Society's AML toolkit. Although neither deals explicitly with the threat from AI-created deepfakes, both emphasise the need to treat acting without meeting individuals face to face as a risk factor when carrying out a practice-wide risk assessment (PWRA). In these situations, it is important to exercise caution and to consider whether enhanced due diligence is needed.

Chapter 18 of the LSAG AML Guidance lists red flags and warning signs in relation to client identification and verification.

Warning signs for deepfakes

When it comes to the deepfake videos themselves, an important line of defence is training, so that staff are alert to the possibility of a fraudulent video call and can look out for discrepancies. These include:

  • Odd noise distortion.

  • A disconnection or delay between speech and mouth movement.

  • Pixelation.

  • No blinking or oddly patterned blinking.

  • Shadowing that looks unnatural.

Of course, the difficulty is that a deepfake may not display any of these tell-tale signs. It is therefore important to take a cautious approach and add layers of additional checks, which might include further verification steps and extra supervision if there are any concerns about the client's identity or the instructions being given.

Vendor fraud warning signs

Failures in identification and verification make it easier for vendor frauds to take place. Warning signs for this type of fraud include:

  • Properties being offered for sale over or under the market value.

  • Reluctance on the client’s part to provide documentation.

  • Altered, forged, or stolen identity documents such as passports.

  • Pressure to complete the transaction very quickly – for example within a few days.

  • Instructions for minimal work to be done – for example no searches requested.

  • Complex or unusual circumstances around the transaction.

  • Cash property purchases.

  • Funds coming from or going to unconnected third parties.

  • Being instructed to act for both the seller and the purchaser in the transaction.

  • Property being bought/sold in back-to-back sales.

To minimise their exposure to vendor fraud, firms should exercise caution when clients are not met face to face, ensure that the vendor’s title is properly established, and properly scrutinise any identity documents to ensure they appear authentic and show no apparent signs of being forged or altered.

Conclusion: navigating the deepfake terrain

In light of the challenges posed by deepfakes and vendor fraud, solicitors should adopt proactive measures to mitigate the associated risks. Rigorous identity verification protocols, bolstered by robust technological solutions, can help authenticate clients and counterparties, minimising the likelihood of falling victim to fraud. Additionally, ongoing education and training can empower solicitors to identify and combat deepfake-related threats.

For more on this, see the Law Society's helpful article in collaboration with Mitigo, 'A deep dive into deepfakes'.