Social engineering: are your defences still fit for purpose?

Fraudsters’ manipulation of workers is becoming ever smarter, and now artificial intelligence is getting more involved.

Fraudulent activity is a growing problem for many businesses, and losses are becoming more frequent and more severe. With computer and information security now tighter than ever, fraudsters often target what can be the weakest link in the chain: employees.

Many fraudsters use human interaction (or simulated human interaction) to gain trust, with the aim of obtaining passwords, access credentials or other information about a company and its security and computer systems.

This practice – often referred to as “social engineering” – is nothing new. Many companies will be familiar with the concept and have preventative measures in place. The problem is that fraudsters’ methods are becoming ever more convincing and targeted.

The most common type of social engineering is ‘phishing’ – where emails, text messages or social media are used to trick people into providing sensitive information, or into visiting a seemingly innocent but malicious site. Phishing emails often look like exact replicas of messages from the companies they imitate.

Almost seven in ten companies say they’ve experienced phishing and social engineering, according to Accenture. Each computer user at a small business receives an average of nine malicious emails per month, according to Symantec.

Emerging artificial intelligence (AI) tools enable fraudsters to craft ever more convincing fake emails with which to trick employees. Workers at all levels of seniority are being deceived.

Picture the scene. You’re a company Finance Director due to make a large payment – a six-figure sum – to a supplier. By enticing you to click on a link in an email, criminals capture your email password and use it to access your email account via Microsoft Office 365.

The criminals create a rule within your email account: whenever names or words relating to the soon-to-be-paid supplier appear in a message, it is marked as read and diverted to a folder they have created specially. In this way the fraudsters can intercept, read and respond to emails relating to your supplier.
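
Such silent mailbox rules are easy to miss. As a rough illustration – a minimal sketch, not a statement about any particular vendor’s defences – a security team with Microsoft Graph API access could periodically list a mailbox’s inbox rules and flag any that both mark messages as read and move them, a combination legitimate rules rarely need. The access token placeholder and the ‘suspicious’ heuristic below are illustrative assumptions.

```python
# Illustrative sketch: audit a Microsoft 365 mailbox's inbox rules via the
# Graph API and flag rules that both mark messages as read and move them --
# the pattern used in the interception scenario described above.
# Assumes ACCESS_TOKEN holds a valid OAuth token with mail-read permissions;
# obtaining the token is outside the scope of this sketch.
import requests

ACCESS_TOKEN = "..."  # placeholder: acquire via your organisation's auth flow
GRAPH_RULES_URL = "https://graph.microsoft.com/v1.0/me/mailFolders/inbox/messageRules"

def fetch_inbox_rules():
    resp = requests.get(
        GRAPH_RULES_URL,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("value", [])

def looks_suspicious(rule):
    # Heuristic (an assumption, not an official indicator of compromise):
    # a rule that silently marks mail as read AND diverts it to another
    # folder matches the fraud pattern; legitimate rules rarely need both.
    actions = rule.get("actions", {})
    return bool(actions.get("markAsRead")) and "moveToFolder" in actions

for rule in fetch_inbox_rules():
    if looks_suspicious(rule):
        print(f"Review rule: {rule.get('displayName')!r} (id={rule.get('id')})")
```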

An email from your supplier with the invoice, including bank details, is sent to you but you never see it – because it is diverted to the fraudsters’ newly created folder.

The criminals then set up a domain name similar to the actual supplier’s, but with a ‘.co’ suffix instead of ‘.com’. From this domain, just minutes after your supplier’s genuine email was sent, they email you the same invoice – but with amended bank details.
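
Lookalike domains of this kind can often be caught mechanically. The sketch below is a minimal illustration – the trusted domain list, the example addresses and the similarity threshold are all assumptions – that flags sender domains which closely resemble, but do not exactly match, a known supplier domain.

```python
# Illustrative sketch: flag sender domains that closely resemble, but do not
# exactly match, a trusted supplier domain (e.g. supplier.co vs supplier.com).
# The trusted list and similarity threshold are assumptions for illustration.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"supplier.com"}  # hypothetical example domain

def flag_lookalike(sender: str, threshold: float = 0.85) -> bool:
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match: treat as genuine
    # Flag anything that is nearly, but not exactly, a trusted domain.
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

print(flag_lookalike("accounts@supplier.com"))  # False: exact match
print(flag_lookalike("accounts@supplier.co"))   # True: near-match, review it
```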

You transfer the money to the fraudsters’ bank account. Your supplier contacts you, asking for a scan of the bank transfer details; but a day later you receive an email from the fake supplier account informing you that the supplier’s bank has confirmed receipt.

Such scenarios are increasingly common, and nobody is immune. Despite being highly targeted, these frauds are no longer necessarily perpetrated on an individual basis: in the first instance they are often committed systematically by software that probes at scale for a weak link in a company’s defences.

For example, in 2016 an experiment was conducted to see who was better at getting Twitter users to click on malicious links: AI or a human. AI won hands down. An artificial hacker sent simulated spear-phishing tweets to over 800 users at a rate of 6.75 tweets per minute, luring 275 victims. The human hacker could send only 1.075 tweets a minute, making 129 attempts and luring just 49 users. The per-target success rates were similar – roughly 34% for the AI against 38% for the human – but the AI’s six-fold throughput won it more than five times as many victims.

Strong defences at head office and across most of the organisation therefore won’t be enough: fraudsters need only one weak point. Companies may be particularly exposed if they are decentralised, outsource a lot of their activities and/or have acquired businesses over the years.

Three lines of defence

Because of the advanced nature of social-engineering threats, your privacy and security measures should span three key areas: people, processes and technology.

Consider the following measures:

  1. People: provide ongoing training to workers about social-engineering threats, and explain procedures for preventing or responding to them. Employees who regularly handle sensitive information – such as Finance, HR and sales workers – are more likely to be targeted, so it’s particularly vital that they are aware of the procedures and able to help identify threats.

  2. Processes: workers should naturally be encouraged not to click on suspicious links or to provide information to unverified outside parties. Put verification procedures around high-risk actions – for example, confirming any change to a supplier’s bank details by telephoning a known contact. Also ensure that you have procedures for workers to report attempted attacks; this can help you better understand your vulnerabilities.

  3. Technology: anti-virus protection and intrusion-detection/intrusion-prevention systems remain vital. Also use security intelligence tools to understand your security ecosystem and the potential risks you face. And encrypt data to make it unreadable even if it is stolen (see the sketch after this list).
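
To make the encryption point concrete, here is a minimal sketch using the Fernet scheme from the open-source Python cryptography package (symmetric, authenticated encryption). Key management – the hard part in practice – is reduced here to a placeholder assumption, and the record shown is hypothetical.

```python
# Minimal sketch: symmetric, authenticated encryption of sensitive data
# using the third-party "cryptography" package (pip install cryptography).
# In production the key would live in a key-management system, not a local
# variable; that is an assumption simplified away for illustration.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # 32-byte URL-safe key; store it securely
cipher = Fernet(key)

record = b"Supplier: Acme Ltd, account 12-34-56 87654321"  # hypothetical data
token = cipher.encrypt(record)          # ciphertext: unreadable if stolen
print(cipher.decrypt(token) == record)  # True: round-trips with the key
```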

Of course, as forms of social engineering continue to evolve, so must all of the measures described above.

Broader protection

There has recently been an increase in claims under crime insurance policies caused by social engineering. There are many different types of crime insurance, but most are structured on a “named-perils” basis – that is, they state the types of criminal act that are insured.

Because of the changing nature of social engineering, it is worth considering an “all risks” type of crime insurance. This covers losses caused by criminal, malicious or fraudulent acts, whether committed inside the company or perpetrated by external parties.

This broader coverage can make all the difference. For example, during one company’s recent insurance renewal, we noticed that its crime insurance was arranged on a “named-perils” basis, with the triggers for third-party fraud being forgery, fraudulent alteration, counterfeit and computer crime. The insurance was changed so that it provided “all risks” cover without named perils for third-party fraud.

Shortly after renewing its insurance, the company received an instruction from a supplier to make payments to new bank account details. After performing its usual checks, the company made a payment of over £900,000, only to find that the instruction to change the bank account details was not genuine.

This loss would not have been covered under a traditional crime insurance policy, because a forged signature would have been required to trigger the policy. Under Lockton’s crime insurance policy, however, it is being treated as an insured claim.


For more information, please contact Michael Lea on:

michael.lea@uk.lockton.com