Legal notice, fines, and scrutiny facing tech giants.

Trigger warning: this article contains references to themes of child sexual abuse and exploitation, which some individuals may find distressing.

Overview:

  • Tech giants have been put on notice by the Australian eSafety Commissioner, who is insisting they disclose their policies and procedures for protecting children against sexual abuse and exploitation.

  • Hefty fines of as much as $700,000 daily are likely if the tech and gaming companies cannot provide adequate information to the regulator.

  • Concerns over privacy, tracking, and mismanagement of data need to be addressed.

  • Steps tech companies can take to reduce and transfer elements of their risk exposure.

Techs are in the firing line (again):

2022 was a challenging year for the tech sector. Now, only three months into 2023, the sector is back in the hot seat, with the Australian eSafety Commissioner calling for more accountability regarding policies and procedures for protecting the safety and welfare of children.

It’s time for the sector to make meaningful changes and to be held more accountable. Turning a blind eye or claiming ignorance is not acceptable, with the Commissioner asking for better clarity and transparency.

A world-first study undertaken by the Australian eSafety Commissioner reveals social media companies are not proactively searching for harmful material that exploits children. In an interview with ABC News, Commissioner Julie Inman Grant said there were genuine concerns about how tech giants were monitoring harmful material on their sites.

Social media sites and gaming platforms have transformed into empires with little to no ‘supervision’. There is a general understanding that tracking and algorithms are used to collect data for targeted ads, sites, chatrooms, and the like, but there is not enough focus on the exposure, or “opportunity”, this gives to threat actors, predators, and cyber-criminals, who can use these tools for malicious ends, with the potential to impact children significantly and dangerously.

The eSafety Commission research found:

  • Many tech companies possess the tools and the ability to scan for child sexual abuse material but have not focused enough of their energies on doing so. The Commissioner specifically called out Microsoft as an example, noting that it developed a tool called PhotoDNA to identify known illegal images, yet was not using that tool on its own services such as OneDrive, Skype, or Hotmail (a simplified sketch of this kind of hash matching follows this list).

  • During the tech layoffs, some of the biggest redundancies came from the ‘safety personnel’ employed to protect children and review images and material.

  • Cases of child sexual abuse and unlawful content are far-reaching. In 2021, the US National Center for Missing & Exploited Children received 29.3 million reports.

  • Algorithms contribute to the issue by suggesting and recommending explicit content involving children.

  • Live-streaming tools such as Skype, Microsoft Teams, and FaceTime take no measures to detect illegal content involving children.
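The PhotoDNA point above comes down to a simple workflow: fingerprint uploaded media and compare it against a database of fingerprints of known abuse material supplied by clearinghouses. The Python sketch below illustrates that general hash-matching idea only; it is a simplified illustration under stated assumptions, using a cryptographic hash where production systems such as PhotoDNA use robust perceptual hashes that survive resizing and re-encoding, and the blocklist, paths, and function names are hypothetical.

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of fingerprints for known illegal images. In practice
# such hash sets are supplied to platforms by clearinghouses; it is left empty
# here as a placeholder.
KNOWN_HASHES: set[str] = set()


def file_fingerprint(path: Path) -> str:
    """Return a hex fingerprint of an uploaded file.

    Real systems such as PhotoDNA use perceptual hashes that still match after
    resizing or re-encoding; SHA-256 only catches byte-identical copies and is
    used here purely to keep the sketch self-contained.
    """
    return hashlib.sha256(path.read_bytes()).hexdigest()


def scan_upload(path: Path) -> bool:
    """Return True if an upload should be escalated for human review."""
    return file_fingerprint(path) in KNOWN_HASHES


if __name__ == "__main__":
    # Hypothetical upload directory; a real pipeline would hook into the
    # platform's media-ingestion service and quarantine flagged files.
    uploads = Path("uploads")
    flagged = [p for p in uploads.glob("*") if p.is_file() and scan_upload(p)]
    print(f"{len(flagged)} upload(s) flagged for review")
```

The point the Commissioner is making is that this matching step can, in principle, run anywhere media is stored or shared, which is why not applying it to services such as OneDrive or Skype drew criticism.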

Predators’ hangout:

Social media is now well established within our society, yet the stakes with respect to its negative impact on children have never been higher.

Social media has long moved on from being a set of platforms for staying connected with peer groups and communities. It is part of society’s everyday life and has become a hub for ‘oversharing’, with few limits on what people are willing to share.

Young people and children communicate through social media platforms and gaming sites, using them to:

  • Game.

  • Follow the latest trends from influencers and celebrities.

  • Post their own content (e.g. TikTok dance videos).

With so much activity across social media, machine learning is constantly absorbing and monitoring online activity and interests, and algorithms then suggest similar or associated content. This leaves people, and children even more so, vulnerable and susceptible to manipulation, clickbait, grooming, and the sharing of personal and sensitive information.

Predators trawl social media to find children to exploit, and the wealth of available information makes it easier for them to assume an appealing, non-threatening identity. They also have the added benefit of hiding behind their devices, catfishing with fake images and profiles to gain a victim’s trust.

  • According to the FBI, over 500,000 online predators are active on social media sites every day.

  • The Crimes against Children Research Center found that one in seven children has communicated with an online predator, and that one in 25 will meet their perpetrator in person, which poses a greater risk of physical crimes.

  • Another statistic released by the FBI reveals there were 700,000 cases of child extortion in 2022, whereby children were manipulated into producing compromising images and videos of themselves and then blackmailed for financial gain, usually under the threat of releasing the content publicly.

  • Analysis by the National Center for Missing & Exploited Children (NCMEC) previously indicated that the motivation behind child extortion was to obtain explicit images of a child. However, later data from 2022 suggested that 79% of predators were motivated by monetary gain.

  • There is a widely held view that the tech sector does not do enough to make it difficult for predators to operate, in some ways making it a perfect environment in which to fly under the radar and remain undetected. Ms Inman Grant, Australia's eSafety Commissioner, is threatening to impose a mandatory code of conduct on tech companies after rejecting draft codes from the sector, stating “they had insufficient community safety safeguards.”

Data is bittersweet:

By their nature, social media sites hold vast amounts of data:

  • Personal identification details. These include first name, surname, age, photos, relationship status, education, and employment.

  • Location tracking. For example, a person’s address, the addresses and locations of their peers, places frequented regularly such as work or the gym, or the location where a photo was taken.

  • Message and call interactions. Call and message history are logged.

  • Social behaviour. What people engage with on their social media, how long they spend on a piece of content (for example, reading an article or watching a video), and which groups or individuals they follow.

All of this collected data is processed through algorithms to build personas and understand consumer behaviour.

Digital advertisers are willing to pay for access to such information so that their products or services can be targeted at the right individuals. This has contributed to the rise of influencers and has accelerated the rate at which some brands enter the market, all great news for social media platforms.

However, there is a dark side to holding such valuable information, such as a higher likelihood of being targeted for it.

Additionally, the use of algorithms creates opportunities for malware and manipulation. In an article by ABC News, Dr Michael Salter, an Associate Professor of Criminology at UNSW, said, “the major social media companies have developed their services and platforms with very few effective child protection measures in place,” and went on to say, “very often they are using algorithms to actively recommend this content, and we have had situations where social media company algorithms have been actively recommending explicit content of children to those with a sexual interest in children.”

The research undertaken by the Commissioner and her team makes a compelling argument that change and meaningful action must happen. Children carry a high level of risk, while very little risk sits with the tech companies.

Take meaningful action, or face the consequences:

So what can social media companies do to take strides in the right direction? No doubt this will be a point of discussion in boardrooms across the sector over the coming weeks as they stare down the barrel of daily fines of up to $700,000.

1. Prevention:

Prevention must be the number one focus for social media sites. Preventative measures such as researching harmful content and understanding the profiles and motivations of predators enable social media companies to take more proactive steps to limit offensive content.

Another proactive tactic is the adoption of detection tools to assist in preventing explicit material from being exposed. Tech giant Meta is leading by example, with two new tools:

  • A pop-up that displays when someone searches for terms associated with child exploitation, notifying the user that this type of content is illegal.

  • A safety alert that warns the user the content is against Meta’s policies and that there are legal consequences.

2. People:

Whilst many tech companies have reduced their headcount to decrease overhead costs, they may not have anticipated the mounting cost of fines heading their way. Workforce reductions create gaps in basic operational activity, and safety is among them. Laying off entire departments of safety personnel to reduce spending is not the answer and only adds to an organisation’s burden.

Directors and officers should be investigating how they can invest adequately in moderating explicit material. Investment in people, and the necessary training for them, will be a key contributor, as will investment in software tools (automated or otherwise).

3. Risk Transfer:

Insurers have a significant role to play in improving risk posture (as dictated by the market) and therefore reducing the risk to children. Social media companies that adopt safeguard frameworks or can show proactive and preventative measures to reduce exposure to offensive content, including that of child exploitation, will be viewed more favourably in the eyes of insurers.

4. The expectation set on boards:

Apart from the impending daily fines, boards, directors and officers, and C-suite executives risk being directly impacted by the increased attention from the eSafety Commissioner.

It’s an important reminder, not just for social media and tech companies but for all organisations, that regulators are putting the onus on those at the top, and that there is a legal and societal expectation that these obligations will be met.

5. D&O insurance policies:

Companies that can demonstrate a robust approach to managing child exploitation risk, one that includes monitoring, reducing, informing on, and insuring the associated exposure, will not only reduce financial and reputational risk but also reduce potential directors’ and officers’ liability and make their risk more appealing to D&O insurance underwriters, which is likely to positively affect renewal outcomes.

Closing thought:

Predators will continue to stalk social media sites to exploit children. It is important that companies have a top-down, bottom-up approach to the security of their users and take all reasonable steps to reduce harmful content on their channels.

It is time for boards to take ownership of their creations and protect those who are most vulnerable, or face repercussions that extend well beyond the legal and financial.