
By Charlene Harris

26.07.21

Sticks and stones may break bones, but words … can now also be prosecuted

Last week, the Senate passed legislation which will hold big tech companies, such as Facebook and Twitter, responsible for offensive or harmful content posted on their platforms.  The Online Safety Bill both consolidates and reforms existing legislation, giving the eSafety Commissioner new and enhanced powers to enforce online safety.  The eSafety Commissioner will now be able to order content to be removed and sites to be blocked, and can require online service providers to disclose contact information for anonymous users who post abusive or bullying material, or who share intimate images without consent.

What is the Bill supposed to protect against?

The Bill is intended to enhance protections against specific ‘online harms’, including:

  • Cyber-bullying targeted at Australian children;
  • Cyber-abuse material targeted at Australian adults;
  • Non-consensual sharing of intimate images;
  • Exposure to class 2 material, which is material classified as R18+ (restricted to adults as it contains content that is considered high in impact) or X18+ (a special and legally-restricted category of films restricted to adults due to sexually explicit content);
  • Exposure to class 1 material, which is ‘refused classification’ (RC). This is a classification category referring to films, computer games and publications that cannot be sold, hired, advertised or legally imported in Australia.  RC-classified material contains content that is very high in impact and falls outside generally-accepted community standards.
  • Material depicting abhorrent violent conduct, such as terrorist acts, murder, attempted murder, torture, rape and kidnapping.

How does the Bill enhance protections?

The Bill enables the eSafety Commissioner to obtain from online service providers the contact information, or other identifying information, of individuals using anonymous accounts to abuse, bully or share intimate images of other people without their consent.

The Bill also establishes a new power for the eSafety Commissioner to require internet service providers to immediately block end-user access to material that depicts, incites or instructs abhorrent violent conduct, where it is satisfied that the material is likely to cause significant harm to the Australian community.  This is to prevent the rapid online spread of abhorrent violent material, as occurred after the terrorist attack in Christchurch in 2019.  This complements the significant fines and criminal prosecutions available under the Criminal Code where online service providers fail to ensure the expeditious removal of abhorrent violent material.

In addition, the Bill establishes a complaints-based removal notice system for online harms.  This operates by empowering the eSafety Commissioner to issue a removal notice which requires online service providers (see below) to remove material within 24 hours.

Generally, in order to be given a removal notice:

  1. The material must be accessible to Australian end-users;
  2. The material must have been subject to a complaint to the relevant online service provider;
  3. The online service provider must have failed to remove the material after receiving the complaint; and
  4. This must have then resulted in a complaint to the eSafety Commissioner.

In some circumstances, for example where class 1 or class 2 material is shared, the eSafety Commissioner can issue a removal notice without a complaint having been made.  In other circumstances, for example where an intimate image has been shared without consent, a complaint can be made directly to the eSafety Commissioner rather than to the relevant online service provider in the first instance.  This goes further than the Enhancing Online Safety Act 2015 (Cth) (“EOSA”) in two main ways.

Firstly, in relation to cyber-bullying against children, the EOSA applied only to social media platforms, whereas the Bill extends to a broader range of online service providers.  Additionally, the Bill requires compliance within 24 hours of receipt of the notice, whereas the EOSA allowed 48 hours for compliance.

Secondly, it introduces an entirely new scheme targeting cyber-abuse against Australian adults, which has received public support from online safety advocates such as broadcaster Erin Molan.  The test is now whether an ordinary reasonable person would conclude that it is likely the material was intended to have an effect of causing serious harm to a particular Australian adult, and whether an ordinary reasonable person in the position of the Australian adult would regard the material as being menacing, harassing or offensive in all the circumstances.

While the intention is apparently not to hinder free speech, the practical effect of these laws remains to be seen.

Who does the Bill apply to?

The Online Safety Bill applies to a broad group of online service providers, including:

  • Social media service providers that allow social interaction between end-users, such as Facebook, Instagram, TikTok, Snapchat, LinkedIn and Twitter.
  • Electronic service providers that allow end-users to communicate with other end-users, such as Gmail, Outlook, WhatsApp and some Xbox-related services.
  • Internet service providers and other services that allow end-users to access material on the internet, such as the Google search engine and the Internet Explorer browser.
  • App distribution service providers that allow end-users to download apps, such as Apple and Google.
  • Hosting service providers that store material provided on social media and other online services, for example cloud service providers such as Amazon and Microsoft.

These online service providers can face fines of up to half a million dollars if they do not reveal the identifying details of people sending abusive messages or if they refuse to remove offensive material.

The Bill also extends penalties of up to $111,000.00 to individuals who post or share harmful material.  The eSafety Commissioner is tasked with striking the balance between freedom of expression and protection from the most hateful content.  With her investigators, the eSafety Commissioner will assess complaints and will need to determine firstly whether serious harm has been done and secondly whether there was intent to cause serious harm.  Rightly or wrongly, this legislation will no doubt result in more prosecutions than were previously brought under the Criminal Code offence of using a carriage service to menace, threaten or harass.

It is more important than ever that people consider what they are saying, posting and sharing online.  Because these laws are new, once charges begin to be laid there will be an opportunity to shape how these matters are prosecuted and penalised.  If you need help navigating this uncharted territory, contact our team on 6279 4222.