Australia Signals AI Crackdown to Limit Underage User Access to Harmful Content, Regulations Target App Stores and Search Engines

Written by: Lore Apostol, Cybersecurity Writer
Key Takeaways
  • Regulatory Deadline: Starting March 9, AI services in Australia must restrict access to harmful content for users under 18 or pay millions in penalties.
  • Gatekeeper Liability: Australia's eSafety Commissioner may hold app stores and search engines accountable for distributing non-compliant AI services.
  • Widespread Non-Compliance: A review indicated that more than half of the most popular text-based AI products have not yet implemented public compliance measures.

Australia's eSafety Commissioner has indicated it may extend enforcement actions to major digital gatekeepers, such as app stores and search engines, as part of a new AI age crackdown. Effective March 9, internet services, including AI-powered tools, must implement robust systems to prevent users under 18 from accessing content related to pornography, extreme violence, self-harm, and eating disorders. 

The move follows a review showing widespread failure among AI service providers to comply with upcoming regulations. Non-compliance could result in fines up to A$49.5 million ($35 million). 

App Store and Search Engine Compliance Mandated

The eSafety Commissioner expressed concern that many AI companies are failing to meet their obligations, with a Reuters review finding that 30 of the 50 most popular text-based AI products showed no apparent steps toward compliance. For example, Elon Musk's Grok AI lacked age-verification measures and text-based content filters.

The move stems from eSafety's concern that AI companies are “leveraging emotional manipulation, anthropomorphism and other advanced techniques to entice, entrance and entrench young people into excessive chatbot usage”, a spokesperson said. The regulator has reported being told that children as young as 10 are talking to AI-powered interactive tools for up to six hours a day.

The regulator's warning signals a significant shift, placing liability not just on AI developers but also on the platforms that distribute them. This enforcement strategy targets key access points to ensure broad search engine compliance and app store restrictions. 

By holding gatekeepers like Apple and Google accountable, the regulator aims to compel the entire digital ecosystem to enforce Australia's AI regulations.

Global Implications

This initiative reflects a growing international trend to rein in the potential harms of AI. Australia’s approach, following its ban on social media for teens under 16, is among the most aggressive regulatory frameworks to date. 

The focus on age verification and content filtering for AI services addresses mounting concerns about the technology's impact on youth mental health. 

OpenAI banned a British Columbia mass shooting suspect's ChatGPT account more than half a year before the attack took place, but did not alert authorities because its usage “did not meet its threshold of a credible or imminent plan for serious physical harm to others.”

In a broader trend, the Netherlands, France, and Türkiye have introduced plans to ban social media for children under 15. Meanwhile, the U.K. is considering new social media rules to protect children.
