
How Australia’s Under-16 Social Media Ban Will Actually Work (Or Not)

Australia’s plan to ban social media access for users under 16 has sparked significant debate. While the pros and cons of this policy can be discussed endlessly, today I'd rather focus on the operational side.


Suppose the Australian government goes forward with this ban—how could it be implemented with a reasonable degree of success? Here’s a practical look at the steps, technologies, and challenges involved.


The UK’s experience offers some insight. Under the Digital Economy Act 2017, the UK attempted age verification with the “porn pass,” which would have required adults to buy a pass in shops to access online adult content. The scheme was scrapped in 2019 over privacy and logistical issues. Ofcom has since released guidelines under the Online Safety Act that take a marginally more practical approach, suggesting options like facial age estimation and photo ID checks to balance protection with privacy.


Companies developing age verification solutions claim their tech is both effective and privacy-friendly. But tying people’s online habits to a digital fingerprint? That's always going to be risky.


Before we get into it, I’d like to say that age verification, at scale, is very, very hard, and no country has fully figured it out yet. With these lessons in mind, here are three potential options for implementing an effective age verification system in Australia (and how each is likely to be circumvented):





Option 1: Establishing a Secure, Privacy-First Age Verification System through Third Parties


For a ban on users under 16 to work, age verification needs to be accurate, enforceable, and respectful of privacy. The eSafety Commissioner’s proposed “double-blind tokenised approach” would use a third-party provider to verify ages without exposing personal data to social media platforms. This system involves generating a device-based token that confirms a user’s age without linking it to their identity. Australia funded an age verification trial in May 2024 to explore methods for safe, effective verification, but results are still forthcoming.


Example: If a user wants to sign up for Instagram, they’re directed to a third-party age verification provider that checks their age using an ID. After verification, the provider issues a token to the device. Instagram then receives this token as proof of age, but without personal details.
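
To make this flow concrete, here’s a minimal sketch of how a double-blind token exchange could work. It assumes Ed25519 signatures via Python’s cryptography package, and the issue_token/accept_token interface is invented for illustration; the actual trial may settle on very different primitives.

```python
# Minimal sketch of a double-blind tokenised age check.
# Requires the 'cryptography' package (pip install cryptography).
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Verification provider side -------------------------------------
# The provider checks an ID out of band, then issues a signed token that
# carries only an "over 16" claim and a device binding, never identity.
provider_key = Ed25519PrivateKey.generate()
provider_pub = provider_key.public_key()

def issue_token(device_id: str, ttl_seconds: int = 90 * 24 * 3600) -> dict:
    claim = {
        "over_16": True,                           # the only assertion made
        "device": device_id,                       # bound to device, not person
        "expires": int(time.time()) + ttl_seconds,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "sig": provider_key.sign(payload).hex()}

# --- Platform side ---------------------------------------------------
# The platform verifies the provider's signature; it learns that a trusted
# provider vouched for the device, nothing about who was vouched for.
def accept_token(token: dict, device_id: str) -> bool:
    claim = token["claim"]
    if claim["device"] != device_id or claim["expires"] < time.time():
        return False
    payload = json.dumps(claim, sort_keys=True).encode()
    try:
        provider_pub.verify(bytes.fromhex(token["sig"]), payload)
        return True
    except Exception:
        return False

token = issue_token("device-abc-123")
print(accept_token(token, "device-abc-123"))       # True
print(accept_token(token, "someone-elses-phone"))  # False
```

Note the device binding: the token vouches for the device, not the person holding it, which is exactly the weakness the first circumvention below exploits.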


Potential Circumvention:


  1. Minors could use a parent’s or sibling’s ID to bypass verification, especially if tokens aren’t linked to individual users but rather to the device itself (and are stored for a while). Liveness checks—where the person is typically asked to hold up their ID and face the camera while performing actions like blinking, to prove they’re a live person and not a static image or prerecorded video—can further reduce this workaround by adding another layer of verification. However, it’s worth noting that this process involves significant effort and requires handing over a hefty chunk of private information, just to post a picture of your coffee.

  2. Folks can bypass local restrictions using VPNs, as seen in Utah and Louisiana, where VPN use spiked after age checks were introduced as a prerequisite for accessing adult content.

  3. While there are no widespread reports of organised black markets in pre-verified devices or accounts in Australia, they are likely to emerge, following trends seen in countries with strict telecom regulations, where similar markets have developed in response to restrictive laws.


Option 2: Exploring Device-Level Age Verification by App Stores


Device-level age verification, where Apple or Google verifies users’ ages at the app download stage, could simplify enforcement significantly. However, app stores haven’t taken on this role, choosing to remain enablers rather than enforcers.
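
For concreteness, here’s a purely hypothetical sketch of what device-level gating might look like from a developer’s perspective. No such OS API exists today; get_device_age_band() is invented for illustration.

```python
# Hypothetical sketch: neither Apple nor Google offers such an API today.
from enum import Enum

class AgeBand(Enum):
    UNDER_16 = "under_16"
    OVER_16 = "over_16"
    UNKNOWN = "unknown"

def get_device_age_band() -> AgeBand:
    # Invented stand-in for an OS-level signal set during device setup,
    # e.g. from a family account or a verified store profile.
    return AgeBand.UNKNOWN

def can_install_social_app() -> bool:
    band = get_device_age_band()
    if band is AgeBand.UNDER_16:
        return False   # the store blocks the download outright
    if band is AgeBand.UNKNOWN:
        return False   # fail closed: require verification first
    return True

print(can_install_social_app())  # False until the device carries a signal
```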


There are reasons why it might not be in their interest.


  • Compliance is complicated: App stores operate in multiple countries with varying age verification laws. Implementing device-level age verification would require significant investment in compliance infrastructure, potentially in every country where they operate. It would also expose them to increased scrutiny and liability for the safety risks associated with every app they host. There’s limited upside for these companies: taking on this responsibility could invite more criticism from regulators, increase operational costs, and create potential liability issues without directly benefiting their core business.


  • What about the developers? Apple and Google prefer to delegate age compliance to app developers, who, the app stores argue, are closest to their content and audience and therefore best placed to judge suitable age restrictions. This position lets app stores reduce their direct liability by directing compliance responsibility to developers instead. If app stores took on device-level verification, they would assume greater accountability for age-related issues across all apps.


Option 3: AI-Driven Age Estimation by Platforms


Platforms like Instagram use AI-driven tools to infer users’ ages by analysing behavioural indicators—such as the accounts a user follows and the content they engage with. If these tools suspect that someone is under 18, they may flag the account or limit certain features, regardless of the age the user claimed during sign-up. Some platforms have already rolled out such features in partnership with third parties.
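
As a toy illustration only (not any platform’s actual model), behavioural signals like these could feed a simple classifier. The features, thresholds, and training rows below are entirely invented:

```python
# Toy behavioural age-estimation classifier; the data is fabricated for
# illustration. Requires scikit-learn (pip install scikit-learn).
from sklearn.linear_model import LogisticRegression

# Features per account: [median age of followed accounts,
#                        share of teen-skewing content engaged with,
#                        account age in days]
X = [
    [34.0, 0.05, 2400],
    [15.0, 0.80, 90],
    [29.0, 0.10, 1500],
    [14.5, 0.75, 40],
]
y = [0, 1, 0, 1]  # 1 = account labelled as likely under 16

model = LogisticRegression().fit(X, y)

candidate = [[16.0, 0.70, 60]]
p_minor = model.predict_proba(candidate)[0][1]

# Estimation is noisy, so flag for re-verification rather than auto-ban.
if p_minor > 0.8:
    print("route account to age re-verification flow")
```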


However, the effectiveness of AI-based age estimation is limited, as with many AI use cases. Insufficient training data leads to inaccurate results, as models often struggle to estimate age accurately across different ethnicities, age groups, and facial features. Estimation alone is unreliable and would need to be part of a broader strategy to ensure effective enforcement.


So, What Is a Regulator to Do?


Relying on any single approach—whether it’s age estimation, behavioural monitoring, or tokenised verification—falls short. Australia’s eSafety Commissioner also notes this and emphasises the need for a holistic strategy.


Operationally, a layered system that incorporates device-level verification, third-party age assurance, and AI-driven indicators can create a stronger net. For example, device-level verification can help confirm age at the app download stage, while behavioural monitoring can help flag users who might have initially bypassed the age check.
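
As a rough sketch, assuming the signals from the earlier examples are available as inputs, the layered decision might look something like this (all thresholds invented):

```python
# Rough sketch of a layered sign-up decision; all thresholds are invented.
def allow_signup(device_band: str, token_ok: bool, p_minor: float) -> str:
    if device_band == "under_16":
        return "deny"                  # hard device-level signal wins
    if not token_ok:
        return "require_verification"  # no valid third-party token yet
    if p_minor > 0.8:
        return "re_verify"             # token present, but behaviour
                                       # suggests the check was bypassed
    return "allow"

print(allow_signup("unknown", token_ok=True, p_minor=0.9))  # "re_verify"
```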


Tech solutions alone cannot resolve the societal factors that drive underage social media use. Educating parents and minors on digital safety, and encouraging parents to engage with and monitor their children’s online activity, adds protections that, frankly, no technology can provide. The report’s recommendations on the need for education and training are spot on.


Lastly, no verification system will achieve 100% perfection. In my experience, there will always be ways around even the best systems. The goal and mindset should be to reduce risks as much as possible and make incremental improvements, while incentivising all stakeholders to invest in safety and make responsible design choices. Trust and safety teams everywhere need to be supported with the necessary resources to effectively address and combat emerging issues (oh yes, there will be emerging issues, as we will soon see).


© 2025 by Devika Shanker-Grandpierre
