
Australia’s Under-16 Social Media Ban: The First Domino in a Global Crackdown
Categories: Government, Policy, Social Media
- Australia’s Social Media Minimum Age (SMMA) law forces platforms to block and remove accounts held by Australians under 16.
- Meta and YouTube are already enforcing the ban, with hundreds of thousands of teen accounts affected and under-16s forced into log-out, view-only modes.
- To comply, platforms are leaning on age-assurance tools like ID uploads, credit-card checks, facial scans, and third-party verification vendors.
- The law carries fines of up to A$49.5 million per breach and is being framed by the eSafety Commissioner as the “first domino” in a global age-verification wave.
- Other governments, including Malaysia and several in Europe, are signalling interest in similar under-16 social media restrictions.
- Normalising ID and face scans expands the attack surface for scams, identity theft and deepfake abuse, making it crucial to audit and reduce your exposed digital footprint.
What Is the Social Media Minimum Age (SMMA) Law?
CANBERRA – Australia’s strict new social media rules for children are not just an isolated experiment down under – they’re being pitched as a blueprint for a global crackdown.
Australia’s eSafety Commissioner, Julie Inman Grant, has called the law the “first domino” in a worldwide shift toward government-mandated internet restrictions.
Officially, Canberra insists this isn’t a ‘ban’ but a ‘delay’ – branding it the Social Media Minimum Age (SMMA). But for any 13-to-15-year-old who loses their account – and their social graph – it will feel like a ban.
The eSafety Commissioner’s “domino” or “tipping point” language acknowledges that other nations are planning to follow suit, and many of the tech giants have quickly bent the knee – unlike with Australia’s 2021 News Media Bargaining Code, which social platforms aggressively pushed back on.
How the SMMA Ban Works in Practice
The legislation forces platforms to remove and block accounts held by Australians under 16. It covers major social media platforms like TikTok, Instagram, Snapchat and X, and it’s set to trigger a chaotic purge of digital identities.
Meta has already begun banning users, with potentially half a million Instagram and Facebook accounts affected. Many Australian teens will likely see their digital social lives erased in the near future.
How Social Media Platforms are Responding
YouTube has also capitulated, announcing it will forcibly sign out users under 16, stripping them of the ability to like, comment, or upload content and relegating them to the status of passive observers.
Critics argue this is a massive overreach of state power that overrides parental authority, but the government remains undeterred. The ban removes parents’ ability to lawfully decide that their 14 or 15-year-old can hold an account on covered platforms, shifting that power to the state and the platforms’ compliance teams.
To enforce the new rules, platforms are being pushed toward more rigorous age-assurance systems. The precise details of these systems aren’t mandated, but there are guidelines.
- If a social media platform is age-restricted, it must take ‘reasonable steps’ to prevent under-16s from holding accounts.
- No Australian will be ‘compelled’ to use government identification to verify: it can be an option, but it cannot be the only option.
- Any information collected for the purpose of age verification must be destroyed after use.
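The three guidelines above can be sketched in code. This is an illustrative toy model only – the method names, `MINIMUM_AGE` constant, and `verify_account` helper are assumptions for the sketch, not any platform’s real implementation:

```python
from dataclasses import dataclass, field

MINIMUM_AGE = 16

# Guideline 2: government ID may be offered as one option,
# but it cannot be the only option on the list.
VERIFICATION_METHODS = ["facial_age_estimation", "third_party_vendor", "government_id"]


@dataclass
class VerificationAttempt:
    method: str
    estimated_age: int
    # Raw evidence (selfie frames, document images) collected for the check.
    evidence: dict = field(default_factory=dict)


def verify_account(attempt: VerificationAttempt) -> bool:
    """Return True if the account holder clears the minimum age."""
    if attempt.method not in VERIFICATION_METHODS:
        raise ValueError(f"unsupported method: {attempt.method}")
    if VERIFICATION_METHODS == ["government_id"]:
        raise RuntimeError("government ID cannot be the only offered option")

    # Guideline 1: the 'reasonable step' – check the estimated age.
    passed = attempt.estimated_age >= MINIMUM_AGE

    # Guideline 3: destroy the collected verification data after use.
    attempt.evidence.clear()
    return passed
```

Note that guideline 3 is the hard part in practice: the sketch destroys the evidence immediately, but real pipelines that queue manual reviews hold that data far longer.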
Many people correctly question how this could possibly be enforced; there is a range of imperfect options available to social media platforms.
Real-World Age Checks: How Social Platforms Actually Enforce the Ban
If you’re wondering how platforms will enforce the SMMA ban in Australia, look at the UK. Its Online Safety Act put similar requirements on a number of platforms, which have already responded. We’re likely to see functionally similar responses to Australia’s legislation in the coming days.
- YouTube – ID, Credit Card, or Selfie-based verification options
- Discord – face scans or ID uploads to UK users
- Imgur – more likely to just block the country (based on past behaviour)
- Spotify – ID or face scan options
- Snapchat – bank account connection, ID upload, face scans
The pattern is obvious at this point: platforms are converging on two tools – ID uploads and face scans.
The Australian Government’s Age Verification Rules (On Paper)
The penalties for non-compliance are staggering. Tech companies face fines of up to A$49.5 million (about US$32 million) for failing to keep children off their platforms. This aggressive financial threat has forced companies to comply rapidly, creating a “splinternet” where Australian users face barriers unknown to the rest of the world. For now.
Globally, Australia’s social media ban is already having the domino effect the eSafety Commissioner described. Malaysia has announced plans for its own under-16 ban in 2025, and the European Commission, France, Denmark, Greece, Romania and New Zealand have all signalled interest in similar age-gating measures.
Commissioner Inman Grant made it clear that this friction is intentional. By describing the ban as a “tipping point,” she signaled that the era of the open internet for young people is ending.
She has described the Australian law as ‘the first domino’ and says governments in Europe and elsewhere are watching closely. That’s also why Big Tech is lobbying so hard: if this model sticks in Australia, it becomes much easier to copy-paste into Europe, the UK and eventually the US. Silicon Valley’s lobbying is likely driven by fear of global adoption of this model – not specifically by keeping social media open to under-16 Australians.
Social platforms are likely to invest resources and development time into recapturing users as they turn 16 – with some reports claiming platforms are prompting under-16s to download their data and promising they can ‘restore’ banned accounts once they reach the minimum age threshold.
Children are one of the most vulnerable groups online and offline, and there do need to be stricter rules around how platforms treat developing minds. On paper, the Australian government’s messaging sounds reasonable – “no mandated ID uploads” and “destroy verification data after verifying” are positive commitments.
The Privacy and Security Risks of ID and Face Scans
So, you understand Australia’s ban (the SMMA requirements), how it’s enforced, what the penalties are, and what platforms are actually implementing globally. With that context, you need to understand the privacy and security risks you will be exposed to in the coming years.
- Platforms can’t compel ID checks, but they can offer them. People will use them.
- Platforms must destroy verification data after verification. What if verification takes 6 weeks?
- “Error: Verification failed”: when automated checks fail, platforms have historically prompted users to ‘manually verify’ through a support channel. This is very bad, and will result in more incidents like: 70k+ IDs Stolen from Discord Support
- There are many well-known ways to circumvent age-check systems, from VPNs to borrowed accounts.
- New platforms and apps will get spun up constantly to avoid age verification requirements. This has already begun, as we predicted in October.
What These Risks Mean for Your Digital Footprint
With ID and face scans becoming normal data for platforms to request in Australia, we’re also likely to see an increase in Australians targeted by phishing scams designed to steal ID documents, selfies, or selfie videos – which could then be used for identity fraud, deepfakes, extortion, and other targeted scams and financial crimes.
How to Protect Yourself in an Age-Verified Internet
Whether you agree with the ban or not, one thing is clear: the internet is moving toward stronger identity and age checks. The systems being built for under-16s today can easily be expanded to cover more services, more age groups, and eventually adults.
If you’ve spent years bleeding data into social feeds – posts, likes, comments, DMs, photos – an age-verified internet means more of that history can be tied back to a real-world identity.
Before that becomes the default, you need to audit your digital footprint (free guide here). Or, at least ask yourself:
- Which accounts still need to exist?
- Which posts, replies, or comments no longer represent who you are now?
- How much of your history do you actually want exposed in an ID-gated internet?
That’s exactly the problem Redact is built to solve. Instead of manually scrolling back through years of feeds, Redact lets you review and bulk-delete old posts, comments and interactions across major platforms in just a few clicks.
If governments and platforms are going to tighten identity checks, your best move is to shrink the amount of data they can link to you. Clean up what you control now – before every account is locked behind an ID check or a face scan.
