Australia became the first country in the world to ban children under 16 from social media. The law kicked in on 10 December 2025. Facebook, Instagram, TikTok, YouTube, Snapchat, X, Reddit, Twitch, Threads, and Kick are all covered. Platforms face fines of up to $49.5 million if they don’t take reasonable steps to keep under-16s off their services.
That’s the version of events that made international headlines. The more complicated version is what’s actually happened since.
Why the Law Passed
The legislative push was driven in large part by grief. Prime Minister Anthony Albanese was moved by a personal letter from Kelly O’Brien, whose 12-year-old daughter Charlotte died by suicide after being bullied. Stories of parents who had lost children to social media-linked bullying generated enormous public support, and the News Corp “Let Them Be Kids” campaign amplified those stories nationally.
Public opinion was firmly behind it. A YouGov poll in November 2024 found 77% of Australians supported the age limit. A Resolve Political Monitor poll put support lower, at 58%, and a separate survey in December 2025 found 70% of voters endorsed the ban.
The emotional logic was straightforward: these platforms have contributed to measurable harm in young people, and the companies have done next to nothing about it voluntarily, so the government stepped in.
Jonathan Haidt, the social psychologist behind The Anxious Generation, was enthusiastic. He called it “freeing kids under 16 from the social media trap” and predicted other nations would follow.
He’s not wrong about that last part. The UK, France, Denmark, Malaysia, and New Zealand have all been watching and considering their own versions. The European Parliament passed a non-binding resolution in November 2025 advocating a minimum age of 16. Australia is being treated as the test case, whether it wants to be or not.
What “Enforcement” Actually Looks Like
Here’s where it gets messy.
The law doesn’t punish kids or their parents for getting around it. The obligation is entirely on the platforms: they must take “reasonable steps” to verify age, which has meant things like facial age estimation from selfies, uploaded ID documents, or linked bank details.
In practice, getting around it has not proven particularly difficult. Guardian Australia reported in February 2026 that teenagers under 16 were still able to access some platforms. Older siblings helping out. VPNs. Lying about birth years. The usual.
Meta blocked over 500,000 under-16 accounts in Australia in January alone, which sounds significant until you consider how many accounts there were to begin with, and how many of those kids simply made new ones with different details. Meta has also pointed out, not entirely without reason, that teenagers use over 40 apps per week, and many of those apps aren’t covered by the ban, which means exposure to harmful content continues through other channels regardless.
Reddit went further and launched a legal challenge in the High Court, arguing the ban curtails young people’s freedom of speech and that a teenager with an account is actually easier to protect than one browsing anonymously without one. The Digital Freedom Project, backed by NSW Libertarian MP John Ruddick and with two 15-year-olds as named plaintiffs, has a separate High Court challenge running in parallel.
The Problems With the Policy
The critics aren’t all tech companies with obvious financial interests in the outcome. UNICEF Australia has consistently argued that the real fix is making platforms safer, not delaying access. Their position is that you don’t protect children from a broken thing by making them wait until they’re 16 to encounter it.
There’s also the isolation question. Social media for teenagers isn’t just entertainment. It’s where sports teams organise, where school assignments get discussed, where friendships from different schools are maintained. University of Sydney researchers noted that young people rely heavily on social media to build community within their sporting circles, and that restricting access disrupts everyday social interactions that parents might not even be aware of.
Some teenagers reported feeling more isolated after the ban. In December 2025, most teenagers the Guardian interviewed were opposed to the ban and sceptical of its effectiveness. Given that they’re the ones actually affected by it, that’s a perspective worth taking seriously.
The privacy angle is genuinely thorny too. Proving your age online means sharing identity documents or biometric data with platforms, or with third-party age verification providers. Australians who want to use social media are now being asked to hand over more personal data to the same companies the government was trying to rein in. That’s an uncomfortable trade-off that hasn’t received nearly enough attention in the public debate.
What Public Opinion Actually Says
Support for the ban is high. Confidence that it will work is not.
The December 2024 Resolve polling that found 58% support also found that only 25% of people believed the ban would work, compared with 67% who thought it wouldn’t achieve its aims. More recent polling found 58% of respondents were not confident in the ban’s effectiveness, and a majority of parents (53%) planned to selectively allow platforms for their kids anyway, rather than enforcing a blanket ban at home.
So Australians largely support the intent, while doubting the execution. That’s a reasonably coherent position. The intent — holding platforms accountable for the harm they cause young people — is legitimate. The mechanism, asking Meta to police its own age verification while under-16s simply use VPNs or borrow an older sibling’s details, is imperfect at best.
Is It Actually Doing Anything?
Honestly, it’s too early to say with confidence.
The law has been in effect for just over three months. Mental health outcomes take longer than three months to show up in data. What we can say is that the ban has created a significant administrative burden for platforms, driven some genuine age verification activity, and prompted at least some teenagers to spend less time on social media, whether by choice, through reduced access, or because social dynamics have shifted now that it’s harder for the whole peer group to be on the same platform at once.
Tama Leaver, a professor of Internet Studies at Curtin University, put it plainly: nobody expected 100% removal of every under-16 from every platform on day one. The question is whether the friction introduced by the law shifts behaviour meaningfully over time, and whether the international pressure it puts on platforms results in better safety design across the board.
That second part might actually matter more than the first. The threat of regulation — and now the reality of it — has already prompted changes in how platforms think about age verification and harmful content globally. If Australia’s experiment causes Meta and TikTok to genuinely improve their safety infrastructure worldwide, not just in Australia, that’s a much bigger win than however many 15-year-olds successfully deleted their TikTok.
Where This Leaves Us
The law is a blunt instrument responding to a real and serious problem. It was passed quickly, with genuine emotional weight behind it, and without the kind of infrastructure — consistent enforcement mechanisms, national age verification systems, privacy safeguards — that would make it actually work as designed.
That’s not a reason to oppose it. It’s a reason to keep improving it, and to be honest about what it can and can’t do. Removing a 14-year-old’s Instagram account doesn’t remove the social pressures, the bullying dynamics, or the content that was harming them. It removes one delivery mechanism while leaving others largely intact.
The government did something. Whether it did the right thing, or enough of the right thing, we probably won’t know for another couple of years. The rest of the world is watching to find out along with us.