Australia’s decision to ban children under 16 from holding social media accounts has ignited a global debate over online safety, responsibility, and regulation.
The law, which took effect in December, requires major platforms to remove underage users or face fines of up to A$49.5 million. Supporters see it as a long-overdue safeguard, while critics warn it may create new risks.
Meta says it has already blocked more than 544,000 underage accounts across Instagram, Facebook, and Threads in the first week of enforcement. While it intends to comply, the company has questioned whether a blanket ban is the right approach, arguing that teens may simply migrate to smaller, less regulated platforms, potentially exposing them to greater harm rather than protecting them.
The company has urged Australia to consider alternatives, including age verification at the app store level and parental approval before downloads. Meta claims this would create consistent safeguards across platforms and reduce the cycle of teens bypassing restrictions. It also warns that cutting teens off from online communities could increase isolation, particularly for vulnerable groups.
Australian officials remain unconvinced. They argue that platforms had years to improve age enforcement and failed to act meaningfully. From the government’s view, social media companies benefited from engagement-driven systems while families absorbed the consequences. The ban, they say, forces accountability and gives regulators time to strengthen enforcement and child safety standards.
The standoff highlights a broader question facing governments worldwide: whether tech companies can be trusted to self-regulate or whether firm intervention is necessary.
Australia has chosen regulation first, with adjustments to follow. Meta is pushing for shared responsibility. How this balance evolves may shape the future of online protections for children far beyond Australia.