Australia is preparing to roll out a landmark ban preventing children under 16 from using social media platforms, set to take effect in December.
The government commissioned the UK-based Age Check Certification Scheme to evaluate enforcement options, and its findings suggest there is no perfect method.
Options tested included verification through government-issued IDs, parental consent systems, and AI-driven facial or behavioral assessments.
Each was found technically feasible but problematic in different ways.
ID verification is the most reliable but raises concerns that companies could retain sensitive data longer than required, exposing users to identity theft risks—particularly in a country that has suffered multiple high-profile data breaches in recent years.
AI facial-age estimation proved around 92% accurate for users aged 18 and above but became unreliable within two to three years of the 16-year threshold, producing both false positives and false negatives.
Parental approval mechanisms also raised privacy and misuse concerns.
The report recommends layering several methods for robustness, while also noting that circumvention through VPNs and document forgeries remains a challenge.
Communications Minister Anika Wells defended the policy, stressing that tech companies should turn the sophisticated data tools they already use for advertising and content targeting toward protecting children instead.
Social media giants including Facebook, Instagram, and YouTube face fines of up to A$50 million (£25.7m) for non-compliance.
Public opinion polls indicate strong parental support, though mental health advocates caution that strict bans may inadvertently isolate young people or drive them to unsafe corners of the internet.
The rollout will be closely watched internationally as a potential template for other nations seeking to regulate youth digital access.