Questions from Students about "The Ban" this week.
- Kirra Pendergast


In less than a month, on 10 December, the Australian Social Media Minimum Age Law starts being enforced. From 10 December 2025, social media platforms must stop Australians under 16 from having accounts and remove or deactivate existing under‑16 accounts. The legal duty is on the platforms, not on children or parents. Platforms must offer clear information, let users download their data, and provide simple review/appeal options if a mistake is made. They cannot make government ID the only way to prove age; a non‑ID option must always be available. Penalties for systemic non‑compliance can be very large (up to $49.5 million).
Here are some of the questions students have asked our team this week, along with our answers.
“If I’m under 16, still have social media after the start date and something goes wrong online, will I get in trouble if I tell someone?”
No. Under these laws the consequences fall on platforms, not young people or parents. eSafety’s compliance focus is on the systems and processes platforms use; it isn’t about punishing individual kids for having an account. Even if some under‑16 accounts slip through, that alone doesn’t mean a platform is automatically non‑compliant.
Always speak up. Platforms must provide easy in‑app ways to report problems (including suspected under‑age accounts) and must handle those reports; if an account is deactivated, the user must be told what’s happening, how to save their content, and how to ask for a review.
What to say to students: “You won’t be fined or charged under these rules. If something goes wrong, tell a trusted adult and report it in‑app. The point of the law is harm reduction and support, not blame.”
-----
“Will this actually work? How can they tell 15 years 10 months from 16?”
Platforms can use a mix of age‑checking tools, for example age estimation (like face or voice analysis), age inference (patterns in activity), and age verification (confirming a real date of birth). No single method is perfect, so the guidance encourages a layered ‘successive validation’ approach: if one method is unsure, especially near the 16‑year threshold, the platform may ask for another check before deciding. Many systems use buffer zones near the cut‑off so borderline results trigger more checks rather than a straight yes/no.
The guidance also notes accuracy around legal thresholds is the hardest part, so platforms are expected to keep improving their settings over time and back them up with easy review options for users.
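To make the buffer‑zone idea concrete, here is a minimal, purely illustrative sketch of a layered check. The functions and the two‑year buffer are hypothetical placeholders, not any real platform's system:

```python
# Illustrative only: a toy "successive validation" flow with a buffer zone.
# The check functions below are hypothetical stubs, not any real platform API.

MIN_AGE = 16
BUFFER = 2  # years either side of the 16-year threshold that trigger a second check

def estimate_age(signals: dict) -> float:
    """Stub for age estimation (e.g. face/voice analysis) or inference from activity."""
    return signals.get("estimated_age", 0.0)

def verify_age(signals: dict) -> int:
    """Stub for a stronger check (e.g. confirming a date of birth, with a non-ID option)."""
    return signals.get("verified_age", 0)

def decide(signals: dict) -> str:
    estimated = estimate_age(signals)
    if estimated >= MIN_AGE + BUFFER:
        return "allow"        # clearly over the threshold, no further checks
    if estimated <= MIN_AGE - BUFFER:
        return "deactivate"   # clearly under the threshold
    # Borderline result near 16: escalate to a different method before deciding.
    return "allow" if verify_age(signals) >= MIN_AGE else "deactivate"

print(decide({"estimated_age": 15.8, "verified_age": 16}))  # borderline, so a second check runs
```

The only point of the sketch is that a result close to 16 triggers a second, different check rather than an instant yes/no; real systems combine many more signals and must always offer a review path.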
Privacy note: This is not a Digital ID scheme. Platforms cannot require government ID as the only option; they must offer a non‑ID alternative (for example, an estimation method).
-----
“Is Pinterest covered? What about CapCut?”
The law applies to any service where a key purpose is social interaction, users can link/interact with each other, and users can post material. Services excluded by the Minister’s rules aren’t covered.
Pinterest: Because people post Pins, follow, and interact, Pinterest fits that definition, so it may be covered in Australia.
CapCut: If the version used here includes a social feed where users post, link and interact within CapCut itself, then it may be covered. The test is what the service actually does for Australian users.
Keep an eye on www.esafety.gov.au for updates, but be prepared for 10 December by downloading anything you want to keep.
-----
“I could get around it by…?”
Students will try these ideas; here’s what the guidance expects platforms to do:
“Change my country / use a VPN.” Platforms are expected to use several location signals (IP address, GPS, device settings, phone number, app‑store data) and to detect VPN/proxy use. So a VPN alone is unlikely to work for long.
“Use my parent’s photo for face ID.” Age‑estimation systems include liveness checks to stop use of someone else’s photo or a deepfake. If signals conflict (e.g., activity looks clearly under‑16), the platform should escalate to another check.
“Make an account in my parent’s name.” Platforms are expected to monitor for account takeovers or transfers (e.g., sudden changes in details, many accounts from one device) and act on them.
“Set ‘parent‑managed’ on Instagram / tweak my age later.” Relying on self‑declared ages isn’t enough, and platforms should block age changes without proper checks and prevent quick re‑registration after removal.
Circumvention attempts are anticipated and should be limited by design, but if a young person slips through, the focus remains on removing the account safely, not punishing the child.
What this means in practice
For platforms (what they must do):
Detect and deactivate/remove under‑16 accounts with kindness, care and clear communication, including data‑download options and review/appeal.
Put age checks at sign‑up (with a non‑ID choice), use layered checks if needed, and prevent immediate re‑registration.
Monitor/limit circumvention (VPN detection, liveness, device/IP checks).
For young people:
If something goes wrong online, tell a trusted adult and report it in‑app. If your account is flagged by mistake, use the review process the platform must provide.
For parents/educators:
Reassure kids that they won’t be fined under these laws, and that speaking up is the safest way to get help. Platforms must provide clear information and support links when taking action on accounts.
Quick script you can use in class or with your kids
From 10 December 2025, social media companies, not kids, are responsible for making sure under‑16s don’t have accounts. If you’re under 16 and something goes wrong online, tell someone. You won’t be in legal trouble under these rules for speaking up. The company must remove under‑age accounts safely, let you save your stuff, and give you a way to challenge mistakes. Trying VPNs or using a parent’s photo is risky and often spotted. If you see a mistake or need help, report it in‑app and talk to a trusted adult.
For all of our free school and parent resources, click here:



