
The Face Is Fake. The Risk Is Real.



Here we are, in 2025, where the most dangerous stranger your kid, your staff, or your leadership team will ever meet doesn’t even exist, and no one is teaching anyone how to spot them.

Because the new stranger isn’t a shady dude lurking on a playground or some random profile in a chatroom; it’s a live, AI-generated, real-time mimic of someone you trust, designed to look you in the eye, speak in a warm, familiar voice, and manipulate you into believing everything they say, because they know exactly what you want to hear and how to make you feel safe.


And if you think that sounds like sci-fi, if you think this is tomorrow’s concern, then you are already dangerously behind because this isn’t a hypothetical. This isn’t a test. This is now.

A $25 Million Lesson in Trust

In one of the most devastating and entirely avoidable frauds in recent memory, an employee at a multinational firm wired over US$25 million during what appeared to be a completely normal video call with senior leadership, a group of people they had worked with for years, people whose faces and voices they knew intimately, people whose authority they trusted without hesitation.

Except every single person in that room was fake: every face, every voice, every subtle facial tic and reassuring gesture. Not a recording. Not a video edit. A live, AI-driven, real-time impersonation, orchestrated using generative deepfake software that manipulated every detail of the call, allowing criminals to impersonate multiple executives at once, in perfect sync, in real time.

This wasn’t a case of clicking the wrong link. This wasn’t someone falling for a typo-ridden email from a fake prince. This was an employee doing exactly what they were trained to do (trust the system, trust the meeting, trust the face) and losing millions because the system didn’t have a single layer of defence against synthetic presence.

When Law Enforcement Plays God with Synthetic People

In a report that should have sent shockwaves through every civil liberties office in the world (and yet somehow barely made a ripple), it was revealed that U.S. police agencies have been deploying AI-generated personas inside online communities, protest networks, and group chats as part of covert surveillance operations: synthetic people with fake names, deepfake faces, and fully fabricated digital histories.

These fake people aren't just passive observers. They comment. They befriend. They provoke. They escalate.

They infiltrate online spaces under the pretence of being fellow activists, community members, even children, and they do it with full legal backing and zero requirement to disclose to anyone that they aren’t real, because right now, no law says they have to. And if you’re not deeply disturbed by that, you need to re-examine who you think is protected by the word “safety.”

The Tools Exist, The Barrier Is Zero, and The Clock Is Ticking

It takes nothing to become someone else online now. Thanks to free, open-source tools that have been fully operational since last year, anyone with a webcam and a bit of internet access can replace their face in real time during a live call, add voice modulation with frightening accuracy, and pass themselves off as anyone: your kid’s teacher, your school counsellor, your HR manager, your therapist, your mother. Just a few clicks, and a synthetic identity walks into a virtual room undetected. So while your organisation is still proudly doing “cyber safety awareness week” with posters and outdated phishing drills, the real threat has already arrived, and it doesn’t give an eff about your training manual.

You Want a Solution? Start By Admitting the System Is Broken

You cannot solve this with stricter email policies or by telling people to “be careful on Zoom.” You need to burn the old assumptions to the ground and start over with policies, tools, and mindsets that begin with this one simple truth:

If you cannot verify identity outside of face, voice, or familiarity, you are already compromised.

The safety theatre needs to end. The performative panels. The corporate checklists. The feel-good campaigns that mean nothing when a child is speaking to a synthetic predator through a school portal or when a company’s entire capital reserve disappears into the hands of a fake CFO.

This is not about being scared; it is about being ready, and right now, we’re not. When the next breach happens, and it will, you won’t hear alarm bells. You’ll hear a familiar voice. You’ll see a warm smile. You’ll feel relief because the person on the screen “gets you.” And then you’ll do what millions will do this year: you’ll trust the wrong person, in the wrong moment, because your system was built on illusion, and you never built the tools to spot the lie.

What you can do:

Audit Every System Where AI Can Enter Undetected

Ask these five questions in every tech review:

  • Can this platform be accessed via a fake identity?

  • Do we verify users beyond login?

  • Who’s responsible for identity checks, and how often do they fail?

  • Can our staff or students report suspicious interactions without retaliation?

  • What’s our fallback plan if trust is breached?

Normalise the Phrase “Let’s Confirm This Another Way”

This should be your go-to line, and it should never offend someone real.

“Let’s confirm this through another channel.”

Use it in professional emails, in your kid’s group chats, during video calls. Normalise verification as a form of care, not suspicion.
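For teams that want to make that habit harder to skip, it can help to see what “another channel” might look like when it is written into a workflow. The sketch below is purely illustrative, not a product, a library, or anyone’s real system: names like KNOWN_CONTACTS and send_challenge are hypothetical placeholders, and the contact details stand in for whatever your organisation actually registers in person. The idea it demonstrates is the one in the phrase above: a high-value request is only acted on once the requester reads back a one-time code delivered over a second, pre-registered channel, never on the strength of a face or a voice alone.

    # Illustrative sketch only: out-of-band confirmation for a high-value request.
    # KNOWN_CONTACTS, send_challenge and the phone number are hypothetical placeholders.

    import secrets

    # Contact channels registered and verified in person, long before any request arrives.
    KNOWN_CONTACTS = {
        "cfo@example.com": {"phone": "+00 0000 0000"},
    }

    def send_challenge(phone: str, code: str) -> None:
        # Placeholder for delivering a one-time code over a second channel (SMS, phone call).
        print(f"[out-of-band] one-time code sent to {phone}")

    def confirm_via_second_channel(requester: str, amount_usd: int) -> bool:
        # Never rely on the face or voice in the meeting: the requester must read back
        # a code delivered to a channel that was registered before the request existed.
        contact = KNOWN_CONTACTS.get(requester)
        if contact is None:
            return False  # unknown identity: stop, do not improvise a new channel

        code = secrets.token_hex(3)  # short random challenge, e.g. 'a3f91c'
        print(f"[out-of-band] confirming a ${amount_usd:,} request from {requester}")
        send_challenge(contact["phone"], code)
        reply = input("Code read back by the requester: ").strip()
        return secrets.compare_digest(reply, code)

    if __name__ == "__main__":
        if confirm_via_second_channel("cfo@example.com", 25_000_000):
            print("Verified through another channel; proceed.")
        else:
            print("Not verified; escalate before anything moves.")

However your version looks, the design choice is the same one the phrase encodes: the second channel has to exist before the request does, otherwise the impersonator will happily supply one for you.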
