
The Future Is Fake (Deepfake)



This week, three developments grabbed my attention, and they’re not disconnected. They expose a pattern: innovation outpacing responsibility, and harm waiting in the wings.

  1. Apple pivots from Vision Pro to AI glasses. Apple is halting its planned overhaul of the Vision Pro headset to reallocate talent toward smart glasses, devices meant to compete with Meta’s upcoming offerings (Reuters). Apple is now reportedly developing at least two new glasses models: one that pairs with an iPhone (no internal display) and another with embedded displays. The move signals a race in the wearable space, and it matters because every incremental step toward “normalising” wearable cameras makes it easier to justify eroding privacy and relaxing oversight.

  2. OpenAI releases Sora 2, a new frontier of synthetic video.

    OpenAI has launched Sora 2, a leap from its earlier Sora model, as a stand-alone iOS app in the U.S. and Canada. The app lets users not only generate video from text prompts but also “remix” content and drop themselves or others into scenes via a “cameo” feature, once the system has captured a short video and voice recording of a person’s likeness. OpenAI asserts that uploads and depictions of people will be limited initially, with restrictions on explicit content, impersonation, and deepfakes. This is not a trivial expansion. Sora is now a social tool — a kind of generative-video TikTok — with algorithmic feeds and remixing at its core. As the Washington Post put it: “Everything is fake” becomes the tagline for Silicon Valley’s new social frontier.

    The question I keep returning to is simple: if the release of Sora doesn’t trigger immediate global legislative pressure, cross-sector alignment, and a hard safety audit across the generative AI stack, then we have betrayed whatever we claimed to value more than “innovation”, namely harm prevention, consent, accountability, and human dignity.

  3. A video from @WhiteHatterTeam that haunts me. I want to acknowledge what I can’t show here (for safety and privacy reasons). The video, shared by @WhiteHatterTeam on Instagram, encapsulates the worst-case consequences of unregulated synthetic and surveillance technologies in everyday life. It shows how easy it is to meet someone online who is an entirely synthetic creation, and how that synthetic identity can be manipulated for sexual extortion. This is a chilling glimpse of what many already face.

    The attached video is not a sci-fi scenario. It is daily life in slow motion. It’s why I push so hard for clearer lines. If the tech enabling synthetic images, identity misuse, grooming vectors, and covert recording is creeping into everyday consumer devices, and we allow it, then we are complicit in the normalisation of harm.


While policymakers scramble to catch up, the PR engines push narratives of “lifestyle” and “boundless creativity.” Consider one recent Meta smart glasses promo: a celebrity from my own home town, filmed in sleek frames, drone footage, their child in shot, a caption promising the “future.” The message to me was clear: surveillance-caliber hardware, built without deep privacy or child-safety architecture, sold as aspirational fashion.

Those influencers may claim ignorance. But if you are being paid to promote a device and weren’t informed of its capabilities (covert recording, synthetic reproduction, identity misuse, grooming facilitation), then your role is not “just influencer.” You are enabling. You become complicit in the dissemination of tools already used to stalk, exploit, and silence.


We are past the point of debating “emerging” risks

This is not about the emergence of threats. We are deep into system failure — governance failure, accountability failure, policy failure. These risks are not hypothetical. They are embedded.

So here are the questions I demand that platforms and public figures — especially those hired to promote these technologies — answer publicly:


  • Do you maintain a risk register? If not, why not? If yes, when will it be audited, and when will it be made transparent?

  • What does “informed consent” mean in your context? How do you brief your promoters on consent when the hardware or software can covertly record, re-synthesize, or exploit likeness?

  • Were your promoters briefed on digital harm vectors? Sexual extortion, synthetic CSAM, identity theft, grooming, reputational damage — do they understand the ripple effects of their posts beyond follower counts?

  • Are you prepared to accept liability if harm results from your promotion? If your sponsored celebrity’s child is used in marketing for a device that facilitates harm, will you be held accountable?

We have had a decade of digital harm already. The toothpaste is out of the tube again. The window for ignorance is rapidly closing — or it should be.

What must happen next

1. Hard safety audits mandated end-to-end. Every generative AI model — no matter how “fun” or “creative” — must be audited with red-teaming, adversarial attack modeling, psychological safety review, and independent oversight.

2. Cross-sector legislative alignment. AI, consumer electronics, child safety, identity law, data protection — they all must interlock. We cannot let AI be regulated in isolation, while hardware and platform layers carry systemic risk.

3. Transparency and labelling by default. Every synthetic image, every AI-generated video, every “cameo” insert must carry immutable metadata and visible disclosure. Users must know what they are seeing and interacting with.

4. Platform and promoter accountability. If you promote a product with latent risks, you must be required to conduct (and publish) risk assessments and educational disclosures. The promotion must include warnings — the same way certain medical or financial products are regulated.

5. Child-first safety by design. Child safety cannot be an afterthought. Covert capture, synthetic impersonation of minors, grooming facilitation — those risks must be architected out before release.

The moment we normalise promoting these devices as fashion or aspiration is the moment we let surveillance, grooming, and synthetic exploitation inch into everyday life. That ship doesn’t need to sail before we draw a line.

What you allow into the world, you become responsible for. And if we fail to regulate this now, we will spend the next decade trying to recover — with real lives in the fallout.

— Thanks to @whitehatterteam for the video.

 
 
 
