Gmail users, please listen up.
- Kirra Pendergast


If you don’t want your emails, chats, and digital habits feeding into Google’s AI systems (yes, even when it doesn’t say “AI” outright), there’s something buried in your settings you need to switch off.
Google isn’t going to wave a flag or drop you a notification about it.
These so-called smart features? That’s just AI hiding under another name. Predictive writing. Autocomplete. Auto-summarise. “Help me write.” They’re generative AI tools in everything but name. And they’re hoovering up your data to keep learning. Your data trains their models; your habits improve their products.
If that doesn’t sit right with you, here’s how to take back a little control.
Step 1: Turn off the smart features for Gmail, Chat, and Meet
Go to Gmail on your computer
Hit the settings gear, top right
Click See all settings
Scroll down until you find “Smart features”
Untick the box that lets Gmail, Chat, and Meet use smart features
It may kick you back out of settings, so you have to open them again each time. They don’t make it easy.
Step 2: Kill it for Google Workspace and other services too
Still in the General tab
Find “Google Workspace smart features”
Click through to Manage Workspace smart feature settings
Turn both toggles OFF
That’s it. Not particularly hard, but not obvious either, and most users won’t know to go looking unless someone tells them.
Now, depending on where you live (Switzerland, the UK, Japan, or the European Economic Area), these features may be off by default, because those regions have tighter data laws. The rest of us? We’re left fending for ourselves in an invisible game of opt-out.
The bigger issue here isn’t just privacy; it’s the quiet erosion of autonomy. These AI-infused features aren’t always helpful. They’re designed to reshape how you write, respond, and work. To nudge your behaviour, subtly and constantly, through dark patterns. And the more you use them, the more they learn. Not just about language patterns, but about you.
We’ve seen it over and over in the past few years as AI gets baked into everything: rarely named, never fully explained, no informed consent. Features are rolled out at speed, opt-outs are buried, permissions trace back to a box you once ticked saying “accept terms”, and it has become your job to monitor it all.
This is what happens when regulatory frameworks can’t keep up with product roadmaps, because the legislation was passed long before AI as we use it today existed. When companies aren’t afraid of penalties, and when user rights are treated as settings, not standards.
Unless that changes and AI enforcement becomes more than just a wishlist item in policy drafts, people will continue to be datafied by default without ever knowing what they gave up.