The story being told about AI

The story being told is smaller than what is actually happening.



When the chatter is all innovation, efficiency and extraordinary possibility, we need to ask bigger questions, because underneath it something else is always taking shape. What I have learned over my years in the industry is that every major technological shift follows the same pattern.

1. The technology arrives fast.

2. The excitement, innovation and positives dominate the media cycles.

3. The harm arrives with it, or very shortly after.

4. The rules that are supposed to protect people arrive last.

I watched it happen with the internet and social media, raising concerns in each of those waves because the pattern was already there, and I could see it clearly. Each time, the same gap opened between what the technology could do and what anyone was doing to make it safe. Each time, ordinary people paid the price while the regulatory frameworks were still being written.

The Organisation for Economic Co-operation and Development, the OECD, brings together 38 of the world's largest economies, including Australia, the United States, the United Kingdom, Canada, Japan, and most of Europe. It is the body that sits above individual governments and watches the big picture, tracking how countries are managing the challenges that affect all of us. The OECD is watching how governments around the world are responding to artificial intelligence, and what it is finding is something every parent and teacher deserves to know clearly. The gap between how fast AI is being built and deployed into our lives and how fast governments are developing meaningful rules to protect people from its harms is not closing. It is getting wider.

There is another body worth knowing about. The Global Partnership on Artificial Intelligence, known as GPAI and pronounced Gee-Pay, was created specifically to try to close that gap. It brings together more than 80 countries, alongside researchers, universities, and community organisations, with one shared goal: making sure AI is developed responsibly, safely, and in a way that respects the rights of ordinary people.

Despite all of that commitment, expertise, goodwill and international cooperation, the gap between what AI can do and what any government is actually doing to manage it remains.

At that level, and in those negotiations, there is often not much thought for the right now. Yet the people living through it are real people, in real homes, trying to make sense of a world that is changing faster than anyone is explaining it to them.

The clearest way to understand how slowly the rules move is to look at the European Union's AI Act. Europe proposed it in April 2021. It passed in 2024. Full enforcement does not begin until August 2026, and some rules will not apply until 2027. That is six years from idea to implementation, in a field where the technology has changed beyond recognition multiple times within that window. And Europe was the fastest.

The United States has no equivalent framework at all. And this matters enormously, because almost every major AI company in the world (OpenAI, Google, Meta, Microsoft, Anthropic) is headquartered there.

If you have heard me speak or read my previous writing, you will know I have been talking for years about a law written in 1996 called Section 230, which gave social media platforms nearly three decades of legal protection from being held accountable for the harm their products caused. It allowed an entire industry to grow, profit, and cause real damage to real families while remaining largely untouchable in court. Legal experts are now asking urgently whether AI companies could use similar legal arguments to avoid accountability for what their systems generate and do. The truthful answer is that nobody yet knows; it is being tested in the courts, case by case, right now. That is not a reassuring place to be.

Australia hasn’t yet created a single, dedicated law for artificial intelligence, so for now we’re relying on a patchwork of existing rules, such as privacy, online safety and criminal laws, alongside voluntary standards and cooperation from the companies building these systems. This past week’s meeting between Anthony Albanese and Anthropic reflects that approach: a Memorandum of Understanding that encourages information-sharing and goodwill, but isn’t legally binding. There are real safeguards in place, and regulators do have powers, but the more tailored rules for AI are still taking shape. In the meantime, AI has become part of everyday life in our schools, our workplaces and our homes far faster than our collective understanding of what it means or how it should be governed.

When you searched for something online today, AI decided what you saw first. When you called your bank or insurance company, there is a reasonable chance you were speaking to an AI system. When your child's school uses a platform to track learning and flag who might be struggling, AI is often making those assessments. When you scrolled through your social media feed, AI decided the order of everything you saw and everything you did not. When you applied for a loan, a rental property, or a job online, an AI system may have screened your application before any human ever looked at it. The navigation app this morning. The customer service chat window. The content your family was recommended on a streaming platform last night. All of it, AI.

Everyday people often do not know this is happening, and of course that is not an accident. These systems are designed to be invisible, because invisible systems do not get questioned.

If an AI system makes a decision about you, and that decision is wrong or unfair or based on biased data, you may never know it happened. You will just wonder why you did not get the call back. Why the job application you were sure you were well qualified for was rejected. Why the quote came back higher than you expected. There is no letter that says an algorithm decided. There is no process to appeal. In most cases right now, there is no law that says they even have to tell you. This is what the absence of binding rules actually means in everyday life. It is the quiet, invisible shaping of ordinary moments by systems that most people have never been introduced to and never consented to be assessed by.

A survey by the University of Melbourne and KPMG found that only thirty percent of Australians feel confident that current safeguards around artificial intelligence are adequate. Thirty percent. That means most Australians are not yet convinced the settings are right. They are sensing a gap, even if they don’t know how to fully describe it. You can read that study here.

There are always things you can do to take a little bit of control back, starting here: talk to your children about AI the way you would talk to them about crossing a road: calmly, openly, and without alarm, but with enough honesty that they understand the world they are moving through.

Ask questions of your child’s school. What is their approach to AI? Are staff supported and trained? Are there clear, shared expectations for students?

When it comes to permissions and platforms, be cautious about agreeing to blanket consents. If something is broad, vague, or open-ended, it is reasonable to pause and ask what it actually allows. You have a right to understand how your child’s information, images and work may be used, now and in the future, before agreeing to it.

Stay informed, not to the point of overwhelm, but enough to recognise when something feels off. And if it does feel off, trust that instinct. Raise it. With the school. With your local government member. And, even though you may not get a response, with the platforms themselves. These systems do not shift on their own; they shift because people pay attention, ask questions, and expect better.

If you feel uneasy, that is not a failure of understanding. It is often a sign that you are paying attention. Parents and teachers are being asked to navigate a space that is changing quickly and not always explained clearly. Doing your best within that is not falling behind; it is exactly what responsible adults do.

This will take time. The systems and the rules are all still forming, and you will see a lot of fear and outrage popping up everywhere, from media and advocacy groups alike. Centre on the fact that change does not come from stepping back. It comes from people who stay present, stay curious, and keep asking thoughtful questions, even when the answers are incomplete, because that is where you still have influence.

If you’ve made it all the way to the end, thank you. These pieces take time (usually more than I ever expect), a lot of reading, and a fair bit of quiet thinking to turn complex policy and law into something that actually makes sense in real life. If you find this work helpful, grounding, or even just a little clarifying, subscribing is a simple way to support it. It helps me keep doing this slowly, carefully, and without rushing past the details that matter. No pressure, ever. But if you’d like to be part of keeping this kind of work going, you can choose from the options you will see when you click here.
