AI Tools are Finally Helping Teachers Breathe. Now Comes the Hard Part.
- Kirra Pendergast
- Apr 21
- 5 min read

Teachers are not looking for miracles. They’re looking for time. Not the philosophical kind. Not the romanticised version. They are looking for 27 minutes back from the endless documentation of student behaviour. They are looking for relief from inboxes that refill faster than a coffee cup in the staffroom. They are looking for lesson plans that don’t take four hours to create and get shredded by a single disengaged classroom. They are looking for one less form to fill, one less login, one less click. And for many of them, AI has started to feel like an answer to a question they didn’t even have time to ask.

That’s not a fantasy. That’s not hype. That is happening, right now, in classrooms where the lights flicker and the heating doesn’t always work. I see it. I see them. I am fortunate enough to work with educators around the world, and I get to see something the headlines rarely capture. Not fear. Not overwhelm. Not grand declarations about the future of learning. I see something quieter, more human, a visible exhale.
Let’s start by rejecting the smug cynicism that often surrounds AI adoption in education. It usually comes from people who haven’t stepped into a public school since their own graduation. They call it lazy, or worse, false innovation. They reduce it to trend-chasing, a shiny object in an already tech-sick system.
But here’s the truth no one wants to say out loud: many teachers are drowning. And the people mocking their lifeboats have never been in the water.
The average teacher spends 50 to 60 hours a week on their job, but only about half of that is direct teaching. The rest is prep, marking, admin, meetings, reports, supervision, crisis management, and emotionally supporting students whose needs are deeper and more complex than ever before. Add to this the collapse in respect for the profession, the surge in behavioural incidents, and the unrelenting scrutiny of social media parents, and you’ve got a system that eats its own.
So when a teacher finds a way to cut five hours off their weekly planning using generative AI, that’s not a shortcut. That’s survival. It is not “inauthentic” to use an AI tool to differentiate a reading comprehension task for three different literacy levels. It is smart and it is humane. It is the exact kind of decision-making we need in a system that has stretched teachers so thin they are breaking.
There’s a kind of joy I have seen come over a teacher when they realise an AI tool can write a parent update in under a minute, generate differentiated tasks in the time it takes the kettle to boil, or summarise a dense curriculum document into something they can actually use. It’s not the joy of novelty. It’s the joy of being seen, and finally supported, by something that doesn’t ask for more in return.
Educators are not out to overhaul the system; they are trying to survive it. They are showing up every day inside structures that are stretched thin by staff shortages, rising needs, outdated tools, and the bureaucratic weight of compliance culture. What they want is not transformation. They want time. Sanity. A little breathing room between the grind of expectations and the reality of what one person can actually do.
We do not measure progress by how futuristic it looks. We measure it by what it frees us to do. The ability to sit longer with a struggling student because your lesson planning took half the time. The headspace to reflect on your teaching practice because your reporting load didn’t consume your weekend. These are not small wins; they are the building blocks of retention, wellbeing, and quality education.
But here’s where we have to be honest: we cannot separate the relief AI brings from the responsibility it demands. Because every new tool that makes something easier also changes the terrain beneath our feet. When educators bring AI into their workflow, they are not just adopting a tool. They are taking on its risks, whether or not they’ve been trained to see them.
Generative AI systems do not exist in a vacuum. They are built on data, designed by companies, and deployed in contexts that are often poorly understood. That means they carry bias and they make mistakes. They reflect the values and assumptions of the people who build them, and the data they’ve been trained on. For educators, that matters. A lot.
That’s why risk literacy can no longer be an optional add-on. It has to be built into the rollout of every AI initiative in education. If a school introduces a tool, it must also introduce a clear, living policy on how that tool should be used, what data it collects, where that data goes, and what recourse an educator has if something goes wrong. These policies must be updated often. Not bi-annually. Often. Because the technology is changing monthly. And if we treat policy as a static document, something to appease procurement or satisfy a governance checklist, we are not managing risk; we are manufacturing it.
And just as urgently, we need to shift how we think about what educators teach. AI ethics cannot be a niche conversation reserved for Year 11 students. It must be a baseline for every student, from Grade 3 up. They are stepping into a world shaped by algorithms, predictive systems, and invisible design decisions. They need to understand power, privacy, fairness, and agency in a machine-mediated world. That starts with educators being equipped to teach those things, not from fear but from confidence.
That doesn’t mean every teacher becomes a computer scientist. It means they understand enough to ask good questions, model healthy skepticism, and show students that technology is not magic, it is made.
And what is made can be remade, if you know how to look under the hood.
So yes, there is joy and there is excitement. There is a genuine sense of momentum as AI begins to ease the pressure on educators who have carried too much for too long. But if we stop there, if we confuse utility with immunity, we miss the deeper opportunity.
The opportunity in front of us isn’t just about making the system smarter. It’s about making it safer. Smarter systems can automate tasks, reduce workloads, and offer personalised support at scale. But a system that is merely efficient without being ethical will only replicate and accelerate the harms we already struggle to contain.
Safety isn’t about limiting innovation. It’s about ensuring that what we build doesn’t come at the cost of the most vulnerable: the student mislabelled by an algorithm, the teacher held accountable by a system they don’t control, the community left out of the dataset altogether.
Speed is seductive. AI promises faster processes, quicker turnarounds, streamlined reporting. But speed without scrutiny is a trap. What we need is not just acceleration, but equity. Fairer systems demand that we slow down long enough to ask: fair for whom? Fair by whose standards? Fair in whose language, whose context, whose version of the truth? Because a process that saves time but deepens bias isn’t innovation. It’s negligence in fast-forward.

And yes, AI is giving educators breathing room. But breathing room is only useful if we use that breath well. If that extra hour means a deeper connection with a student, a sharper focus in the classroom, a little less burnout at the end of the week, then we’ve done something worth celebrating. But if we use that space to do more of the same, to double down on a system already creaking under its own contradictions, then we’ve simply automated the dysfunction.
Real progress is not measured by gains alone. It is measured by what we preserve in the process. Trust. Humanity. Autonomy. These are the things we cannot afford to lose, no matter how powerful the tools become.