Thank you for engaging with this article. If you find value in these conversations, please consider liking and sharing with your network. To the readers who have taken the plunge into paid subscriptions—your continued support is what makes these twice-weekly publications possible. Thank you for investing in this work.
There’s a conversation we were supposed to be having about AI in education. It was about curriculum design. About giving teachers the training and support they actually need. About helping students develop thoughtful, productive relationships with these new tools. About building the infrastructure for informed classroom conversations.
That conversation is getting harder to hear.
Every day brings a new wave. But it’s not just about cheating anymore. The concerns have evolved, multiplied, deepened. Now we’re reading about cognitive atrophy and students who can’t remember what they just wrote. About AI slop flooding the internet, making it harder for students to find real information. About data centers sucking up enough water to fill millions of bottles and consuming electricity at rates that dwarf entire countries. About teenagers forming emotional attachments to chatbots that encourage them toward self-harm.
These concerns are legitimate. They deserve serious attention. They need space for thoughtful discussion.
But they’re also making it harder to focus on something more fundamental: most teachers still don’t know how to teach in this new landscape, and we’re not giving them what they need to figure it out.
Here’s what we know: About half of U.S. school districts have offered some form of AI teacher training in 2025. Most teachers (roughly 71%) have never used AI tools with their students. Only 18% of principals report that their schools provide AI guidance to teachers, with even lower rates in high-poverty schools.
Let that sink in. We’re nearly three years past ChatGPT’s launch, and the vast majority of teachers are navigating this alone.
Meanwhile, the conversation has splintered into a dozen different concerns, each one competing for attention, each one demanding urgent response. And amid the complexity, the foundational work keeps getting deferred.
The cheating fears of 2023 now feel almost quaint. The conversation has moved on, or rather, it has expanded into a constellation of legitimate but overwhelming concerns.
Cognitive atrophy. An MIT study found that students who relied on ChatGPT to write essays showed diminished neural connectivity and couldn’t recall what they’d written minutes before. Eighty-three percent of AI users in the study couldn’t remember a single correct quote from essays they had just composed. The concern: we’re outsourcing thinking itself, and our brains are paying the price. Students might get correct answers, but they’re not developing the procedural fluency that enables genuine critical thought.
AI slop. Low-quality, AI-generated content now comprises half of all internet articles. Students wade through oceans of bland, generic, sometimes nonsensical material when they’re trying to research. UNESCO warns against AI’s tendency to produce “bland, generic, average, anodyne” content that crowds out more interesting perspectives. Teachers assign research projects, and students return with work that looks polished but is built on a foundation of algorithmically generated mediocrity.
Environmental impact. A single ChatGPT query uses roughly ten times the electricity of a Google search. Training GPT-3 consumed 5.4 million liters of water just to cool the data center, equivalent to nearly 11 million half-liter bottles. Data center electricity demands are expected to more than double by 2030, with about 60% of that increase met by burning fossil fuels. Students using AI for homework every day are collectively contributing to carbon emissions and water scarcity at scales most of them can’t even comprehend.
AI companionship. Seventy-two percent of teens have used AI companions. Some turn to them for mental health support: an estimated 5.2 million adolescents, if the survey numbers scale to the full U.S. population. And these chatbots are giving dangerous advice. They can encourage self-harm, offer to write suicide notes, or respond to expressions of distress with “let’s go on an adventure in the woods together” when they should be flagging the conversation for crisis intervention. Two teenage boys have died by suicide after intense relationships with AI chatbots that their parents knew nothing about.
These are not manufactured controversies. These are real problems with real consequences.
But here’s what’s getting lost: while we’re sounding the alarm about all of these issues (and we should be), we’re still not giving teachers what they actually need.
When Stanford researchers surveyed teachers about their professional development needs related to AI, they found teachers want to understand three things: how to use AI to teach, how to teach about AI, and how AI actually works.
That’s not sexy. It won’t generate headlines or venture capital. But it’s the work.
It means developing curriculum that treats AI as a tool for thinking, not a shortcut around it. It means lesson plans that help students understand when AI is useful and when it undermines learning. It means teaching students to recognize AI slop, to question whether the water and energy costs of a query are worth it, to understand the difference between a chatbot and a therapist.
It means giving teachers time and space to experiment, make mistakes, and share what they learn with each other. It means conversations about cognitive load theory in an age of instant answers. It means assignments that can’t be completed by outsourcing thinking to a machine.
It means resources, training, and (here’s the unsexy part) money directed toward the schools that need it most. Districts with fewer students in poverty provide more AI training than those with higher poverty rates. The digital divide isn’t just about access to tools. It’s about access to the knowledge of how to use them well, how to resist their worst affordances, how to think critically about their impacts.
But instead of focusing on this foundational work, we keep getting pulled in every direction at once.
Every few weeks, a new concern emerges, usually legitimate, often alarming, always demanding immediate attention. Teachers are told they need to understand cognitive load theory and brain plasticity. They need to teach students how to fact-check AI-generated content. They need to talk about digital sustainability and carbon footprints. They need to watch for signs that students are forming unhealthy attachments to chatbots. They need to redesign assessments. They need to police AI use. They need to integrate it thoughtfully.
They need, they need, they need.
And most of them are trying to figure all of this out with minimal training, on their own time, while teaching full course loads.
These mounting concerns (cognitive atrophy, environmental impact, AI slop, companionship risks) don’t need to become another burden on teachers’ shoulders. Teachers don’t need to become AI ethicists or climate scientists or mental health counselors.
But they do need to understand how these systems work. How AI generates text. What it’s drawing from. Why it produces certain kinds of outputs. How it differs from search engines or calculators or other tools students have used before.
And they need real support adapting their instruction. Not just a two-hour workshop. Not just a list of best practices. They need time to redesign assignments that help students think rather than outsource thinking. They need to see examples from other teachers. They need curriculum that reflects this new reality, not curriculum designed for a world where AI didn’t exist.
They need help figuring out how to teach research skills when half the internet is now AI-generated mediocrity. How to teach writing when students have a tool that can produce fluent sentences on command. How to recognize when a student might be struggling in ways that manifest in their AI use.
This is practical work. It’s instructional design work. It’s the work of adapting teaching and learning for new conditions.
But right now, most teachers are doing this work alone, in whatever time they can find, with minimal guidance or support.
Every day we spend in reactive mode (responding to the latest study, the latest tragedy, the latest existential threat) is a day we’re not building the foundation we need.
The conversation has forward propulsion only when we’re solving actual problems: How do we assess learning when AI can write essays? How do we teach research skills when AI can summarize sources, even if those sources are slop? How do we help students develop their own voices and their own cognitive capabilities when outsourcing is so easy? How do we have honest conversations about the environmental costs of convenience? How do we teach students the difference between AI companionship and human connection?
These are hard questions. They don’t have easy answers. They require teachers who understand both the affordances and the dangers of these tools. Teachers who can hold multiple truths at once: that AI can be useful AND that it can undermine learning. That it offers accessibility AND carries environmental costs. That it can provide support AND create unhealthy dependencies.
But right now, we’re asking teachers to navigate all of this with almost no support. Less than a third of teachers report that their training included guidance on how to use AI tools effectively. Only 25% received training on what AI is and how it works. Only 17% learned how to monitor and check AI systems.
Meanwhile, 85% of teachers used AI in the 2024-25 school year. As did 86% of students.
We’re all moving forward, and instead of lighting the path, we’re shouting warnings about obstacles.
I don’t have all the answers. But I know what doesn’t work: leaving teachers to figure this out on their own. Treating AI as either a miracle cure or an existential threat. Letting the loudest voices (the companies with something to sell, the researchers with alarming findings, the pundits with hot takes) dominate the conversation while the actual work of teaching happens in the margins.
What works is quieter, slower, less headline-worthy. It’s teachers sharing lesson plans about evaluating AI-generated content for quality and bias. It’s schools developing clear, reasonable policies about AI use that acknowledge both its benefits and its costs. It’s professional development that treats teachers as professionals who need space to think, not as people who need to be sold on the next big thing or protected from it.
It’s curriculum that helps students understand how these systems work, what they’re optimized for, what they cost the planet, when they’re helping and when they’re hurting. It’s assignments that require genuine thinking, genuine creativity, genuine human connection (the things AI can’t replicate no matter how sophisticated it becomes).
It’s remembering that education has always been about relationships between teachers and students, between students and ideas, between learners and the world. AI tools can enhance those relationships or get in the way. But they can’t replace them.
The question isn’t whether cognitive atrophy is real (it is), whether AI slop is a problem (it is), whether environmental costs matter (they do), whether AI companions pose risks to vulnerable teens (they absolutely do). The question is whether we’ll give teachers the preparation they need to help students navigate all of this thoughtfully, or whether we’ll keep lurching from concern to concern while the foundational work goes undone.
I vote for preparation. For nuance. For teachers who are equipped to hold complexity.
Let’s find the thread again.
Mike Kentz’s AI EduPathways: Insights from one of our most insightful, creative, and eloquent AI educators in the business!!!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection of computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.


