AI: Education’s Frenemy (Part 2)

O brave new world,
That has such people in ’t!

—The Tempest

When I first wrote about ChatGPT three years ago, concerns about AI in the classroom were just beginning to emerge.  There was much handwringing over fears of rampant cheating—especially in text-heavy disciplines such as English and history—and anxiety steadily mounted among educators that AI tutors might soon be coming for their jobs.  There was an almost universal apprehension that the digital age’s ultimate disruptor of education had perhaps finally arrived.  Lots of angst.

Which now seems positively quaint.

Because today, we have computers grading computers; brain scans showing AI inhibiting neurons; and thought leaders coining a new term, “anti-intelligence,” to describe what is happening to our youngest minds (more on this later).  Enormous data centers are sprouting like weeds—and, like their literal botanical counterparts, bringing economic costs and environmental harm with them—and for over a year now, I have received at least two offers a day on LinkedIn to earn hourly income training biology-focused artificial intelligences.

Yet the real eye-opening/face-slapping/jaw-dropping/pick-your-cliché moment for me recently came when I discovered that all my video lectures for my senior electives now open online with a searchable, 100% AI-generated transcript that I neither created nor gave anyone permission to create.  Google’s AI simply and spontaneously takes my entire audio track and renders the corresponding text on the screen to the right of the video—all in the few seconds it takes to start the typical 35-40 minute lecture.  Here’s a screenshot for any skeptic:

Now, I would hope that the implications of what Google is doing here would evoke at least a quiver of discomfort (if not outright abject terror!).  But if not, then my reader is probably itself an LLM to begin with, scouring the internet for its own training purposes, no emotional response required.  My writing has simply made it more proficient at invading what little remains of my barely existent privacy.

However, what disturbed me most upon discovering Google AI’s latest feature was not the act itself; it was the reality that here was one less opportunity for my students to have to think for themselves.  As English teacher Thomas David Moore sums it up:

There is nothing new about students trying to get one over on their teachers — there are probably cuneiform tablets about it — but when students use AI to generate what Shannon Vallor, philosopher of technology at the University of Edinburgh, calls a “truth-shaped word collage,” they are not only gaslighting the people trying to teach them, they are gaslighting themselves. In the words of Tulane professor Stan Oklobdzija, asking a computer to write an essay for you is the equivalent of “going to the gym and having robots lift the weights for you.”

And without opportunities for cognitive heavy lifting, brains atrophy; minds devolve; and the entire point of education is put at risk.

But that brings me back to what I mentioned earlier, the notion of “anti-intelligence.”  As its originator, John Nosta, describes it:

Anti-intelligence is not stupidity or some sort of cognitive failure. It’s the performance of knowing without understanding. It’s language severed from memory, context, and even intention. It’s what large language models (LLMs) do so well. They produce coherent outputs through pattern-matching rather than comprehension. Where human cognition builds meaning through the struggle of thought, anti-intelligence arrives fully formed.

Thus, for example, when Google automatically transcribes my lectures, my students no longer have to wrestle with grasping the cognitive story I am asking them to learn by watching and engaging with the video; they can simply look up the factoid they need for a particular question, without any concern for the larger intellectual context within which that question resides.  In other words, they no longer need to learn anything from my lectures; they merely need them as searchable databases.

Which is fine, I freely acknowledge, if you already know how to think.  I do not need to possess all human knowledge in my brain, because the critical thinking skills honed by decades of training enable me to employ the databases containing that knowledge for constructive cognitive purposes.  The problem is that anti-intelligence has become the “cognitive climate” in which the minds of today’s youngest children develop, and “when AI answers arrive instantly from childhood, it may affect whether certain cognitive capacities develop.”  Every theory of brain development is clear: children learn through a series of encounters with constraints that carry costs when mistakes are made.  Without both those costs and those constraints, they will fail to develop both the necessary knowledge and the intellectual capacity to make steadily more informed decisions.

Yet today’s children, as Nosta points out, “aren’t just using artificial intelligence (AI) as a study aid; they’re building their cognitive patterns in an environment where answers arrive before questions even fully form.”  We have never lived in such a world, and that is what makes the potential future of AI in education so troubling: the cognitive struggle childhood demands “doesn’t just make thinking harder; it makes thinking possible.”  If we remove that struggle, do we remove thinking?

It’s a disturbing (if not distressing) thought, especially given that 61% of Americans can’t name the three branches of government, half of our adults can’t read a book written at the eighth-grade level, and—my personal favorite—25% of us apparently still think the sun revolves around the earth rather than the other way around!  Add in the fact that nearly half of college graduates report never reading another book of any kind after graduation, and that significant majorities of today’s youth report being bored or otherwise disengaged at school, and the notion that AI could interfere even further with this situation is positively disheartening.  We are already a society where “the rejection of learned knowledge is often seen as an expression of personal liberty” and “hostility to education is now actively separating us from a shared reality” (Millet, p. 148).  If AI’s increasing ubiquity inhibits our collective cognitive capacity beyond the damage digital technologies and underfunding have already done to our educational systems, then we really are “sitting ducks for tyrants and profiteers, willing to believe whatever tales they choose to tell us” (Millet, p. 149).

Lest we “abandon all hope,” though, I need to point out that steadily increasing numbers of us in education—at all levels—have begun adapting to this new reality, as we always have, ever since those first aforementioned cuneiform days (it was hard to cheat in the strictly oral culture preceding them).  High schools and colleges alike report returning to blue books for exams and in-class writing for essays.  Handwritten lab notebooks are making a comeback in the sciences, and at least two universities, Purdue and Ohio State, have now made proficiency with AI in one’s chosen discipline a graduation requirement, both because individuals need the practical ability to distinguish truth from fiction and because you won’t be able to do your job in the future without such knowledge.  As The 74 reports of one microbiologist:

AI has already “revolutionized” her field. Recent research suggests that AI-enabled analysis of large genomic data sets, for instance, is allowing scientists to look at DNA directly from environmental samples, revealing entire ecosystems of previously unknown microbes.

In other words, there are valuable questions in need of answers that the human brain lacks the computing power to produce, but it is the human brain that has the critical thinking to put those answers to meaningful purpose.  AI can do things we can’t; we just need to stop surrendering to it the things we can do that it can’t.

The challenge, therefore, is to determine where AI has value in educational settings and where active resistance to it needs to take place.  For instance, if we know a climate of anti-intelligence threatens proper brain development, then we need to pay careful attention to how we construct pre-primary and early-childhood educational environments and experiences, and we need to teach parents not to park their toddlers in front of an iPad, no matter how exhausted the workday may have left them.  Knowing that screen time inhibits neural activity, we need to plan lessons that don’t require extensive use of computers, and we need either to collect cellphones at the start of the school day (as so many K-12 institutions are finally doing) or to ban them from being out in the classroom (as so many colleges and universities now do).

At the same time, where an AI program can enhance educational investigation in ways no human brain could ever accomplish, designing lessons to actively employ it adds value to the learning.  For example, if I want my students to explore Americans’ actual attitudes about gun control, I can have them tally how many times any type of restriction has been proposed at every level of legislature in the land.  Or if I want them to better understand a pastiche before making them hand-write their own, I can have them generate one from an author’s entire body of work.  Indeed, in my own discipline, the sciences, where genuinely enormous databases are the rule rather than the exception, the potential uses of AI to enhance student learning are almost too numerous to list here.  The bottom line is that there are plenty of positive possibilities for education’s frenemy in the classroom; they just require wise discernment on the part of the teacher.
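
To make that first example concrete, the exploration might look something like the following: a minimal sketch that assumes a hypothetical CSV export of legislative records (the filename and column names are my inventions for illustration; a real lesson would draw on an actual bill-tracking database):

```python
# Hypothetical exercise: count proposed firearm restrictions by level of
# government and by year.  "bills.csv" and its columns are invented here;
# substitute a real export from a legislative-tracking source.
import pandas as pd

bills = pd.read_csv("bills.csv")  # assumed columns: year, level, topic

# Keep only the proposals involving some type of firearm restriction.
restrictions = bills[bills["topic"] == "firearm restriction"]

# How many restrictions have been proposed at each level of government?
print(restrictions.groupby("level").size())

# And how has the volume of proposals changed over time?
print(restrictions.groupby("year").size())
```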

But that is perhaps the greatest challenge for dinosaurs such as me when it comes to AI and teaching, because I have zero interest in artificial intelligence.  Period.  In fact, I would go so far as to say I have negative interest; I’m actively averse to it, even.  The simple truth is that I relish difficult, hard thinking.  I enjoy the excitement of the intellectual uncertainty of being “lost” and finding my way “home.”  To state the obvious, I treasure the blank page and what it will demand of me to fill it.  I am “the life of the mind.”  Thus, learning that Google now spontaneously generates transcripts of my video lectures simply fills me with annoyance, since I will now have to reconfigure how I have my students employ those lectures in their learning.  I know I must adapt as an educator to this changing environment, as I have so many times before, and I know that I will.  But after 37 years of adapting, I’m starting to appreciate my grandfather’s attitude when VCRs arrived on the scene (and this from a man who was born before airplanes and lived to see the space shuttles): nope; done; don’t want to deal with this.

Maybe I can find an AI that can help.

References

Millet, L. (2024) We Loved It All: A Memory of Life.  New York: W. W. Norton & Company.

Moore, T. D. (Sept. 8, 2025) Jelly Beans for Grapes: How AI Can Erode Students’ Creativity.  EdSurge.  https://www.edsurge.com/news/2025-09-08-jelly-beans-for-grapes-how-ai-can-erode-students-creativity.

Nosta, J. (Jan. 22, 2026) Growing Up Anti-Intelligent.  Psychology Today.  https://www.psychologytoday.com/ca/blog/the-digital-self/202601/growing-up-anti-intelligent.

Toppo, G. (Feb. 17, 2026) At These Universities, Using AI Isn’t Shunned–It’s a Graduation Requirement.  The 74.  https://www.the74million.org/article/at-these-universities-using-ai-isnt-shunned-its-a-graduation-requirement/.

Catechism and AI Revisited

This is the nature of the razor-thin path of scientific reality:
there are a limited number of ways to be right,
but an infinite number of ways to be wrong.
Stay on it, and you see the world for what it is.
Step off, and all kinds of unreality become equally plausible.

—Phil Plait

Two stories about artificial intelligence recently caught my attention.  The first, out of the University of California, Irvine’s Digital Learning Lab, examined how well ChatGPT could grade English and history essays compared with an actual teacher.  The second, an editorial about the AI revolution in general, expounded on the very practical and financial boundaries all AI technologies are rapidly running up against.  Together, these stories sent me back to some themes from my very first posting about AI, and as I reflected on them, shared threads between the two rapidly became apparent that I want to discuss here today.

But first, a quick synopsis of each article.

In the story about grading, researcher Tamara Tate and her team compared ChatGPT’s ability to score 1,800 middle school and high school English and history essays against that of human writing experts.  Their aim was to see whether ChatGPT could help improve writing instruction by allowing teachers to assign more of it without increasing their own cognitive load.  If, for example, teachers could use AI “to grade any essay instantly with minimal expense and effort,” then more drafts could be assigned, thereby enabling student writing skills to improve.

What they found was a lot of variability, with ChatGPT’s scores matching the human scores between 76% and 89% of the time, which Tate summarized as meaning that ChatGPT was “roughly speaking, probably as good as an average busy teacher [and] certainly as good as an overburdened below-average teacher. [But that] ChatGPT isn’t yet accurate enough to be used on a high-stakes test or on an essay that would affect a final grade in a class.”  Furthermore, she cautioned that “writing instruction could ultimately suffer if teachers delegate too much grading to ChatGPT [because] seeing students’ incremental progress and common mistakes remain important for deciding what to teach next.” Bottom line, as the title of the article states, the idea “needs more work.”
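
To give a sense of what such matching rates mean, here is a minimal sketch of how human-AI agreement might be computed; the scores below are invented for illustration, and the study’s actual rubric and method may well differ:

```python
# Toy comparison of human and AI essay scores; all numbers are invented.
human_scores = [4, 3, 5, 2, 4, 3, 4, 5, 3, 2]
ai_scores    = [4, 3, 4, 2, 5, 3, 4, 4, 3, 3]

n = len(human_scores)

# Fraction of essays where the AI score matches the human score exactly.
exact = sum(h == a for h, a in zip(human_scores, ai_scores)) / n

# Rater-agreement studies often also report scores within one point.
adjacent = sum(abs(h - a) <= 1 for h, a in zip(human_scores, ai_scores)) / n

print(f"Exact agreement:    {exact:.0%}")     # 60% for this toy sample
print(f"Adjacent agreement: {adjacent:.0%}")  # 100% for this toy sample
```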

In the editorial about the AI revolution, technology columnist Christopher Mims makes a strong case that AI development is hitting three walls: a rapidly slowing pace of improvement, mounting and prohibitive costs, and what I will call the productivity boundary.  In terms of the pace of improvement, Mims points out that AI works:

by digesting huge volumes of [data], and it’s undeniable that up to now, simply adding more has led to better capabilities. But a major barrier to continuing down this path is that companies have already trained their AIs on more or less the entire internet, and are running out of additional data to hoover up. There aren’t 10 more internets’ worth of human-generated content for today’s AIs to inhale.

As for costs, training expenses are in the tens of billions of dollars while revenues from AI are, at best, in the billions of dollars—not a sustainable economic model.  Finally, the evidence is mounting that AI does not quite boost productivity the way its evangelists have touted because “while these systems can help some people do their jobs, they can’t actually replace them.”  Someone still has to check for AI hallucinations, and “this means they are unlikely to help companies save on payroll.”  Or to put it another way, “self-driving trucks have been slow to arrive, in part because it turns out that driving a truck is just one part of a truck driver’s job.”

Which brings me back to why I think these two articles share common threads of thought and what made me revisit my original posting about AI.  Both articles obviously point to AI’s limitations, and the grading one is simply a specific example of the “productivity boundary” Mims discusses.  Both strike a cautionary tone about AI being the be-all and end-all solution to “all life’s problems” that its many proselytizers claim it to be, and the grading one even brings up the economics of AI as it warns schools against jumping on the proverbial bandwagon and purchasing AI grading systems too quickly.

But it was the analogy of the truck driver that caused all the metaphorical gears in my head to click into place.  English and history teachers don’t just teach writing, and when they grade the writing, it is not just the quality of the writing they are grading.  They are not “just driving the truck.”  I am confident that ChatGPT could be a marvelous tool for catching run-on and incoherent sentences, disorganized paragraphs, and poor thesis statements, and if using it for that would give an already overburdened teacher the chance to work a few additional drafts of an essay into their class, I’m on board.  The only way you get better at writing is to write.
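
For the technically inclined, that narrow, mechanical role might look something like the sketch below; it assumes the OpenAI Python client, and the model name and prompt are merely my illustrative choices, not a tested grading rubric:

```python
# A minimal sketch of confining the AI to surface-level feedback, assuming
# the OpenAI Python client (pip install openai) and an API key set in the
# environment.  The model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()

def mechanical_feedback(essay_text: str) -> str:
    """Flag run-on or incoherent sentences, disorganized paragraphs, and
    weak thesis statements; judgments about the quality and originality
    of the ideas stay with the human teacher."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": ("You are a writing assistant.  Point out run-on or "
                         "incoherent sentences, disorganized paragraphs, and "
                         "unclear thesis statements.  Do not evaluate or "
                         "comment on the ideas themselves.")},
            {"role": "user", "content": essay_text},
        ],
    )
    return response.choices[0].message.content
```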

However, what ChatGPT cannot catch (and here is where I suspect at least some of the discrepancies in the grading research’s percentages come from) is the quality, indeed the originality, of the thought and ideas a given piece of writing expresses.  Only the human teacher can do that, because only the human teacher has actual intelligence as defined by biology: the capacity to use existing knowledge to solve an original, unique, and novel problem.  No AI can solve a problem it hasn’t already seen—which is part of what Mims hints at with his remark about “10 more internets”; only a human mind could create them—and that is why we will still need the human teacher to do the final grading.

Which brings me back to some of the themes I first addressed in Catechism and AI.  In looking back at that essay (where I first wrote about this misuse of the word “intelligence” in computer science), I realized that what the technological breakthroughs since then have made possible is a deepening of the illusion of intelligence.  Once something like ChatGPT could be trained on the entire internet, pretty much every prior human answer to a problem became part of its training, and so when you present it with a problem that is novel to you, it appears to solve that problem on its own.  It appears intelligent.  And since problems truly novel to everyone who has ever lived grow fewer by the day, AI can appear intelligent quite a bit of the time.

However, present it with a problem that is novel to both you and the AI and suddenly you get one of those hallucinations Mims points out you need an actual human to fix.  That remains the limitation of AI:  it cannot handle the truly novel, the genuinely unique.  Nor can it create it.  As I’ve written before, AI may be capable of producing a mimic of a Taylor Swift song, but it cannot produce an actual Taylor Swift song.  The challenge is in remembering that the mimic isn’t really Taylor Swift.

Again, here is where the technological breakthroughs since I first wrote about AI have deepened the illusion.  The content generated by AIs such as ChatGPT may look novel because that particular arrangement of words, images, etc. happens to be novel to you.  But somewhere, at some time, some human mind already put those same words, images, etc. together; some human mind created.  And you are just now on the receiving end of a tool that can be trained on everything human minds have created over the past 10,000 years.  The ultimate catechism!  And a lot of prior human creativity with which to fool someone.  We see a parallel in the evolution of stage magic: one hundred years ago, we only had the technology to create the illusion of a woman sawn in half; forty years ago, David Copperfield had the tools to make the Statue of Liberty appear to disappear.  None of it is any less illusory; it just gets harder to tell.

And where that fact may grow increasingly problematic is in the realm of another theme from my earlier writing: interpersonal relationships.  When I first wrote about AI five years ago, Her was only a movie; now it’s a reality.  For a monthly subscription, I can have the AI companion of my choice (romantic, platonic, and/or therapeutic) and have “someone” in my life who will never push back on me.  Add DoorDash, Amazon, and Netflix, and once I retire (or get a work-from-home internet job) I could spend the rest of my life in my own solipsistic bubble without any need for direct human contact ever again.  “Not gonna happen,” as they say, but the fact that I can write those words should be sobering (and shudder-inducing) to anyone reading them.  Because if we are ultimately successful at reducing our most basic humanity to an illusion, climate change and the next pandemic are going to be the least of our concerns.

Yet if Christopher Mims is correct, then AI may rapidly be approaching the limits of its illusion, and if Tate and her crew are correct, then watchful use of AI may help teachers give their students more practice improving their writing skills—and therefore their thinking skills—without adding to their grading loads.  So perhaps there is cause for optimism.  The key, I think, is always to remember that the “I” in AI is—at least for now—a biological falsehood, and what I now realize was missing from my earlier work on AI is the necessary emphasis on novelty as the core, the essence, of what it means to be intelligent.  That doesn’t mean the CS folks may not eventually pull off an algorithm that truly can create.  But for now, we do not live in that world, and we just need to keep reminding ourselves of that fact regularly.

References

Barshay, J. (May 20, 2024) AI Essay Grading Could Help Overburdened Teachers, But Researchers Say It Needs More Work.  KQED/MindShift.  https://www.kqed.org/mindshift/63809/ai-essay-grading-could-help-overburdened-teachers-but-researchers-say-it-needs-more-work.

Mims, C. (May 31, 2024) The AI Revolution Is Already Losing Steam.  The Wall Street Journal.  https://www.wsj.com/tech/ai/the-ai-revolution-is-already-losing-steam-a93478b1?mod=wsjhp_columnists_pos1.