Deeper Learning

The problem with schools isn’t that they are no longer what they once were;
the problem is that they are precisely what they once were.

—Roland Barth

As mentioned in my most recent essay, I spent a portion of my summer reading the research of Jal Mehta and Sarah Fine of the Harvard Graduate School of Education, and in their work, they explore the status of what they call “deep learning” in America’s public high schools.  They characterize “deep learning” as having three fundamental properties:  mastery, identity, and creativity, and they elaborate on these properties as follows:

Mastery because you cannot learn something deeply without building up considerable skill and knowledge in that domain; identity because it is hard to become deeply learned at anything without becoming identified with the domain; and creativity because moving from taking in someone else’s ideas to developing your own is a big part of what makes learning “deep” (p. 299).

Or as educator David Perkins marvelously summarizes it: “playing the whole game at the junior level.”

Mehta and Fine also observe that the classrooms where deep learning takes place are spaces where “students had real choices, learning by doing was the norm, there was time to explore matters in depth, and students were welcomed as producers rather than receivers of knowledge” (p. 5).  They then describe the teachers who successfully generate this kind of learning as individuals with strong links between their sense of self and their sense of purpose, educators who:

sought to empower their students; they wanted them to be able to approach both their fields and other life situations as people who could act on the world and not simply have the world act on them.  While their hopes for their students as people came first, they cared about their students through their disciplines or subjects (p. 351; original emphasis).

However, while looking for these aspirational qualities of deep learning in nearly 100 schools scattered across the country and across the socioeconomic spectrum—ranging from progressive charter to International Baccalaureate to traditional comprehensive—what they actually found was not a lot of deep learning.  Over six years of research, more than 300 interviews of administrators, teachers, and students, and over 750 hours of classroom observations, the long-standing model for learning still dominated:  teacher as transmitter; pupil as recipient.  Indeed, “in classroom after classroom, students were not being challenged to think.  Roughly speaking, about 4 out of 5 classrooms we visited featured tasks that were in the bottom half of Bloom’s taxonomy” (pp. 24-25), and it was clear that performance was valued over learning—with “treaties” between students and teachers under which students basically did what their teachers asked and, in return, the teachers did not micromanage every aspect of the student experience.

Mehta and Fine did find examples of individual schools that did one of the three features of deep learning extremely well, and as I mentioned in Lemonade, they found individual teachers where deep learning was occurring in nearly every school.  But these individuals were consistently the isolated minority in their building, and no school was found where all three—mastery, identity, and creativity—were the governing paradigm for school life.

Which of course raises the question:  why not?

One answer they discovered was inertia.  The teacher-as-transmitter learning model has been around for millennia, while the student-as-active-problem-solver model is only roughly a century old.  Combine that with parent resistance—especially in higher-income schools, where parents associate their own success with their own traditional teacher-as-transmitter schooling—and there has not been a lot of political pressure to change.  In addition, with all “the state and district demands for breadth over depth and pressures for external credentialing,” what you have is a “core grammar of education that involves racing through a mass of information with few opportunities for choice or for exploring a subject in depth” (p. 249).

However, it is not only inertia that is preventing deep learning from being prevalent in our schools.  The most successful teachers providing it in their classrooms spoke of long, lonely journeys, with few role models and little mentoring.  Many had to earn enough of what my mother likes to refer to as “deviant’s credits” to enable them to buck the system, and Mehta and Fine are clear that “our most successful examples had to buffer themselves from external pressures” to conform (p. 44).  Add in the reality that “there is no world where a supervisor would watch 15 minutes of a surgery or a trial and make consequential decisions about a doctor’s or lawyer’s professional performance” (p. 395), and the absence of general respect for the profession leaves little external motivation to take that long, lonely journey to becoming a deeper learning educator.

Nor is that journey a simple one even for those who do undertake it.  Part of what Mehta and Fine identified in their research was that when examining the traits of the most effective teachers they observed, there was no “one-size-fits-all.”  Each deep learning teacher had struggled through discovering their purpose as educators in their own unique way and had their own individual understandings of how to “play the whole game at the junior level.” Hence, examples of deep learning educators seldom contained any overlapping features beyond the fact that each had embedded becoming a teacher into their sense of identity. Or to put it another way, in deep and important ways, each of these teachers was the curriculum in their respective classes.  Which, as the authors note, tends to frustrate those in education who are seeking best practices or simple technical solutions to confront the problem of deep learning’s absence from America’s schools.

Yet lest we put all this absence of deeper learning in America’s classrooms completely on the proverbial shoulders of the adults, our authors also discovered that student disengagement plays a significant role as well.  Chronic absenteeism, the allure of cellphones, the new cultural norm that showing up in person is optional…all contribute to the statistic that between 5th and 11th grade, the percentage of students reporting that they find school engaging drops from 75% to 32%, and “since students have to be at school to take the poll, even the 32% underestimates the level of disengagement, because the most disengaged have dropped out of school and are not in the data” (p. 27).  Furthermore, even when students are seemingly engaged, the lower levels of cognitive demand Mehta and Fine found in most of their classroom observations have the potential to lead to situations such as this one:

One teacher told us that when she tried to refer to material that students had successfully answered questions about on a state science exam only three months earlier, the students not only didn’t know the content but argued that they had never seen it before! (p. 200).

Which points to something the neuroscientist in me recognizes that I’m not certain Mehta and Fine do.  They are correct when they assert that the deep understanding that comes from deep learning “requires both a significant repository of factual knowledge and the ability to use that factual knowledge to develop interpretations, arguments, and conclusions” (p. 12).  But the first portion of that claim—“a significant repository of factual knowledge”—requires a large amount of time, energy, and mental investment to become embedded in the brain’s long-term memory (where, as we know from work on creativity, knowledge must reside or the brain literally won’t use it to think).  Indeed, one of the explanations frequently offered for the lack of deep learning in schools of all kinds is that students must master the basic skills and knowledge before they can engage the material more deeply.

However, Mehta and Fine rightly point out that the teachers they observed who employed deep learning “led with authentic complex tasks, and embedded within those tasks the basic skill-building needed to take on those tasks” (p. 326).  So deep learning is not antithetical to developing “a significant repository of factual knowledge.”  What is, is time.  If I’m “playing the whole game A at the junior level,” then—to paraphrase Oliver Burkeman—I’m choosing not “to play the whole game B at the junior level.”  I can’t.  As Burkeman wisely observes, any choice I make automatically precludes my other options, and therefore, the time spent to achieve deep learning in one discipline means a lack of time to achieve deep learning in another because our amount of time is finite and our brains simply work the way they do.

Hence, I will suggest that part of what may be keeping deeper learning from taking place more often in our public schools is the choices we have made about curriculum and what counts as being educated.  We can only accomplish the current breadth of disciplines at the expense of depth, and so we may need to make some challenging choices about what we want our children learning deeply if we want deeper understanding to occur in our schools—recognizing that such choices carry their own risks, as the world of computer science is learning the hard way right now, with AI replacing the entry-level coders currently coming out of college.  Crystal balls are always cloudy, and as Harvard economist David Deming points out, it can actually be “quite risky to go to school to learn a trade or a particular skill, because you don’t know what the future holds.  You need to try to think about acquiring a skill set that’s going to be future-proof and last you for 45 years of working life.”

Bringing me to one final thought on why Mehta and Fine found so little deep learning in the classrooms they visited, something they fully acknowledged right at the start of their work.  And that is the fact that:

Perhaps the most important reason that there has not been more deep learning in American schools: limited public demand for it.  The qualities associated with deep learning—thinking critically, grappling with nuance and complexity, reconsidering inherited assumptions, questioning authority, and embracing intellectual questions—are not widely embraced by the American people (p. 38).

We are fundamentally an anti-intellectual society, and in many ways, our public schools (and a lot of our private ones) simply reflect this fact back to us.

Why, though, should we care? I know; it’s a rhetorical question.  Anyone who has read my letters to my graduating seniors knows why we should be concerned about the lack of deeper learning in our schools, and anyone who has observed the first 8 months of the Trump presidency really knows why.  But I would like to give the final word this time to Mehta and Fine, whose book went to press right toward the end of Trump’s first term in the White House and whose final words in their book are:

Perhaps the most important role [schools] play is training our future citizens.  These are people who will need to be able to tell truth from fantasy, real news from fake news; they will need to understand that climate change is real; and they will need to be able to work with people from other countries to solve the next generation of problems.  If we cannot shift from a world where learning deeply is the exception rather than the rule, more is in jeopardy than our schools.  Nothing less than our society is at stake (p. 400).

References

Barshay, J. (Aug. 4, 2025) 7 Insights About Chronic Absenteeism, A New Normal for American Schools.  The Hechinger Report.  https://hechingerreport.org/proof-points-7-insights-chronic-absenteeism/.

Board of Editors (July/Aug. 2025) Education in the U.S. Needs Facts, Not Ideologies.  Scientific American.  pp. 88-89.

Horowitch, R. (June 2025) The Computer-Science Bubble is Bursting.  The Atlantic. https://www.theatlantic.com/economy/archive/2025/06/computer-science-bubble-ai/683242/.

Mehta, J. & Fine, S. (2019) In Search of Deeper Learning: the Quest to Remake the American High School.  Cambridge:  Harvard University Press.

The Dangers of Safetyism

Education should not be intended
to make people comfortable;
it is meant to make them think.

—Hanna Holborn Gray

In the early 1980s, the Canadian historian James Stokesbury wrote two one-volume histories of World War I and World War II.  They remain, in my opinion, among the best abbreviated examinations of these calamitous events, and I revisit my copies of both books when I am feeling the need for some perspective about the world they helped create and in which I have grown up and lived.  Each time I do, I find myself discovering some new theme I had not seen on previous reads, and when I revisited them this past month, what struck me this time was the almost rabid isolationism of the United States at the start of both wars and its impact on their outcomes.  I was reminded yet again that we are a highly reactionary society, not an anticipatory one, and that the cost of that can literally be tens of millions of human lives.

I share this bit of personal background because in my other recent readings I have found what I think is a new form of isolationism, and I believe we may be looking at a new reactionary response rather than an anticipatory one.  And no, I do not mean the isolationism within the MAGA movement and its cult leader, Donald Trump, which is impacting the current election.  This is an isolationism at a larger scale, one that is permeating our entire society, and it is something that First Amendment attorney Greg Lukianoff and social psychologist Jonathan Haidt call “safetyism.”

What is “safetyism”?  Lukianoff and Haidt define it as “a culture that allows the concept of ‘safety’ to creep so far that it equates emotional discomfort with physical danger” and does so to such a degree that it “encourages people to systematically protect one another from the very experiences embedded in daily life that they need in order to become strong and healthy” (Coddling, p. 29).  Examples include:

  • helicopter-parenting that schedules every minute of a child’s day to ensure that said child is never without some form of adult supervision;
  • school districts such as the one my niece and nephew attended where district policy would not allow them to enter their elementary school unless dropped off by car—even though said school was a five-minute walk from their house;
  • “the head teacher of an elementary school in East London [issuing] a rule that children must not even touch recently fallen snow, because touching could lead to snowballs” (Coddling, p. 236; original emphasis);
  • universities cancelling controversial speakers simply because some members of the campus community might find them disagreeable.

Hence, at its extreme, safetyism is the notion that even ideas are physically dangerous and must therefore be regulated to prevent exposure to them.  In other words, we must find a way to isolate each and every one of us from anything that might cause pain.

Sounds crazy, right?  Yet Lukianoff and Haidt point out that to some degree it follows a certain twisted logic, because as “we adapt to our new and improved circumstances, [we] then lower the bar for what we count as intolerable levels of discomfort and risk” (Coddling, pp. 13-14).  Modern industrial society with its medicine, abundance of food, sanitation systems, etc. has so removed us from the environment we evolved in that “coddled” isn’t even an adequate word to describe our lives today.  Yet that same modern industrial society bombards our paleolithic brains with news of inflation, school shootings, and climate change, and so our heightened fears for our safety can drive us to the crazy isolation of safetyism.

Creating some unintended and very negative consequences in the process.  As the subtitle to Coddling suggests, we have now raised an entire generation incapable of adulting; put the production of new knowledge at risk—since “to advance knowledge, we must sometimes suffer” (Kindly, p. 19); and even endangered our form of government, because “citizens of a democracy don’t suddenly develop this art on their eighteenth birthday” without many preceding years of free play and self-negotiated conflict (Coddling, p. 191).  Just as the isolationism of the 1910s and 1930s did, safetyism has put us at grave risk as a nation, and I think it is worth quoting Lukianoff and Haidt at length here:

After all, if focusing on big threats [car seats, reducing exposure to second hand smoke, etc.] produces such dividends, why not go further and make childhood as close to perfectly safe as possible? A problem with this kind of thinking is that when we attempt to produce perfectly safe systems, we almost inevitably create new and unforeseen problems.  For example, efforts to prevent financial instability by bailing out companies can lead to larger and more destructive crashes later on; efforts to protect forests by putting out small fires can allow dead wood to build up, eventually leading to catastrophic fires far worse than the sum of the smaller fires that were prevented.  Safety rules and programs—like most efforts to change complex systems—often have unintended consequences.  Sometimes these consequences are so bad that the intended beneficiaries are worse off than if nothing had been done at all…efforts to protect kids from risk by preventing them from gaining experience—such as walking to school, climbing a tree, or using sharp scissors—are different.  Such protections come with costs, as kids miss out on opportunities to learn skills, independence, and risk assessment (Coddling, p. 169).

How, though, did we get here?  What has allowed safetyism to arise and to thrive?  One answer Lukianoff and Haidt provide is what they call the three great “Untruths” that have taken hold in our society:  the Untruth of Fragility (“what doesn’t kill you makes you weaker”), the Untruth of Emotional Reasoning (“always trust your feelings”), and the Untruth of Us vs. Them (“life is a battle between Good people and Evil people”).  Together, Lukianoff and Haidt argue (and document), these three ideas have permeated much of the modern parenting literature, school policies from pre-K through PhD, and social media, and the consequence is large numbers of children and young adults who are not resilient, who are experiencing poor mental health, and who are permanently trapped in their own biases.

Added to that, Jonathan Rauch argues (and also documents), has been the rise over the past thirty years of what he calls the fundamentalist and humanitarian threats to research institutions of all kinds.  Namely (from the first) that all knowledge of any kind is absolutely relative and therefore equal in truth value and (from the second) that since “one person’s knowledge is another’s repression” (Kindly, p. 116), we must “set up authorities empowered to weed out hurtful ideas and speech” (Kindly, p. 131).  Objective truth withers and dies; new knowledge becomes impossible; and thoughtful, critically reflective individuals who might be able to challenge Lukianoff’s and Haidt’s three great Untruths become a thing of the past.

There is, of course, also the reality of the rise of social media and the consequent tribalism it has empowered that generates affirmation for the belief in the need to be safe against the “Other.”  As Lukianoff and Haidt point out:

The bottom line is that the human mind is prepared for tribalism.  Human evolution is not just the story of individuals competing with other individuals within each group; it’s also the story of groups competing with other groups—sometimes violently.  We are all descended from people who belong to groups that were consistently better at winning that competition.  Tribalism is our evolutionary endowment for banding together to prepare for intergroup conflict (Coddling, p. 58).

And when social media simultaneously provides both the intragroup identity and the intergroup conflict, you have the perfect recipe for safetyism.  In fact, Lukianoff and Haidt go so far in recognizing this reality that they have a term for the subgroup of Gen Z where we see this most: iGen, the group who grew up after Steve Jobs unleashed the iPhone on the world.

So now what?  This group of individuals bathed in safetyism since birth has only grown larger over time, and a strong case could be made that the general tenor of the last decade of election cycles stems from our losing the actual adults in the room.  As political scientists Steven Levitsky and Daniel Ziblatt have written, “[political] parties [have] come to view each other not as legitimate rivals but as dangerous enemies.  Losing ceases to be an accepted part of the political process and instead becomes a catastrophe” that must be protected against at all costs (Coddling, p. 131).

However, there are still a few of us around adulting our way through life, and a large number of us work in education, where the task has been, remains, and will always be—as the old folk wisdom puts it—preparing the child for the road, not the road for the child.  Not that schools shouldn’t be places of safety, but as former University of Chicago president Hanna Holborn Gray reminds us in the epigraph at the start of this essay—and ALL the brain research affirms—a certain degree of discomfort is necessary for learning to take place.  Critical thinking is simply the ability to connect one’s claims to reliable evidence properly, but developing the capacity to do it involves falling and skinning one’s mental knees over and over again until you can skate logic’s constraints with ease.  Learning hurts, and there will never be any way around that.

And while it is true that we can pave a bit of a child’s road for them, it has always been utterly self-defeating to think any of us could do more than that.  Food, clothing, shelter…love, caring, empathy…medicine, education, athletics…generational wealth…we can smooth some of a child’s road for them.  But we cannot prepare them for that call in the night that a loved one has died or the diagnosis of cancer or the failure of a marriage.  Each person’s road is unique, and so we can only truly help prepare them to travel it.

What’s more, if “it [has been] foolish to think one could clear the road for one’s child [in the past], before the internet, now it’s delusional” (Coddling, p. 237).  As I commented in Chapter 9, I have confronted the paradox these past 15+ years that even as my digital native students have arrived in my classroom less and less prepared for critical thinking, I have been steadily more successful at enabling them to think critically.  That I can do so, I think, is because of the subject I teach, and the how and why of that is what I will explore next time.

References

Lukianoff, G. & Haidt, J. (2018) The Coddling of the American Mind: How Good Intentions and Bad Ideas are Setting Up a Generation for Failure.  New York: Penguin Books.

Rauch, J. (2013) Kindly Inquisitors: The New Attacks on Free Thought (Expanded Edition).  Chicago: The University of Chicago Press.

Stokesbury, J. (1981) A Short History of World War I.  New York:  William Morrow and Company, Inc.

Stokesbury, J. (1980) A Short History of World War II.  New York:  William Morrow and Company, Inc.

Catechism and AI Revisited

This is the nature of the razor-thin path of scientific reality:
there are a limited number of ways to be right,
but an infinite number of ways to be wrong.
Stay on it, and you see the world for what it is.
Step off, and all kinds of unreality become equally plausible.

—Phil Plait

Two stories about artificial intelligence recently caught my attention.  The first, out of the University of California, Irvine’s Digital Learning Lab, examined how successful ChatGPT could be at grading English and history essays when compared to an actual teacher.  The second, an editorial about the AI revolution in general, expounded on the very practical and financial boundaries all AI technologies are rapidly running up against.  Together, these stories caused me to revisit some themes from my very first posting about AI, and as I reflected more on them, some shared threads between the two stories became apparent that I want to discuss here today.

But first, a quick synopsis of each article.

In the story about grading, researcher Tamara Tate and her team sought to compare ChatGPT’s ability to score 1,800 middle school and high school English and history essays against the ability of human writing experts to do so.  Their motive was to see if ChatGPT could help improve writing instruction by allowing teachers to assign more of it without increasing their own cognitive load.  If, for example, teachers could use AI “to grade any essay instantly with minimal expense and effort,” then more drafts could be assigned, thereby enabling the quality of student writing skills to improve. 

What they found was a lot of variability, with ChatGPT’s scores matching the human scores between 76% and 89% of the time, which Tate summarized as meaning that ChatGPT was “roughly speaking, probably as good as an average busy teacher [and] certainly as good as an overburdened below-average teacher. [But that] ChatGPT isn’t yet accurate enough to be used on a high-stakes test or on an essay that would affect a final grade in a class.”  Furthermore, she cautioned that “writing instruction could ultimately suffer if teachers delegate too much grading to ChatGPT [because] seeing students’ incremental progress and common mistakes remain important for deciding what to teach next.” Bottom line, as the title of the article states, the idea “needs more work.”

In the editorial about the AI revolution, technology columnist Christopher Mims makes a strong case that AI development is hitting three walls: a rapidly slowing pace of improvement, prohibitively mounting costs, and what I will call the productivity boundary.  In terms of development, Mims points out that AI works:

by digesting huge volumes of [data], and it’s undeniable that up to now, simply adding more has led to better capabilities. But a major barrier to continuing down this path is that companies have already trained their AIs on more or less the entire internet, and are running out of additional data to hoover up. There aren’t 10 more internets’ worth of human-generated content for today’s AIs to inhale.

As for costs, training expenses are in the tens of billions of dollars while revenues from AI are, at best, in the billions of dollars—not a sustainable economic model.  Finally, the evidence is mounting that AI does not quite boost productivity the way its evangelists have touted because “while these systems can help some people do their jobs, they can’t actually replace them.”  Someone still has to check for AI hallucinations, and “this means they are unlikely to help companies save on payroll.”  Or to put it another way, “self-driving trucks have been slow to arrive, in part because it turns out that driving a truck is just one part of a truck driver’s job.”

Which brings me back to why I think these two articles share common threads of thought and what made me revisit my original posting about AI.  Both articles obviously point to AI’s limitations, and the grading one is simply a specific example of the “productivity boundary” Mims discusses.  Both articles have a cautionary tone about AI being the be-all-end-all solution to “all life’s problems” the way its many proselytizers want to claim it can be, and the grading one even brings up the economics of AI as it warns about schools jumping on the proverbial bandwagon and purchasing AI grading systems too quickly.

But it was the analogy of the truck driver that caused all the metaphorical gears in my head to click into place.  English and history teachers don’t just teach writing, and when they grade the writing, it is not just the quality of the writing they are grading.  They are not “just driving the truck.”  I am confident that ChatGPT could be a marvelous tool for catching run-on and incoherent sentences or for catching disorganized paragraphs and poor thesis statements, and if using it for that would give an already overburdened teacher the chance to fit a few additional essay drafts into their class, I’m on board.  The only way you get better at writing is to write.

However, what ChatGPT cannot catch (and here is where I suspect at least some of those discrepancies in the percentages found in the grading research come from) is the quality, the originality, of the thought and ideas that a given piece of writing expresses.  Only the human teacher can do that, because only the human teacher has actual intelligence as defined by biology:  the capacity to use existing knowledge to solve an original, unique, and novel problem.  No AI can solve a problem it hasn’t already seen—which is part of what Mims hints at with his remark about “10 more internets”; only a human mind could create them—and that is why we will still need the human teacher to do the final grading.

Which brings me back to some of the themes I first addressed in Catechism and AI.  In looking back at that essay (where I first wrote about this misuse of the word “intelligence” in computer science), I realized that what the technological breakthroughs since then have made possible is the deepening of the illusion of intelligence.  Once something like ChatGPT could be trained on the entire internet, pretty much every prior human answer to a problem became part of its training data, and so when you present it with a problem that is novel to you, it appears to solve the problem on its own.  It appears intelligent.  And since problems truly novel to everyone who has ever lived grow fewer each day, AI can appear intelligent quite a bit of the time.

However, present it with a problem that is novel to both you and the AI and suddenly you get one of those hallucinations Mims points out you need an actual human to fix.  That remains the limitation of AI:  it cannot handle the truly novel, the genuinely unique.  Nor can it create it.  As I’ve written before, AI may be capable of producing a mimic of a Taylor Swift song, but it cannot produce an actual Taylor Swift song.  The challenge is in remembering that the mimic isn’t really Taylor Swift.

Again, here is where the technological breakthroughs since I first wrote about AI have deepened the illusion.  The content generated by AIs such as ChatGPT may look novel because that particular arrangement of words, images, etc. happens to look novel to you.  But somewhere, at some time, some human mind already put those same words, images, etc. together; some human mind created.  And you are simply on the receiving end of a tool that can now be trained on what every human mind has created over the past 10,000 years.  The ultimate catechism!  And a lot of prior human creativity with which to fool someone.  We see a parallel in the evolution of stage magic:  one hundred years ago, we only had the technology to create the illusion of a woman sawn in half; forty years ago, David Copperfield had the tools to make the Statue of Liberty appear to disappear.  None of it is any less illusory; it just gets harder to tell.

And where that fact may grow increasingly problematic is in the realm of another theme from my earlier writing:  interpersonal relationships.  When I first wrote about AI five years ago, Her was only a movie; now it’s a reality.  For a monthly subscription, I can have the AI companion of my choice (romantic, platonic, and/or therapeutic), and have “someone” in my life who will never push back on me.  Add DoorDash, Amazon, and Netflix, and I could spend the rest of my life once I retire (or get a work-from-home internet job) in my own solipsistic bubble without any need for any direct human contact ever again.  Not gonna happen, as they say, but the fact that I can write those words should be sobering (and shudder-inducing) to anyone reading them.  Because if we are ultimately successful at reducing our most basic humanity to an illusion, climate change and the next pandemic are going to be the least of our concerns.

Yet if Christopher Mims is correct, then AI may rapidly be approaching its illusory limits, and if Tate and her crew are correct, then watchful use of AI may help teachers give their students more practice improving their writing skills—and therefore their thinking skills—while not adding to their grading loads. So perhaps there is cause for optimism.  The key, I think, is always to remember that the “I” in AI is—at least for now—a biological falsehood, and what I now realize was missing from my earlier work on AI is the necessary emphasis on novelty being at the core, the essence of what it means to be intelligent.  That doesn’t mean the CS folks may not eventually pull off an algorithm that truly can create.  But for now, we do not live in that world, and we just need to keep reminding ourselves regularly of the reality of that fact.

References

Barshay, J. (May 20, 2024) AI Essay Grading Could Help Overburdened Teachers, But Researchers Say It Needs More Work.  KQED/Mind Shift  https://www.kqed.org/mindshift/63809/ai-essay-grading-could-help-overburdened-teachers-but-researchers-say-it-needs-more-work.

Mims, C. (May 31, 2024) The AI Revolution Is Already Losing Steam.  The Wall Street Journal.  https://www.wsj.com/tech/ai/the-ai-revolution-is-already-losing-steam-a93478b1?mod=wsjhp_columnists_pos1.