In response to Chapter 4, one of my readers raised the issue of Artificial Intelligence (AI) and asked whether machine learning isn’t basically the catechism method I criticize so vigorously. I would like to reply by unpacking two ideas that I don’t really address anywhere else in this project but which do have an impact on how we educate children. The first: what do we mean by intelligence? The second: are learning and understanding the same thing?
The word “intelligence” is a term our society throws around rather loosely (and as we shall see, the notion of “AI” is a prime example). But in the biological sciences, the concept has a very precise definition: the capacity of animals to solve novel problems using pre-existing knowledge. It correlates significantly in the mammalian brain with the percentage of brain mass and volume devoted to the cerebrum, particularly the pre-frontal cortex (PFC), and in birds with the percentage devoted to the nidopallium caudolaterale (NCL). Humans, for example, have the highest PFC percentage amongst mammals, while corvids and parrots have the highest NCL percentages amongst birds (there’s a reason parrots can communicate with us!). In fact, the roughly 3% difference in cerebrum percentage between humans (~76%) and chimpanzees (~73%) is likely one of the significant reasons we build zoos for them and not the other way around.
This correlation also helps explain why intelligence changes and grows as we develop from infancy to adulthood. As I discuss at different points throughout this project, the brain grows additional neurons and synaptic connections at key stages of development, and it adds knowledge to its long-term memory (LTM) with each new life experience. Thus, as a child ages and matures, both the ability to solve novel problems and the store of pre-existing knowledge to draw on increase. It is yet another reason for having a growth mindset, and again why we need to expose children to as wide an array of problems and experiences as possible (i.e. the value of the liberal arts).
But this correlation has implications for other aspects of education as well. That 76% is highly consistent across the human population, which is why I have taught my students for years that, by biological definition, we are all equally intelligent: we all have the same hard-wired capacity to solve novel problems. Schools, though, have a long history of focusing on memory processing and retrieval rather than novel problem solving, too often wrongly equating an individual’s speed at LTM input and output with intelligence, and that mistake has led to a whole cascade of ill consequences.
For starters, since the hippocampus responsible for LTM access can be one of the more variable parts of the brain between individuals, equating memory speed with intelligence has produced the longstanding bad habit in education of calling those with quick memory processing “highly intelligent” or “gifted” and those with slow memory processing “less intelligent” or “remedial.” This, in turn, has frequently led to homogeneous grouping (i.e. “tracking”), where the “gifted” are presented with more interesting, challenging, and novel problems than the “remedial” receive (which can itself produce differences in PFC development that simply reinforce the grouping decisions), all of which can create significant differences in the quality of education and future career opportunities available to a given child. And as discussed in Chapter 8, those differences ultimately drive many of the educational inequities and socio-economic injustices in our society.
It is why I continue to argue that all children need and deserve complex novel problems to train their intelligence; they simply also need whatever time their individual brains require to solve them. Many of my most successful AP Biology students had challenges with memory processing. They simply took longer to solve the complex problems I assigned, and since one of them who has stayed in touch with me just graduated from medical school, I think it is safe to say her intelligence was never at issue.
Another consequence of equating intelligence with memory speed is that doing so only reinforces the use of the catechism as a teaching tool: here’s the question; here’s the answer; remember it; when I ask this question, give this answer. Those who can do that rapidly, as I have said before, look very successful in school, and those who can’t, do not. Yet there is minimal to zero novelty in this process. No one employing it is challenging a child to solve a new and previously unseen problem; in fact, a genuine problem is never presented at all, just a pre-determined question with a pre-determined answer. A catechistic approach to teaching therefore does nothing to develop intelligence because it never employs actual intelligence in the first place.
That is also precisely why the use of the word in “Artificial Intelligence” is such a misnomer. There is not an AI in the world that can tackle something truly novel, and to see why, we need to address my second question from earlier: are learning and understanding the same thing? The short answer is no, but to grasp why, it helps to learn a bit about how AI actually works.
As I argue in Chapter 4, the ability to interpret data and make judgments about it is the essence of the learning process, and in this regard, AI does engage in learning. It employs algorithms to generate a pattern from the data fed to it (e.g. pictures of cats) and then tests that pattern by predicting whether the next piece of data (e.g. a picture of a dog) matches. When it predicts correctly, it reinforces the algorithm’s use of that pattern for future judgments; when it predicts incorrectly, it uses the algorithm to refine the pattern and tries again. This is why it takes tremendous amounts of data (i.e. millions of pictures) before an AI can correctly identify a given picture as a cat (or anything else), and it is why AI learning is catechistic at heart: its purpose is fundamentally the purpose a catechism serves, namely to give the same correct answer to the same question, every time.
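For readers who want to see this predict-and-adjust loop in concrete terms, here is a minimal sketch in Python of one of the oldest versions of it, a single perceptron. Everything in it (the two made-up “features,” the labels, the learning rate) is invented purely for illustration; real image-recognizing AIs use millions of examples and far more complex models, but the catechistic rhythm is the same: predict, check against the pre-determined answer, adjust, repeat.

```python
# A toy sketch of the predict-and-adjust loop described above.
# All data here is made up for illustration: each "picture" is
# reduced to two numeric features, labeled 1 ("cat") or 0 ("not cat").

training_data = [
    ((0.9, 0.2), 1),  # cat-like features
    ((0.1, 0.8), 0),  # dog-like features
    ((0.8, 0.3), 1),
    ((0.2, 0.9), 0),
]

weights = [0.0, 0.0]  # the "pattern" the algorithm is building
bias = 0.0
learning_rate = 0.1

def predict(features):
    """Apply the current pattern: a weighted sum plus a threshold."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

# Repeated passes over the same question-and-answer pairs.
# A correct prediction leaves the pattern alone (reinforcement);
# an incorrect one nudges the weights toward the given answer (refinement).
for epoch in range(20):
    for features, label in training_data:
        error = label - predict(features)  # 0 when the prediction is correct
        if error != 0:
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, features)]
            bias += learning_rate * error

print(predict((0.85, 0.25)))  # prints 1: classified as "cat"
```

Notice that nothing in this loop ever confronts a problem it has not been explicitly corrected on; it simply converges on the answers it was told were right, which is exactly the catechism at work.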
However, while learning involves figuring out how to interpret data and make judgments about it, understanding is related to intelligence: when we understand something, we can make novel meaning from our learning, something an AI simply cannot do. As Lars Holmquist points out, AI cannot understand even something as basic as a sentence, which is why, for example, chatbots cannot carry on a successful conversation in natural language: they cannot understand a novel query.
But that does not mean AI is incapable of highly sophisticated responses to situations involving judgments about patterns. For example, in her research at M.I.T., Sherry Turkle has watched devices such as Paro (a robot resembling a baby harp seal) respond to voices and touch with the mimicked caring behaviors one would expect from a warm, fuzzy animal. Similarly, other AIs have learned complex games such as chess and Go well enough to play and beat actual human opponents, in no small part because the speed with which an AI can judge the patterns on the game board is vastly greater than that of the human brain.
Yet no matter how well it performs caring actions in response to input, a Paro is not capable of actual caring. Nor can game-playing AIs understand the games they are playing (they could not, for instance, modify the rules based on how a game is currently played; they can only play that game). In both situations, a computer is merely mimicking a set of behaviors, not originating them, and as Turkle points out from her research, that ability to mimic, no matter how sophisticated, cannot be interpreted as evidence of an actual mind at work. AI still fails the Turing test precisely because it is not another human being.
What happens, though, if we decide to start altering the parameters of the test? What if “acts as if it cares” becomes good enough? In researching Chapter 9 and the impact of technology on the teaching and learning process, I came to realize that today’s technology presents a potential danger far greater than the one it poses for education: its impact on how we engage in human relationships. What happens in a world of AI where your refrigerator (or any other networked device in your house) can act as if it is capable of being in a relationship with you when it has no capacity at all to understand what a relationship is? We may presently make fun of this idea in episodes of The Big Bang Theory, such as the one where the character Raj falls for the Siri on his iPhone, but how many of us now have an Alexa or an Echo at home and think of it, at least a little, as another person in the house?
Turkle calls it the “alive enough” (p. 35) approach to our interactions with our devices, and she shares stories of Paros standing in for human companionship in nursing homes (their actual, intentionally designed purpose, by the way!) and of a young woman she met at a psychology conference who “confided [to Turkle] that she would trade in her boyfriend ‘for a sophisticated Japanese robot’ if the robot would produce what she called ‘caring behavior’…a responsive robot, even one just exhibiting scripted behavior, seemed better to her than a demanding boyfriend” (p. 8). Texting already stands in for human conversation for an entire generation, and for some, online lives in artificial worlds are more satisfying than life IRL. When, Turkle and others ask, does “a pretend world of as-if feelings, as-if empathy” (p. xxiv) stop being seen as one of mental pathology and start being seen as one of normalcy?
Yet I think one of the reasons we are all finding the current shelter-in-place of the COVID-19 pandemic so challenging to tolerate is that evolution has hard-wired us to be social creatures who need actual physical interaction. Even self-imposed quarantine for a good cause still feels, like solitary confinement, a bit like punishment. We genuinely crave and genuinely need real, authentic companionship, and while brain scans now show that our favorite substitute, dogs, apparently really do love us, even they are not fully adequate to the task.
Simply put, no AI can ever understand how to love. But it is scary how many of us seem to be convincing ourselves, as the movie Her suggests, that faking it is good enough. We are already in danger of descending into total narcissism as a society; how far do we want to let AI take us down that rabbit hole before it is too late?
References
Güntürkün, O. (2020) The Surprising Power of the Avian Mind. Scientific American, January, pp. 48–55.
Herculano-Houzel, S. (2009) The Human Brain in Numbers: A Linearly Scaled-up Primate Brain. Frontiers in Human Neuroscience. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2776484/.
Herculano-Houzel, S. (2012) The Remarkable, Yet Not Extraordinary, Human Brain as a Scaled-up Primate Brain and Its Associated Cost. Proceedings of the National Academy of Sciences. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3386878/.
Holmquist, L. (2017) Intelligence on Tap: AI as a New Design Material. https://dl.acm.org/doi/fullHtml/10.1145/3085571.
NOVA. (2020) Dog Tales. https://www.pbs.org/wgbh/nova/video/dog-tales/.
Turkle, S. (2017) Alone Together: Why We Expect More from Technology and Less from Each Other, 3rd Edition. New York: Basic Books.
My daughter needed a math tutor in third grade because she struggled to memorize multiplication tables and was the slowest at the timed tests. Today she is graduating from Villanova with a degree in civil engineering and aspires to design bridges.
This commentary on intelligence speaks directly to individual differences in learning and problem-solving speed versus memory and speed of recall.
I suppose it is ideal to excel at both. But as her parent, I was patient through the rote-memory years when she struggled, and now I am in awe of her interest in, and talent for, problem solving. There is always a handheld calculator for 9×8!