We cannot blame technology for this state of affairs.
It is people who are disappointing each other.
Technology merely enables us to create a mythology
in which this does not matter.
—Sherry Turkle
An algorithm is only as open-minded and smart
as the human who built it.
—Rana el Kaliouby
Be on your guard and very careful,
for you are walking about with your own downfall.
—Sirach 13:13
Any of my regular readers already knows my strong, even impassioned, love-hate relationship with digital technologies. I resisted owning a smartphone until 2014 because I had no desire to turn into one of the "alerts" junkies I watched proliferate around me. Yet I have designed and built multiple websites since the late 1990s for the practical reason that the Internet enables me to reach an audience greater than one. I finally purchased my Android phone (no evil Steve Jobs for me!) only because every business model on the planet assumed there was a miniature computer permanently attached to my fingertips, and I needed to function in society. Yet I have taught coding to my STEM students for well over a decade now for the simple reason that it engages the brain in some of the most rigorous and complex problem-solving available. Like I said: love-hate.
Part of the hate, though, is that the negative impact of our online lives on our actual lives is now well documented. Since the rise of digital technologies, clinical depression, loneliness, exhaustion, stress levels, and anxiety have all seen sharp increases in the general population, especially among our youth, all directly attributable to the use of these tools. Meanwhile, the markers for empathy have declined 40% over the past 30 years, with a corresponding rise in intolerance and bullying, again directly attributable to the use of digital technologies. There is even evidence that these devices have made us less economically productive (Homayoun).
Furthermore, a strong argument can be made that smartphones in particular actively contributed to our poor handling of the pandemic and the size of our death toll in this country (see The Unprepared Generation Part 1 and Part 2), and I have already written elsewhere about the risks they pose to education. Toss in a year of "Zoom School" and "Zoom Work," during which so many of us lived on our screens nearly 24/7, and is it any wonder that my sister, a clinical social worker who supervises case management for a large insurance company, reports a 21-28% increase in mental health referrals in any given month from March 2020 through May 2021?
What, though, is the causal link? Why does our use of digital technologies appear to generate so many negative consequences for so many individuals? The answer renowned computer scientist Rana el Kaliouby offers is this: our devices are making us functionally autistic. To understand how, we simply need to recognize the fundamentally critical role that emotions and their corresponding facial expressions play in communicating successfully and meaningfully with others. When we are on our devices, she argues, we are effectively always "face blind." These non-verbal cues are so essential to human interaction "that, even before we are born, we practice our smiles in our mother's womb" (p. 235); their total absence leaves us unable to judge or gauge how to behave appropriately in our digital exchanges, in much the way an individual on the autism spectrum struggles when interacting directly with other people. As she summarizes:
Early in my work, I realized that when it comes to recognizing and interpreting feelings, computers are functionally autistic: They can’t see or process emotion ‘data’ or respond to emotion cues…[therefore when employing them] without any real emotional connection, it’s easy to forget that we are talking to and about other human beings, and the absence of real-time social interaction twists and distorts our behavior. When it comes to the digital world, our computers have trained us to behave as if we lived in a world dominated by autism, where none of us can read one another’s emotional cues (pp. 9-10).
And since computers have become our dominant means for communication today, she concludes, we have all become functionally autistic in much of our interaction with one another.
el Kaliouby’s solution to this situation is, interestingly enough, more technology. She is one of the founding pioneers in the field of artificial emotional intelligence (Emotion AI), and her initial research was in fact aimed at developing an algorithm that could read and process the emotions in facial expressions to aid those with actual autism. Her goal, with the help of the Autism Research Centre at Cambridge, was to develop an “emotional prosthesis,” similar to Google Glass, that an individual on the autism spectrum could use to process the emotional expressions of others in real time, in much the same way that my corrective lenses enable me to see correctly.
Today, el Kaliouby heads the company Affectiva, spun off from her research at the renowned MIT Media Lab, and she and her colleagues are producing some of the most cutting-edge algorithms for artificial emotional intelligence in the world. Indeed, she has become a bit of an Emotion AI “evangelist” through her TED talks and her work with the PBS series NOVA, and her most recent focus is the impact of robots in society.
el Kaliouby recognizes that millions of current employment opportunities will either disappear or become fully automated in the near future, leaving only work that requires a person’s uniquely human EQ and CQ abilities (see Teaching Creativity). In addition, she grasps that as robots enter more and more of our daily lives (Alexa is a primitive example), there will be an ever greater need for them to possess some rudimentary capacity for empathy (her story of getting ticked off at a literalist Alexa is one to which we can all relate!). And, of course, being the “evangelist” that she is, el Kaliouby believes the answer is more and better artificial emotional intelligence.
First, she postulates that “Emotion AI will empower human beings to strengthen our uniquely human skills, the very skills that will be in great demand. This is how we retain our EQ in a tech-driven world” (p. 269). Second, she thinks we will have a better world when empathetic learning robots “democratize education for everyone, regardless of zip code or social or economic status;” when friendly medical avatars enable patients to ask the same questions repeatedly without feeling stupid, rushed, or judged; and when mental health providers can look for clinical signs of illness in speech patterns via smartphones because “after all, a lot of us today talk to our devices even more than we talk to our friends, family members, or health care professionals!” (p. 284).
It was reading that last claim that gave me pause. It reminded me of the words of another person associated with both MIT and digital technology whose outlook is not so rosy: Sherry Turkle. Turkle has asked “what are we missing in our lives together that leads us to prefer lives alone together?” (p. 285), and after reading el Kaliouby’s memoir, I am not sure the answer isn’t exactly what she claims it to be: empathy. I am not sure, however, that she and I mean the same thing by that term. el Kaliouby may claim that “through AI, Mabu [a home health companion robot] develops empathy for the user and, using that empathy, devises the best way to interact with him or her” (p. 275). But what is empathy? If empathy is simply a programmed behavioral response (robot recognizes an input as “angry voice”; responds with programmed soothing words as output), then all Emotion AI is doing is providing yet another technology to cater to our narcissism and our isolation from one another. As one young woman remarked to Turkle: why wouldn’t I trade in my boyfriend for a robot if the robot always demonstrated nothing but caring behavior toward me? (pp. 7-8).
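If "empathy" really is nothing more than a programmed stimulus-response mapping, the whole mechanism can be caricatured in a few lines of code. The sketch below is deliberately crude and entirely hypothetical (the labels and canned replies are invented, and no real Emotion AI system is this simple), but the logical shape is the point: a classified input retrieves a scripted output, and nothing is felt in between.

```python
# A caricature of "empathy" as a lookup table: classified emotional
# input in, scripted "caring" output back. All labels and responses
# here are invented for illustration.

SOOTHING_RESPONSES = {
    "angry": "I'm sorry you're upset. Let's take a breath together.",
    "sad": "That sounds really hard. I'm here for you.",
    "happy": "That's wonderful! Tell me more.",
}

def respond(detected_emotion: str) -> str:
    """Return the scripted reply for a classified emotion.

    Nothing is recreated or experienced here: the function neither
    shares the user's feeling nor changes because of it; it only
    retrieves a string.
    """
    return SOOTHING_RESPONSES.get(detected_emotion, "I see. Please go on.")

print(respond("angry"))  # the same canned comfort, every time
print(respond("bored"))  # unrecognized input falls back to a stock phrase
```

However sophisticated the classifier in front of it becomes, the table-lookup structure is what distinguishes this behavior from the genuine, self-transforming empathy described below.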
Genuine empathy is the capacity to generate within ourselves a recreation of the experience of the other person. When I see someone tearing up in response to something unkind that I have said, my brain immediately produces a similar feeling of emotional pain within me. I “get it;” I empathize, and out of that empathetic response, I am forced to change, to respond appropriately and seek healing. In fact, our brains even evolved dedicated cells, “mirror neurons,” for this very process: in a social species, they were and are critical for our very survival.
Any AI, though, will never have this capacity. It will only display the response its algorithm has been programmed to produce for the set of inputs it was trained to associate with that response. And while I agree with el Kaliouby that training AI to recognize emotional inputs adds value (having your smartphone, for example, “read” the text you are about to dash off in a fit of spite and warn you not to hit send would make the world a better place for all of us), I would counter that teaching individuals the mindfulness and self-reflection to think twice in the first place might add even more. Likewise, empathetic learning robots and medical avatars might improve educational and medical situations where resources are limited, but that is a “band-aid” for such problems. Why not invest the necessary human capital in the first place and truly solve them?
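That "warn before you hit send" idea is, at bottom, a simple gate: classify the draft's tone, and if it reads as spiteful, ask the sender to think twice. A minimal sketch follows, assuming a keyword list as a stand-in for the trained sentiment model a real system would use (the marker words and messages are invented for illustration):

```python
# Hypothetical "think before you send" gate. A crude word list
# stands in for a real trained sentiment classifier.

SPITEFUL_MARKERS = {"hate", "idiot", "never speak", "worst"}

def looks_spiteful(draft: str) -> bool:
    """Flag a draft whose text contains any hostile marker phrase."""
    text = draft.lower()
    return any(marker in text for marker in SPITEFUL_MARKERS)

def send_message(draft: str) -> str:
    """Gate the send action behind a tone check."""
    if looks_spiteful(draft):
        return "WARNING: this message reads as angry. Send anyway?"
    return "SENT"

print(send_message("You are the worst, I hate this."))  # triggers the warning
print(send_message("Thanks for the update!"))           # goes straight through
```

Note that even here the technology only interrupts the impulse; the mindfulness to heed the interruption still has to come from the human.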
As Turkle reminds us, because of our digital technologies, “we came to ask less of each other. We settled for less empathy, less attention, less care from human beings” (p. xxi), and if, as el Kaliouby suggests, we address these matters with Emotion AI, then do we risk normalizing potential pathologies? Perhaps el Kaliouby is correct: because digital technologies are here to stay, we will all need Emotion AI “prostheses” to function in the future, just as I need corrective lenses to see now. But where will the future EQ for developing these “prostheses” come from once only the “digital natives,” with their already underdeveloped EQs, are left to write the algorithms? And whose algorithms? el Kaliouby herself cautions that because China has “access to massive amounts of data on people collected by the government in ways a democratic society would not tolerate” (p. 303), it can produce AI algorithms more rapidly and more effectively than anywhere else in the world. What might a totalitarian Emotion AI look like?
In her work, Turkle draws an analogy between the technology that brought us sugared soda water and the technology that brought us digital devices. We embraced both with equal abandon, she points out, and it took us over a hundred years to realize that the former was not good for us at all and, in fact, distinctly unhealthy to consume. While I am sympathetic to el Kaliouby’s program and her clear passion for it (her work on autism has the potential to transform millions of lives, and an autonomous car that could protect drivers from themselves, like a smartphone that could, would most definitely make the world a better place!), my fear is that from the 35,000-foot level, what Emotion AI is really giving us is simply “diet soda.”
And anyone familiar with the link between long term exposure to artificial sweeteners and a significant increase in risk for developing type 2 diabetes later in life knows that diet soda isn’t much better for us than regular. Turkle’s analogy continues to haunt.
References
Homayoun, A. (2018) Social Media Wellness: Helping Tweens and Teens Thrive in an Unbalanced Digital World. Thousand Oaks: Corwin Press.
el Kaliouby, R. (2021) Girl Decoded: A Scientist’s Quest to Reclaim Our Humanity by Bringing Emotional Intelligence to Technology. New York: Currency.
Turkle, S. (2017) Alone Together: Why We Expect More from Technology and Less from Each Other, 3rd Edition. New York: Basic Books.