The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking

Author(s): Shannon Vallor
Release Date: June 3, 2024
Publisher/Imprint: Oxford University Press
Pages: 256

"What can be, unburdened by what has been"—a phrase popularized by Vice President Kamala Harris—points to a core aspect of being human: the power to reinvent ourselves.

“Auto-fabrication,” a term coined in 1939 by the Spanish philosopher José Ortega y Gasset, is our daily challenge. No cow awakens to ask, “What kind of cow can I be?” But many humans ask themselves, “How can I do better?”

In The AI Mirror, however, philosopher Shannon Vallor issues a dire warning: Big Tech is deploying AI tools across industries that are quickly diminishing that power of auto-fabrication. Vallor, a professor at the University of Edinburgh and a former AI ethicist at Google, employs the metaphor of a mirror to explain this pernicious effect.

She demonstrates how the values embedded in AI reflect only those of a small minority of Silicon Valley leaders and developers. When we interact with AI, it is their value-laden mirror we are gazing into. And that mirror is being replicated across health care, recruiting, lending, and other industries as AI tools are installed. Blind to our epiphanies and changes of heart, these tools make predictions, and often decisions, about who gets preauthorization for health care, lands a job interview, or secures a loan.

AI projects our futures “based on our past” and “the past of others like us,” explains Vallor. A university might reject your application because, in the past, others “like you” underperformed. The AI system “will predict that you will be in the future essentially who you have been.”

Among other things, that thwarts our ability to hold people accountable for discrimination. The AI decision makers are opaque, proprietary algorithms. We often can’t uncover their reasons for decisions. AI bypasses “the space of reasons,” says Vallor, eroding our ability to “act as a moral community.”

Meanwhile, tech companies are baffled by their own inventions, and many attempts to install guardrails have failed. Some prime examples: blocking neo-Nazi content also inadvertently censored Holocaust history; racial bias crept into a bail-granting system even after race was removed from its training data; and a tech recruitment tool quietly favored men over women, despite gender being absent from the training data, by latching onto proxies such as an applicant having led the women’s chess club.

Vallor slices into AI leaders for diverting funds away from solving those problems and from urgent threats like climate change. They overemphasize the danger of a coming “artificial general intelligence” (AGI) going rogue, she argues. Fears of rogue AGI are fantasies that “reveal only the narrow slice of human experience, told in our stories of conquest, domination, and empire.”

AI leaders like Geoffrey Hinton and Sam Altman describe a coming “superhuman AI” that will exceed us at “calculation, prediction, modeling, production, and problem-solving.” The label is unsuitable, says Vallor: the fearsome qualities of “superhuman” AI more closely resemble those of a virus than of any human being. “If this is what you think it is to be superhuman . . . you have completely forgotten yourself.”

Vallor also eviscerates the “longtermism” and “effective altruism” ideologies circulating in Silicon Valley for justifying present suffering in exchange for future benefits. AI systems are “as morally reliable as your friendly neighborhood psycho,” she writes, noting that Ask Delphi condoned eating babies if you’re really, really hungry. Yet Altman has written on his blog that we must “merge” with AI to remain the dominant species, and he has suggested handing the project of eliminating bias in AI systems over to the systems themselves, to remove the “emotional load.”

Vallor believes such views display a “lack of confidence in our own moral capacities.” They are a “backward-looking estimation of humanity’s own worth.”

Responsible AI requires “uncommon moral and political expertise,” says Vallor, but AI developers “may have no idea how to identify what constitutes a fair algorithm.” Consequently, this “tiny subset of a homogenous tech monoculture” is building AI that diminishes human agency.

We are increasingly forced to “reverse adapt” to narrow AI. Amazon, for instance, rewards warehouse workers for acting like robots: AI tools instruct them to bend just so to increase box-picking speed.

“The growing trend of reverse adaptation to AI explains why our lives will be worse if we don’t alter the current trajectory of AI development,” Vallor warns.

The AI Mirror is a critical read for AI leaders and ethicists, entrepreneurs and investors, journalists and concerned coders. It complements Frances Haugen’s The Power of One, on the harms of social media, and Jaron Lanier’s You Are Not a Gadget, on transcending the human condition rather than seeking salvation in Ray Kurzweil’s “singularity” (or, as Altman calls it, “the merge”).

Vallor believes we can still affect AI’s trajectory, if we reject a rising techno-theocracy with “machine gods made in our own diminished image.” We must employ “values of restraint, restoration, care, and repair,” the antithesis of tech’s traditional ethos to “move fast and break things.”

In light of Sam Altman’s recent blog line, “It’s usually okay to be wrong if you iterate quickly,” and headlines about OpenAI employees voicing concerns over the company’s commitment to AI safety, this may be an ideal time to ask Vallor’s philosophical questions.

Amid the flurry of writing about the AI industry’s race to the bottom, autonomous weaponry, disinformation, and the promise of abundant energy and health care, Vallor presses the pause button. She suggests that none of it may matter if AI is permitted to keep taking away our humanity: specifically, our abilities to reason and to remake ourselves in a way that is unburdened by what has been.