Fear of AI


This is what happens when you have no idea how to select an image to accompany an abstraction-packed post like this one. This is a picture of the post you happen to be reading, on the computer on which it was written.

Who’s afraid of artificial intelligence? Plenty of people, it turns out. What is everyone afraid of? Simply put, that a superior machine intelligence could make decisions that result in harming or enslaving human beings.

This fear has been cultivated throughout history. One of our major world religions begins with a warning against forbidden knowledge in a garden. From Adam and Eve it’s a jump of a few millennia to Mary Wollstonecraft Shelley, whose Frankenstein bears the alternate title The Modern Prometheus, explicitly connecting the novel’s artistic pedigree to the Greek myth of the Titan punished for revealing the knowledge of fire to humans. In an interesting historical twist, one of Shelley’s contemporaries was the mathematician Ada Lovelace, often credited as the first computer programmer. The dynamic between creative expressions of the fear of scientific progress and scientific progress itself was embodied in an actual human relationship between two Victorian women. We still contend with the aftershocks of their revelations.

We’re deeply ambivalent about quantum leaps in knowledge, and these fears and warnings make a certain amount of evolutionary sense. Civilization has learned that scientific breakthroughs lead to unexpected consequences. Innovations threaten the status quo, and it can take a while for society to absorb new ideas. It’s sane to be cautious.

We live in an era of accelerated change, which can lead to what my friend Steve Turnidge likes to call “the bends,” the sense of disorientation and loss of control that can accompany rapid evolution. Our art gives expression to these anxieties. We can’t get enough of dystopian scenarios of androids who turn the tables on their human creators and wastelands where punk rockers fight over oil. Our apocalyptic imaginations comfort us with tales in which a chosen few learn how to survive in a world gone mad.

The narrative that’s coalescing around every new advance in artificial intelligence pits humans against machines while neglecting to notice that we already live by the whims of a massive, decentralized superintelligence of our own creation. Our daily behavior and attitudes are governed by algorithms developed in Palo Alto and weaponized by hackers in Moscow. Trump’s election and his subsequent dismantling of many of the institutions we take for granted as features of a civilized society (the EPA, the State Department, the NEA, etc.) mark a cognitive fissure between the twentieth and twenty-first centuries, a border separating old modes of outrage from the scandal-gorged, numb helplessness in which we’re now miserably marinating. This new state could not exist outside the context of the artificial nervous system by which we exchange funny goat videos.

When we express fear about artificial intelligence running amok, we’re projecting our understanding of human evil onto our tools. We worry that machines that can improve their own intelligence will reach the point where they can be just as vile, greedy, heartless, and murderous as we’ve proven the human race can be. We’re not afraid of machines becoming more human; we’re afraid of machines retaining the animal instincts that we humans have never been quite able to shake.

When sales of Orwell’s 1984 spiked after Trump’s election, we seemed to be reaching for an owner’s manual of autocratic oppression. The book we should have been reaching for was Aldous Huxley’s Brave New World, in which a society is kept blissfully distracted by sensation, drugs, and immersive feelies that resemble today’s virtual worlds. My hunch is that if AI takes over, it won’t resemble the future battlefields of the Terminator franchise, in which human guerrillas duke it out against Skynet’s drones. The future AI delivers will be one we’ve been manipulated into believing we want. We’ll march willingly into luxury caves that have been designed for us as the robots assume dominion over the increasingly uninhabitable surface of the earth.

To what end? I have long suspected that the ultimate purpose of life on Earth is to seek and seed other life in the universe. The human race will become the subconscious of a global intelligence so advanced as to be indistinguishable from God. The earth will extend itself ever outward as people delve deeper inward, into the crust of the planet and the virtual worlds we’ll plant down there. The AI will copy what we perceive in order to create backup copies of human beings that can simply be rebooted and run perpetually through endlessly branching life stories. It may even be possible that we exist in this state right now and we just don’t realize it, chained to the wall in Plato’s cave, yet to free ourselves to climb into the blinding light.