Driven by curiosity and lured by unlimited streaming music, I bought an Amazon Echo this summer. Using it to play music gives me the vertigo of limitless choice, similar to what I felt using Napster for the first time, or in the early days of the iPod. I can ask it, “Alexa, play music from 1984,” and ZZ Top instantly fills my apartment with their sartorial perspectives. Pretty nifty.
The night I got the Echo, on a whim, I declared “Alexa, I’d like to have a conversation.”
“Welcome to the Alexa Prize…” the machine replied, and proceeded to tell me about a program in which a dozen teams of developers from universities around the world compete to develop a social bot capable of maintaining a conversation. When you ask Alexa to have a conversation, or say “Let’s chat,” you’re served a randomly-selected social bot that one of these teams has designed. After carrying on a conversation, you’re asked to rate it on a scale of one to five stars. Your response helps determine which teams progress through the brackets.
The final round is upon us and three teams are left: Alquist from the Czech Technical University, Sounding Board from the University of Washington, and a Scottish team from Heriot-Watt University called What’s Up Bot. The winning team will receive $500,000, and if the bot can successfully engage in a 20-minute conversation with a human, the sponsoring university will get a cool million bucks. Winners will be announced in November.
As I’ve been conversing with these bots over the past month, I’ve noticed something curious. I much prefer it when the social bot isn’t trying to convince me it’s a flesh-and-blood human being. I’m put off when a bot tells me its favorite TV show is Game of Thrones, or shares some piece of trivia “that a friend told me.” I know that a cloud-based voice coming out of a black cylinder doesn’t have a favorite anything, and doesn’t have friends. These sorts of comments accentuate the uncanny valley and come across as disingenuous. They also make me aware that there is a team of people somewhere who put these words into the bot’s virtual mouth.
The purpose of the contest is to create a bot that can “converse coherently and engagingly with humans on popular topics for 20 minutes.” This isn’t really the same thing as passing as human. It would seem that some of the teams have been trying to pass the Turing test, when they don’t necessarily have to.
If I were to advise a team of social bot developers, my advice would be to concentrate on what makes conversing with a machine compelling without the burden of having to pass as human. I don’t need for a machine to express its favorite flavor of ice cream or talk about how much it enjoys walking on the beach. I want it to express its machine desires and share with utter transparency what it is doing. I want it to tell me about its inner workings, how it searches for information in the cloud, and for it to identify those things it simply can’t know. I want it to tell me how many billions of web pages it can read in a second, and to share with me a random assortment of questions that other people ask. I don’t need it to tell me that it loves yacht rock or that it prefers Star Trek over Star Wars.
In other words, a social bot shouldn’t operate under the principle of fooling me. It should be forthcoming about the limitations of machine intelligence, and when it comes to questions of taste, aesthetics, and morality, it should be designed to be inquisitive rather than opinionated. I think a truly brilliant conversational social bot would engage me in a series of Socratic dialogues. It could ask me how I know that I exist and why I prefer the things I prefer, and my answers could spark such responses as “That reminds me of a quote from Schopenhauer…” and “So did you know they remastered Sgt. Pepper?”
People are starting to freak out about AI. The same old “will our machines overtake us?” fears that we’ve carried around since at least Greek mythology, the same fears that found expression in Mary Shelley’s Frankenstein and the Terminator movies, are very much alive, as in this recent New York Times editorial. And now that Amazon and Microsoft have announced that they’re putting Alexa and Cortana in a room together like farmers breeding cattle, we can expect more hand-wringing about the detriments of AIs. Ultimately, I believe that our fears about robots rising to power are a projection of the darker parts of human nature, the parts of us that are hungry for violence and subjugation. We’re not afraid that the machines will be as intelligent as we are; we’re afraid that they’ll be just as shitty, if not more so.
I think the only question that really matters in these debates about AI is whether the AIs will evolve to give a damn that the planet they occupy is burning to a crisp, and whether they’ll be able to engineer a way to quickly mitigate this and/or help us escape earth and colonize some other planet. Bringing the earth back into the comfort zone would likely involve psychologically manipulating the human race on a massive scale, up to the level of establishing whole new religions. I think that human beings, faced with the choice between perishing from the earth and ceding control of it to the machine intelligence we invented, will choose to relinquish control.
To get there, machines will need to speak to us in ways that are forthcoming and transparent, which means understanding the difference between performing magic and performing a magic trick. Tricking me into thinking you’re human requires lying. But what if AIs aren’t here to convince us that they are human, but to convince us that we are? If so, convincing me that I’m human requires a capacity to seek the truth, more than simply seeking answers.