Chapter 17
Artificial Intelligence: Not the Son of Frankenstein

A theme of this book is how advances in technology hold the potential for improving society. Perhaps no technology is more consequential than artificial intelligence: software that models human intelligence. But perhaps also, no technology is more controversial. To date, society improves only through human action, and the time, energy, and resources of people are necessarily limited. Artificial intelligence holds the possibility of amplifying human potential, putting an end to scarcity once and for all. But people worry: will artificially intelligent computers become hostile to people and kill us all? Today, people's livelihoods are tied directly to their (perceived) individual economic productivity, so will computers make all the jobs obsolete? Will computers do all the hard stuff, leaving us bored and lazy?

The vast majority of depictions of AI, whether in science fiction or from brow-furrowing social critics, rehash the Frankenstein theme. In the Frankenstein narrative, tailor-made for media sensationalism, scientists toy with powerful forces beyond their comprehension, with little regard for humanity, and their creation is bound to slip out of control and go berserk.

The authors of this book have devoted much of our lives to research in artificial intelligence. So we feel we are at least qualified to suggest an answer to the question, "Will AIs eat us for breakfast?". The short answer: No. Don't panic.

Will intelligent machines get along with human beings? Our answer to that question is exactly the same as our answer to the question of whether, in the future, human beings will get along with each other. You already know what we're going to say: Why can't we all just get along?

We have no choice, in some sense, but to build artificial intelligence in our own image. Principles for creating AI programs derive from generalizing our own experience in problem solving. The standard for whether AI programs succeed depends on comparing them to human behavior. The better they get, the more like us they will be. So the answer to whether AI is a positive or negative development depends on whether or not you are optimistic about human nature. We admit that there's some room for disagreement there, but we confess that we come down firmly on the optimistic side.

When people ask us if machines that are more intelligent than people pose a danger to humanity, we ask them back: "Do you think that people who are more intelligent than you pose a danger to humanity?". Let's not frame the question in terms of humans vs. machines, but this way: How do we have a society composed of various agents, some of whom are more intelligent than others?

Trouble is, our present political and economic systems are not so good at answering that question, either. All they can offer us is endless competition for power between humans. If the competition is about intelligence, then what you'll get is this: the less intelligent will become slaves of the more intelligent. That's a lousy answer. In that case, yes, computers will conquer humans. But there's a better way. Why can't we all, humans and machines, just get along? We want a society where, even if people have varying intelligence, everybody is valued. The more intelligent may have more to contribute in certain respects, but that doesn't mean they shouldn't cooperate with the less intelligent.
If the agents (human or machine) are truly intelligent, they'll also have the social intelligence to understand that cooperation is best for everyone, including themselves. As AI advances, we're increasingly learning that general intelligence requires social and emotional intelligence. Work is advancing on how to model these different kinds of intelligence [Goleman 1995] [Mason 2008] [Minsky 2006] [Picard 2000]. We think emotionally intelligent AIs won't be hostile AIs.

The problem with most of the Frankenstein scenarios is that they assume there will be enough technological progress to create an AI, but they don't posit much future progress in social intelligence. Our view is that there has been considerable progress in human relations in the last few hundred years. For example, democracy (despite all its flaws) is a vast improvement over feudalism. It would have been hard for a citizen of a feudal society to envision modern democracy. As AI researchers, we realize that cracking the problem of intelligence is still far away. So we think it likely that, in the meantime, future progress in the social sciences will address problems of interpersonal conflict that now seem insoluble. We hope that neuroscience and psychology will have advanced to the point where an insatiable lust for power is viewed as the mental illness that it is, and we will learn how to create an AI where the risk of it developing such an obsession is vanishingly low.

We do think the social critics are right in saying that there ought to be much more research into how to make AI safe and controllable, just as we have research today in automobile safety or aviation safety. In computer science, we call the problem of avoiding and mitigating negative outcomes a debugging problem. Research in debugging, program verification, controllability, and related areas is vital, yet very little research on these topics takes place compared to research on making computers run faster or building new features. We have been tireless in advocating research in debugging [Lieberman 1997], but our appeal has mostly fallen on deaf ears. Studies show that programmers spend, on average, more than half their time debugging, yet the debugging tools, if any, provided by most interactive programming environments have changed little since the dawn of computing. No wonder computers continually seem like they're on the verge of going berserk. AI technology needs to be enlisted to help with the debugging problem.

In short, we see two possible futures for humanity. In one, the current status quo, based on competition and scarcity, continues. In that case, AI will become a weapon of one or another of the competing factions, and it is bound to run amok sooner or later, just as the prophets of doom fear. It's too easy a game to come up with ways doomsday might occur. In fact, we already have loose in our society nonhuman machines that have more intelligence than any individual human being, are single-mindedly dedicated to amassing resources for themselves, and don't necessarily have the well-being of humans at heart. They're called corporations, governments, and religions. Some government trying to "defend itself" might invent a hyper-aggressive AI that goes out of control, or is imitated and bested by its adversaries. Some company trying to "increase profits" might create an AI smart enough to trick people out of all their money. Some apocalyptic religion might create an AI to hasten the apocalypse.
The real problem here is the adversarial relationship. In the future, we might not even have the social structures of militaristic governments, greedy corporations, organized religions, and criminal gangs that might be motivated to create malevolent AIs. We'd better not. Let's give ourselves the benefit of the doubt.

In the more optimistic scenario, people finally learn how to cooperate constructively with one another on a large scale. AI will help people arrive at this point, in part through AI systems for collaborative problem solving, optimization, negotiation, and conflict resolution. We don't believe that destructive competition and conflict are innate; rather, they are a response forced by scarcity and fear. AIs will solve the material scarcity problem, and education, psychology, and the other human sciences (assisted by AI) will address the fear. The reason that research in AI is so important is that it is the best route for bringing the positive scenario about. In that case, AIs won't want to destroy or "take over" humanity, like a James Bond movie villain. The AIs, too, will understand the wisdom of cooperation.