Note: This post is neither a review nor a critique of the book Our Final Invention. I reference it merely as a launch pad for discussing some common topics.
A recent book (see reviews here and here) once again raised the spectre of a hyper-intelligent AI that could wipe out the human race. Worrying about the destruction of mankind by sentient machines is fast becoming a national pastime. The logic follows from the observation that so many decisions are already managed by algorithms collectively known as AI. As computing power continues its steady climb, it is only a matter of time before we create machines vastly smarter than ourselves. Surely we need to start preparing for this inevitable rise of the machines so as not to be taken by surprise. Otherwise we could end up as slaves or batteries to our robot overlords. And now that Google has hired Ray Kurzweil, former inventor and current futurist, to make this sci-fi story a reality, we should all brace ourselves for this inevitable war.
These sensational “predictions” should stay in the realm of science fiction, as there are myriad problems with this line of reasoning. Consider first that we already live in a world where mutually assured destruction has been a reality for half a century. Humans have been killing humans for millennia using technology and machines. As we move into the drone age, where humans can kill humans remotely, we have already created a hostile environment for humanity independent of sentient machines. It is therefore difficult to see how the current state of affairs is any better than the dystopian future presented to us. In other words, how is putting the safety of the world in the “hands” of a machine more dangerous than leaving it in the hands of a human? Machines aren’t aggressive, aren’t malicious, aren’t paranoid, don’t experience fear, don’t have agendas, don’t hold grudges, and don’t become drunk with power. Someone could, in principle, introduce these traits into the AI that controls the nuclear stockpile, but why would anyone do that? The history of our use of machines has been precisely the opposite: to remove these human traits from operational processes. Perhaps our fear of machines stems precisely from this recognition of human fallibility, since we are inherently irrational and inefficient. By extension, we could be completely wiped out by a logical robot army that optimizes away the human race in the name of efficiency.
The reality is that machines and AI represent a much safer world.
Machines traditionally offer consistency and efficiency over their human counterparts. Like many machines, AI algorithms (a label that loosely encompasses machine learning, evolutionary algorithms, artificial neural networks, and statistical methods) are used to compensate for errors or limitations in human perception and judgement. Autonomous cars are already far safer than human drivers because they are programmed to drive defensively and don’t suffer from boredom, distraction, fatigue, etc. This isn’t surprising: trains run by computers are more reliable and safer than their human-operated counterparts, as are airplanes, which rely on humans only for the tricky bits. Similarly, we trust elevators to behave predictably (unlike the ones in the Hitchhiker’s Guide). Manufacturing processes run by machines are far more efficient and safer than humans performing the same tasks. Automation and smart machines offer the possibility of a safe and reliable world free of much drudgery. (This is not to say that I’m an AI apologist like Kurzweil.)
The second point, and the reason for the title, is related to intelligence. Just because algorithms can make better decisions than humans doesn’t mean that they are intelligent, and certainly not that they are sentient. It simply means that for some process, a machine/algorithm can process data more effectively than a human performing the same task. Nobody would claim that a calculator is intelligent just because it can compute arithmetic faster than a human. The same is true of machine learning algorithms. This is what is meant by narrow, or weak, AI. Yet the fact that few people understand how these algorithms work may make them seem like magic (and closer to general, or strong, AI). They are indeed like magic, but only in the sense of smoke and mirrors. To the untrained eye, algorithms with futuristic-sounding names like random forests, support vector machines, and deep learning are able to emulate intelligence. Yet they are simply different ways of associating and separating data optimized against a particular dataset. By construction they are incapable of doing anything but this one task, because that is all they’ve been optimized for. Achieving general intelligence and competency over an arbitrary number of capabilities would require a level of sophistication and complexity that just doesn’t exist. Sure, there are examples like Watson that are able to understand human language and answer seemingly disparate questions (and even crack jokes), but again, trivia is a very specific domain. It is the magic of complexity interacting with our own imaginations that spurs us into extrapolating a future dominated by sentient machines.
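To make this concrete, here is a minimal sketch (pure Python, with a made-up toy dataset) of a perceptron, one of the oldest “learning” algorithms. The point is that the entire result of training is a couple of numbers tuned to separate one particular set of points. There is no understanding in it, and by construction it can do nothing else.

```python
# A perceptron is "narrow AI" in its purest form: weights optimized against
# one dataset, capable of exactly one task (separating these points).

def train_perceptron(data, epochs=20, lr=0.1):
    """Fit weights w and bias b so that sign(w.x + b) matches each label."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != label:  # update the weights only on mistakes
                w[0] += lr * label * x1
                w[1] += lr * label * x2
                b += lr * label
    return w, b

def classify(w, b, point):
    x1, x2 = point
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1

# Two toy clusters: the entire "world" this model will ever know.
data = [((1.0, 1.0), -1), ((1.5, 0.5), -1),
        ((4.0, 4.5), 1), ((5.0, 4.0), 1)]

w, b = train_perceptron(data)
print(classify(w, b, (1.2, 0.8)))   # -> -1 (near the first cluster)
print(classify(w, b, (4.5, 4.2)))   # ->  1 (near the second cluster)
```

The trained model is literally three floating-point numbers. Ask it about points near its training data and it answers consistently; ask it to do anything else, and the question doesn’t even make sense. The same is true, at vastly greater scale, of the fancier algorithms above.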
More immediately, it is already possible to construct a world where an individual’s life is completely dictated by machines and algorithms. However, this is materially different from a single sentient entity controlling our decisions. In the status quo, just because one is given recommendations on romantic partners, which restaurants to take them to, and the directions to get there does not mean one is bound to these recommendations. Our society’s rigid adherence to rules already gives us a doomsday scenario where people can fall into the cracks of the system and struggle to survive. It isn’t about sentient machines controlling our actions; rather, it’s about our own complacency in dealing with a complex world. In Daniel Kahneman’s book Thinking, Fast and Slow, he discusses the credulity of the reactive and intuitive “System 1” and the laziness of the analytical “System 2”. Problems arise when people stop thinking and blindly follow rules and systems. Whether a rule takes the form of a recommendation or a poorly conceived corporate policy is irrelevant.
While it is exciting to conceive of a world where we are conquered by our creations, the reality is that we are already far more dangerous to ourselves than the indeterminate future of sentient machines. The key is for all of us to break the cycle of intellectual laziness and embrace creativity and new ideas. This is the essence of the slow brood, which opens up the possibility of an enlightened world where technology continues to facilitate the advancement of humanity and enjoyment of life.