Note: This post is neither a review nor a critique of the book Our Final Invention. I reference it merely as a launch pad for discussing some common topics.
A recent book (see reviews here and here) once again raised the spectre of a hyper-intelligent AI that could wipe out the human race. Worrying about the destruction of mankind by sentient machines is fast becoming a national pastime. The logic follows from the observation that so many decisions are already managed by algorithms collectively known as AI. As computing power continues its steady climb, it is only a matter of time before we create machines vastly smarter than ourselves. Surely we need to start preparing for this inevitable rise of the machines so as not to be taken by surprise. Otherwise we could end up as slaves or batteries to our robot overlords. And now that Google has hired Ray Kurzweil, former inventor and current futurist, to make this sci-fi story a reality, we should all brace ourselves for the coming war.
These sensational “predictions” should stay in the realm of science fiction, as there are myriad problems with this line of reasoning. Consider first that we already live in a world where mutually assured destruction has been a reality for half a century. Humans have been killing humans for millennia using technology and machines. As we move into the drone age, where humans can kill humans remotely, we have already created a hostile environment for humanity independent of sentient machines. Therefore it is difficult to see how the current state of affairs is any better than the dystopian future presented to us. In other words, how is putting the safety of the world in the “hands” of a machine more dangerous than leaving it in the hands of a human? Machines aren’t aggressive, malicious, or paranoid; they don’t experience fear, don’t have agendas, don’t hold grudges, and don’t become drunk on power. It is possible that someone could introduce these traits into the AI that controls the nuclear stockpile, but why would anyone do that? The history of our use of machines has been precisely the opposite: to remove these human traits from operational processes. Perhaps our fear of machines stems precisely from this recognition of human fallibility, since we are inherently irrational and inefficient. By extension, the fear goes, we could be completely wiped out by a logical robot army that optimizes away the human race in the name of efficiency.
The reality is that machines and AI represent a much safer world.
Machines traditionally offer consistency and efficiency over their human counterparts. Like many machines, AI algorithms (a label that loosely encompasses machine learning, evolutionary algorithms, artificial neural networks, and statistical methods) are used to compensate for errors and limitations in human perception and judgement. Autonomous cars are already far safer than human drivers because they are programmed to drive defensively and don’t suffer from boredom, distraction, fatigue, etc. This isn’t surprising, since trains run by computers are more reliable and safer than those run by humans, as are airplanes, which rely on humans only for the tricky bits. Similarly, we trust elevators to behave predictably (unlike the ones in the Hitchhiker’s Guide). Manufacturing processes run by machines are far more efficient and safer than the same tasks performed by humans. Automation and smart machines offer the possibility of a safe and reliable world that is free of much drudgery. (This is not to say that I’m an AI apologist like Kurzweil.)
The second point, and the reason for the title, is related to intelligence. Just because algorithms can make better decisions than humans doesn’t mean that they are intelligent and certainly not sentient. It simply means that for some process, a machine/algorithm can process data more effectively than a human performing the same task. Nobody would claim that a calculator is intelligent just because it can compute arithmetic faster than a human. The same is true of machine learning algorithms. This is what is meant by narrow, or weak, AI. Yet the fact that few people understand how these algorithms work may make them seem like magic (and closer to general, or strong, AI). They are indeed like magic, but only in the sense of smoke and mirrors. To the untrained eye, algorithms with futuristic-sounding names like random forests, support vector machines, and deep learning are able to emulate intelligence. Yet they are simply different ways of associating and separating data, optimized against a particular dataset. By construction they are incapable of doing anything but this one task, because that is all they’ve been optimized for. Achieving general intelligence and competency over an arbitrary number of capabilities would require a level of sophistication and complexity that just doesn’t exist. Sure, there are examples like Watson that are able to understand human language and answer seemingly disparate questions (and even crack jokes), but again, trivia is a very specific domain. It is the magic of complexity interacting with our own imaginations that spurs us into extrapolating a future dominated by sentient machines.
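To make the narrowness concrete, here is a minimal sketch of what training one of these algorithms actually looks like. It assumes Python with scikit-learn and its bundled iris dataset (my choices for illustration, not anything referenced above): a random forest is fit to one dataset, does well on that one task, and is structurally incapable of doing anything else.

```python
# A minimal sketch of "narrow AI": a random forest is just a function
# optimized to separate one particular dataset, with no competence
# outside of it. (Assumes scikit-learn; the dataset choice is arbitrary.)
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Training" is numerical optimization against this one dataset: the
# model learns boundaries that separate these flower measurements.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# It does well on the narrow task it was optimized for...
print("Accuracy on held-out iris data:", model.score(X_test, y_test))

# ...but it has no notion of anything beyond that task. Feed it numbers
# from any other domain and it will still dutifully emit a flower
# species, because separating these four measurements is all it can do.
print("Prediction for nonsense input:",
      model.predict([[0.0, 99.0, -5.0, 42.0]]))
```

However impressive the accuracy, nothing in this object generalizes beyond the four columns it was fit on, and that is the gulf between narrow and general AI.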
More immediately, it is already possible to construct a world where an individual’s life is completely dictated by machines and algorithms. However, this is materially different from a single sentient entity controlling our decisions. In the status quo, just because one is given recommendations on romantic partners, which restaurants to take them to, and the directions to get there does not mean one is bound to those recommendations. Our society’s rigid adherence to rules already gives us a doomsday scenario, where people can fall through the cracks of the system and struggle to survive. It isn’t about sentient machines controlling our actions; rather, it’s about our own complacency in dealing with a complex world. In Daniel Kahneman’s book Thinking, Fast and Slow, he discusses the credulity of the reactive and intuitive “System 1” and the laziness of the analytical “System 2”. Problems arise when people stop thinking and blindly follow rules and systems. Whether a rule takes the form of a recommendation or a poorly conceived corporate policy is irrelevant.
While it is exciting to conceive of a world where we are conquered by our creations, the reality is that we are already far more dangerous to ourselves than any indeterminate future of sentient machines. The key is for all of us to break the cycle of intellectual laziness and embrace creativity and new ideas. This is the essence of the slow brood, which opens up the possibility of an enlightened world where technology continues to facilitate the advancement of humanity and the enjoyment of life.
As the author of ‘Our Final Invention,’ the book you cite at the beginning of your blog entry, I’d like to address a few of your points:
1) You wrote, “…we have already created a hostile environment for humanity independent of sentient machines. Therefore it is difficult to see how the current state of affairs is any better than the dystopian future presented to us.”
In other words, advanced AI won’t make anything worse. I disagree.
Advanced AI, in the form of data mining tools, is what has given the NSA awesome powers of surveillance, which it has used to abuse the US Constitution’s First and Fourth Amendments. In the near term, advanced AI will be weaponized in autonomous killer drones and battlefield robots – killing machines that leave humans out of the loop by design. Worldwide, 56 nations are developing battlefield robots, and AI will be their most critical component.
In upcoming decades machines as smart as humans (Artificial General Intelligence, or AGI) will be created by developers with the deepest pockets, which include the NSA, DARPA, Google, and IBM. Two of those organizations lead the world in the development of war technology. By buying Boston Dynamics and seven other robot companies, Google shows it wants to get into the battle-bot biz.
So while there will be positive outcomes to advanced AI like the self-driving cars you cite, the negative ones will threaten life, limb, and our very existence.
2) “Automation and smart machines offer the possibility of a safe and reliable world that is free of much drudgery.”
A recent article in the MIT Technology Review proposes that 45% of all jobs could be automated within 15–20 years. Economist Paul Krugman and author Martin Ford argue that information technologies already replace white-collar jobs as well as blue-collar manufacturing jobs and contribute significantly to the US’s current 8% unemployment rate.
Most people would like to avoid hard, repetitious work, but most would also like to earn enough to live. ‘Freedom from drudgery’ is a worn-out manufacturing trope that you should substantiate or discard.
3) “Just because algorithms can make better decisions than humans doesn’t mean that they are intelligent and certainly not sentient.”
AI development won’t freeze. The data processing, narrow AI applications you reference, including Watson, are snapshots in a fast moving, and accelerating, branch of technology. As you know, whatever limitations you claim about simple applications and cognitive architectures today won’t be true tomorrow.
The rest of your blog entry stumbles off the logic track, and I’ve tried hard to make sense of it. In your penultimate paragraph you argue that there is currently no superintelligence that governs our lives (no one I’ve read or spoken with claims there is) but that we can be guided in restaurant choices and the like by narrow AI. Falling through the economic cracks, according to you, is somehow linked to our adherence to rules. I don’t get that one, or the reference to Kahneman. And what is the “slow brood”? (I get it now – it’s a phrase you coined but didn’t define here.)
Here’s what I find ‘intellectually lazy’, a castigation you broadly level in your last paragraph. I find it lazy that you wrote a whole quasi-essay about the ideas you claim are embodied in my book ‘Our Final Invention’ without reading the book. If you had, you’d have discovered that fifty percent of it deals with human foibles and cognitive biases as they apply to AI and other technologies – the problems with us, not just the machines. Our innovation always runs far ahead of our stewardship. And that is as dangerous as malicious superintelligence will be, long before scientists even create it.
James,
Thanks for taking the time to write such a thoughtful response. This post wasn’t aimed directly at you or your book. Rather, I’ve been thinking about these topics for quite a while, and reading about your book prompted me to finally post some thoughts on them. It’s unfortunate that you think my comments are leveled squarely at you.
That said, I will gladly take the opportunity to engage in more discussion. Regarding your first point, you are conflating the technology of the NSA with AGI and human-level intelligence, a leap that is equally unsubstantiated. As someone knowledgeable about many so-called AI and machine learning methods, I find this a bit of a stretch. I see no problem in having a difference of opinion here, as we certainly do.
When artificial human-grade intelligence will come about is beside the point to me. The core of this discussion boils down to whether you believe that certain technologies and lines of scientific research are inherently bad, or whether it is the use of the technology that is bad. Arguably, designing machines and tools for the express purpose of killing results in bad technology, but the underlying technologies, like the Internet or geospatial satellites, are certainly not bad in themselves.
Regarding my reference to Kahneman and adherence to rules, I was continuing to illustrate that humanity already faces significant challenges (and that we are predisposed to them). The advancement of AI won’t really change that. We will continue to develop more sophisticated killing machines because humans are inclined to kill. One reason is the systems and rules we’ve created: many people blindly follow orders, don’t question, and are complacent in the face of injustice. I cited Kahneman because I think part of the reason humans are inclined to behave this way is characterized nicely by his model of System 1 and System 2.
So your book seems to explore similar topics, which is great to hear. Again, it’s unfortunate that you read this as a targeted critique. As that was not my intention, I’ll update the top of the post with a disclaimer.
Warm Regards,
Brian