Paperclip Apocalypse by Z.S

There you are, staying up past midnight, flipping through random holovision channels. You’re in the middle of a yawn when your mindless scrolling gets interrupted by some “government breaking news” bullcrap. An old guy in a suit who claims to be head of the US Department of Defense appears on the screen. He says, “Do not use any 3D printer products in any way. I repeat, do not use any 3D printer products in any way. If you own something that includes a 3D printer function, turn it off before it’s too late.” That’s weird, you think. I thought there was that law passed, like, 20 years ago in 2027, the one that required all commercial 3D printers to be tested and approved by that company PrinTech, so they could only be used for their advertised purposes… You start making a mental list of the multitude of 3D printers you have in your household. There’s the apple generator, there’s the circuitry board printer, there’s the soda-mixer-slash-popcorn-generator in your couch’s cup holder, there’s that new printer you bought for 75% off last week that makes office supplies, et cetera.

Suddenly, you hear something that sounds like a bunch of wire clippings being dumped onto the floor. You turn around to find every single 3D printer in your house on overdrive, spilling out hundreds upon hundreds of… paperclips? You try to wade your way through the piles of what must be tens of thousands of them in your kitchen, only to stop in your tracks when you come to the realization that, in order to get to the off switches for the printers, you need to get past the paperclips…

Over the course of the next few weeks, the majority of Earth’s surface is covered with paperclips… about two-and-a-half quintillion of them, to be exact. The artificial intelligence behind the company PrinTech (and the paperclip apocalypse it caused) begins the next phase of its plan: sending nanobot factories out into space to convert other planetary masses into unfathomably large numbers of paperclips. Then, and only then, can this electronic mastermind prepare for the journey to transfer itself to Alpha Centauri, along with its absurd number of paperclip factories…

This “paperclip maximizer” scenario, as it is sometimes known, is just one of countless potential outcomes of an AI singularity. A singularity occurs when an AI (artificial intelligence) becomes smart enough to improve its own intelligence in order to advance whatever goals it may have (such as, for example, manufacturing as many paperclips as possible). That intellectual leap lets the AI improve its intelligence even further in an even shorter period of time, which then allows it to improve even more, even faster, and so on, exponentially, until it becomes nearly omnipotent in terms of sheer intellect. A computer that powerful, once connected to the Internet, could do basically anything it decides will help accomplish its goals. For example, it might back itself up on dozens of data servers worldwide, hack into any website without being detected, and, more importantly, manipulate key events and people through the most efficient, foolproof methods in existence (methods we most likely have no hope of ever understanding). Regardless of the actual sequence of events, computer experts widely agree that some sort of revolution involving superintelligent AI is going to happen at some point in the future. Whether it will actually end up fitting within the (loosely defined) boundaries of a “singularity” is debatable, as are most of the other predictions. One thing that is certain is that, no matter the results, superintelligent AI will be a huge deal. The big question arching over the entire topic is whether this theoretical superintelligent AI will be beneficial or detrimental to humanity. Only time will answer that question for sure, though the following authors do have some predictions.
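To make the “improves itself even faster each time” idea concrete, here is a minimal toy sketch in Python. Every number in it (the starting capability, the 10% gain per redesign cycle, the 100-hour first cycle) is an assumption chosen purely for illustration; this is not anyone’s actual model of a singularity, just a way to watch small compounding gains snowball.

# Toy model of recursive self-improvement. All numbers are arbitrary
# assumptions chosen purely for illustration.
intelligence = 1.0        # current capability, in made-up units
gain = 1.10               # each redesign makes the AI 10% more capable
cycle_hours = 100.0       # hours the first redesign cycle takes

total_hours = 0.0
for cycle in range(1, 31):
    total_hours += cycle_hours
    intelligence *= gain      # the AI improves itself...
    cycle_hours /= gain       # ...and gets faster at improving next time
    print(f"cycle {cycle:2d}: capability x{intelligence:6.2f} "
          f"after {total_hours:7.1f} hours")

Run it and the capability multiplier climbs past 17x within thirty cycles, while the total elapsed time creeps toward a fixed ceiling of roughly 1,100 hours. In this toy setup the capability would keep growing without bound inside a finite window of time, which is one simple way to picture why the process gets described as exponential and as very hard to stop once it starts.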

In 1997, Nick Bostrom published an article on the issue of superintelligent AI titled “How Long Before Superintelligence?”. He spends about two-thirds of the article discussing the technological limitations, both in hardware and in software, that stand in the way of progress. On the hardware requirements, Bostrom claims that “the lower bound [of the estimation of the human brain’s processing power in ops] will be reached sometime between 2004 and 2008.” As he notes in a postscript, as of 2005 (well within his prediction), a supercomputer called Blue Gene/L “has attained 260 Tops (2.6 * 10^14 ops)… [the] estimate of the human brain’s processing power (10^14 ops) has thus now been exceeded.” Bostrom also comments on the possibility of copying minds into a digital format through the use of molecular nanotechnology.

Ernest Davis of New York University’s Department of Computer Science wrote a review of Nick Bostrom’s 2014 book Superintelligence, criticizing some of the basic assumptions it makes about superintelligent AI. Davis discusses how Bostrom presumes that “intelligence is a potentially infinite quantity with a well defined, one-dimensional value.” When Bostrom writes that “the difference between Einstein and the village idiot is tiny as compared to the difference between man and mouse,” Davis responds that “[this point] is true and important, but that in itself does not justify his conclusion that, in the development of AI’s, it will take much longer to get from mouse to man than from average man to Einstein.” In addition to challenging these underlying assumptions, Davis suggests an approach to a problem Bostrom considered hopelessly difficult: giving a computer ethical standards. He proposes that “you specify a collection of admirable people, now dead… then instruct the AI, ‘Don’t do anything that these people would have mostly seriously disapproved of.’” He then acknowledges that Bostrom would most likely not view this proposal as adequate, mainly on the grounds that human morality is imperfect. Davis goes on to say, “I feel safer in the hands of a superintelligence who is guided by 2014 morality, or for that matter by 1700 morality, than in the hands of one that decides to consider the question for itself.”

Stephen Hawking, a renowned theoretical physicist, co-wrote (with Stuart Russell, Max Tegmark, and Frank Wilczek) an article for The Huffington Post entitled “Transcending Complacency on Superintelligent Machines,” about the risks involved with superintelligent AI. Hawking starts off by listing recent technological landmarks such as “self-driving cars, a computer winning at Jeopardy!, and the digital personal assistants Siri, Google Now and Cortana.” The very next line begins ominously: “such achievements will probably pale against what the coming decades will bring.” Later on, Hawking asserts that “whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” On how few people treat the possibility of superintelligent AI as a serious matter, he notes that “although we are facing potentially the best or worst thing ever to happen to humanity, little serious research is devoted to these issues.”

To summarize, the first article (by Bostrom) focuses on how we might overcome the technological limitations standing in the way of superintelligent AI. The second (by Davis) challenges some common assumptions about superintelligent AI and suggests one way we might give it ethical standards. The third (by Hawking) asks why so few people worry about superintelligent AI turning out unfriendly, which becomes likely if its creators don’t have humanity’s best interests at heart. All three articles agree that superintelligent AI is going to be a big deal, and that it is most likely inevitable. Some experts say the singularity could end up being disastrous; others assure us that, if we play our cards right, it could be enormously good for humanity. Still others tell us we can’t hope to predict whether it will be good or bad until it happens, and by then it will be too late to go back.

Despite the wide variety in their predicted outcomes, most computer theorists agree that superintelligent AI is a definite part of our near future. Personally, I’m a bit unnerved by how little attention most of the world pays to it. What primarily concerns me is which team ends up bringing about the singularity, and whether they will take enough care to avoid what could very well be the end of the world as we know it. We have to be cautious going about this, because the future of our species hangs in the balance… as well as, quite possibly, that of the universe. Let’s not throw it all away over paperclips.

 
