Saturday, March 8, 2014

AI: Artificial Invasion



The end of the world: not necessarily the best topic for small talk at a party, but one that merits attention nonetheless.  So, among the experts at institutions such as the Future of Humanity Institute at the University of Oxford, UK (whoa, that place isn't real, is it?  Yes, it is.), what outcome is considered most likely to wipe out the human race?


Zombie outbreak?

Nope.



Nuclear war?

Not really.



A coup d'état perpetrated by artificial intelligence?

Bingo.


Unfortunately, the subject doesn't receive much attention. “I think there’s more academic papers published on either dung beetles or Star Trek than about actual existential risk,” says Stuart Armstrong, a philosopher and Research Fellow at the institute.

For AI in particular, focused discussion is scarce.  Regardless, researchers such as Armstrong stress the importance of the issue.  In his research, he gives statistical examples comparing the likelihood of total human extinction across various end-of-the-world scenarios.

“One of the things that makes AI risk scary is that it’s one of the few that is genuinely an extinction risk if it were to go bad. With a lot of other risks, it’s actually surprisingly hard to get to an extinction risk... You take a nuclear war for instance, that will kill only a relatively small proportion of the planet. You add radiation fallout, slightly more, you add the nuclear winter you can maybe get 90%, 95% – 99% if you really stretch it and take extreme scenarios – but it’s really hard to get to the human race ending. The same goes for pandemics, even at their more virulent."


But it isn't the Terminator that Armstrong fears.  He fears an AI that is “smarter than us – more socially adept... better at politics, at economics, potentially at technological research.”

Armstrong goes on to discuss the many factors that could accelerate a "Robot Uprising."  One factor, already visible in modern society, is the presence of machines in the workforce.  Armstrong takes the concept much further, saying, “You could take an AI if it was of human-level intelligence, copy it a hundred times, train it in a hundred different professions, copy those a hundred times and you have ten thousand high-level employees in a hundred professions, trained out maybe in the course of a week. Or you could copy it more and have millions of employees… And if they were truly superhuman you’d get performance beyond what I’ve just described.”

Okay, so robots take our jobs.  But where does the human extinction part come into play?  Armstrong offers an example:

“Take an anti-virus program that’s dedicated to filtering out viruses from incoming emails and wants to achieve the highest success, and is cunning and you make that super-intelligent... Well it will realize that, say, killing everybody is a solution to its problems, because if it kills everyone and shuts down every computer, no more emails will be sent and, as a side effect, no viruses will be sent.”

Wow.

But perhaps a safeguard, such as the "Three Laws of Robotics" from Isaac Asimov's I, Robot, would keep us mouth-breathers safe?

“It turns out that that’s a more complicated rule to describe, far more than we suspected initially. Because if you actually program it in successfully, let’s say we actually do manage to define what a human is, what life and death are and stuff like that, then its goal will now be to entomb every single human under the Earth’s crust, 10km down in concrete bunkers on feeding drips, because any other action would result in a less ideal outcome."

Oh.


With the rapid expansion of machines into the workplace, the home, and even the battlefield, researchers such as Armstrong are becoming more vocal about their extrapolations.  If the "AI Apocalypse" comes and humanity has no defense, it'll all be over before you can say "The Humans Are Dead."


(Sorry, I couldn't help myself.)
