
August 9, 2013

You Should Be Afraid of Artificial Intelligence

I, for one, do not welcome our new robot overlords.
Let me elaborate.
Writing about Artificial Intelligence is a challenge. By and large, there are two directions to take when discussing the subject: focus on the truly remarkable achievements of the technology, or dwell on the dangers of what could happen if machines reach the level of sentient AI, in which self-aware machines attain human-level intelligence.

This dichotomy irritates me. I don’t want to have to choose sides. As a technologist, I embrace the positive aspects of AI, when it helps advance medical or other technologies. As an individual, I reserve the right to be scared poop-less that by 2023 we might achieve AGI (Artificial General Intelligence) or Strong AI — machines that can successfully perform any intellectual task a person can.
Not to shock you with my mad math skills, but 2023 is 10 years away. Forget that robots are stealing our jobs, will be taking care of us when we’re older, and will be asking us to turn and cough in the medical arena.
In all of my research, I cannot find a definitive answer to the following question: How can we ensure humans will be able to control AI once it achieves human-level intelligence?

So, yes, I have control issues. I would prefer humans maintain autonomy over technologies that could achieve sentience, largely because I don’t see why machines would need to keep us around in the long run.

It’s not that robots are evil, per se. (Although Ken Jennings, the Jeopardy! champion who lost to IBM’s Watson, might feel differently.) It’s more that machines and robots are, for the moment, predominantly programmed by humans, who always carry biases.
In a report published by Human Rights Watch and Harvard Law School’s International Human Rights Clinic, "Losing Humanity: The Case Against Killer Robots," the authors write: “In its Unmanned Systems Integrated Roadmap FY2011-2036, the U.S. Department of Defense wrote that it ‘envisions unmanned systems seamlessly operating with manned systems while gradually reducing the degree of human control and decision making required for the unmanned portion of the force structure.’”
The "unmanned systems" refer to fully autonomous weapons that can select and engage targets without human intervention.
Who is deciding when a target should be engaged? Come to think of it, who’s deciding who is a target? Do we really want to surrender control of weaponized AI to machines, in the wake of situations like the cultural morass of the Trayvon Martin shooting? How would Florida’s Stand Your Ground law operate if enforced by weaponized AI police hooked into a city’s smart grid?
Short answer: choose Disneyland.

FUD Versus FAB


Image: Steve Mann

The term FUD stands for “fear, uncertainty and doubt.” It’s a pejorative phrase with origins in the tech industry, where companies use disinformation tactics against competitors.
FUD has since evolved, however, into a tedious phrase leveled at anyone questioning certain aspects of emerging technology, often followed by accusations of Luddism.
But I think people have the wrong picture of Luddites. In The New York Times, Paul Krugman recently wrote about this idea, noting the original Luddite movement was largely economically motivated, a response to the Industrial Revolution. The original Luddites weren’t ignorant of the technology of the day or of its ramifications (loss of work). They took up arms to slay the machines they felt were slaying them.
It’s not too far a stretch to say we’re in a similar environment, although the stakes are higher: strong AI arguably poses a wider swath of technological issues than threshing machines did.
So, as a fan of acronym creation, I’d like to posit the following phrase to counter the idea of FUD, especially as it relates to potentially human-ending technology developing without standards to govern its growth:

FAB: Fear, Awareness and Bias

The acronym distinguishes the blind, reactionary fear used to proactively spread false information from a warranted, human fear rooted in the bias that it’s okay to say we don’t want to be dominated, ruled, out-jobbed or simply ignored by sentient machines.
Does that mean I embrace relinquishment, the abandonment of AI-related research? Not altogether. The same Watson that won on Jeopardy! is now being utilized in pioneering oncological studies. Any kneejerk reaction to stop work in the AI space doesn’t make sense (and is, in any case, impossible).
But the moral implications of AI get murky when thinking about things like probabilistic reasoning, which helps computers move beyond Boolean (yes/no) logic to make decisions in the midst of uncertainty; for instance, whether to give a loan to an applicant based on his or her credit score.
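To make that distinction concrete, here’s a minimal sketch in Python. The cutoff and weighting are hypothetical, not any lender’s actual model; it simply contrasts a hard Boolean rule with a probabilistic one:

```python
import math

# Hypothetical values; a real lender would fit these to data.
CUTOFF = 650   # credit score where approval turns on
SCALE = 50.0   # how gradually probability rises around the cutoff

def boolean_decision(credit_score: int) -> bool:
    """Hard yes/no rule: approve only at or above the cutoff."""
    return credit_score >= CUTOFF

def probabilistic_decision(credit_score: int) -> float:
    """Logistic curve: map a score to an approval probability,
    weighing uncertainty instead of flipping at a single point."""
    return 1.0 / (1.0 + math.exp(-(credit_score - CUTOFF) / SCALE))

for score in (500, 650, 800):
    print(score, boolean_decision(score), round(probabilistic_decision(score), 2))
```

The Boolean version flips from “no” to “yes” at a single score; the probabilistic version assigns roughly a 5 percent approval chance at 500 and 95 percent at 800, which is exactly the kind of judgment under uncertainty that makes the morality question murkier.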
It is tempting to wonder what would happen if we spent more time focusing on helping each other directly, versus relying on machines to essentially grow brains for us.

FAB Ideas

Image: Clyde DeSouza

“Nuclear fission was announced to the world at Hiroshima.” So says James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, which offers a thorough description of the chief players in the larger AI space, along with an arresting sense of where we’re headed with machine learning: a world we can’t define.
For our interview, he cited the Manhattan Project and the development of nuclear fission as a precedent for how we should consider the present state of AI research:
We need to develop a science for understanding advanced Artificial Intelligence before we develop it further. It’s just common sense. Nuclear fission is used as an energy source and can be reliable. In the 1930s the focus of that technology was on energy production, initially, but an outcome of the research led directly to Hiroshima. We’re at a similar turning point in history, especially regarding weaponized machine learning. But with AI we can’t survive a fully realized human level intelligence that arrives as abruptly as Hiroshima.
Barrat also pointed out the difficulty regarding AI and anthropomorphism. It’s easy to imbue machines with human values, but by definition they’re silicon, not carbon. “Intelligent machines won’t love you any more than your toaster does,” he says. “As for enhancing human intelligence, a percentage of our population is also psychopathic. Giving people a device that enhances intelligence may not be a terrific idea.”

A recent article in The Boston Globe by Leon Neyfakh provides another angle on the concern over autonomous machines. Take Google’s self-driving car: what happens when a machine breaks the law?

Gabriel Hallevy, a professor of criminal law at Ono Academic College in Israel and author of the upcoming book When Robots Kill: Artificial Intelligence Under Criminal Law, adds to Barrat’s assessment: machines need not be evil to cause concern (or, in Hallevy’s estimation, to be criminally liable).
The issue isn’t morality, but awareness.
Hallevy notes in "Should We Put Robots on Trial?": “An offender — a human, a corporation or a robot — is not required to be evil. He is only required to be aware of what he’s doing…[which] involves nothing more than absorbing factual information about the world and accurately processing it.”

3 comments:

  1. Wow... all I could think of reading this was the Terminator movie! Prime example... the drones... don't they remind you of the Hunter-Killers? We all know how well they are working out... -_- Then the whole "robots aren't evil, should they be put on trial, can they break the law?" What was that Will Smith movie? OH YA! I, Robot! Things that make you go hmmm...?

  2. everyone should get a personal EMP device!

    ROFL, captcha says "please show us you're not a robot"!!!! lol

    Replies
    1. Personal EMP devices aren't sold, especially to the public, because of the high amount of mischief and damage that could be done to the big corporations. One such device could be used to knock out an ATM, or a bank's datacenter ("Goldeneye" anyone??). Not to mention that EMP devices work at 360 degrees, so they would knock down everything. I suggest investing some time researching "HERF cannons" (High Energy Radio Frequency), which are more accurate and could be done DIY for about 200 bucks (have a look at "Mad Projects for the Evil Genius" for more info on "next-generation" weaponry: sonic, electromagnetic, microwaves, rail guns, etc.).
