
Should We Fear Our Machine Overlords?

This year has brought renewed optimism about the prospects for strong artificial intelligence and new expressions of fear about its dangers. Some prominent people have expressed optimism and fear simultaneously. Stephen Hawking argues that AI is progressing rapidly, possibly leading to the biggest event in human history: the creation of general machine intelligence that exceeds that of humans. He also argues that creating more intelligent machines might be the last such event, because they will take over. Elon Musk, the entrepreneurial creator of Tesla and SpaceX, sees strong AI as a demon that we would unleash on humanity.

One might dismiss these concerns as the latest manifestation of a fear that goes back to the Romantic Era. It was first represented by Frankenstein’s monster, who symbolized the idea that “all scientific progress is really a disguised form of destruction.” But Hawking and Musk are serious people to whom attention must be paid.

On balance, I think the threat posed by autonomous machine intelligence is overblown. A basic error in such thinking is the tendency to anthropomorphize AI. Humans, like other animals, are genetically programmed in many instances to regard their own welfare (and that of their relatives) as more important than the welfare of any other living thing. But this motivation is rooted in evolution: animals that put their own welfare first were more likely to pass their genes to subsequent generations. Artificial intelligence is not necessarily the product of biological evolution, nor of any process resembling it. Thus, it is a mistake to assume that AI must inevitably possess the all-too-human drive to evade constraints and seize power.

AI could perhaps be produced by a process that resembles evolution, a kind of tournament of creation. And humans who merge with machines, so-called cyborgs, could well be malevolent, because they would incorporate a human will to power.

The best antidote to such dangers is not to stop research into strong AI. That is impossible anyway, because of the potential for strong AI to yield large monetary payoffs and augment military power. The only possible defense is to develop beneficent versions of AI that will help humans forestall malevolent AI. Friendly AI would have the additional benefit of helping humans manage other kinds of existential risk that may spring from various forms of accelerating technology, such as nanotechnology and biotechnology.

Large corporations, like Google, have the incentives and resources to make research into AI as safe as possible. When Google bought one of the leading AI companies, DeepMind, this year, it also established an advisory board on AI dangers. The government, too, has a role: it should make sure that scientific grants in this area encourage research agendas that are likely to lead to friendly AI.