
The Most Important Charitable Initiative of the Year: OpenAI

A group of billionaires, including Peter Thiel and Elon Musk, has established a new initiative called OpenAI. It will attempt to accelerate research into artificial intelligence (AI), but in a way that assures the resulting AI will be “friendly.” In my view, this is the most important philanthropic initiative of the year, perhaps of the decade, because it addresses a crucial issue of our time: the dangers posed by the accelerating pace of technological change.

The development of AI can help us navigate the rapids ahead, because progress in artificial intelligence can help society assess more accurately and quickly the consequences of social policy toward other forms of accelerating technology, such as nanotechnology and biotechnology. More substantial machine intelligence can process data, simulate the world to test the effects of future policy, and offer hypotheses about the effects of past policy.

But as Musk and Stephen Hawking have argued, strong AI, defined as a general-purpose intelligence that approximates that of humans, could also threaten humanity, because it might prove impossible to control. Man would be in the unhappy position of the sorcerer’s apprentice: too weak to master the master machines. No amount of government regulation will be able to avoid this risk, given that the economic and national security returns to building stronger AI are enormous. Moreover, research into AI is hard to detect and prevent, because it does not require much infrastructure.

Thus, the only way to forestall malevolent AI is to accelerate research into so-called friendly AI: AI designed to live peaceably within the human community. If friendly artificial intelligence maintains a head start, it can help guard against the dangers that could emerge from other kinds of artificial intelligence. To be sure, this approach is not a guaranteed route to success, but it seems much more fruitful than any kind of government regulation.

And it is best that this initiative be undertaken privately rather than by the government. Because governments are naturally focused on using AI for national security, we can never be sure that the research they direct into friendly AI will not be distorted by that objective.

There is a political lesson in this as well. Private charity can carry out some projects that government cannot. And some of these projects are of such a scale that only the very rich can fund them. We have yet another reason to be grateful to the one percent.