Elon Musk is a visionary entrepreneur but a bad social planner. Over the weekend he addressed the National Governors Association and called on its members to sponsor regulatory bodies to direct the development of artificial intelligence, arguing that AI is the “biggest risk we face as a civilization.” But our AI policy should be the opposite of what Musk supports. Federal and state governments should not regulate AI; they should help accelerate it. That course is essential to our national security and offers the best hope of stopping malevolent AI, though I do not believe the risk is as great as Musk apparently does.
Musk’s central premise is correct: AI is now making huge progress. In 2011 IBM’s Watson beat the best players at Jeopardy, showing that AI can now play in the more fluid world of natural language, not just in games with very formal moves. Just this year, Google’s AlphaGo beat the world’s best Go player. This is a startling development, occurring long before most predictions. Unlike chess, Go does not have clear strategies that can be programmed: even great players have a hard time explaining why they move as they do. Google did not program in strategic heuristics; instead, AlphaGo learned to play better than champions from 30 million Go games and from simulated games. Thus, as Andrew McAfee and Erik Brynjolfsson note, the victorious program reflected Michael Polanyi’s famous paradox about humans: we know more than we can tell. And this kind of data mining can give AI an intuitive, rather than a formally rule-based, judgment in many other areas. Lawyers, beware: the machines are coming!