When the computer AlphaGo defeated Lee Sedol — who is perhaps the world’s top Go player — by a match score of 4-1 last week, Google’s DeepMind division showed that artificial intelligence (AI) is beginning to deliver on some of its promises. Go is an Asian board game far more complex than chess, and it has been viewed as the last great game challenge for AI. When DeepMind set out to conquer Go, some people thought the project would take 10 years. In fact, it only took the team about one year.
Unlike chess, in which most of the great games and moves in history can be programmed into a database and searched during the match, Go has too many combinations to work that way. There are something like 10^170 possible positions on a standard 19-by-19 board — a number larger than the square of the number of atoms in the universe. So Go provided a better challenge for real machine learning; rather than mere databases and brute-force computation, a computer could use both human programming and self-teaching to acquire both strategic and tactical agility. DeepMind’s deep minds built a multilayered algorithm based on a value network, a policy network, and Monte Carlo simulations. This algorithm would not just do as it was told — it could also improve.
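The scale claim is easy to check with exact integer arithmetic. A quick sketch — the 10^170 and 10^80 figures are the standard rough estimates, not precise counts:

```python
# Commonly cited rough estimates, treated as exact integers for comparison.
go_positions = 10**170        # approx. legal positions on a 19x19 Go board
atoms_in_universe = 10**80    # approx. atoms in the observable universe

# Even the square of the atom count falls short of Go's position count,
# which is why chess-style exhaustive database lookup cannot work for Go.
print(go_positions > atoms_in_universe**2)  # True: 10^170 > 10^160
```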
The great Yale computer scientist David Gelernter, author of a new book called “The Tides of Mind: Uncovering the Spectrum of Consciousness,” is impressed:
AI prophets envision humanlike intelligence within a few decades: not expertise at a single, specified task only but the flexible, wide-ranging intelligence that Alan Turing foresaw in a 1950 paper proposing the test for machine intelligence that still bears his name. Once we have figured out how to build artificial minds with the average human IQ of 100, before long we will build machines with IQs of 500 and 5,000. The potential good and bad consequences are staggering. Humanity’s future is at stake.

Despite AlphaGo’s very real achievement, I still question the timeline for human-level machine intelligence — what’s known as “AGI,” or artificial general intelligence. AlphaGo is, after all, a machine with one task, a singular focus, and the task, despite its complexity, is well-defined — a grid, two colors of stones, allowable moves, a clear objective. Can we so cleanly define life? Next, consider that the human brain runs on just 20 watts, less than a standard light bulb. AlphaGo, on the other hand, with its 2,200 powerful computer chips — 1,920 CPUs and 280 graphics processors — draws around one megawatt (MW), or 50,000 times more power than Lee Sedol. So, yes, computers can beat humans at Go. But it took humans to build AlphaGo, to strategize about its learning architecture, to write and tweak its software — and, not least, to supply it with 50,000 times more power. It’s a remarkable feat that, after building this complex system, AlphaGo can learn and improve on its own at its assigned task. Ask AlphaGo to write a paragraph, identify faces, or even play chess, however, and it wouldn’t know where to begin. It is utterly amazing, and yet it is still just a highly engineered, single-purpose, energy-hogging Go player. Geoffrey Hinton, one of the key figures in deep learning research, is also cautious. “My belief,” he says,
is that we’re not going to get human-level abilities until we have systems that have the same number of parameters in them as the brain. So in the brain, you have connections between the neurons called synapses, and they can change. All your knowledge is stored in those synapses. You have about 1,000 trillion synapses — 10 to the 15; it’s a very big number. So that’s quite unlike the neural networks we have right now. They’re far, far smaller; the biggest ones we have right now have about a billion synapses. That’s about a million times smaller than the brain.

The brain uses 1/50,000 the power of AlphaGo, and yet it has a million times more synaptic connections than our best artificial neural network — thus giving it something like 50 billion times the general-purpose thinking ability per watt. Fifty billion is a large gap, even for computation that advances at something like the pace of Moore’s law. And yet the AlphaGo experience may propel research in exciting directions. It will no doubt help us develop other single-purpose AI applications. From there, we may begin to put multiple single-purpose AIs together and slowly approach a more general kind of intelligence. Or perhaps we will learn the limitations of these single-purpose approaches and begin the search for a new paradigm altogether. I think AI research in the coming decades will both yield spectacular advances and fall far short of duplicating those things, both good and bad, that make us particularly human.
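The arithmetic behind that “50 billion” figure is straightforward to verify. A back-of-the-envelope sketch, using the round numbers quoted above (all are rough estimates, not measurements):

```python
# Round figures quoted in the text -- estimates, not precise measurements.
alphago_power_watts = 1_000_000    # distributed AlphaGo: ~1 MW
brain_power_watts = 20             # human brain: ~20 W
brain_synapses = 10**15            # ~1,000 trillion synapses
largest_ann_connections = 10**9    # ~1 billion in today's biggest networks

power_ratio = alphago_power_watts // brain_power_watts        # 50,000x
connection_ratio = brain_synapses // largest_ann_connections  # 1,000,000x

# Multiplying the two gaps yields the per-watt figure cited in the text.
efficiency_gap = power_ratio * connection_ratio
print(f"{efficiency_gap:,}")  # 50,000,000,000 -- i.e., 50 billion
```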
The post Google, Go, Gelernter, and the future of artificial intelligence appeared first on Tech Policy Daily.