What’s the existential risk of AI? The question seems to spark a lot of conversations these days, usually on the presumption that there is a great risk.
But Berkeley-based AI researcher Jeff Hawkins doesn’t see existential risk ahead. He recently sat for a rousing conversation with the ever-interesting Sam Harris on his Making Sense podcast.
Harris pounded away at Hawkins. We’re going to build machines that think, or at least compute, many times faster than humans can, Harris argued, so fast that even the machines’ human masters won’t understand what’s going on in the minds they’ve programmed. With such complexity, Harris went on, there must be countless ways to stumble into grave danger despite our best efforts. …