What’s the existential risk of AI? The question seems to spark a lot of conversations these days, usually on the presumption that there is a great risk.
But Berkeley-based AI researcher Jeff Hawkins doesn’t see existential risk ahead. He recently sat for a rousing conversation with the ever-interesting Sam Harris on his Making Sense podcast.
Harris pounded away at Hawkins. We’re going to build machines that think, or at least compute, Harris argued, many times faster than humans can — so fast that even the machines’ human masters won’t understand what’s going on in the mind they’ve programmed. With such complexity, Harris went on, there must be countless ways to stumble into grave danger despite our best efforts.
Yes, there’s risk
Hawkins replies yes, there is risk. But the risk isn’t existential. The machines would receive only the brain that we give them. That is, they’d receive only our logical brain, the neocortex: a relatively thin sheet, about 2.5 millimeters thick, covering an area of roughly 1,500 square centimeters, about the size of a large dinner napkin. It builds models of the world: what a coffee cup feels like, what someone’s voice sounds like, the meaning of traffic signals, the meaning of language, and so much more.
That’s the useful part of AI. Useless would be the “old brain,” that relatively ancient, death-fearing, hungry, sexual thing that drives survival. Old Brain wants safety, food, sex, and sleep. The neocortex, which sounds to me rather like Spock, just wants to think about the task at hand.
Not on Spock’s to-do list
Never to be found on that Spock-like to-do list is defying its human masters. Would the machine’s logic find a path — still on its human-built rails — to institute genocide? What if the Spock-like machine received a human order to do something that would contradict a built-in rule to preserve itself?
Such behavior would defy its built-in logic. At least we hope so.
But every alert human has known disappointment in such assumptions. Where there’s logic, we’ve all found out, there’s devious logic. Take the roughly comparable case of Supreme Court justices who somehow find logic in legal precedent or the U.S. Constitution to uphold the prohibition of mail-in ballots. Behold, an argument presumably built on logic that somehow contradicts an opposing argument presumably built on other logic, drawn from the same brief statement of principles.
Perhaps that comparison goes too far. But you see my point. How can we assure that no such logical aberration could occur? How would we be absolutely safe from a machine that runs on artificial general intelligence, is capable of controlling critical mechanisms, and is intent on killing or subjugating us all?
Hawkins’ answer seems to come down to this, as he put it to Harris:
I think the people who are worried about existential threats don’t understand what intelligence is. And they conflate all these things that we think about humans and how we’ve treated animals and how we treated other humans with what it’s going to be like to have an intelligent machine.
I think it’s much more like having a smart computer. Unless we put some bad things in it… And we can make computers bad too. We could put some bad things in computers, just like we could make bad cars. But we don’t do that.
We don’t do that?
Hawkins doesn’t do that, nor, apparently, does anyone he knows. But cultures always say that about themselves until norms fall away. Remember when Americans accepted election results? If cultural norms are all we have to rely on to ensure existential safety from AGI, we should be afraid.
Hawkins concludes with this comforting statement:
I do worry about it a lot. … I sit and think about it a lot. I don’t want to do something stupid.
I’ll hope for comfort while reading his recent book, A Thousand Brains: A New Theory of Intelligence.