
Nothing to worry about in AGI :: AI researcher Jeff Hawkins insists there’s no existential risk in artificial general intelligence!

July 20, 2021 by Ted Cuzzillo

What’s the existential risk of AI? The question seems to spark a lot of conversations these days, usually on the presumption that there is a great risk.

But Berkeley-based AI researcher Jeff Hawkins doesn’t see existential risk ahead. He recently sat for a rousing conversation with the ever-interesting Sam Harris on his Making Sense podcast.

Harris pounded away at Hawkins. We’re going to build machines that think, or at least compute, Harris argued, many times faster than humans can — so fast that even the machines’ human masters won’t understand what’s going on in the mind they’ve programmed. With such complexity, Harris went on, there must be countless ways to stumble into grave danger despite our best efforts.

Yes, there’s risk

Hawkins replies yes, there is risk. But the risk isn’t existential. The machines would receive only the brain that we give them. That is, they’d receive only our logical brain, the neocortex: a relatively thin sheet, about 2.5 millimeters thick, covering an area of roughly 1,500 square centimeters, about the same as a large dinner napkin. It builds models of the world: what a coffee cup feels like, what someone’s voice sounds like, the meaning of traffic signals, the meaning of language, and so much more.

That’s the useful part of AI. Useless would be the “old brain,” that relatively ancient, death-fearing, hungry, sexual thing that drives survival. Old Brain wants safety, food, sex, and sleep. What sounds to me like a Spock-like neocortex just wants to think about the task at hand.

Not on Spock’s to-do list

Never to be found on Spock’s to-do list is defying its human masters. Would the machine’s logic find a path — still on its human-built rails — to commit genocide? What if the Spock-like machine received a human order to do something that contradicted a built-in rule to preserve itself?

Such behavior would defy its built-in logic. At least we hope so.

But every alert human has known disappointment in such assumptions. Where there’s logic, we’ve all found out, there’s devious logic. Take the roughly comparable case of Supreme Court justices who somehow find logic in legal precedent or the U.S. Constitution to uphold the prohibition of mail-in ballots. Behold: an argument presumably built on logic that somehow contradicts an opposing argument presumably built on other logic, drawn from the same brief statement of principles.

Perhaps that comparison goes too far. But you see my point. How can we ensure that no such logical aberration could occur? How would we be absolutely safe from a machine that runs on artificial general intelligence, is capable of controlling critical mechanisms, and is intent on killing or subjugating us all?

Hawkins’ answer seems to come down to this, as he put it to Harris:

I think the people who are worried about existential threats don’t understand what intelligence is. And they conflate all these things that we think about humans and how we’ve treated animals and how we treated other humans with what it’s going to be like to have an intelligent machine.

I think it’s much more like having a smart computer. Unless we put some bad things in it… And we can make computers bad too. We could put some bad things in computers, just like we could make bad cars. But we don’t do that.

We don’t do that?

Hawkins doesn’t do that, nor, apparently, does anyone he knows. But cultures always say that about themselves until norms fall away. Remember when Americans accepted election results? If cultural norms are all we have to rely on to ensure existential safety from AGI, we should be afraid.

Hawkins concludes with this comforting statement:

I do worry about it a lot. … I sit and think about it a lot. I don’t want to do something stupid.

I’ll hope for comfort while reading his recent book, A Thousand Brains: A New Theory of Intelligence.
