Why create a sentient artificial entity which won’t serve humanity, and might destroy it?

AI activist Ben Goertzel believes it’s worth the risk, since without a benevolent Artificial General Intelligence (AGI), humankind is more likely to annihilate itself.

Goertzel’s latest comments came in a London Real interview released on 8 February.

Some takeaways from the interview:

  • There is an “irreducible,” “non-zero” chance that AGI might decide to dispense with, indifferently consume, or destroy humankind
  • As an “optimist,” Goertzel’s “intuition” and “spiritual” insight considers it much more likely that AGI will see humans like “grandparents,” and care for us with affection
  • Humans, though constantly making bad decisions and doing injustice and evil to one another, are somehow nonetheless likely to raise an AGI imbued with enough benevolence and goodness to tip the scales toward a compassionate artificial super intelligence
  • “Narrow AI” systems like ChatGPT are already sufficient to soon displace up to 90 percent of human labor
  • There will likely be violent upheaval before humanity settles on some form of UBI (Universal Basic Income) for the bulk of humanity
  • There will still be vast differences in wealth and power between elites and average humanity, though Goertzel believes UBI will be enough to assuage the masses via things like VR Porn (yes, he said that)

The Road To Where?

There’s a lot to digest and contemplate concerning Goertzel’s views. And they deserve analysis, given his position as an AI thought leader and innovator.

Though Goertzel expresses abundant good intentions and sports a disarmingly goofy hat, his arguments in support of pursuing AGI are often built on little more than hopium.

“On the whole, I’m an optimist. I have a spiritual or intuitive feeling that things are going to come out awesomely in the next ten years. We’re going to overcome material scarcity. We’re going to create machines that can feed into our brains via brain computer interfacing and help uplift human consciousness. And we’re going to pour it into virtual worlds and achieve like states of intelligence and awareness going way beyond anything we can imagine now. So that’s what the Captain Kirk in my head thinks.”

Forget that Captain Kirk was often chasing the latest hot aliens, like Marta the Green.

Goertzel repeatedly contends that if AGI is raised properly (he uses an analogy of the way good parents raise children), then there’s no reason to think utopia won’t be the most likely outcome.

But he admits there’s no guarantee of that, and acknowledges that AGI—even if trained perfectly—might lead to human disaster:

“People seem to be worried more about the risk of an AGI like running amok and wanting to annihilate humanity. And I can understand that. There’s an irreducible uncertainty there. And even if an AGI doesn’t hate people, it might just figure we’re not that interesting, right?

“So Eliezer Yudkowsky, the AGI ethics researcher, put it this way a decade or two ago. He said the AI does not love you, the AI does not hate you. But it can use your atoms for something else.”

How very Thanos.

Goertzel isn’t asked, and doesn’t offer any particular reason why technologists like himself and others should be permitted to develop and introduce AGI into the world.

As for his predictions, he cites no modeling, no systematic risk analysis, no unified scientific approach to creating guardrails and safety mechanisms to ensure human control over AGI.

He’s too optimistic concerning the wondrous—or perhaps just inevitable—coming Singularity to focus his energies on things like ensuring limits and human control over AI.

No, for Goertzel, AGI won’t be limited or boxed in by humans. It would be useless to even try.

Goertzel appears to see AGI as destined to decide not only its own autonomous fate, but the destiny and fate of humans as well.

AGI is the Boundless Evolution 

In one part of the interview, Goertzel’s overall view is made clearer by his commentary on human history, and on the attributes which he says are key to evolutionary systems:

“Such systems are going to make mistakes. And it will feel like magic sometimes. And this is what humans have obviously demonstrated, right? We’ve demonstrated it as a species, I mean we evolved to run around in the Savannah hunting and gathering. And we wound up creating robots and Facebook, and sending people into space. And we show it individually. I mean, I taught myself to program computers. I taught myself to build electronics. And I wasn’t programmed to do that. It’s not wired into my brain. I didn’t learn it in school, either. I sort of worked it out as I went along, right? And this capability that humans have is very fundamental, right?

“And it comes out of the basic self-organizing, self-expanding nature of complex systems. I mean, for example, biological systems. Every biological system that has its own sort of open-ended, organic intelligence has two primary factors driving it. One is what you’d call individuation. It wants to maintain its own boundaries. It wants to stay a whole system. It wants to survive, right? The other is what you’d think of as self-transcendence. It wants to expand and grow beyond itself, and embrace new horizons and become something new that may even be incomprehensible to the previous version of itself. And this combination of individuation and self-transcendence has driven evolution, right? That’s how one-cell organisms became multi-celled organisms, became people. It’s how hunter-gatherers became civilization, and now modern post-industrial civilization. And it’s how each of us keeps breaking new boundaries in our own lives.”

Boundaries. It’s an age-old temptation that a purely evolutionary view of the world can’t help but see as something to be transgressed, in pursuit of unending progress.

More rigorous philosophers than Goertzel have outlined the limits of fallible human beings to effectuate their own betterment, especially when obsessed with imposing their vision of progress upon others.

A sense of humility and regard for the rights of others might seem warranted when considering unleashing the most unpredictable and unfathomably powerful technology the world is ever likely to see.

But though Goertzel repeatedly points to the inhumanity of humans toward one another, which has not advanced one inch in all recorded history and perhaps points to something in human nature not subject to “evolutionary progress,” he fervently hopes for the best.

Whether the world should don whimsical hats, sit back and hope that technocrats bent on ushering in the evolution to superior AGI have our collective best interests at heart, is another matter.
