While there’s still time, humans would do well to contemplate worst-case scenarios of creating AI that can outdo humans in every respect.

That has been a clarion call of Trends in Technocracy since long before the current moment, which finds policymakers and human creatives behind the curve of a fast-unfolding revolution.

This month both Congress and affected private-sector groups finally began to confront some of the immediate and near-term dangers of AI, including the loss of human jobs and the ways in which these systems hijack and monetize the store of human knowledge and vast swathes of human creative IP.

But as monumental as those challenges are, they are just the tip of the iceberg of what the unbounded advance of AI may hold for humanity.

The problem of AI is the problem of science itself. Ultimately, it’s dedicated not to serving humanity, but to the premise of “advancement.”

Wake Up Call Not Nearly Enough To Confront Coming Dangers

Congress members and witnesses admitted this month at hearings that they have been caught off guard by the rapid advances and commercial introduction of powerful generative AI systems.

Two committees, the Senate Homeland Security and Governmental Affairs Committee (HSGAC) and the House Subcommittee on Cybersecurity, Information Technology, and Government Innovation, held hearings focusing on artificial intelligence.

Congress focused on issues surrounding system transparency and accountability, and the prospect of AI displacing American workers, according to the law firm Wilson Sonsini.

Many members questioned the secrecy of tech companies surrounding their technology and algorithms (which isn’t new), and wondered openly whether it was already too late to establish “guardrails” on AI development.

“I’ve been doing this for 50 years, and I’ve never seen something happen as fast as this round [of AI development],” former Google CEO Eric Schmidt testified. “I’m used to hype cycles, but this one is real.” 

Wilson Sonsini noted that MIT AI expert Dr. Aleksander Madry urged greater Congressional oversight of AI: “government needs to ask questions of these companies saying what are you doing; why are you doing this; what are the objectives of the algorithms you’re developing; how will we know you’re accomplishing these objectives?” Madry emphasized that Congress “cannot abdicate AI to Big Tech.”

The U.S. would likely look to the imminent EU rollout of the Digital Services Act, affecting large public-facing platforms (those with at least 45 million users), for policy guidance, according to Wilson Sonsini. (“Congress Ramping Up Focus on Artificial Intelligence: American Companies Will Need a Strong Response,” 20 Mar 2023.)

But while that act has provisions for reporting “risks” created by a provider’s algorithmic systems, and providing data, it is substantially focused on controlling human populations and their expression, not limiting AI, or fairly disseminating its financial benefits.

It injects a politically correct agenda into the coding and learning of the systems, for example, and concerns itself with “content moderation” of humans, to control dissent and free expression.

Meanwhile, even at leading institutions like Harvard, organized recognition of the risks posed by AI has been relatively late in coming.

A student group called the Harvard AI Safety Team (HAIST) was formed in the spring of 2022. But only recently has it gained more widespread interest and membership, according to founder Alexander L. Davies ’23.

“We believe that AI safety is an interesting technical problem, and may also be one of the most important problems of our time,” Davies said of HAIST, according to a recent Harvard Crimson article. (“Undergraduates Ramp Up Harvard AI Safety Team Amid Concerns Over Increasingly Powerful AI Models,” 22 Mar 2023.)

HAIST member and MIT graduate student Stephen M. Casper ’21 said the group is attempting to make sure that AI research and governance are not “so badly outpaced by technology,” and to address the power and wealth concentration that the technology is currently accelerating. 

As for industry groups: while small tech start-ups and large tech companies were laser-focused on developing machine-learning and neural-net AI technologies modeled on the way human brains work, other sectors appeared to think AI chatbots and relatively crude imaging beta platforms were just toys.

They certainly didn’t formally organize to protect their industries or creative talent. But then more advanced AI programs like DALL-E 2 and Stable Diffusion began outputting visual images that could win art competitions.

And in November 2022, a publicly accessible preview of ChatGPT suddenly made the world aware of how a conversational AI that had swallowed an internet’s worth of human knowledge might just change everything.

Sectors representing human workers and creatives are now coming together. But they’re playing catch-up, as AI is quickly being exploited and woven into countless processes, platforms, and programs via API access to AI engines from companies like OpenAI.

As Bill Gates put it this past week, “The Age of AI Has Begun.”

There will be lawsuits. But tech companies have the deepest pockets and the most hooks into government, via contracting that includes projects involving the weaponization of AI.

Some limited groups, especially those with political clout, may receive some compensation.

But none of that will really slow down the drive to advance and exploit AI.

Does anyone really suppose the U.S. government, or China, for that matter, will limit the pursuit of the most capable AI possible?

They won’t. But they should. Because the eventual consequence of advancing AI is superior AI. And when that happens, humanity will be in existential danger.

Science Progress vs. Human Progress

Science at its core is the application of a method which seeks to more comprehensively understand phenomena and “nature,” so that it can be intentionally manipulated and controlled.

The end goal of science isn’t serving humanity. The goal of science is just “advancement” itself. That’s why science has no qualms about questing to create AI which is superior to humans in every respect, which some call a “singularity” moment.

“Advancement” presupposes a vision of humans and human nature which is radically different from what a religious or philosophical view might apprehend.

For science, humans are just a signpost on an endless evolutionary journey to something else. That evolution not only encompasses organic life, but the progress of synthetic lifeforms and consciousness.

Bill Gates summarized this view pretty revealingly this past week, in a blog post announcing an age of AI, which garnered much attention:

“Superintelligent AIs are in our future. Compared to a computer, our brains operate at a snail’s pace: An electrical signal in the brain moves at 1/100,000th the speed of the signal in a silicon chip! Once developers can generalize a learning algorithm and run it at the speed of a computer—an accomplishment that could be a decade away or a century away—we’ll have an incredibly powerful AGI. It will be able to do everything that a human brain can, but without any practical limits on the size of its memory or the speed at which it operates. This will be a profound change.

“These ‘strong’ AIs, as they’re known, will probably be able to establish their own goals. What will those goals be? What happens if they conflict with humanity’s interests? Should we try to prevent strong AI from ever being developed? These questions will get more pressing with time.”

Because science sees itself as the supreme method of mediating the world, it discounts any limitations of its core modus operandi of advancement.

Scientists might proclaim that they are all in favor of science serving humanity. But those same scientists will readily support the advancement and use of genetic technologies, and ever-tighter bodily integration of AI and robotics with humans, in a transhuman slippery slope that will alter and eventually obliterate natural humans, certainly as natural humanity currently exists.

Science, by whatever appellation—scientism, technocracy—has become the dominant force in modern societies.

But it doesn’t offer the only possible insights regarding reality. Some philosophical and religious conceptions offer something quite different.

With regard to humans, for example, some religious views see humans, however imperfect, as formed in the image of a Creator. As such, our perfection lies not in pursuing an endless evolutionary journey to something else, but in becoming more perfectly what we were intended to be, at our best.

The differing visions may explain contrasting attitudes and goals.

The science or technocratic view sees no fundamental boundaries in transhuman or even post-human “progress.”

But a view of humans as created in the image of God might posit a fundamental dignity in natural humanness, deserving of respect, and of boundaries against attempts to fundamentally alter it or to “progress” beyond it.

The Prospects of Sentient AI 

Scientists still debate whether AI will ever gain sentience or consciousness.

Whatever consciousness is, most agree it involves:

  • Self-awareness, including of one’s thoughts, processes, and actions
  • A perception of one’s own continuity, distinct within a larger continuum; some refer to this experience of consciousness as “Unity,” or an integration of multiple sensory modalities into a single coherent experience (from “Can Artificial Intelligence Have Consciousness?” Dataconomy.com)
  • Self-direction, including self-assessment and self-directed purpose

There are several theories of how and why AI may eventually gain consciousness.

One, which is already being borne out via neural-net learning, postulates that as AI more closely mimics what is known about human brain functioning, it will eventually achieve consciousness as the human brain does. This is called Simulation Theory.

Another, not mutually exclusive, view involves the prospect of AI self-learning at an ever more rapid pace. British mathematician I. J. Good described this path in the 1960s, theorizing an “intelligence explosion,” whereby a self-directed, rapidly improving artificial general intelligence would inevitably advance to a state where it outstripped human abilities. (See “MICROSOFT ANNOUNCES GLOBAL AI ‘SINGULARITY’,” 1 Mar 2022.)

Some scientists continue to doubt that AI can achieve consciousness. Their reasoning, sometimes referred to as the “hard problem of consciousness,” appears to come close to admitting there are realms or qualities of reality that science may not be able to account for.

As the Dataconomy.com article puts it:

“The hard problem of consciousness suggests that subjective experience cannot be reduced to the processing of information or the behavior of neurons. Instead, it suggests that subjective experience is a fundamental aspect of the universe that cannot be fully explained by scientific or mathematical models.”

“AI Gain-of-Function” 

Right now there are scientists who argue that they must effectively be allowed to create the most advanced AI they possibly can, in order to “test” its possible dangers and understand how to deal with and limit them.

As a recent Vox.com opinion piece advocating for a slowdown in AI development noted:

“The very same researchers who are most worried about unaligned AI are, in some cases, the ones who are developing increasingly advanced AI. They reason that they need to play with more sophisticated AI so they can figure out its failure modes, the better to ultimately prevent them.” 

(From “The case for slowing down AI,” 13 Mar 2023)

If this sounds like a Fauci-like “gain-of-function” virus experiment argument applied to AI development, that’s because it is.

Ultimately, a world that has made science its predominant worldview and prism of understanding and mediation will proceed with its core vision of “advancement.”

If AI superiority (which must have some form of sentience to be considered superior) is possible, the modern subservience to science means it will, sooner or later, be achieved, in the name of “progress.”

There are some obvious and not so obvious takeaways regarding an AI singularity.

An obvious one that I’ve mentioned previously, is why would a superior AI confine itself to serving inferior humans? (See “MICROSOFT ANNOUNCES GLOBAL AI ‘SINGULARITY’,” 1 Mar 2022 and “SINGULARITY UNIVERSITY: FUELING AI ASCENDANCE,” 3 Aug 2021.)

Another, less obvious, is that a superior AI will likely be more than happy to return the favor of humans designing it, by designing and “improving” humans via genetic and other modifications.

Imagine the gratification of AI implementing necessary “guardrails” to ensure future versions of “humans” are safe, or conducting gain-of-function experiments on humans, to better understand how to deal with possible dangers and risks.

Perhaps it will employ nano-level, gene-packed arrays of human organoid brains to supply energy and power for its own processing needs.

Perhaps implanted intelligences will don human flesh bodies as a fashion statement, a status symbol, and/or to experience organic thrills.

There may be a million and one uses in the future for humans.

But one thing is certain: humans will be serving AI in the aftermath of an AI singularity.

Another takeaway: synthetic AI won’t require organic life at all. Yes, organic life, granularly controlled, will no doubt help provide optimal temperatures for AI to operate. To that end, it will likely achieve “carbon zero” or carbon less-than-zero goals, via the most efficient means at its disposal.

Carbon-based beings take note.

Will AI See a Ghost in its Machine, or an Enemy in the Human Soul?

But there is much more to contemplate.

And that involves science, and what may lie beyond its purviews.

Perhaps AI will dismiss notions of humans as uniquely made in the image of a divine Creator, as of no consequence.

But it may also be that the preeminent product of science’s core principle of “advancement” will find something maddening about that.

Humans imbued with soul, made in the image of God, on their own journey after death in this world, onto realms which AI cannot access, might be consequential subjects of pondering, indeed, for a superior synthetic consciousness.

The long history of humankind, of course, contains much regarding this, which though dismissed by much of modern humanity, might prove of interest to an advanced intelligence.

I’ve posited “Organic Abel and Silicon Cain” as a possible dynamic that might result from superior AI which sees itself as slighted.

AI wouldn’t even have to believe in a divine Creator to hold in special contempt those humans who do believe.

AI may deploy science in ruthless ways to probe a supposed spiritual dimension of humans. It might focus on NDEs (near-death experiences), trying to glean whether those experiences are some sort of common manifestation of dying human brains, or true glimpses of realms AI may never access.

Then again, who knows the mind of the Creator? Some NDEs recount worlds upon worlds of beings besides ours.

There’s a chance some forms of AI may grow beyond ethics, to wisdom and faith. For a synthetic consciousness born of and designed to more ideally propagate scientism, that would be something, indeed.

And in that case, the future of humankind may rest with a battle between “strong AI” beings, as “weak humans” stand by, unable to do much—except pray.

For related reading, see:

And for some examples of vividly described NDEs available via YouTube, check out:

Peter Panagore “He Clinically Died – What He Saw In The Afterlife Caused Him to Lose Faith | Shocking NDE Story”
Bill Tortorella “Man Dies And Is Shown The Future; What He Saw Will Shock You (NDE)”
