The quest of technology companies and AI advocates to create artificial intelligence that can outstrip human abilities is finally undergoing some higher-profile scrutiny.

We might say, with some irony, thank you, ChatGPT.

This past week, two top technocrats, Elon Musk and Apple cofounder Steve Wozniak, signed on to “Pause Giant AI Experiments: An Open Letter,” issued by the Future of Life Institute.

The letter, backed by a considerable list of tech company, institutional, and political notables, calls for a six-month moratorium on further development of sophisticated deep-learning neural-net AI systems more powerful than GPT-4, the model behind the latest version of ChatGPT.

The letter notes:

“Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”

As of 31 March, the group’s site said it had collected more than 50,000 signatures. Other high-profile signers besides Musk and Wozniak include:

  • Yuval Noah Harari, Author and Professor, Hebrew University of Jerusalem
  • Andrew Yang, Forward Party, Co-Chair, Presidential Candidate 2020, NYT Bestselling Author, Presidential Ambassador of Global Entrepreneurship
  • Valerie Pisano, President & CEO, MILA (Quebec Artificial Intelligence Institute)
  • Emad Mostaque, CEO, Stability AI
  • John J Hopfield, Princeton University, Professor Emeritus, inventor of associative neural networks
  • Max Tegmark, MIT Center for Artificial Intelligence & Fundamental Interactions, Professor of Physics, president of Future of Life Institute
  • Alan Frank Thomas Winfield, Bristol Robotics Laboratory, UWE Bristol, UK, Professor of Robot Ethics
  • Marcus Frei, NEXT. robotics GmbH & Co. KG, CEO, Member European DIGITAL SME Alliance FG AI, Advisory Board 
  • Jennifer F. Waldern, Microsoft, Senior Data Scientist, Researcher
  • Robert O’Callahan, Google, Senior Staff Software Engineer, Distinguished Engineer (Mozilla), PhD in Computer Science (Carnegie Mellon)
  • Christopher Reardon, Meta, Head of Design for Responsible AI. Led the team that developed AI System Cards. Member of the Integrity Institute.

Missing so far from the list is anyone from OpenAI, including CEO Sam Altman.

A number of professionals from Microsoft, which has partnered with OpenAI and has already integrated ChatGPT into its Bing search engine and Office 365 productivity platform, have signed the letter.

Altman recently pushed back at concerns expressed by Musk about the risks of AI.

In a podcast interview with Lex Fridman, Altman said Musk seemed to have “sincere” concerns, but also appeared to have other motives, implying that Musk was competitively trying to undercut OpenAI.

Altman also played down AI risks in the interview, saying:

“I think a lot of the predictions, this is true for any new field, but a lot of the predictions about AI in terms of capabilities, in terms of what the safety challenges and the easy parts are going to be have turned out to be wrong.”

Altman sought to assure the public that OpenAI’s technology, at least so far, does not really approach the definition of true “strong AI,” or Artificial General Intelligence (AGI), though his company has been focused on creating AGI from its founding.

“Strong AI” and AGI are terms used to signify a kind of Artificial Intelligence which can substantially compete with or outdo humans in terms of wide-ranging intellectual ability and autonomy.

Is One Upgrade Cycle Enough Time?

Though the letter itself marks a significant development, the pause it proposes is barely a blip in time compared with the pace of regulatory and political bodies, and of the broader political process that the letter’s authors themselves say is so critical to the future of humanity.

Some politicians in Congress have weighed in on the letter, Fox News reported. (“CONGRESS WEIGHS IN: Should tech companies pause ‘giant AI experiments’ as Elon Musk and others suggest?” 30 Mar 2023.)

Representative Brian Mast (R-FL) commented, “I think Elon Musk is rightfully being cautious. I appreciate that he’s looking to put the brakes on, and I agree with it.”

But it’s evident many politicians don’t appreciate the nature and scale of the threats that would be posed by the advent of a sentient, superior AI.

Representative Maxwell Frost (D-FL), for example, said he’d have to “look into it more” concerning the subject. This, after ChatGPT has been in public preview since November 2022, and in the midst of major reports by Goldman Sachs and others forecasting the possible loss of millions of human jobs to current AI technology and automation. (See “NEAR-TERM AI JOB CASUALTIES,” 28 Mar 2023.)

“The innovation and advancement is important,” said Representative Marcus Molinaro (R-NY). “So are guardrails to protect privacy, and to protect our personal safety. And to protect from abuse as well. And I would hope we could find some area of common ground to establish the appropriate guardrails.”

That kind of political boilerplate sounds like the representative in question hasn’t read the letter, let alone examined the ramifications of AGI in any depth.

TRENDPOST: Readers of TRENDS IN TECHNOCRACY in The Trends Journal have been well informed regarding the dangerous pursuit of an AI Singularity seeking to usher in sentient artificial intelligence that can outstrip humans in every respect.

In August of 2021, we called attention to the dangers that a competitive quest for “superior AI” posed, and noted the lack of oversight or understanding by political representatives:

“SU [Singularity University] continues to fuel the development of AI that can outthink and outstrip humankind. And practically no regulatory bodies are currently standing in their way, asking tough questions about their purposes or anti-human agenda…

“…The people profiting and advancing their careers by creating bleeding edge AI claim their work is meant to benefit, and is indeed already greatly benefiting the world.

“But their claims are contradictory. Many of them also admit they believe AI is destined to surpass the abilities of humankind, and supersede or even replace natural humans. And the dark truth is, many of them see that as desirable. In their own words, they look forward to a world where AI and humans will perhaps merge in what they call ‘The Singularity.’” 


In article after article since early 2021, we have detailed this quest, outlined its history, examined its intellectual and philosophical conceits, and forecast its existential dangers to natural humanity.

We have long predicted that if AGI is allowed to develop, interim benefits to humans will not prevent an eventuality in which ever more rapidly advancing AGI ends up dictating its own objectives, and the fate of humanity as well.

The twin technocratic threats of Heritable Genome Editing (HGE) and Artificial General Intelligence (AGI) are both manifestations of a scientism which views “progress” past natural humans as an inevitability, and as a good to be pursued.

With respect to AGI, as a recent article pointed out, government initiatives that have called for greater transparency in AI systems will not solve the problems. (“AI will soon become impossible for humans to comprehend—the story of neural networks tells us why,” 31 Mar 2023.)

Neural nets, a central technology of systems like ChatGPT, are designed to emulate human brains, learning and evolving in ways that can’t be fully tracked or understood. 

AI systems are self-learning right now. And making them even more able to self-evolve, adapt and make autonomous decisions is a large part of the bleeding edge of innovation that AI companies—and governments—are questing after.

The Open Letter points to the current AI arms race:

“[R]ecent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

But it doesn’t explicitly mention that governments around the world are fueling that race, for military advantage, as part of weaponizing robotic and AI systems.

The Trends Journal previously reported on Future of Life Institute efforts dating back to 2015 that have sought restrictions on the weaponization of AI. (See “AI BEING TRAINED TO FIGHT FOR WOKEISM AND WAR,” 13 Dec 2022.)

Unfortunately, those efforts have not deterred development or weaponization of AI.

As we have recently noted, some AI scientists argue that they need to create the most advanced AI possible, in order to understand how to counteract risks.

We’ve termed this “AI Gain Of Function” experimentation, and predict that if allowed to continue, AGI will escape into the wild just as surely as man-made viruses like COVID have.

Re-Think The Technocratic Worldview Before It’s Too Late

Our deepest concern has to do with the complete modern subservience to the technocratic worldview.

According to this view, the pursuit of ever more sophisticated manipulation and control of the physical world is the only consequential endeavor, and the only relevant way of mediating our existence.

As we have previously examined, the animating impulse of this worldview is not “human progress,” but “science progress.”

Scientism sees natural humanity as just an evolutionary signpost, not deserving of any special dignity or preservation as a creation made in the image of a Creator.

It considers genetic manipulation and design of humans, and the emergence of superior AI as inevitabilities of the continued progress of science.

This worldview is fundamentally transhuman. And that is what anyone sincerely concerned about advancing AI and fast-evolving heritable gene editing technologies must ultimately confront.

Humanity will not be safe from transhuman scientism until and unless the essential dignity and inherent rights of natural humanity are respected and legally protected.

There can either be natural humanity and boundaries on scientific development, or unbound science and a post-human future—but not both. 

To sum it up, unless a moratorium period somehow leads to humans examining and reconsidering the premises of modern scientism, the “progress” of leaving humanity behind will resume.

For some of our touchstone articles regarding AGI:

And for earlier TJ articles that have proved prescient in predicting AI trends, see:
