In connection with the Orwellian-named “AI Global Safety Summit” held this past week in the U.K., more than a thousand AI industry workers and experts have signed a letter arguing that AI is destined to become a “force for good” in the world, not a disaster.

Never mind that AI is already outmoding human workers en masse. Or that a handful of corporations that committed staggering IP theft, training AI models on protected content without permission or compensation, are reaping the lion’s share of AI profits.

Don’t look too closely at how AI is already being used to enhance lethality on the battlefields of Ukraine and Gaza.

And don’t question whether young people who are currently being enticed to use AI rather than their own brains to write, create visual art and music, or even find a date or companion, will be positively affected by the “AI, WE OWN YOU” takeover.

A group of (personally invested) “experts” is saying it’s all going to be okay.

U.K. Prime Minister Rishi Sunak, eager to position his country as a leading AI technology hub, said of the summit:

“AI will bring new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve problems we once thought beyond us. But it also brings new dangers and new fears. So, the responsible thing for me to do is to address those fears head-on, giving you the peace of mind that we will keep you safe while making sure you and your children have all the opportunities for a better future that AI can bring. Doing the right thing, not the easy thing, means being honest with people about the risks from these technologies.”

‘We will keep you safe.’

In an era when the world has lurched, thanks to the power abuses of governments and elitist institutions and players, from one war and financial crisis to another, with mad-science manufactured pandemics and manipulated social chaos thrown into the mix, those words should send chills down the spine of anyone paying attention.

The summit, held on the first two days of November, made news for an agreement between leading tech companies and governments to test new “frontier models,” the most sophisticated general-purpose AI systems.

But the testing addresses little regarding the checklist of dangers and ongoing damages noted above.

According to Reuters, leaders from the United States, China and the EU agreed to share a “common approach” to determining potential risks and ways to counter them. (“At UK’s AI Summit developers and govts agree on testing to help manage risks,” 2 Nov 2023.)

But the news outlet noted that China did not actually sign the testing agreement. 

And was there anything in the agreement about banning the weaponization of AI for use in warfare? Was there a ban on creating an artificial general intelligence so sophisticated that it surpasses human intelligence in every respect? Did they agree to hold tech companies to account for massive IP theft? Was there any plan to address the massive displacement and obsolescence of workers?

None of the above.

The Rapid “AI WE OWN YOU” Takeover: NOT an Accident

Over the past year, so-called generative AI has gone from being a niche technology that most people never (knowingly) interfaced with, to a worldwide phenomenon.

That revolution was touched off in November 2022 by a free public preview of the natural language “ChatGPT” generative AI platform. In just ten days, millions of users had signed up to try ChatGPT.

Since then, the use of generative AI by individuals and businesses has exploded.

Core AI backbone platforms developed by a few large companies including OpenAI / Microsoft, Google, Amazon / Anthropic, IBM (Watson), and Meta (Facebook / Instagram), are powering thousands of apps offering all kinds of services and functionality.

AI is automating tasks in workplaces, swallowing and analyzing business data to improve performance and model strategies, powering security and surveillance systems, acting as a legal / education / medical advisor, and being utilized in advanced science research to model new processes, genetic sequences, proteins and materials, and on and on.

And when people are done with work, AI is increasingly there in the games we play, and even in the conversations and virtual relationships we experience.

Again, this is barely a year into the public generative AI revolution.

Of course, backend AI has powered the algorithms of companies like Amazon, Google, Facebook and Twitter for more than a decade.

And generative AI had seen growing use by techies for years before late 2022.

We noted the gathering AI trends in multiple articles, and emphasized how a new generation of youth would be especially targeted and impacted.

What’s becoming clearer every day is that the “AI sensation” was anything but a spontaneous, unforeseen event.

On the contrary, it was in the works and being carefully game-planned for mass, overwhelming public blitzkrieg saturation for years before a company that just happened to be named OpenAI opened the floodgates.

AI Surveillance State

In these pages of The Trends Journal, we have long predicted and tracked the way authorities have sought to put new technology to the worst possible citizen surveillance and control use cases.

The rhetoric of some leaders and AI experts at the summit only underscored how AI is being insinuated into the everyday lives of citizens, and sold as a constant companion and helpmate.

Concerning the defense of AI signed by more than 1,300 AI industry figures in the leadup to the summit, Rashik Parmar, CEO of The Chartered Institute for IT, commented:

“Over 1,300 technologists and leaders signed our open letter calling AI a force for good rather than an existential threat to humanity. AI won’t grow up like The Terminator. If we take the proper steps, it will be a trusted co-pilot from our earliest school days to our retirement.”

The Trends Journal predicted the rollout of AI into productivity software, and how it would likely be used to monitor and limit the kinds of content users could create. (See, for example, “READY FOR AN AI COPILOT?” 21 Mar 2023, “YOU WILL OWN NO SOFTWARE AND BE HAPPY—PART ONE” 18 Oct 2022 and “YOU WILL OWN NO SOFTWARE AND BE HAPPY—PART TWO” 1 Nov 2023.)

In the barely eight months since AI copilots first appeared in software like Microsoft 365 and Google Apps, the concept has gone from an experimental feature to being envisioned as a cradle-to-grave facet of human existence.

That should be enough to give anyone pause.

TRENDPOST: As we have previously pointed out, we called in these pages for a ban on AI development seeking a so-called “Singularity” event—a point where AI surpasses human intellectual abilities in every respect—long before leading AI experts publicly echoed our concerns. (See “AUTOMATING OUT OF WORLD CRISIS?” 12 Jul 2022.)

The kind of safety measures being promoted by tech companies and governments are largely measures meant to protect and extend their power, control and profits, via AI.

They do not represent any true altruism or concern for enlarging freedom, dignity and opportunities for average people, who are being positioned as mere consumers, subjects and “AI prompters” in a new AI ascendant reality.

Given the narrow control and exploitation of AI by a few very large corporate leviathans, no one should expect their version of an AI revolution to widely benefit humankind.

Nor will it lessen power, wealth and other disparities that have only grown wider in the face of financial crises, wars and pandemics that have marked an increasingly unstable, retrograde 21st century.
