“SAFE AND COMPLIANT” AI REQUIRES SAFE AND COMPLIANT HUMANS

Officials and “experts” are now beginning to argue that developing “safe and compliant” generative Artificial Intelligence is being made more difficult by human-created “misinformation” and “disinformation.”

To sum up the new talking point, “safe and compliant” AI requires “safe and compliant” humans.

Where is it leading?

Backend censorship via AI algorithms is moving to “frontend” censorship carried out by generative AI assistants embedded as core engines or helpful features of productivity software.

This includes visual creation software, photo and video editors, etc. Over the last six months, AI has been embedded as a “tool” into major office productivity suites including Microsoft Office 365 and Google Workspace, as well as Software-as-a-Service platforms like Adobe, Corel, and Shutterstock.

Hundreds, if not thousands, of specialty AI apps are also using AI provided by a few major tech players like Google, Microsoft/OpenAI, Anthropic, and Amazon as the engines of their products.

And with this proliferation of AI have come restrictions on content creation, which may grow more invasive and onerous over time, perhaps within a very short period of time.

Soft Censorship via Woke “Nag Prompting”

Just as tech companies already encourage self-censorship on social media platforms, they will likely begin marshaling AI now embedded into productivity software to do the same thing.

There won’t be outright censorship and erasing of human-written content… at least at first.

Censorship will probably take the form of “helpful” prompts.

Creating an email? 

Watchful AI might inform a user:

“Are you sure you want to send this email? It contains subject matter that may be considered offensive…”

This sort of prompting already occurs on platforms like Twitter. Yes, even under Elon Musk.

AI prompting might inform a blogger or creative writer that:

“Your content contains controversial language and subject matter, and may encounter restrictions in distribution and access. This content could be deemphasized or ignored by search engines and AI information assistants. It could also face rejection from publishing by leading digital media publishing platforms. Would you like suggestions for revision to improve prospects for wider distribution?”

Prompting like this constitutes a form of “soft censorship” that encourages self-censorship.

It offers wiggle room for tech companies and government authorities to claim they aren’t actually barring people from expressing themselves.
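
To make the mechanism concrete, here is a minimal sketch of how such a nag-prompt hook might sit between a user’s draft and the Send button. It is a toy illustration only: the flagged-term list, the scoring rule, and every function name are invented for this sketch and do not reflect any vendor’s actual moderation API.

# Hypothetical illustration of a “nag prompt” hook. All names and the
# scoring rule below are invented for this sketch; no real vendor API is shown.

FLAGGED_TERMS = {"misinformation", "controversial"}  # placeholder watchlist

def moderation_score(draft: str) -> float:
    """Toy score: fraction of flagged terms that appear in the draft."""
    words = {w.strip(".,!?").lower() for w in draft.split()}
    return len(FLAGGED_TERMS & words) / len(FLAGGED_TERMS)

def nag_prompt(draft: str, threshold: float = 0.5) -> str | None:
    """Return a warning to display before sending, or None to stay silent."""
    if moderation_score(draft) >= threshold:
        return ("Are you sure you want to send this email? It contains "
                "subject matter that may be considered offensive...")
    return None

warning = nag_prompt("They call this controversial claim misinformation.")
if warning:
    print(warning)  # the user can still send, but must click past the nag

Note the design point: nothing is blocked outright. The draft can still be sent, but the friction of clicking past the warning is what nudges users toward self-censorship.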

Between helpful “safe and compliant” prompting and the refusal of generative AI to assist humans in creating any kind of visual or written content it deems “unsafe,” frontend censorship will likely have a significant impact on human creatives and human communication.

It will be defended as necessary not just to keep humans safe from human-created “misinfo,” “disinfo,” and otherwise “objectionable” or “offensive” content.

It will also be implemented as a necessity in the new world of AI ascendance.

WorldCoin and Bringing Up “AI Baby”: How AI Proliferation Is Creating the Need to Curtail Human Rights

Last week WorldCoin, another tech project of OpenAI co-founder Sam Altman, officially launched. Though not yet available to those in the U.S., there’s no doubt Altman intends for WorldCoin to be available throughout the world, as its name signifies.

The crypto project involves rewarding people with crypto for signing up to be biometrically scanned and issued a digital ID verifying they are human.
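
At its core this is a proof-of-personhood scheme: one unique biometric, one ID. The following toy sketch conveys the general idea only; the hashing, storage, and payout logic are invented here for clarity and are not WorldCoin’s actual protocol.

# Toy proof-of-personhood sketch: reject duplicate biometrics, issue one ID each.
# Invented for illustration; NOT WorldCoin’s actual protocol.

import hashlib
import uuid

registry: dict[str, str] = {}  # biometric hash -> issued digital ID

def enroll(iris_scan: bytes) -> str | None:
    """Issue a digital ID for a new biometric; refuse duplicates."""
    digest = hashlib.sha256(iris_scan).hexdigest()
    if digest in registry:
        return None  # this biometric is already enrolled
    registry[digest] = str(uuid.uuid4())
    return registry[digest]  # in WorldCoin’s case, enrollment also pays out crypto

first = enroll(b"alice-iris-pattern")
again = enroll(b"alice-iris-pattern")
print(first is not None, again is None)  # True True: one scan, one ID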

Altman says the increasing use and abilities of AI to create deepfakes and other forms of misinformation necessitate a way of determining who and what is human and human-generated.

As The Trends Journal has previously pointed out, with WorldCoin, Altman stands to personally profit from providing a “solution” to a “problem” he helped create. (See “4D AI CHESS? RISING CRYPTO SAVING THE WORLD FROM AI DEEPFAKES WAS A BRAINCHILD OF SAM ALTMAN,” 6 Jun 2023.)

Altman has also touted WorldCoin’s fraud prevention in relation to a possible Universal Basic Income (UBI).

“People will be supercharged by AI, which will have massive economic implications,” Altman said about WorldCoin’s launch, according to Reuters. (“OpenAI’s Sam Altman launches Worldcoin crypto project,” 24 Jul 2023.)

He added that AI “will do more and more of the work that people now do,” and said that WorldCoin could reduce fraud when deploying UBI.

We have pointed out that tech corporations that are busy hijacking human knowledge, content creation, and data to siphon yet more wealth and power to themselves will be more than happy to dole out a pittance in the form of UBI.

Humans who collectively created all that knowledge, content and data would be stupid indeed to accept that bargain. All profits from AI and robotics should be distributed as widely as possible. 

Crypto technology could be integral to creating those reward and governance mechanisms. Altman’s WorldCoin, however, does not represent that.

WorldCoin also illustrates how the mantra of “responsible AI” is being used to violate the freedom of humans to speak out without having to submit to identity verification.

The American founders and privacy advocates after them have long defended the importance of anonymity for political speech and for holding corporate and government powers to account for abuses.

Edward Snowden has warned for years about the privacy issues surrounding WorldCoin. (See “SNOWDEN CASTS A DOUBTFUL EYE ON WORLDCOIN,” 26 Oct 2021.)

Digital ID requirements tied to communications would destroy essential human free speech and political rights.

More observers are now recognizing the biometric dangers surrounding WorldCoin. As Fortune magazine and others reported this past week, the project is facing scrutiny in Europe from France’s Commission nationale de l’informatique et des libertés (CNIL). (“OpenAI CEO Sam Altman’s ‘questionable’ eye-scanning Worldcoin venture under fire from privacy watchdog,” 28 Jul 2023.)

CNIL is questioning the legality of WorldCoin’s biometric data collection. 

Meanwhile, privacy watchdog EPIC (the Electronic Privacy Information Center) issued a statement regarding WorldCoin’s launch, which said in part:

“Worldcoin is a potential privacy nightmare that offers a biometrics-dependent vision of digital identity and cryptocurrency, and would place Sam Altman’s Tools for Humanity company at the center of digital governance. Worldcoin’s approach creates serious privacy risks by bribing the poorest and most vulnerable people to turn over unchangeable biometrics like iris scans and facial recognition images in exchange for a small payout. Mass collections of biometrics like Worldcoin threaten people’s privacy on a grand scale, both if the company misuses the information it collects, and if that data is stolen. Ultimately, Worldcoin wants to become the default digital ID and a global currency without democratic buy-in at the start, that alone is a compelling reason not to turn over your biometrics, personal information, and geolocation data to a private company. We urge regulatory agencies around the world to closely scrutinize Worldcoin.”

Pervasive Data to Feed AI, as Natural Humans Atrophy

Sophisticated, up-to-the-minute AI depends on deep-learning neural net technology continually ingesting vast amounts of human data and communications.

Marketed as a way for science to more comprehensively know and benevolently direct all activities and policies, the “total surveillance and data collection” model of AI represents an unprecedented threat to the freedoms and powers of average citizens.

And in one of the more ironic twists, raising (i.e., developing) “responsible AI,” and partaking of its amazing technological advances and gifts, comes with a Faustian bargain.

Since the most advanced generative AI systems depend on scraping the internet and vast stores of data from social media and other sources, AI is vulnerable to verboten and unseemly human-created thoughts, opinions, and sometimes even inconvenient facts and truths.

So the creation of ethical, safe, and compliant AI has an inevitable connection to human-created content.

“Experts” like former New Zealand PM Jacinda Ardern, now appointed to a tech advisory role at Harvard, argued in a June Washington Post editorial for the need to censor “radicalizing” human content as part of safeguarding AI development:

“For the Christchurch Call, governments have had to accept their roles in addressing the roots of radicalization and extremism. Tech partners have been expected to improve content moderation and terms of service to address violent extremism and terrorist content. Researchers and civil society have had to actively apply data and human rights frameworks to real-world scenarios.

“After this experience, I see collaboration on AI as the only option. The technology is evolving too quickly for any single regulatory fix. Solutions need to be dynamic, operable across jurisdictions, and able to quickly anticipate and respond to problems. There’s no time for open letters. And government alone can’t do the job; the responsibility is everyone’s, including those who develop AI in the first place.

“Together, we stand the best chance to create guardrails, governance structures and operating principles that act as the option of least regret. We don’t have to create a new model for AI governance. It already exists, and it works.”

(“There’s a model for governing AI. Here it is.” 9 Jun 2023.)

Her editorial cited extreme cases, including an event in New Zealand that led to a gestapo-style, society-wide government gun confiscation response.

But her message was clear: AI was providing a new reason to clamp down on supposedly dangerous, human-created disinfo.

The frontend AI assets are in place. It’s coming.
