GOVERNMENT AI REGULATION WILL FOCUS ON CENSORING AMERICANS AND WOKE OBJECTIVES, NOT THE REAL DANGERS OF AI

If it didn’t before, the rapid evolution of generative AI technology now has the full attention of world elites.

Sam Altman, CEO of OpenAI, the company that broke the Internet (very probably in more ways than one) when it released an advanced form of its natural language AI, ChatGPT, in late 2022, attended this year’s Bilderberg meeting.

The meeting, which took place between 18 and 22 May in Lisbon, Portugal, listed AI at the top of its agenda:

  • A.I.
  • Banking system
  • China
  • Energy transition
  • Europe
  • Fiscal challenges
  • India
  • Industrial policy and trade
  • NATO
  • Russia
  • Transnational threats
  • Ukraine
  • U.S. leadership

What is a group of one-percenters and corporate powers doing, mixing with political leaders in secret and deciding crucial policies that are supposedly the province of electorates, accountable governmental frameworks and public scrutiny?

Good question.

Many people who actually believe in small “d” democracy see the annual conference, whose proceedings take place under a veil of secrecy, as a poster child for dangerous elitist influence and megalomania.

But in the modern era of what World Economic Forum (WEF) founder Klaus Schwab has termed “stakeholder capitalism,” the will of voters and elected leaders has been subjected to increasingly outsized influence by unelected bodies, NGOs, corporate powers and mega billionaires.

Bilderberg makes the bizarre justification that politicos who attend, “[t]hanks to the private nature of the Meeting,” take part “as individuals rather than in any official capacity, and hence are not bound by the conventions of their office or by pre-agreed positions.”

To many, that might sound like a way of circumventing democratic frameworks and the accountability of politicians to their electorates.

Here’s another whopper straight from Bilderberg’s “rules”:

“The Bilderberg Meeting is a forum for informal discussions about major issues. The meetings are held under the Chatham House Rule, which states that participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s) nor any other participant may be revealed.”

Whatever is being privately “discussed” concerning the rapid rise of AI, it’s hardly wild conspiracy thinking to suppose that it’s likely to be along the lines of preserving and enhancing the powers of those who like to go to secret conferences and bend the course of history to their favor.

And don’t count on hearing much concerning the orchestration details.

Senators Introduce Bill to Create AI Regulatory Agency After Altman Testifies

Before heading off to Bilderberg, Altman testified before the Senate Judiciary Committee concerning AI. He was characterized as “pleading” for the government to step in and regulate the metastasizing industry, though his actual comments seemed more measured.

“I do think some regulation would be quite wise on this topic,” Altman told the committee. “People need to know if they’re talking to an AI, if content they’re looking at might be generated or might not.”

He said during questioning on the societal ramifications of AI, “There will be an impact on jobs. We try to be very clear about that.”

He added, “My worst fears are that we—the field, the technology, the industry—cause significant harm to the world. I think that can happen in a lot of different ways.” 

Altman suggested mandating independent audits of OpenAI and other leading-edge AI companies and projects.

Following the hearing, Senator Michael Bennet (D-CO) introduced an update to prior “Digital Platform Commission” legislation that would include authority for the proposed regulatory agency to oversee AI, as reported by CNN. (“US senator introduces bill to create a federal agency to regulate AI,” 18 May 2023.)

For “systemically important” AI platforms, the bill contains requirements for algorithmic audits and public risk assessments of the harms that AI tools could cause.

Under the bill, the commission would also have broad oversight authority over social media sites, search engines and other sites.

Trends In Technocracy has pointed out that the government’s idea of AI and internet oversight may be more concerned with accruing yet more power to politically censor Americans than with holding corporations to account for hijacking human creative content and hoarding wealth and power.

AI regulation proposals of the Biden Administration have focused on things like embedding woke viewpoints into the systems, and preventing so-called “misinformation”—a loaded word employed to de-legitimize dissidents and viewpoints that conflict with objectives and narratives of a troubling government-corporate nexus.

Biden’s AI Bill of Rights and Senator Bennet’s legislation are dangerous frameworks for bending technology into political tools, imposing censorship and politically correct strictures on AI, and the internet more broadly.

There should be AI regulation. But it should focus on:

  • Ensuring AI tools support freedom of political and artistic expression, in accordance with the long-established bedrock First Amendment and other rights of Americans
  • Requiring that the financial benefits of AI be widely, and directly, distributed to members of society, not hoarded by a small cabal of tech companies. Current IP laws, geared toward humans and traditional companies, are wholly inadequate given the novel capabilities of AI systems, which can swallow and synthesize virtually all human knowledge and continually scrape websites and social media to stay updated. AI was trained on the whole of human knowledge, and all humans should reap the benefits that result. Crypto technology could be used to distribute those benefits and to ensure widespread public governance and participation
  • Forbidding any development of so-called “Singularity” or “strong AI” that can act with autonomy and self-awareness approximating or surpassing human capabilities. If humans don’t limit AI development, AI will supersede humanity and pose existential threats to it.
