|
Citizens are becoming digital serfs on SaaS (Software as a Service) plantations, with AI drivers ensuring no thought or dissident vision steps out of line.
What began with algorithmic censorship on social media platforms is now moving to practically every piece of software.
AI surveillance and control, embedded into SaaS, is systematically destroying the political and creative freedom of users.
As a result, users who want to create images, write stories, or publish blog articles are finding that their software simply will not permit certain things to be depicted or discussed.
The programs are also often biased to permit and promote certain topics and viewpoints, which align with the objectives of the service providers, while skewing others.
With SaaS becoming the de facto standard for how software is accessed and used, there are fewer and fewer alternatives to these digital gulags.
How did it happen? What are some of the ways freedom of expression is being censored and controlled?
Understanding and countering the transformation of productivity and communication software into tools serving corporate and government masters is crucial if core human freedoms are to survive the AI-driven, so-called “fourth industrial revolution.”
From Disks To Cloud: The SaaS Takeover
The shift from software being accessed by installing programs on local hard drives, to being accessed as cloud services via web browser portals and app stores, accelerated in the early 2000s.
Google, more than any other company, spurred the shift. Users were enticed to sign up not only for web-based email, but a “Google Apps” suite including a basic word processor, spreadsheet program, calendar, and even cloud storage.
Email and social media, along with commerce platforms, became the gateways for SaaS. But since 2010, practically every kind of productivity software has moved to a SaaS model.
Businesses and home users gained ease of access and upkeep, with quick rollouts of new features and security updates.
Software companies gained even more, via a lucrative subscription model that guaranteed recurring income streams and made software distribution and updates more cost-effective.
From Subscription To Suppression
A Pew Research study released on 6 October purportedly found that 61 percent of all U.S. adults prefer that tech companies restrict offensive content and “false” information on social media platforms, even if it limits freedom of information.
The study focused with an ominous tenor on the growth of alternative social media platforms including Parler, Bitchute, Truth Social, Gab, Rumble and others, suggesting that these sites and services were breeding grounds for extremist opinions.
The study grudgingly acknowledged that the sites in question adhere to U.S. law, allowing the same free speech rights guaranteed by the Bill of Rights.
The study also admitted that a primary reason Americans seek out alternative social platforms is to exercise their Constitutional free speech rights.
According to Pew:
“When users of alternative social media sites were asked to describe, in their own words, the first thing that comes to their mind in connection with these sites, 22% mentioned something related to the concept of freedom of speech, anti-censorship and an alternative to more established social media – far more common than any other type of response.”
Meanwhile, The Associated Press, in a 13 October piece, touted a poll it had conducted, which found that people believe “misinformation” is spurring extremism, racism and hate.
What sorts of things did the AP identify as rife with misinformation? Why of course, information contradicting government and big pharma COVID policies and treatments, views contradicting government Russia-Ukraine narratives, and the massive January 6th protest against 2020 election manipulation and fraud:
“Whether it’s lies about the 2020 election or the Jan. 6, 2021, attack on the U.S. Capitol, COVID-19 conspiracy theories or disinformation about Russia’s invasion of Ukraine, online misinformation has been blamed for increased political polarization, distrust of institutions and even real-world violence.”
Even the AP had to admit that while people appeared to lament “misinformation,” most respondents believed that others, and not they themselves, were the culpable ones:
“[L]ess than half said they are that worried that they were responsible for spreading it. That’s consistent with previous polls that have found people are more likely to blame others than accept responsibility for the spread of misinformation.”
Sort of like the AP and the MSM refusing to admit the 24/7 propaganda beam in their own eye (Matthew 7:5 KJV: “Thou hypocrite, first cast out the beam out of thine own eye; and then shalt thou see clearly to cast out the mote out of thy brother’s eye”).
The MSM is predictably churning out support for the Biden administration’s “misinformation” war.
And as conservatives censored on elite controlled tech platforms have finally made efforts to build alternatives, the liberal line regarding censorship is shifting.
Not long ago, liberals were defending tech giants for censoring and banning (mostly conservative) users, because they were exercising their own rights as private entities.
Now they’re advocating that upstart platforms be prevented from allowing legally protected speech.
From the beginning, the Biden administration has led the attack by attempting to redefine any unwanted or damaging speech or information as “dangerous” and therefore unprotected by the First Amendment.
And now the battle being most visibly fought on social media platforms is spreading to AI-enhanced SaaS productivity software.
The same AI algorithms that cut their teeth flagging dissident political and “offensive” content on Twitter, YouTube and Facebook are being embedded into creative imaging and writing software.
Authoritarian AI Writing and Art Creation Assistants
OpenAI, originally a non-profit AI research project co-founded by Elon Musk in 2015, has evolved into a for-profit, Microsoft-backed leader in AI-powered productivity software for creative content producers.
DALL-E, which leverages AI to create stunning visual art and images from natural-language prompts, is OpenAI’s flagship creative SaaS service, while Jasper.ai, a popular AI writing-assistant platform, is built on OpenAI’s GPT-3 language model.
Just don’t ask Jasper for help with writing certain politically verboten content. And don’t ask DALL-E to produce images that transgress woke ideology.
Any attempt to write or visualize forbidden content is flagged by the AI in real time, blocked and reported.
As might be expected, Jasper, DALL-E, and competing SaaS products (some of which depend on OpenAI’s GPT-3 natural-language generation models) have their own political and sometimes painfully absurd biases.
These biases ban some kinds of offensive content while allowing others, often along predictable political and “woke” ideological lines.
The next installment will present some examples gleaned from testing different SaaS AI productivity services. The examples are not a scientifically rigorous analysis, just an end-user sampling.
But the programs cry out for rigorous research into their biases, and the ways they are attempting to control and direct the very process of human creativity.
And that’s the greatest danger of the new digital productivity gulags being wielded by OpenAI, Microsoft, Google, Facebook, Amazon—and the government authorities behind them.
By establishing a paradigm in which users work on software residing on remote, corporate-controlled servers, and by leveraging AI to sift and surveil content creation in real time, technocratic elites are fast gaining control over what creative content can be produced and what is forbidden.
In 1984, George Orwell had the insight that by controlling language and shrinking the stock of available words, authorities could make it impossible for dissidents even to express opposition to their power and control.
With AI-driven “protections” from “offensive” and political content, productivity SaaS is 2022’s latest innovation in “Newspeak.”
Next Week: “YOU WILL OWN NO SOFTWARE AND BE HAPPY—PART TWO” will report samples that demonstrate some of the biases and restrictions of AI-powered SaaS productivity software.