READY FOR AN AI COPILOT?
Next-level content creation censorship is now being rolled out to Microsoft 365 SaaS word processing, email, and other apps via a new ChatGPT-powered feature called “Copilot.”
Though Microsoft is trying to spin the announcement as a boon to productivity, its press release should give pause to anyone who values journalistic and creative freedom.
Simply put, Microsoft’s Copilot is built to steer a politically correct course every time.
Not only that, but under the guise of building a “business specific” knowledge base, the software will be culling comprehensive datasets of company information from inputs like documents and emails for its AI to train on.
Though Microsoft provides assurances concerning the system’s privacy, it essentially brings the power of all-knowing, all-snooping AI to the table.
An Exercise in Artful Propaganda
The prospect of ChatGPT rendering creative content producers obsolete is already an issue, as The Trends Journal predicted well before the program burst into widespread public consciousness and use in November 2022. (See “YOUR AI LOVER DOESN’T CARE ABOUT YOU (AND THAT’S WHY IT’S SO SEDUCTIVE)” 10 May 2022, “AI IS LEARNING YOUR JOB” 24 May 2022, “YOU WILL OWN NO SOFTWARE AND BE HAPPY—PART ONE” 18 Oct 2022 and “YOU WILL OWN NO SOFTWARE AND BE HAPPY—PART TWO” 1 Nov 2022.)
Tech companies are already facing lawsuits from creatives who argue that generative AI systems were trained to comprehensively exploit the intellectual property of millions of average human content creators, in ways that go far beyond the way humans learn and use information.
Against that backdrop, the Microsoft 365 press release about Copilot takes pains to present its new AI integration as a boon and an aid to creators, not something that will progressively render humans obsolete:
“Humans are hard-wired to dream, to create, to innovate. Each of us seeks to do work that gives us purpose — to write a great novel, to make a discovery, to build strong communities, to care for the sick. The urge to connect to the core of our work lives in all of us. But today, we spend too much time consumed by the drudgery of work on tasks that zap our time, creativity and energy. To reconnect to the soul of our work, we don’t just need a better way of doing the same things. We need a whole new way to work.
“Today, we are bringing the power of next-generation AI to work. Introducing Microsoft 365 Copilot — your copilot for work. It combines the power of large language models (LLMs) with your data in the Microsoft Graph and the Microsoft 365 apps to turn your words into the most powerful productivity tool on the planet.”
What Microsoft doesn’t mention is that tech companies are in an “AI space race” to develop the most sophisticated and powerful AI. Since November 2022 alone, ChatGPT has already been upgraded from the GPT-3 model to the more capable, next-generation GPT-4.
Google, Amazon, Facebook, Apple (and Chinese companies like Baidu) are all working feverishly to produce the most powerful generative AI.
Programs like the DALL-E AI image creator and ChatGPT can already produce art and written content that is putting human artists and journalists out of jobs. (See “JOBS BEING TAKEN OVER BY AI RIGHT NOW,” 28 Feb 2023.)
It’s not hard to predict that more capable versions of these systems will continue to entice businesses to replace human labor, not augment or empower average workers.
But Copilot has other dark aspects, which Microsoft tries to couch as positives but which represent dystopian dangers to creative and journalistic freedom of expression.
Under a subhead “Committed to building responsibly,” the press release states:
“At Microsoft, we are guided by our AI principles and Responsible AI Standard and decades of research on AI, grounding and privacy-preserving machine learning. A multidisciplinary team of researchers, engineers and policy experts reviews our AI systems for potential harms and mitigations — refining training data, filtering to limit harmful content, query- and result-blocking sensitive topics, and applying Microsoft technologies like InterpretML and Fairlearn to help detect and correct data bias. We make it clear how the system makes decisions by noting limitations, linking to sources, and prompting users to review, fact-check and adjust content based on subject-matter expertise.”
Don’t expect Copilot to share your verboten point of view, or to assist attempts to write about topics or events using information that Microsoft deems biased, harmful or not “factual.”
In other words, all the biases and censorship limitations that users probing OpenAI’s ChatGPT over the past several months have exposed are now being integrated into Microsoft’s flagship productivity suite.
Microsoft’s press release also signaled that its AI system would swallow a company’s business data the way ChatGPT scraped the internet:
“AI-powered LLMs are trained on a large but limited corpus of data. The key to unlocking productivity in business lies in connecting LLMs to your business data — in a secure, compliant, privacy-preserving way. Microsoft 365 Copilot has real-time access to both your content and context in the Microsoft Graph. This means it generates answers anchored in your business content — your documents, emails, calendar, chats, meetings, contacts and other business data — and combines them with your working context — the meeting you’re in now, the email exchanges you’ve had on a topic, the chat conversations you had last week — to deliver accurate, relevant, contextual responses.”
The press release provides assurances regarding the privacy of company data. But introducing AI that can comprehensively digest a company’s entire corpus of data, and provide natural-language and other forms of access to it, creates a treasure trove and a ready query system for surveillance.
With the off-the-rails government-tech surveillance and censorship that has been thoroughly exposed over the past decade, from the revelations of Edward Snowden to the Twitter Files, there’s little doubt that AI-powered surveillance is fast becoming part of the equation.
Microsoft capped its press release by proclaiming “AI will create a brighter future of work for everyone.”
If that sounds like something out of an Orwell novel, the takeaway is that any budding Orwells out there using 365 will now have Big Brother as a copilot, watching everything they attempt to say and create.
TRENDPOST: Before most others were aware of the censorship agenda and how it was likely to be rolled out via major SaaS apps like Microsoft 365 and Google Apps, The Trends Journal alerted readers in detail.
We predict that generative AI will render masses of average workers obsolete and shift even more wealth and censorship power to tech corporations.
And a de facto government-corporate system will mean that government authorities also reap profits and greater control. Officials and agencies will demand oversight of, snooping on, and intervention in content production occurring via these AI systems.
The control framework has already been substantively outlined by the Biden Administration, as The Trends Journal reported in “BIDEN ADMINISTRATION SUBVERTS CONSTITUTION WITH ‘AI BILL OF RIGHTS’,” 11 October 2022.
And as for sharing in the profits: contract awards, regulations and allowances for these tech companies will continue to be paved with inside deals, privileged info and other ways of greasing the palms and portfolios of authorities.