Hordes of would-be “authors” are continuing to find their voice…well, their generative text-based AI voice.
They’re learning how easy it is to produce articles, promotional materials, blog posts, lyrics, short stories and even novels using Artificial Intelligence writing assistants like ChatGPT, Bard AI and others.
The best of these “prompt engineer” talents will produce some good stories, and even some great ones.
But where there’s money to be made and costs to be cut, human authors, like workers in many other fields, will face increasing pressure. And not only from human competitors, or even from AI itself.
No, it is the large publishers, who happen to be tech companies themselves, that will be looking to avoid splitting royalties with human authors by assuming a larger and larger direct role in creating AI books.
Companies like Amazon, Google and Apple have vast libraries that happen to be the perfect data sets to train AI for writing works of all kinds.
Of course, The Trends Journal has been forecasting threats to human creatives in many articles over the past several years.
We specifically predicted that tech companies like Amazon would have ambitions to directly get into the AI authorship business, in “IS AMAZON LOOKING TO CANCEL HUMAN AUTHORS?” (27 Jun 2023).
Now a 2 August New York Times article has confirmed our prediction.
According to the Times, large self-publishing platforms like world-leading Amazon are quietly ramping up the use of AI in ways that compete with, and may eventually supplant, human talent in promotion, voice work for audiobooks, artwork, and even the writing of books:
“Some in the publishing world are already experimenting with artificial intelligence programs in areas such as marketing, advertising, audiobook production and even writing, weighing their promise of supporting work done by humans against the threat that the machines may take over some of those jobs entirely.”
(“A.I.’s Inroads in Publishing Touch Off Fear, and Creativity,” 2 Aug 2023.)
This mirrors what we said even more starkly months ago:
“[I]magine Amazon being able to train a generative text based AI system on all the digital content on its platform (and beyond, by scouring internet content, digitized works of history, etc.).
“It could very possibly create the most sophisticated “storytelling” AI yet seen, and use that AI to generate content to sell against human authors that publish their content on Amazon’s Kindle book platform.”
Outmoded Human Authors, Suppressed Human Thoughts and Political Speech
Our prediction went further than what the recent NYT story has confirmed, though.
We have noted that in controlling the lion’s share of the generative AI market, the largest tech companies will inevitably exert tremendous propaganda power via AI.
And as Missouri v. Biden, the Twitter Files and, more recently, the Facebook Files have shown, tech companies have been very useful conduits for government speech and thought control.
Several of the largest tech companies have already agreed to work with the U.S. government on ensuring “safe” AI.
But by safe, here are some things they don’t mean:
- They don’t mean prohibiting the development of sophisticated AI for autonomous weapons, and to act as assisting agents on battlefields.
- They don’t mean prohibiting AI from being used for biometric and data-driven surveillance of citizens, effectively obliterating Constitutional privacy protections.
- They don’t mean holding tech companies to account for hijacking vast amounts of copyright protected works by human authors, and more generally, large portions of human created knowledge, to narrowly benefit themselves.
- They don’t mean implementing frameworks for ensuring that humans widely and equitably share in the profits of the AI revolution.
- They don’t mean barring unbound experimentation at the bleeding edges of AI, trying to create a “Singularity” event, where AI achieves, via comprehensive knowledge, self-direction, self-learning and self-awareness, a decisive overall superiority to human intelligence.
These truly dangerous uses and ambitions for AI are being very actively pursued, and allowed.
The Biden Administration’s “AI Bill of Rights,” the blueprint under which tech companies “voluntarily” create “safe” AI, reads like a woke handbook.
It stipulates that AI must conform to woke ideology, period.
Ask Bard AI about the Hunter Biden laptop, COVID vaccines, transgenderism or Russia-Ukraine, and you will quickly gain an idea of what “safe” AI means to authorities implementing the guardrails.
It follows that as tech companies now look to control, not just the distribution of content, but the creation of content, they gain even more dangerous levels of financial and political power.
They will sell against human authors—including authors “co-writing” with AI—with their own AI written content.
Guess which will have access to the latest and greatest AI content creation engines.
With new powers to directly create content using ever more sophisticated AI, they will have much more leeway to squeeze out and marginalize “problematic” content.
It won’t happen overnight. And plenty of enterprising humans are already making a killing by getting in first on how to use AI to beat out the competition.
But ultimately, it will be a lose-lose for the vast majority of humans.
Synthetic Arts, Entertainment…and Reality
The current Hollywood writers and actors strike is substantially centered around issues involving AI.
Among other things, human writers are demanding that studios limit the use of generative AI in the creation of scripts.
And actors are demanding that they retain rights to their personas, and whether and how studios might use digital AI powered versions of them.
But the day of AI movie stars, not (directly) modeled on or based on any particular human, could arrive any day now.
Many see it as not so different from what has come before.
Humans have proved quite capable throughout history of worshiping idols.
Works of fiction are wholly predicated on creating imagined humans and other creatures that have no specific real-world human analogue.
And the 20th century, via the new medium of “moving pictures,” but also through widely produced and distributed pulp fiction and comics, witnessed an unprecedented creation of new galaxies and mythologies of heroes and villains, inextricably tied to and enabled by new and maturing technologies.
Star Wars, Lord of the Rings, the Marvel and DC universes, and Disney have been some of the most prominent of the new mythologies.
But the 21st century seems destined to usher in an age of Synthetic Everything, and that includes arts, entertainment…and reality itself.
Of course, creating synthetic beings, whether in the virtual world or the actual world (via combinations of robotics and AI), must trace back to modeling on real-world assets.
That AI action hero who becomes a phenom on the level of a Tom Cruise or Harrison Ford or Scarlett Johansson won’t be identifiable as any particular human.
It will just have attributes that are analogues to what makes a “movie star” a movie star.
AI will have digested and analyzed the entire history of heroes, from all cinema, all novels, all mythology of all peoples.
AI will also have vast amounts of data concerning its prospective audience, their habits, activities, preferences, and at some point, very likely even their thoughts.
That will come via Brain Computer Interfaces (BCIs) that many humans will only too happily wear or have implanted. Like COVID vaccines, they may even be mandated, under a rubric of safety.
“None of us is safe from unmonitored dangerous thoughts and ideas, unless all of us are safe!”
Whatever the technical means, through a vigilant and ongoing symbiosis, AI will not only know our desires but will gain considerable power to influence and shape those preferences and desires.
So, as we go to future movies, or step into the kinds of worlds already being teased to us by those incessant Meta commercials about the coming metaverse age, it is likely that AI will be the “creative” force behind the Synthetic experiences to come.
Humans should not let this kind of synthetic future, which cedes a vast amount of power and wealth to a very few technological elites, happen.
But the seductions and incentives to take present and near-term advantage of AI and related technologies are already proving very hard to resist.
A new report by GHD Digital predicts that generative AI will grow from a current market value of $50 billion to over $1.5 trillion by 2033.
GHD Digital President Kumar Parakala said regarding the research:
“Artificial intelligence is already revolutionizing the way we live, work, and interact. I believe the recent advancements in generative AI are a turning point for humankind. Still in its infancy, the progress being made every week is exponential. True visionary leaders understand that focusing on AI goes beyond acknowledging its current state of maturity; it is about recognizing its immense potential to shape the future. The question remaining is ‘how can we maximize this potential while managing the various issues and risks in an ambiguous and complex ecosystem?’”
(“Global market for generative AI expected to reach $1.5 trillion USD in revenue by 2033, up from $50 billion in 2023,” 2 Aug 2023.)
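As a rough back-of-the-envelope check (our own arithmetic sketch, not a figure from the GHD Digital report), that forecast of $50 billion in 2023 growing to $1.5 trillion by 2033 implies a compound annual growth rate of roughly 40 percent:

```python
# Implied compound annual growth rate (CAGR) for the forecast:
# $50B in 2023 growing to $1,500B ($1.5T) by 2033, i.e. over 10 years.
start_billions = 50.0
end_billions = 1500.0
years = 10

cagr = (end_billions / start_billions) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 40% per year, sustained for a decade
```

A sustained 40 percent annual growth rate for a full decade would be extraordinary for any industry, which underscores how aggressive the forecast is.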
One way governments may look to “maximize potential” is to find creative ways to get their cut of the AI pie, especially since there may be a lot of upheaval for human workers over the next decade.
The “Superhuman” newsletter by Zain Kahn recently featured a graph by Time, illustrating how fast AI is now learning and displacing human abilities in different tasks:
(Graph source: Time, Contextual AI.)
Right now, synthetic personages are becoming all the rage, Kahn has noted.
To give one example, an AI-created social media influencer named Lil Miquela was recently named one of Time Magazine’s top 25 most influential “people” on the internet.
The synthetic influencer and model, who has worked with Chanel, Prada and Samsung, and modeled with Bella Hadid for Calvin Klein, will make an estimated $10 million this year.
The prospect of subjecting AI entities—whether commercial robotic workers with certain levels of AI-powered autonomy and intelligence, or “retail” AI home companions and workers—to employment or other taxes is probably coming in the near future.
Conferring “rights” and citizenship on AI is already a thing. Saudi Arabia famously awarded citizenship to Sophia, the AI-powered humanoid ambassador created by Hanson Robotics.
The UN conferred a title on Sophia when she spoke before the world assembly.
It’s not hard to see how elevating machines to personhood will come with requisite financial obligations to governments looking to preserve and grow their power to dole out benefits and reap the political rewards.
Obsolescent serfdom, playing second fiddle to AI and elites that control the technology, is no way for average humans to live.
And maybe that’s the whole point.