|
It’s hard to believe that “Frontend” generative AI programs have been in wide use by consumers and workers for less than a year.
But despite the rapid and enormous societal and workplace transformation occurring before our eyes, many don’t want to face where it’s leading.
Some are touting an age of “AI Augmentation.”
It won’t last long.
For all those amazing “prompt engineers” out there: how soon do you think AI will be able to out-prompt you?
If AI can code (and it can), it can devise prompts at the behest of far fewer supervising humans.
The point is that no matter what humans can currently do to stay on top and ahead of the AI curve, and ahead of their fellow human competition, it’s only a matter of time before AI wins every race.
And make no mistake: though “home brew” AI systems can be run on consumer-level computers or on distributed, egalitarian AI networks (incentivized by crypto technology), corporations and governments will reserve the most powerful and capable AI technologies for themselves.
As Gerald Celente put it in his phrase “AI: We Own You,” average people will be allotted their metered use of AI—and virtually all other cloud SaaS (Software-as-a-Service) programs and Metaverse experiences—but technocratic masters of the universe will profit and control the most.
“Layoff Generation”
A new research study has provided some mixed assessments regarding AI and worker displacement.
Titled “The Layoff Generation: How Generative AI Will Reshape Employment and Labor Markets,” the study, conducted by researchers at the University of Illinois at Urbana-Champaign, acknowledges displacement is already happening, and concerns are growing:
“‘[W]ith the ascent of LLMs, emergent concerns regarding job displacement have arisen. Preliminary data indicates a notable correlation between the incorporation of LLMs and a surge in layoff rates, especially in sectors characterized by routine textual, coding, translate and other repetitive tasks (Chen et al., 2021; Carlini et al., 2021; Felten, 2023; Yilmaz et al., 2023; Jiao et al., 2023; Webb, 2020). As businesses stand to benefit from reduced operational costs and enhanced efficiency, there emerges a discernible tension between technological progression and job security for certain cohorts of workers.’”
But the study also employed a novel “AI Augmentation Index” of analysis that supposedly reveals a brighter picture for how the AI revolution will play out.
The scientific-sounding “index,” however, appears to be at least partly predicated on advocacy rather than analysis.
The authors admit:
“This tool is designed to gauge the susceptibility of individual professions, companies, and entire industries to LLM influences. It’s paramount that decision-makers harness such tools to astutely navigate this evolving landscape, championing a vision where technology complements, rather than supplants, human expertise.”
“Championing a vision where technology complements” does not read like a cold-eyed assessment of AI’s prospects for rendering human workers obsolete, whether measured in H-1B work visas or in the monthly rise and fall of job postings on the widely used LinkedIn platform.
The study’s analysis admits a very mixed picture, even at this early stage of wide “frontend” AI use, though it seeks to spin the findings as positive…well, because they are not wholly negative:
“Based on current labor market dynamics, our overarching conclusion indicates that companies and industries with higher AI Augmentation tend to exhibit lower layoff rates. However, in specific sectors, a contrasting trend emerges, where increased AI Augmentation correlates with higher layoffs. This counters the prevailing narrative suggesting a looming ‘layoff generation’ due to generative AI.”
The paper differentiates between generative AI uses that are geared to replace humans, vs. AI uses which are meant to “augment” human workers.
But such distinctions are arguably an artificial, temporary lens: they describe what is currently happening, not what will progressively happen.
In other words, there’s no reason to assume AI will not become sophisticated enough to replace what it can currently only “augment.”
The authors do acknowledge that the ability to automate AI is a key component that flips the switch from “augmenting” humans to “replacing” them:
“While distinguishing between augmentation and replacement, we consider automation to be the key component in the framework. Our research contributes a novel angle to the discourse on whether automation complements or replaces labor, a question that has been considered but not fully answered in the existing literature. (Lawrence et al., 2017) postulate that automation may lead to job losses, and the extent of these losses depends on whether automation supplements or replaces labor. Similarly, (Eloundou et al., 2023) present two key concepts to explain the labor market impacts of Large Language Models (LLMs). They discuss the skill-biased concept of technological change, which posits that technological advances favor more skilled and educated workers, and the automated task model, which suggests that generative AI is used to automate specific tasks rather than entire jobs. (Noy and Zhang, 2023) provides an answer to the question of how generative AI systems affect employee productivity in the context of intermediate-level professional tasks, in particular the aspect of whether they replace the employee’s efforts or complement the employee’s skills.”
The study falls back on “ethical and legal considerations” as a substantial reason why AI will not eventuate in a worst-case scenario of decimating human jobs.
That’s because there’s no technological reason why AI will not eventually displace more and more human workers. The study’s authors merely assert, and advocate for, “ethical and legal” limits that will win the day for human labor.
Framework Inadequate and Misguided
As we’ve pointed out, mega tech companies have already hijacked the whole of human creative content, interaction, communication, and accumulated knowledge, via internet scraping, to train their generative AI systems.
Right now, they are profiting obscenely, while the vast portion of humanity pays to have collective human knowledge regurgitated back to them by all-knowing AI.
Human creatives, and indeed the whole of humanity, have already lost, and have gone wholly uncompensated in the greatest technological theft and profiteering model ever devised.
Add to that the fact that people using the web are less and less inclined to click through to websites to find information.
They just ask ChatGPT, via Microsoft’s Bing search engine, or Google’s Bard AI, for what they want to know, and generative AI spits back a direct answer with a few references and links.
The whole current internet website and search engine profit model that millions of small and medium-sized content creators rely on is under assault, as we predicted in “CREATIVE CONTENT INFRINGEMENT OF DEEP LEARNING AI HAS MONUMENTAL IMPLICATIONS” (7 Feb 2023), and as mainstream media has since noted as a trend. (See “GOOGLE SAYS ITS AI WILL ‘SCRAPE THE INTERNET,’ AS TJ FORECAST CONFIRMED: GENERATIVE AI HURTING WEBSITE TRAFFIC,” 11 Jul 2023.)
“Ethical and legal” frameworks are hopelessly behind on providing any help.
To sum up the current reality: based on facts on the ground, the view that AI will be contained and directed toward “augmenting” and primarily benefiting any wide swath of humanity is already a pipe dream.